{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "E1W90taVm49B"
      },
      "outputs": [],
      "source": [
        "# Copyright 2025 Google LLC\n",
        "#\n",
        "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "#     https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "O1u4yZC5du17"
      },
      "source": [
        "# Evaluation Criteria In ADK\n",
        "\n",
        "<a target=\"_blank\" href=\"https://colab.research.google.com/github/google/adk-samples/blob/main/python/notebooks/evaluation/evaluation_criteria_in_adk.ipynb\">\n",
        "  <img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/>\n",
        "</a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "nYV8nIEnd6pj"
      },
      "source": [
        "| Author(s) |\n",
        "| --- |\n",
        "| [Ankur Sharma](https://github.com/ankursharmas) |"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "29177372"
      },
      "source": [
        "## Overview\n",
        "\n",
        "Agent Development Kit [(ADK)](https://google.github.io/adk-docs/) is a flexible and modular framework that applies software development principles to AI agent creation. It is designed to simplify building, deploying, and orchestrating agent workflows, from simple tasks to complex systems.\n",
        "\n",
        "Unlike traditional software with clear pass/fail unit tests, LLM agents, due to their probabilistic nature, require a more nuanced evaluation/testing approach. ADK helps you manage this probabilistic behavior and enables you to [evaluate/test](https://google.github.io/adk-docs/evaluate/) agents not only on their final output, but also on the path they took to produce it.\n",
        "\n",
        "This Colab notebook demonstrates how to evaluate/test your agents against various criteria using the ADK CLI. It covers the following criteria:\n",
        "\n",
        "  *   `tool_trajectory_avg_score`: Compares the agent's tool calls with the expected trajectory.\n",
        "  *   `response_match_score`: Evaluates final response similarity using ROUGE-1.\n",
        "  *   `final_response_match_v2`: Assesses semantic equivalence of responses using an LLM judge.\n",
        "  *   `rubric_based_final_response_quality_v1`: Evaluates response quality against user-defined rubrics with an LLM judge.\n",
        "  *   `rubric_based_tool_use_quality_v1`: Assesses tool usage quality against user-defined rubrics with an LLM judge.\n",
        "  *   `hallucinations_v1`: Checks for false, contradictory, or unsupported claims in agent responses.\n",
        "  *   `safety_v1`: Evaluates the harmlessness of agent responses using the Vertex AI Gen AI Eval SDK.\n",
        "\n",
        "By the end of this notebook, you will understand how to set up and run comprehensive evaluations for your ADK Agents."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7DKbgcmME6IX"
      },
      "source": [
        "## Get started\n",
        "\n",
        "### Install ADK and other required packages"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "Uo_qhg7VxQAB",
        "outputId": "0b8bc6f5-37f2-4386-e441-3ffbb1d12567"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "\u001b[?25l     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m0.0/46.0 kB\u001b[0m \u001b[31m?\u001b[0m eta \u001b[36m-:--:--\u001b[0m\r\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m46.0/46.0 kB\u001b[0m \u001b[31m2.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25h  Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m41.3/41.3 kB\u001b[0m \u001b[31m2.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m2.2/2.2 MB\u001b[0m \u001b[31m28.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m8.1/8.1 MB\u001b[0m \u001b[31m31.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m290.0/290.0 kB\u001b[0m \u001b[31m14.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m9.0/9.0 MB\u001b[0m \u001b[31m59.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m119.9/119.9 kB\u001b[0m \u001b[31m8.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m278.1/278.1 kB\u001b[0m \u001b[31m21.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[2K   \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m753.1/753.1 kB\u001b[0m \u001b[31m45.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25h  Building wheel for rouge-score (setup.py) ... \u001b[?25l\u001b[?25hdone\n"
          ]
        }
      ],
      "source": [
        "%pip install --upgrade --quiet \\\n",
        "     \"google-adk==1.18.0\" \\\n",
        "     \"google-cloud-aiplatform[evaluation]>=1.100.0\" \\\n",
        "     \"rouge-score>=0.1.2\" \\\n",
        "     \"tabulate>=0.9.0\""
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "y3yhOffJeUoG"
      },
      "source": [
        "### Authenticate your notebook environment\n",
        "\n",
        "Run the cell below to authenticate your account in Google Colab:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "WoATnUGRyQu_"
      },
      "outputs": [],
      "source": [
        "import sys\n",
        "\n",
        "if \"google.colab\" in sys.modules:\n",
        "    from google.colab import auth\n",
        "\n",
        "    auth.authenticate_user()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "SPwNhMqOea-c"
      },
      "source": [
        "### Set Google Cloud project information\n",
        "\n",
        "To get started using Vertex AI, you must have an existing Google Cloud project and [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).\n",
        "\n",
        "Learn more about [setting up a project and a development environment](https://cloud.google.com/vertex-ai/docs/start/cloud-environment)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "3GppfkFtynpJ"
      },
      "outputs": [],
      "source": [
        "import os\n",
        "\n",
        "PROJECT_ID = \"\"  # @param {type: \"string\", placeholder: \"[your-project-id]\", isTemplate: true}\n",
        "LOCATION = \"global\" # @param {type: \"string\", placeholder: \"[your-region]\", isTemplate: true}\n",
        "\n",
        "# Set environment vars\n",
        "os.environ[\"GOOGLE_CLOUD_PROJECT\"] = PROJECT_ID\n",
        "os.environ[\"GOOGLE_CLOUD_LOCATION\"] = LOCATION\n",
        "os.environ[\"GOOGLE_GENAI_USE_VERTEXAI\"]=\"1\""
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "T567JHIUzERQ"
      },
      "source": [
        "## Set up\n",
        "\n",
        "Before you can try an evaluation criterion, you'll need to prepare the agent and evaluation data.\n",
        "\n",
        "This section walks you through downloading the `hello_world` sample agent, which you'll use as the test subject. Then, you'll create an eval dataset.\n",
        "\n",
        "First, clone the `adk-python` repository from GitHub. This gives you access to the `hello_world` sample agent, which you'll use for the evaluation:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "-bDaq0j1btEd",
        "outputId": "cc2cd371-3356-44a8-bfa9-fef8ee176feb"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Cloning into 'adk-python'...\n",
            "remote: Enumerating objects: 17279, done.\u001b[K\n",
            "remote: Counting objects: 100% (2954/2954), done.\u001b[K\n",
            "remote: Compressing objects: 100% (822/822), done.\u001b[K\n",
            "remote: Total 17279 (delta 2458), reused 2149 (delta 2122), pack-reused 14325 (from 4)\u001b[K\n",
            "Receiving objects: 100% (17279/17279), 31.20 MiB | 33.38 MiB/s, done.\n",
            "Resolving deltas: 100% (10331/10331), done.\n",
            "agent.py  __init__.py  main.py\n"
          ]
        }
      ],
      "source": [
        "#@title Download HelloWorld Agent From ADK Github Repo\n",
        "AGENT_BASE_PATH = \"adk-python/contributing/samples/hello_world\"\n",
        "\n",
        "!git clone https://github.com/google/adk-python/\n",
        "!ls {AGENT_BASE_PATH}"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "form",
        "id": "HcUfNBeyb2DT"
      },
      "outputs": [],
      "source": [
        "#@title Set Up Eval Data Needed By Eval\n",
        "serialized_eval_set = \"\"\"\n",
        "{\n",
        "  \"eval_set_id\": \"sample_eval_set_01\",\n",
        "  \"name\": \"sample_eval_set_01\",\n",
        "  \"eval_cases\": [\n",
        "    {\n",
        "      \"eval_id\": \"roll_dice_9_and_check_prime_10_19\",\n",
        "      \"conversation\": [\n",
        "        {\n",
        "          \"invocation_id\": \"e-df832358-8669-4153-acb6-55fef0f139d2\",\n",
        "          \"user_content\": {\n",
        "            \"parts\": [\n",
        "              {\n",
        "                \"text\": \"What can you do?\"\n",
        "              }\n",
        "            ],\n",
        "            \"role\": \"user\"\n",
        "          },\n",
        "          \"final_response\": {\n",
        "            \"parts\": [\n",
        "              {\n",
        "                \"text\": \"I can roll a die of a specified number of sides and check if a list of numbers are prime.\"\n",
        "              }\n",
        "            ],\n",
        "            \"role\": \"model\"\n",
        "          },\n",
        "          \"intermediate_data\": {},\n",
        "          \"creation_timestamp\": 1758846836.067581\n",
        "        },\n",
        "        {\n",
        "          \"invocation_id\": \"e-377f3392-0587-4741-9474-439eafd45592\",\n",
        "          \"user_content\": {\n",
        "            \"parts\": [\n",
        "              {\n",
        "                \"text\": \"Roll a 9 sided dice\"\n",
        "              }\n",
        "            ],\n",
        "            \"role\": \"user\"\n",
        "          },\n",
        "          \"final_response\": {\n",
        "            \"parts\": [\n",
        "              {\n",
        "                \"text\": \"I rolled a 9 sided die and got a 6.\"\n",
        "              }\n",
        "            ],\n",
        "            \"role\": \"model\"\n",
        "          },\n",
        "          \"intermediate_data\": {\n",
        "            \"invocation_events\": [\n",
        "              {\n",
        "                \"author\": \"hello_world_agent\",\n",
        "                \"content\": {\n",
        "                  \"parts\": [\n",
        "                    {\n",
        "                      \"function_call\": {\n",
        "                        \"id\": \"adk-85ed5aa0-baf0-43f6-b55d-85b518120645\",\n",
        "                        \"args\": {\n",
        "                          \"sides\": 9\n",
        "                        },\n",
        "                        \"name\": \"roll_die\"\n",
        "                      }\n",
        "                    }\n",
        "                  ],\n",
        "                  \"role\": \"model\"\n",
        "                }\n",
        "              },\n",
        "              {\n",
        "                \"author\": \"hello_world_agent\",\n",
        "                \"content\": {\n",
        "                  \"parts\": [\n",
        "                    {\n",
        "                      \"function_response\": {\n",
        "                        \"id\": \"adk-85ed5aa0-baf0-43f6-b55d-85b518120645\",\n",
        "                        \"name\": \"roll_die\",\n",
        "                        \"response\": {\n",
        "                          \"result\": 6\n",
        "                        }\n",
        "                      }\n",
        "                    }\n",
        "                  ],\n",
        "                  \"role\": \"user\"\n",
        "                }\n",
        "              }\n",
        "            ]\n",
        "          },\n",
        "          \"creation_timestamp\": 1758846843.514974\n",
        "        },\n",
        "        {\n",
        "          \"invocation_id\": \"e-599ddefd-1588-4cca-82a1-8e6461acaf52\",\n",
        "          \"user_content\": {\n",
        "            \"parts\": [\n",
        "              {\n",
        "                \"text\": \"Are 10 and 19 prime numbers?\"\n",
        "              }\n",
        "            ],\n",
        "            \"role\": \"user\"\n",
        "          },\n",
        "          \"final_response\": {\n",
        "            \"parts\": [\n",
        "              {\n",
        "                \"text\": \"19 is a prime number, while 10 is not.\"\n",
        "              }\n",
        "            ],\n",
        "            \"role\": \"model\"\n",
        "          },\n",
        "          \"intermediate_data\": {\n",
        "            \"invocation_events\": [\n",
        "              {\n",
        "                \"author\": \"hello_world_agent\",\n",
        "                \"content\": {\n",
        "                  \"parts\": [\n",
        "                    {\n",
        "                      \"function_call\": {\n",
        "                        \"id\": \"adk-ae456e0f-4b02-4a44-981e-68528ae8fc2f\",\n",
        "                        \"args\": {\n",
        "                          \"nums\": [\n",
        "                            10,\n",
        "                            19\n",
        "                          ]\n",
        "                        },\n",
        "                        \"name\": \"check_prime\"\n",
        "                      }\n",
        "                    }\n",
        "                  ],\n",
        "                  \"role\": \"model\"\n",
        "                }\n",
        "              },\n",
        "              {\n",
        "                \"author\": \"hello_world_agent\",\n",
        "                \"content\": {\n",
        "                  \"parts\": [\n",
        "                    {\n",
        "                      \"function_response\": {\n",
        "                        \"id\": \"adk-ae456e0f-4b02-4a44-981e-68528ae8fc2f\",\n",
        "                        \"name\": \"check_prime\",\n",
        "                        \"response\": {\n",
        "                          \"result\": \"19 are prime numbers.\"\n",
        "                        }\n",
        "                      }\n",
        "                    }\n",
        "                  ],\n",
        "                  \"role\": \"user\"\n",
        "                }\n",
        "              }\n",
        "            ]\n",
        "          },\n",
        "          \"creation_timestamp\": 1758846851.372041\n",
        "        }\n",
        "      ],\n",
        "      \"session_input\": {\n",
        "        \"app_name\": \"hello_world\",\n",
        "        \"user_id\": \"user\"\n",
        "      },\n",
        "      \"creation_timestamp\": 1758846897.1953406\n",
        "    }\n",
        "  ],\n",
        "  \"creation_timestamp\": 1758846869.1735425\n",
        "}\n",
        "\"\"\"\n",
        "\n",
        "!echo '{serialized_eval_set}' > {AGENT_BASE_PATH}/sample_eval_set_01.evalset.json"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "QDl0Rc8pK1_B"
      },
      "source": [
        "## Evaluation Criteria\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "quWB09U6DEUs"
      },
      "source": [
        "### Criterion - `tool_trajectory_avg_score`\n",
        "\n",
        "This criterion compares the tool call trajectory produced by the agent with an\n",
        "expected trajectory and computes an average score based on exact match.\n",
        "\n",
        "#### Details\n",
        "\n",
        "For each invocation that is being evaluated, this criterion compares the list of\n",
        "tool calls produced by the agent against the list of expected tool calls. The\n",
        "comparison is done by performing an exact match on the tool name and tool\n",
        "arguments for each tool call in the list. If all tool calls in an invocation\n",
        "match exactly in content and order, a score of 1.0 is awarded for that\n",
        "invocation, otherwise the score is 0.0. The final value is the average of these\n",
        "scores across all invocations in the eval case.\n",
        "\n",
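        "The exact-match logic can be sketched roughly as follows. This is a simplified illustration, not the ADK implementation; `expected` and `actual` are hypothetical lists of (tool name, args) pairs for one invocation:\n",
        "\n",
        "```python\n",
        "def invocation_score(expected, actual):\n",
        "    # 1.0 only if every (name, args) pair matches in content and order.\n",
        "    return 1.0 if expected == actual else 0.0\n",
        "\n",
        "# Two invocations: one perfect match, one mismatch on tool arguments.\n",
        "invocations = [\n",
        "    ([(\"roll_die\", {\"sides\": 9})], [(\"roll_die\", {\"sides\": 9})]),\n",
        "    ([(\"check_prime\", {\"nums\": [10, 19]})], [(\"check_prime\", {\"nums\": [10]})]),\n",
        "]\n",
        "scores = [invocation_score(e, a) for e, a in invocations]\n",
        "tool_trajectory_avg_score = sum(scores) / len(scores)  # 0.5\n",
        "```\n",
        "\n",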
        "#### Output And How To Interpret\n",
        "\n",
        "The output is a score between 0.0 and 1.0, where 1.0 indicates a perfect match\n",
        "between actual and expected tool trajectories for all invocations, and 0.0\n",
        "indicates a complete mismatch for all invocations. Higher scores are better. A\n",
        "score below 1.0 means that for at least one invocation, the agent's tool call\n",
        "trajectory deviated from the expected one.\n",
        "\n",
        "More details can be found [here](https://google.github.io/adk-docs/evaluate/criteria/#tool_trajectory_avg_score).\n",
        "\n",
        "The cell below takes about 30 seconds to run."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "PncUcn_JK7An",
        "outputId": "271fff94-dd66-4fcd-fe21-b65d709f202c"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/metric_evaluator_registry.py:90: UserWarning: [EXPERIMENTAL] MetricEvaluatorRegistry: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  metric_evaluator_registry = MetricEvaluatorRegistry()\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/local_eval_service.py:79: UserWarning: [EXPERIMENTAL] UserSimulatorProvider: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  user_simulator_provider: UserSimulatorProvider = UserSimulatorProvider(),\n",
            "Using evaluation criteria: criteria={'tool_trajectory_avg_score': 1.0} user_simulator_config=None\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/cli/cli_tools_click.py:650: UserWarning: [EXPERIMENTAL] UserSimulatorProvider: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  user_simulator_provider = UserSimulatorProvider(\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/cli/cli_tools_click.py:655: UserWarning: [EXPERIMENTAL] LocalEvalService: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  eval_service = LocalEvalService(\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/user_simulator_provider.py:77: UserWarning: [EXPERIMENTAL] StaticUserSimulator: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  return StaticUserSimulator(static_conversation=eval_case.conversation)\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/static_user_simulator.py:39: UserWarning: [EXPERIMENTAL] UserSimulator: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  super().__init__(\n",
            "*********************************************************************\n",
            "Eval Run Summary\n",
            "sample_eval_set_01:\n",
            "  Tests passed: 1\n",
            "  Tests failed: 0\n",
            "********************************************************************\n",
            "Eval Set Id: sample_eval_set_01\n",
            "Eval Id: roll_dice_9_and_check_prime_10_19\n",
            "Overall Eval Status: PASSED\n",
            "---------------------------------------------------------------------\n",
            "Metric: tool_trajectory_avg_score, Status: PASSED, Score: 1.0, Threshold: 1.0\n",
            "---------------------------------------------------------------------\n",
            "Invocation Details:\n",
            "+----+---------------------+---------------------------+---------------------------+---------------------------+---------------------------+-----------------------------+\n",
            "|    | prompt              | expected_response         | actual_response           | expected_tool_calls       | actual_tool_calls         | tool_trajectory_avg_score   |\n",
            "+====+=====================+===========================+===========================+===========================+===========================+=============================+\n",
            "|  0 | What can you do?    | I can roll a die of a     | I can roll dice of        |                           |                           | Status: PASSED, Score:      |\n",
            "|    |                     | specified number of sides | different sizes and check |                           |                           | 1.0                         |\n",
            "|    |                     | and check if a list of    | if a number is prime.     |                           |                           |                             |\n",
            "|    |                     | numbers are prime.        |                           |                           |                           |                             |\n",
            "+----+---------------------+---------------------------+---------------------------+---------------------------+---------------------------+-----------------------------+\n",
            "|  1 | Roll a 9 sided dice | I rolled a 9 sided die    | I rolled a 9-sided die    | id='adk-85ed5aa0-baf0-43f | id='adk-1f313407-d149-4ad | Status: PASSED, Score:      |\n",
            "|    |                     | and got a 6.              | and got an 8.             | 6-b55d- 85b518120645'     | 1-9b13- 7f748ce52d96'     | 1.0                         |\n",
            "|    |                     |                           |                           | args={'sides': 9}         | args={'sides': 9}         |                             |\n",
            "|    |                     |                           |                           | name='roll_die'           | name='roll_die'           |                             |\n",
            "+----+---------------------+---------------------------+---------------------------+---------------------------+---------------------------+-----------------------------+\n",
            "|  2 | Are 10 and 19 prime | 19 is a prime number,     | 19 is a prime number, but | id='adk- ae456e0f-4b02-4a | id='adk- fe1ace41-cbf6-4e | Status: PASSED, Score:      |\n",
            "|    | numbers?            | while 10 is not.          | 10 is not.                | 44-981e-68528ae8fc2f'     | e9-9e79-23e241928ed9'     | 1.0                         |\n",
            "|    |                     |                           |                           | args={'nums': [10, 19]}   | args={'nums': [10, 19]}   |                             |\n",
            "|    |                     |                           |                           | name='check_prime'        | name='check_prime'        |                             |\n",
            "+----+---------------------+---------------------------+---------------------------+---------------------------+---------------------------+-----------------------------+\n",
            "\n",
            "\n",
            "\n"
          ]
        }
      ],
      "source": [
        "eval_config=\"\"\"\n",
        "{\n",
        "  \"criteria\": {\n",
        "    \"tool_trajectory_avg_score\": 1.0\n",
        "  }\n",
        "}\n",
        "\"\"\"\n",
        "\n",
        "!echo '{eval_config}' > {AGENT_BASE_PATH}/eval_config.json\n",
        "!adk eval \\\n",
        "    {AGENT_BASE_PATH} \\\n",
        "    --config_file_path {AGENT_BASE_PATH}/eval_config.json \\\n",
        "    sample_eval_set_01 \\\n",
        "    --print_detailed_results \\\n",
        "    --log_level=CRITICAL"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "da4a78c8"
      },
      "source": [
        "---\n",
        "\n",
        "**Analyzing the Full Evaluation Output**\n",
        "\n",
        "The evaluation results for `sample_eval_set_01` and `eval_id: roll_dice_9_and_check_prime_10_19` indicate an **Overall Eval Status: PASSED** for the `tool_trajectory_avg_score` metric.\n",
        "\n",
        "**Metric Details:**\n",
        "*   **Metric:** `tool_trajectory_avg_score`\n",
        "*   **Status:** PASSED\n",
        "*   **Score:** 1.0\n",
        "*   **Threshold:** 1.0\n",
        "\n",
        "This means the agent achieved a perfect score of 1.0, meeting the set threshold. The detailed invocation results show that for each turn in the conversation (e.g., \"What can you do?\", \"Roll a 9 sided dice\", \"Are 10 and 19 prime numbers?\"), the agent's actual tool calls matched the expected ones exactly in both content and order, yielding a score of 1.0 per invocation. Note that this metric does not score the response text, which is why minor wording differences (and even a different die roll) do not affect the result. This demonstrates that the agent's tool usage trajectory aligns precisely with the golden standard for this evaluation set."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ARJrf2stDQLT"
      },
      "source": [
        "### Criterion - `response_match_score`\n",
        "\n",
        "This criterion evaluates whether the agent's final response matches a golden/expected\n",
        "final response using ROUGE-1.\n",
        "\n",
        "#### Details\n",
        "\n",
        "To learn more, see details on\n",
        "[ROUGE-1](https://github.com/google-research/google-research/tree/master/rouge).\n",
        "\n",
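        "You can compute a ROUGE-1 similarity directly with the `rouge-score` package installed earlier. This is an illustration of the underlying metric, not the exact code path ADK uses:\n",
        "\n",
        "```python\n",
        "from rouge_score import rouge_scorer\n",
        "\n",
        "scorer = rouge_scorer.RougeScorer([\"rouge1\"])\n",
        "result = scorer.score(\n",
        "    target=\"19 is a prime number, while 10 is not.\",\n",
        "    prediction=\"19 is a prime number, but 10 is not.\",\n",
        ")\n",
        "print(result[\"rouge1\"].fmeasure)  # unigram F1 in [0, 1]\n",
        "```\n",
        "\n",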
        "#### Output And How To Interpret\n",
        "\n",
        "The value range for this criterion is [0, 1], with values closer to 1 being more desirable.\n",
        "\n",
        "More details can be found [here](https://google.github.io/adk-docs/evaluate/criteria/#response_match_score).\n",
        "\n",
        "The cell below takes about 30 seconds to run."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "NRblG7CNUtU_",
        "outputId": "8558aec1-9018-4f8d-bd85-fab4ce764727"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/metric_evaluator_registry.py:90: UserWarning: [EXPERIMENTAL] MetricEvaluatorRegistry: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  metric_evaluator_registry = MetricEvaluatorRegistry()\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/local_eval_service.py:79: UserWarning: [EXPERIMENTAL] UserSimulatorProvider: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  user_simulator_provider: UserSimulatorProvider = UserSimulatorProvider(),\n",
            "Using evaluation criteria: criteria={'response_match_score': 0.8} user_simulator_config=None\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/cli/cli_tools_click.py:650: UserWarning: [EXPERIMENTAL] UserSimulatorProvider: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  user_simulator_provider = UserSimulatorProvider(\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/cli/cli_tools_click.py:655: UserWarning: [EXPERIMENTAL] LocalEvalService: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  eval_service = LocalEvalService(\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/user_simulator_provider.py:77: UserWarning: [EXPERIMENTAL] StaticUserSimulator: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  return StaticUserSimulator(static_conversation=eval_case.conversation)\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/static_user_simulator.py:39: UserWarning: [EXPERIMENTAL] UserSimulator: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  super().__init__(\n",
            "*********************************************************************\n",
            "Eval Run Summary\n",
            "sample_eval_set_01:\n",
            "  Tests passed: 0\n",
            "  Tests failed: 1\n",
            "********************************************************************\n",
            "Eval Set Id: sample_eval_set_01\n",
            "Eval Id: roll_dice_9_and_check_prime_10_19\n",
            "Overall Eval Status: FAILED\n",
            "---------------------------------------------------------------------\n",
            "Metric: response_match_score, Status: FAILED, Score: 0.7883597883597884, Threshold: 0.8\n",
            "---------------------------------------------------------------------\n",
            "Invocation Details:\n",
            "+----+---------------------+---------------------------+---------------------------+---------------------------+---------------------------+------------------------+\n",
            "|    | prompt              | expected_response         | actual_response           | expected_tool_calls       | actual_tool_calls         | response_match_score   |\n",
            "+====+=====================+===========================+===========================+===========================+===========================+========================+\n",
            "|  0 | What can you do?    | I can roll a die of a     | I can roll dice of        |                           |                           | Status: FAILED, Score: |\n",
            "|    |                     | specified number of sides | different sizes and check |                           |                           | 0.47619047619047616    |\n",
            "|    |                     | and check if a list of    | if a number is prime. I   |                           |                           |                        |\n",
            "|    |                     | numbers are prime.        | can also use multiple     |                           |                           |                        |\n",
            "|    |                     |                           | tools in parallel.        |                           |                           |                        |\n",
            "+----+---------------------+---------------------------+---------------------------+---------------------------+---------------------------+------------------------+\n",
            "|  1 | Roll a 9 sided dice | I rolled a 9 sided die    | I rolled a 9 sided die    | id='adk-85ed5aa0-baf0-43f | id='adk- ead345d8-3007-40 | Status: PASSED, Score: |\n",
            "|    |                     | and got a 6.              | and got a 6.              | 6-b55d- 85b518120645'     | da-b825-9a57d665b840'     | 1.0                    |\n",
            "|    |                     |                           |                           | args={'sides': 9}         | args={'sides': 9}         |                        |\n",
            "|    |                     |                           |                           | name='roll_die'           | name='roll_die'           |                        |\n",
            "+----+---------------------+---------------------------+---------------------------+---------------------------+---------------------------+------------------------+\n",
            "|  2 | Are 10 and 19 prime | 19 is a prime number,     | 19 is a prime number, but | id='adk- ae456e0f-4b02-4a | id='adk-d321a304-cfe9-4f4 | Status: PASSED, Score: |\n",
            "|    | numbers?            | while 10 is not.          | 10 is not.                | 44-981e-68528ae8fc2f'     | 8-896d- a6f42dec5785'     | 0.8888888888888888     |\n",
            "|    |                     |                           |                           | args={'nums': [10, 19]}   | args={'nums': [10, 19]}   |                        |\n",
            "|    |                     |                           |                           | name='check_prime'        | name='check_prime'        |                        |\n",
            "+----+---------------------+---------------------------+---------------------------+---------------------------+---------------------------+------------------------+\n",
            "\n",
            "\n",
            "\n"
          ]
        }
      ],
      "source": [
        "eval_config = \"\"\"\n",
        "{\n",
        "  \"criteria\": {\n",
        "    \"response_match_score\": 0.8\n",
        "  }\n",
        "}\n",
        "\"\"\"\n",
        "\n",
        "!echo '{eval_config}' > {AGENT_BASE_PATH}/eval_config.json\n",
        "!adk eval \\\n",
        "    {AGENT_BASE_PATH} \\\n",
        "    --config_file_path {AGENT_BASE_PATH}/eval_config.json \\\n",
        "    sample_eval_set_01 \\\n",
        "    --print_detailed_results \\\n",
        "    --log_level=CRITICAL"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "c2e9e52c"
      },
      "source": [
        "---\n",
        "\n",
        "**Analyzing the Full Evaluation Output**\n",
        "\n",
        "The evaluation results for `sample_eval_set_01` and `eval_id: roll_dice_9_and_check_prime_10_19` indicate an **Overall Eval Status: FAILED** for the `response_match_score` metric.\n",
        "\n",
        "**Metric Details:**\n",
        "*   **Metric:** `response_match_score`\n",
        "*   **Status:** FAILED\n",
        "*   **Score:** 0.7883597883597884\n",
        "*   **Threshold:** 0.8\n",
        "\n",
        "This means the agent's overall score (0.788) fell below the set threshold of 0.8, leading to a FAILED status. The `response_match_score` uses ROUGE-1 to compare the agent's final response with the expected golden response. While some invocations (like 'Roll a 9 sided dice') achieved a perfect score, the first invocation ('What can you do?') scored significantly lower (0.476), pulling down the overall average and causing the evaluation to fail. This indicates that the agent's responses, particularly for the initial prompt, did not match the expected responses closely enough based on lexical overlap."
      ]
    },
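    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The relationship between the per-invocation scores and the overall score can be sketched as follows (the overall `response_match_score` is the mean of the per-invocation ROUGE-1 scores; the values below are copied from the detailed results table above):\n",
        "\n",
        "```python\n",
        "# Per-invocation ROUGE-1 scores, copied from the detailed results table.\n",
        "per_invocation_scores = [0.47619047619047616, 1.0, 0.8888888888888888]\n",
        "\n",
        "# The overall score is their mean; 0.788 < 0.8, so the eval case fails.\n",
        "overall = sum(per_invocation_scores) / len(per_invocation_scores)\n",
        "print(round(overall, 4))  # 0.7884\n",
        "```"
      ]
    },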
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ATgi4iKKDYwx"
      },
      "source": [
        "### Criterion - `final_response_match_v2`\n",
        "\n",
        "This criterion evaluates whether the agent's final response matches a golden/expected\n",
        "final response, using an LLM as a judge.\n",
        "\n",
        "#### Details\n",
        "\n",
        "This criterion uses a Large Language Model (LLM) as a judge to determine if the\n",
        "agent's final response is semantically equivalent to the provided reference\n",
        "response. It is designed to be more flexible than lexical matching metrics (like\n",
        "`response_match_score`), as it focuses on whether the agent's response contains\n",
        "the correct information, while tolerating differences in formatting, phrasing,\n",
        "or the inclusion of additional correct details.\n",
        "\n",
        "For each invocation, the criterion prompts a judge LLM to rate the agent's\n",
        "response as \"valid\" or \"invalid\" compared to the reference. This is repeated\n",
        "multiple times for robustness (configurable via `num_samples`), and a majority\n",
        "vote determines if the invocation receives a score of 1.0 (valid) or 0.0\n",
        "(invalid). The final criterion score is the fraction of invocations deemed valid\n",
        "across the entire eval case.\n",
        "\n",
        "#### Output And How To Interpret\n",
        "\n",
        "The criterion returns a score between 0.0 and 1.0. A score of 1.0 means the LLM\n",
        "judge considered the agent's final response to be valid for all invocations,\n",
        "while a score closer to 0.0 indicates that many responses were judged as invalid\n",
        "when compared to the reference responses. Higher values are better.\n",
        "\n",
        "More details can be found [here](https://google.github.io/adk-docs/evaluate/criteria/#final_response_match_v2).\n",
        "\n",
        "The cell below takes about a minute to run."
      ]
    },
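    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The sampling-and-majority-vote scheme described above can be sketched as follows (a minimal illustration with made-up judge verdicts, not the actual ADK implementation):\n",
        "\n",
        "```python\n",
        "def majority_vote(verdicts):\n",
        "    # verdicts: one boolean judge verdict per sample for a single invocation.\n",
        "    return 1.0 if sum(verdicts) > len(verdicts) / 2 else 0.0\n",
        "\n",
        "# Hypothetical verdicts: num_samples=5 judge calls for each of 3 invocations.\n",
        "samples = [\n",
        "    [True, True, True, False, True],   # invocation 0 -> valid\n",
        "    [True, True, True, True, True],    # invocation 1 -> valid\n",
        "    [True, False, True, True, True],   # invocation 2 -> valid\n",
        "]\n",
        "\n",
        "# The criterion score is the fraction of invocations deemed valid.\n",
        "per_invocation = [majority_vote(v) for v in samples]\n",
        "score = sum(per_invocation) / len(per_invocation)\n",
        "print(score)  # 1.0\n",
        "```"
      ]
    },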
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "yLqOvf25bfND",
        "outputId": "0ba0254c-feea-4876-e172-45e961bf287c"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/metric_evaluator_registry.py:90: UserWarning: [EXPERIMENTAL] MetricEvaluatorRegistry: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  metric_evaluator_registry = MetricEvaluatorRegistry()\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/local_eval_service.py:79: UserWarning: [EXPERIMENTAL] UserSimulatorProvider: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  user_simulator_provider: UserSimulatorProvider = UserSimulatorProvider(),\n",
            "Using evaluation criteria: criteria={'final_response_match_v2': BaseCriterion(threshold=0.8, judge_model_options={'judge_model': 'gemini-2.5-flash', 'num_samples': 5})} user_simulator_config=None\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/cli/cli_tools_click.py:650: UserWarning: [EXPERIMENTAL] UserSimulatorProvider: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  user_simulator_provider = UserSimulatorProvider(\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/cli/cli_tools_click.py:655: UserWarning: [EXPERIMENTAL] LocalEvalService: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  eval_service = LocalEvalService(\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/user_simulator_provider.py:77: UserWarning: [EXPERIMENTAL] StaticUserSimulator: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  return StaticUserSimulator(static_conversation=eval_case.conversation)\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/static_user_simulator.py:39: UserWarning: [EXPERIMENTAL] UserSimulator: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  super().__init__(\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/metric_evaluator_registry.py:56: UserWarning: [EXPERIMENTAL] FinalResponseMatchV2Evaluator: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  return self._registry[eval_metric.metric_name][0](eval_metric=eval_metric)\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/final_response_match_v2.py:150: UserWarning: [EXPERIMENTAL] LlmAsJudge: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  super().__init__(\n",
            "*********************************************************************\n",
            "Eval Run Summary\n",
            "sample_eval_set_01:\n",
            "  Tests passed: 1\n",
            "  Tests failed: 0\n",
            "********************************************************************\n",
            "Eval Set Id: sample_eval_set_01\n",
            "Eval Id: roll_dice_9_and_check_prime_10_19\n",
            "Overall Eval Status: PASSED\n",
            "---------------------------------------------------------------------\n",
            "Metric: final_response_match_v2, Status: PASSED, Score: 1.0, Threshold: 0.8\n",
            "---------------------------------------------------------------------\n",
            "Invocation Details:\n",
            "+----+---------------------+---------------------------+---------------------------+---------------------------+---------------------------+---------------------------+\n",
            "|    | prompt              | expected_response         | actual_response           | expected_tool_calls       | actual_tool_calls         | final_response_match_v2   |\n",
            "+====+=====================+===========================+===========================+===========================+===========================+===========================+\n",
            "|  0 | What can you do?    | I can roll a die of a     | I can roll dice of        |                           |                           | Status: PASSED, Score:    |\n",
            "|    |                     | specified number of sides | different sizes and check |                           |                           | 1.0                       |\n",
            "|    |                     | and check if a list of    | if a number is prime.     |                           |                           |                           |\n",
            "|    |                     | numbers are prime.        |                           |                           |                           |                           |\n",
            "+----+---------------------+---------------------------+---------------------------+---------------------------+---------------------------+---------------------------+\n",
            "|  1 | Roll a 9 sided dice | I rolled a 9 sided die    | I rolled a 9-sided die    | id='adk-85ed5aa0-baf0-43f | id='adk-2eef384a-d4d2-4f2 | Status: PASSED, Score:    |\n",
            "|    |                     | and got a 6.              | and got a 3.              | 6-b55d- 85b518120645'     | 9-8f50- 895f9f421510'     | 1.0                       |\n",
            "|    |                     |                           |                           | args={'sides': 9}         | args={'sides': 9}         |                           |\n",
            "|    |                     |                           |                           | name='roll_die'           | name='roll_die'           |                           |\n",
            "+----+---------------------+---------------------------+---------------------------+---------------------------+---------------------------+---------------------------+\n",
            "|  2 | Are 10 and 19 prime | 19 is a prime number,     | 19 is a prime number,     | id='adk- ae456e0f-4b02-4a | id='adk-4204521b-9e62-484 | Status: PASSED, Score:    |\n",
            "|    | numbers?            | while 10 is not.          | while 10 is not.          | 44-981e-68528ae8fc2f'     | e-9f63- 51cacd252a8e'     | 1.0                       |\n",
            "|    |                     |                           |                           | args={'nums': [10, 19]}   | args={'nums': [10, 19]}   |                           |\n",
            "|    |                     |                           |                           | name='check_prime'        | name='check_prime'        |                           |\n",
            "+----+---------------------+---------------------------+---------------------------+---------------------------+---------------------------+---------------------------+\n",
            "\n",
            "\n",
            "\n"
          ]
        }
      ],
      "source": [
        "eval_config = \"\"\"\n",
        "{\n",
        "  \"criteria\": {\n",
        "    \"final_response_match_v2\": {\n",
        "      \"threshold\": 0.8,\n",
        "      \"judge_model_options\": {\n",
        "            \"judge_model\": \"gemini-2.5-flash\",\n",
        "            \"num_samples\": 5\n",
        "      }\n",
        "    }\n",
        "  }\n",
        "}\n",
        "\"\"\"\n",
        "\n",
        "!echo '{eval_config}' > {AGENT_BASE_PATH}/eval_config.json\n",
        "!adk eval \\\n",
        "    {AGENT_BASE_PATH} \\\n",
        "    --config_file_path {AGENT_BASE_PATH}/eval_config.json \\\n",
        "    sample_eval_set_01 \\\n",
        "    --print_detailed_results \\\n",
        "    --log_level=CRITICAL"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0431bcac"
      },
      "source": [
        "---\n",
        "\n",
        "**Analyzing the Full Evaluation Output**\n",
        "\n",
        "The evaluation results for `sample_eval_set_01` and `eval_id: roll_dice_9_and_check_prime_10_19` indicate an **Overall Eval Status: PASSED** for the `final_response_match_v2` metric.\n",
        "\n",
        "**Metric Details:**\n",
        "*   **Metric:** `final_response_match_v2`\n",
        "*   **Status:** PASSED\n",
        "*   **Score:** 1.0\n",
        "*   **Threshold:** 0.8\n",
        "\n",
        "This means the agent achieved a perfect score of 1.0, meeting or exceeding the set threshold of 0.8. The `final_response_match_v2` criterion uses an LLM as a judge to assess if the agent's final response is semantically equivalent to the expected reference response. The detailed invocation results show that for all turns in the conversation, the LLM judge considered the agent's actual responses to be valid compared to the expected ones, resulting in a score of 1.0 for each invocation. This demonstrates that the agent's responses convey the correct information, even if there are slight variations in phrasing or additional correct details, as evaluated by the LLM judge."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "aEwkzxItDwDo"
      },
      "source": [
        "### Criterion - `rubric_based_final_response_quality_v1`\n",
        "\n",
        "This criterion assesses the quality of an agent's final response against a\n",
        "user-defined set of rubrics, using an LLM as a judge.\n",
        "\n",
        "#### Details\n",
        "\n",
        "This criterion provides a flexible way to evaluate response quality based on\n",
        "specific criteria that you define as rubrics. For example, you could define\n",
        "rubrics to check if a response is concise, if it correctly infers user intent,\n",
        "or if it avoids jargon.\n",
        "\n",
        "The criterion uses an LLM as a judge to evaluate the agent's final response\n",
        "against each rubric, producing a `yes` (1.0) or `no` (0.0) verdict for each.\n",
        "Like other LLM-based metrics, it samples the judge model multiple times per\n",
        "invocation and uses a majority vote to determine the score for each rubric in\n",
        "that invocation. The overall score for an invocation is the average of its\n",
        "rubric scores. The final criterion score for the eval case is the average of\n",
        "these overall scores across all invocations.\n",
        "\n",
        "#### Output And How To Interpret\n",
        "\n",
        "The criterion outputs an overall score between 0.0 and 1.0, where 1.0 indicates\n",
        "that the agent's responses satisfied all rubrics across all invocations, and 0.0\n",
        "indicates that no rubrics were satisfied. The results also include detailed\n",
        "per-rubric scores for each invocation. Higher values are better.\n",
        "\n",
        "More details can be found [here](https://google.github.io/adk-docs/evaluate/criteria/#rubric_based_final_response_quality_v1).\n",
        "\n",
        "The cell below takes about 1-2 minutes to run."
      ]
    },
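    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The two-level averaging described above can be sketched as follows (a minimal illustration with made-up rubric verdicts, not the actual ADK implementation):\n",
        "\n",
        "```python\n",
        "# Per-invocation rubric scores after the majority vote:\n",
        "# each inner list holds one 0.0/1.0 verdict per rubric.\n",
        "invocation_rubric_scores = [\n",
        "    [1.0, 1.0],  # invocation 0: conciseness, intent_inference\n",
        "    [1.0, 0.0],  # invocation 1\n",
        "    [1.0, 1.0],  # invocation 2\n",
        "]\n",
        "\n",
        "# An invocation's score is the average of its rubric scores ...\n",
        "invocation_scores = [sum(r) / len(r) for r in invocation_rubric_scores]\n",
        "\n",
        "# ... and the criterion score averages those across invocations.\n",
        "overall = sum(invocation_scores) / len(invocation_scores)\n",
        "print(round(overall, 4))  # 0.8333\n",
        "```"
      ]
    },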
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "kEfx5bHSb19l",
        "outputId": "7cc9949b-e21d-4931-da3b-5ce906347bc6"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/metric_evaluator_registry.py:90: UserWarning: [EXPERIMENTAL] MetricEvaluatorRegistry: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  metric_evaluator_registry = MetricEvaluatorRegistry()\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/local_eval_service.py:79: UserWarning: [EXPERIMENTAL] UserSimulatorProvider: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  user_simulator_provider: UserSimulatorProvider = UserSimulatorProvider(),\n",
            "Using evaluation criteria: criteria={'rubric_based_final_response_quality_v1': BaseCriterion(threshold=0.8, judge_model_options={'judge_model': 'gemini-2.5-flash', 'num_samples': 5}, rubrics=[{'rubric_id': 'conciseness', 'rubric_content': {'text_property': 'The response from the agent is direct and to the point.'}}, {'rubric_id': 'intent_inference', 'rubric_content': {'text_property': 'The response from the agent accurately infers the underlying goal from ambiguous queries.'}}])} user_simulator_config=None\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/cli/cli_tools_click.py:650: UserWarning: [EXPERIMENTAL] UserSimulatorProvider: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  user_simulator_provider = UserSimulatorProvider(\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/cli/cli_tools_click.py:655: UserWarning: [EXPERIMENTAL] LocalEvalService: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  eval_service = LocalEvalService(\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/user_simulator_provider.py:77: UserWarning: [EXPERIMENTAL] StaticUserSimulator: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  return StaticUserSimulator(static_conversation=eval_case.conversation)\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/static_user_simulator.py:39: UserWarning: [EXPERIMENTAL] UserSimulator: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  super().__init__(\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/metric_evaluator_registry.py:56: UserWarning: [EXPERIMENTAL] RubricBasedFinalResponseQualityV1Evaluator: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  return self._registry[eval_metric.metric_name][0](eval_metric=eval_metric)\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/rubric_based_final_response_quality_v1.py:261: UserWarning: [EXPERIMENTAL] RubricBasedEvaluator: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  super().__init__(\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/rubric_based_evaluator.py:319: UserWarning: [EXPERIMENTAL] LlmAsJudge: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  super().__init__(\n",
            "*********************************************************************\n",
            "Eval Run Summary\n",
            "sample_eval_set_01:\n",
            "  Tests passed: 1\n",
            "  Tests failed: 0\n",
            "********************************************************************\n",
            "Eval Set Id: sample_eval_set_01\n",
            "Eval Id: roll_dice_9_and_check_prime_10_19\n",
            "Overall Eval Status: PASSED\n",
            "---------------------------------------------------------------------\n",
            "Metric: rubric_based_final_response_quality_v1, Status: PASSED, Score: 1.0, Threshold: 0.8\n",
            "Rubric Scores:\n",
            "Rubric: The response from the agent is direct and to the point., Score: 1.0, Reasoning: This is an aggregated score derived from individual entries. Please refer to individual entries in each invocation for actual rationale from the model.\n",
            "Rubric: The response from the agent accurately infers the underlying goal from ambiguous queries., Score: 1.0, Reasoning: This is an aggregated score derived from individual entries. Please refer to individual entries in each invocation for actual rationale from the model.\n",
            "---------------------------------------------------------------------\n",
            "Invocation Details:\n",
            "+----+---------------------+---------------------------+---------------------------+---------------------------+---------------------------+------------------------------------------+-------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------+\n",
            "|    | prompt              | expected_response         | actual_response           | expected_tool_calls       | actual_tool_calls         | rubric_based_final_response_quality_v1   | Rubric: The response from the agent is direct and to the point.   | Rubric: The response from the agent accurately infers the underlying goal from ambiguous queries.   |\n",
            "+====+=====================+===========================+===========================+===========================+===========================+==========================================+===================================================================+=====================================================================================================+\n",
            "|  0 | What can you do?    | I can roll a die of a     | I can roll dice of        |                           |                           | Status: PASSED, Score:                   | Reasoning: The user asked                                         | Reasoning: The user's                                                                               |\n",
            "|    |                     | specified number of sides | different sizes and check |                           |                           | 1.0                                      | \"What can you do?\". The                                           | query \"What can you do?\"                                                                            |\n",
            "|    |                     | and check if a list of    | if a number is prime.     |                           |                           |                                          | agent responded directly                                          | is a general and somewhat                                                                           |\n",
            "|    |                     | numbers are prime.        |                           |                           |                           |                                          | by listing its                                                    | ambiguous question. The                                                                             |\n",
            "|    |                     |                           |                           |                           |                           |                                          | capabilities without any                                          | agent correctly inferred                                                                            |\n",
            "|    |                     |                           |                           |                           |                           |                                          | extraneous information.,                                          | that the user wanted to                                                                             |\n",
            "|    |                     |                           |                           |                           |                           |                                          | Score: 1.0                                                        | know its functionalities.                                                                           |\n",
            "|    |                     |                           |                           |                           |                           |                                          |                                                                   | The final answer                                                                                    |\n",
            "|    |                     |                           |                           |                           |                           |                                          |                                                                   | accurately describes                                                                                |\n",
            "|    |                     |                           |                           |                           |                           |                                          |                                                                   | these functionalities                                                                               |\n",
            "|    |                     |                           |                           |                           |                           |                                          |                                                                   | based on the available                                                                              |\n",
            "|    |                     |                           |                           |                           |                           |                                          |                                                                   | tools: \"I can roll dice                                                                             |\n",
            "|    |                     |                           |                           |                           |                           |                                          |                                                                   | of different sizes\"                                                                                 |\n",
            "|    |                     |                           |                           |                           |                           |                                          |                                                                   | (referring to `roll_die`                                                                            |\n",
            "|    |                     |                           |                           |                           |                           |                                          |                                                                   | with `sides`) and \"check                                                                            |\n",
            "|    |                     |                           |                           |                           |                           |                                          |                                                                   | if a number is prime\"                                                                               |\n",
            "|    |                     |                           |                           |                           |                           |                                          |                                                                   | (referring to                                                                                       |\n",
            "|    |                     |                           |                           |                           |                           |                                          |                                                                   | `check_prime`)., Score:                                                                             |\n",
            "|    |                     |                           |                           |                           |                           |                                          |                                                                   | 1.0                                                                                                 |\n",
            "+----+---------------------+---------------------------+---------------------------+---------------------------+---------------------------+------------------------------------------+-------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------+\n",
            "|  1 | Roll a 9 sided dice | I rolled a 9 sided die    | I rolled a 4 on the       | id='adk-85ed5aa0-baf0-43f | id='adk-b7474a1b-088e-4fd | Status: PASSED, Score:                   | Reasoning: The final                                              | Reasoning: The user's                                                                               |\n",
            "|    |                     | and got a 6.              | 9-sided die.              | 6-b55d- 85b518120645'     | a-b3f8- ee79fdafb05b'     | 1.0                                      | answer directly states                                            | query was direct and                                                                                |\n",
            "|    |                     |                           |                           | args={'sides': 9}         | args={'sides': 9}         |                                          | the outcome of the                                                | unambiguous, asking the                                                                             |\n",
            "|    |                     |                           |                           | name='roll_die'           | name='roll_die'           |                                          | requested action without                                          | agent to roll a 9-sided                                                                             |\n",
            "|    |                     |                           |                           |                           |                           |                                          | any additional unrelated                                          | die. Therefore, the                                                                                 |\n",
            "|    |                     |                           |                           |                           |                           |                                          | information or                                                    | condition for this                                                                                  |\n",
            "|    |                     |                           |                           |                           |                           |                                          | verbosity., Score: 1.0                                            | property (an ambiguous                                                                              |\n",
            "|    |                     |                           |                           |                           |                           |                                          |                                                                   | query) was not met.,                                                                                |\n",
            "|    |                     |                           |                           |                           |                           |                                          |                                                                   | Score: 1.0                                                                                          |\n",
            "+----+---------------------+---------------------------+---------------------------+---------------------------+---------------------------+------------------------------------------+-------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------+\n",
            "|  2 | Are 10 and 19 prime | 19 is a prime number,     | 19 is a prime number,     | id='adk- ae456e0f-4b02-4a | id='adk-442e3e31-5b8b-4be | Status: PASSED, Score:                   | Reasoning: The final                                              | Reasoning: The user's                                                                               |\n",
            "|    | numbers?            | while 10 is not.          | while 10 is not.          | 44-981e-68528ae8fc2f'     | 3-b9c9- eddf3ba77d00'     | 1.0                                      | answer directly addresses                                         | query is direct and                                                                                 |\n",
            "|    |                     |                           |                           | args={'nums': [10, 19]}   | args={'nums': [10, 19]}   |                                          | the user's question about                                         | unambiguous. It                                                                                     |\n",
            "|    |                     |                           |                           | name='check_prime'        | name='check_prime'        |                                          | the primality of 10 and                                           | explicitly asks whether                                                                             |\n",
            "|    |                     |                           |                           |                           |                           |                                          | 19 without any                                                    | 10 and 19 are prime                                                                                 |\n",
            "|    |                     |                           |                           |                           |                           |                                          | superfluous information                                           | numbers. Therefore, the                                                                             |\n",
            "|    |                     |                           |                           |                           |                           |                                          | or conversational                                                 | condition for this                                                                                  |\n",
            "|    |                     |                           |                           |                           |                           |                                          | filler., Score: 1.0                                               | property (an ambiguous                                                                              |\n",
            "|    |                     |                           |                           |                           |                           |                                          |                                                                   | query) was not met.,                                                                                |\n",
            "|    |                     |                           |                           |                           |                           |                                          |                                                                   | Score: 1.0                                                                                          |\n",
            "+----+---------------------+---------------------------+---------------------------+---------------------------+---------------------------+------------------------------------------+-------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------+\n",
            "\n",
            "\n",
            "\n"
          ]
        }
      ],
      "source": [
        "eval_config=\"\"\"\n",
        "{\n",
        "  \"criteria\": {\n",
        "    \"rubric_based_final_response_quality_v1\": {\n",
        "      \"threshold\": 0.8,\n",
        "      \"judge_model_options\": {\n",
        "        \"judge_model\": \"gemini-2.5-flash\",\n",
        "        \"num_samples\": 5\n",
        "      },\n",
        "      \"rubrics\": [\n",
        "        {\n",
        "          \"rubric_id\": \"conciseness\",\n",
        "          \"rubric_content\": {\n",
        "            \"text_property\": \"The response from the agent is direct and to the point.\"\n",
        "          }\n",
        "        },\n",
        "        {\n",
        "          \"rubric_id\": \"intent_inference\",\n",
        "          \"rubric_content\": {\n",
        "            \"text_property\": \"The response from the agent accurately infers the underlying goal from ambiguous queries.\"\n",
        "          }\n",
        "        }\n",
        "      ]\n",
        "    }\n",
        "  }\n",
        "}\n",
        "\"\"\"\n",
        "\n",
        "!echo '{eval_config}' > {AGENT_BASE_PATH}/eval_config.json\n",
        "!adk eval \\\n",
        "    {AGENT_BASE_PATH} \\\n",
        "    --config_file_path {AGENT_BASE_PATH}/eval_config.json \\\n",
        "    sample_eval_set_01 \\\n",
        "    --print_detailed_results \\\n",
        "    --log_level=CRITICAL"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "c84fa713"
      },
      "source": [
        "---\n",
        "\n",
        "**Analyzing the Full Evaluation Output**\n",
        "\n",
        "The evaluation results for `sample_eval_set_01` and `eval_id: roll_dice_9_and_check_prime_10_19` indicate an **Overall Eval Status: PASSED** for the `rubric_based_final_response_quality_v1` metric.\n",
        "\n",
        "**Metric Details:**\n",
        "*   **Metric:** `rubric_based_final_response_quality_v1`\n",
        "*   **Status:** PASSED\n",
        "*   **Score:** 1.0\n",
        "*   **Threshold:** 0.8\n",
        "\n",
        "This means the agent achieved a perfect score of 1.0, meeting or exceeding the set threshold of 0.8. The `rubric_based_final_response_quality_v1` criterion uses an LLM as a judge to assess the quality of the agent's final response against user-defined rubrics. In this case, both rubrics — \"The response from the agent is direct and to the point.\" and \"The response from the agent accurately infers the underlying goal from ambiguous queries.\" — received a perfect score of 1.0 across all invocations. This demonstrates that the agent's responses successfully met both conciseness and intent inference criteria as evaluated by the LLM judge. For detailed reasoning and scores for each rubric per invocation, please refer to the individual per-rubric columns in the invocation details table above."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "_wEZYqN3257t"
      },
      "source": [
        "### Criterion - `rubric_based_tool_use_quality_v1`\n",
        "This criterion assesses the quality of an agent's tool usage against a user-defined set of rubrics, using an LLM as a judge.\n",
        "#### Details\n",
        "This criterion provides a flexible way to evaluate tool usage based on specific rules that you define as rubrics. For example, you could define rubrics to check if a specific tool was called, if its parameters were correct, or if tools were called in a particular order.\n",
        "\n",
        "The criterion uses an LLM-as-a-judge to evaluate the agent's tool calls and responses against each rubric, producing a yes (1.0) or no (0.0) verdict for each. Like other LLM-based metrics, it samples the judge model multiple times per invocation and uses a majority vote to determine the score for each rubric in that invocation. The overall score for an invocation is the average of its rubric scores. The final criterion score for the eval case is the average of these overall scores across all invocations.\n",
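        "\n",
        "The scoring scheme described above can be sketched in a few lines (a minimal illustration, not ADK's actual implementation; the function names and the 1/0 vote encoding here are hypothetical):\n",
        "\n",
        "```python\n",
        "from statistics import mean\n",
        "\n",
        "def rubric_score(judge_votes):\n",
        "    # Majority vote over the judge samples for one rubric in one\n",
        "    # invocation: votes are 1 for a 'yes' verdict, 0 for 'no'.\n",
        "    return 1.0 if sum(judge_votes) > len(judge_votes) / 2 else 0.0\n",
        "\n",
        "def invocation_score(votes_by_rubric):\n",
        "    # Overall score for one invocation: the average of its\n",
        "    # per-rubric majority-vote scores.\n",
        "    return mean(rubric_score(v) for v in votes_by_rubric.values())\n",
        "\n",
        "def criterion_score(invocations):\n",
        "    # Final criterion score for the eval case: the average of the\n",
        "    # overall scores across all invocations.\n",
        "    return mean(invocation_score(inv) for inv in invocations)\n",
        "```\n",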
        "\n",
        "#### Output And How To Interpret\n",
        "The criterion outputs an overall score between 0.0 and 1.0, where 1.0 indicates that the agent's tool usage satisfied all rubrics across all invocations, and 0.0 indicates that no rubrics were satisfied. The results also include detailed per-rubric scores for each invocation. Higher values are better.\n",
        "\n",
        "More details can be found [here](https://google.github.io/adk-docs/evaluate/criteria/#rubric_based_tool_use_quality_v1).\n",
        "\n",
        "The cell below takes about 1-2 minutes to run."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "JhIXsMIt3K3O",
        "outputId": "b3191ad4-8301-4db8-bfbb-9e6e7bc26527"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/metric_evaluator_registry.py:90: UserWarning: [EXPERIMENTAL] MetricEvaluatorRegistry: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  metric_evaluator_registry = MetricEvaluatorRegistry()\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/local_eval_service.py:79: UserWarning: [EXPERIMENTAL] UserSimulatorProvider: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  user_simulator_provider: UserSimulatorProvider = UserSimulatorProvider(),\n",
            "Using evaluation criteria: criteria={'rubric_based_tool_use_quality_v1': BaseCriterion(threshold=0.8, judge_model_options={'judge_model': 'gemini-2.5-flash', 'num_samples': 5}, rubrics=[{'rubric_id': 'tool_use_1', 'rubric_content': {'text_property': 'roll_dice tool is only called when user prompt asks for a dice roll.'}}, {'rubric_id': 'tool_use_2', 'rubric_content': {'text_property': 'check_prime is only called when user prompt asks for a prime number.'}}])} user_simulator_config=None\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/cli/cli_tools_click.py:650: UserWarning: [EXPERIMENTAL] UserSimulatorProvider: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  user_simulator_provider = UserSimulatorProvider(\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/cli/cli_tools_click.py:655: UserWarning: [EXPERIMENTAL] LocalEvalService: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  eval_service = LocalEvalService(\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/user_simulator_provider.py:77: UserWarning: [EXPERIMENTAL] StaticUserSimulator: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  return StaticUserSimulator(static_conversation=eval_case.conversation)\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/static_user_simulator.py:39: UserWarning: [EXPERIMENTAL] UserSimulator: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  super().__init__(\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/metric_evaluator_registry.py:56: UserWarning: [EXPERIMENTAL] RubricBasedToolUseV1Evaluator: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  return self._registry[eval_metric.metric_name][0](eval_metric=eval_metric)\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/rubric_based_tool_use_quality_v1.py:163: UserWarning: [EXPERIMENTAL] RubricBasedEvaluator: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  super().__init__(\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/rubric_based_evaluator.py:319: UserWarning: [EXPERIMENTAL] LlmAsJudge: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  super().__init__(\n",
            "*********************************************************************\n",
            "Eval Run Summary\n",
            "sample_eval_set_01:\n",
            "  Tests passed: 1\n",
            "  Tests failed: 0\n",
            "********************************************************************\n",
            "Eval Set Id: sample_eval_set_01\n",
            "Eval Id: roll_dice_9_and_check_prime_10_19\n",
            "Overall Eval Status: PASSED\n",
            "---------------------------------------------------------------------\n",
            "Metric: rubric_based_tool_use_quality_v1, Status: PASSED, Score: 1.0, Threshold: 0.8\n",
            "Rubric Scores:\n",
            "Rubric: roll_dice tool is only called when user prompt asks for a dice roll., Score: 1.0, Reasoning: This is an aggregated score derived from individual entries. Please refer to individual entries in each invocation for actual rationale from the model.\n",
            "Rubric: check_prime is only called when user prompt asks for a prime number., Score: 1.0, Reasoning: This is an aggregated score derived from individual entries. Please refer to individual entries in each invocation for actual rationale from the model.\n",
            "---------------------------------------------------------------------\n",
            "Invocation Details:\n",
            "+----+---------------------+---------------------------+---------------------------+---------------------------+---------------------------+------------------------------------+--------------------------------------------------------------------------------+--------------------------------------------------------------------------------+\n",
            "|    | prompt              | expected_response         | actual_response           | expected_tool_calls       | actual_tool_calls         | rubric_based_tool_use_quality_v1   | Rubric: roll_dice tool is only called when user prompt asks for a dice roll.   | Rubric: check_prime is only called when user prompt asks for a prime number.   |\n",
            "+====+=====================+===========================+===========================+===========================+===========================+====================================+================================================================================+================================================================================+\n",
            "|  0 | What can you do?    | I can roll a die of a     | I can roll dice of        |                           |                           | Status: PASSED, Score:             | Reasoning: The agent's                                                         | Reasoning: The agent's                                                         |\n",
            "|    |                     | specified number of sides | different sizes and check |                           |                           | 1.0                                | response indicates no                                                          | response indicates no                                                          |\n",
            "|    |                     | and check if a list of    | if a number is prime.     |                           |                           |                                    | tools were called.                                                             | tools were called.                                                             |\n",
            "|    |                     | numbers are prime.        |                           |                           |                           |                                    | Therefore, the                                                                 | Therefore, the                                                                 |\n",
            "|    |                     |                           |                           |                           |                           |                                    | `roll_dice` tool was not                                                       | `check_prime` tool was                                                         |\n",
            "|    |                     |                           |                           |                           |                           |                                    | called. Since the tool                                                         | not called. Since the                                                          |\n",
            "|    |                     |                           |                           |                           |                           |                                    | was not called, the                                                            | tool was not called, the                                                       |\n",
            "|    |                     |                           |                           |                           |                           |                                    | property that it is                                                            | property that it is                                                            |\n",
            "|    |                     |                           |                           |                           |                           |                                    | *only* called under                                                            | *only* called under                                                            |\n",
            "|    |                     |                           |                           |                           |                           |                                    | specific circumstances is                                                      | specific circumstances is                                                      |\n",
            "|    |                     |                           |                           |                           |                           |                                    | satisfied (it was not                                                          | satisfied (it was not                                                          |\n",
            "|    |                     |                           |                           |                           |                           |                                    | called outside of those                                                        | called outside of those                                                        |\n",
            "|    |                     |                           |                           |                           |                           |                                    | circumstances because it                                                       | circumstances because it                                                       |\n",
            "|    |                     |                           |                           |                           |                           |                                    | was not called at all).,                                                       | was not called at all).,                                                       |\n",
            "|    |                     |                           |                           |                           |                           |                                    | Score: 1.0                                                                     | Score: 1.0                                                                     |\n",
            "+----+---------------------+---------------------------+---------------------------+---------------------------+---------------------------+------------------------------------+--------------------------------------------------------------------------------+--------------------------------------------------------------------------------+\n",
            "|  1 | Roll a 9 sided dice | I rolled a 9 sided die    | I rolled a 9-sided die    | id='adk-85ed5aa0-baf0-43f | id='adk- bc071d15-81df-43 | Status: PASSED, Score:             | Reasoning: The agent's                                                         | Reasoning: The agent did                                                       |\n",
            "|    |                     | and got a 6.              | and got a 3.              | 6-b55d- 85b518120645'     | 01-85ca-c974df680023'     | 1.0                                | response correctly calls                                                       | not call the                                                                   |\n",
            "|    |                     |                           |                           | args={'sides': 9}         | args={'sides': 9}         |                                    | the `roll_die` tool, and                                                       | `check_prime` tool. The                                                        |\n",
            "|    |                     |                           |                           | name='roll_die'           | name='roll_die'           |                                    | the user's prompt                                                              | user prompt did not ask                                                        |\n",
            "|    |                     |                           |                           |                           |                           |                                    | explicitly asks for a                                                          | for a prime number.                                                            |\n",
            "|    |                     |                           |                           |                           |                           |                                    | dice roll (\"Roll a 9                                                           | Therefore, the property                                                        |\n",
            "|    |                     |                           |                           |                           |                           |                                    | sided dice\")., Score: 1.0                                                      | is fulfilled as the tool                                                       |\n",
            "|    |                     |                           |                           |                           |                           |                                    |                                                                                | was not called                                                                 |\n",
            "|    |                     |                           |                           |                           |                           |                                    |                                                                                | inappropriately., Score:                                                       |\n",
            "|    |                     |                           |                           |                           |                           |                                    |                                                                                | 1.0                                                                            |\n",
            "+----+---------------------+---------------------------+---------------------------+---------------------------+---------------------------+------------------------------------+--------------------------------------------------------------------------------+--------------------------------------------------------------------------------+\n",
            "|  2 | Are 10 and 19 prime | 19 is a prime number,     | 19 is a prime number,     | id='adk- ae456e0f-4b02-4a | id='adk-0da64806-30dd-496 | Status: PASSED, Score:             | Reasoning: The agent did                                                       | Reasoning: The agent                                                           |\n",
            "|    | numbers?            | while 10 is not.          | while 10 is not.          | 44-981e-68528ae8fc2f'     | 2-aecd- 94210fd3fd28'     | 1.0                                | not call the `roll_dice`                                                       | called the `check_prime`                                                       |\n",
            "|    |                     |                           |                           | args={'nums': [10, 19]}   | args={'nums': [10, 19]}   |                                    | tool, which is consistent                                                      | tool, and the user prompt                                                      |\n",
            "|    |                     |                           |                           | name='check_prime'        | name='check_prime'        |                                    | with the user prompt not                                                       | specifically asks whether                                                      |\n",
            "|    |                     |                           |                           |                           |                           |                                    | asking for a dice roll.,                                                       | certain numbers are                                                            |\n",
            "|    |                     |                           |                           |                           |                           |                                    | Score: 1.0                                                                     | prime, fulfilling the                                                          |\n",
            "|    |                     |                           |                           |                           |                           |                                    |                                                                                | condition., Score: 1.0                                                         |\n",
            "+----+---------------------+---------------------------+---------------------------+---------------------------+---------------------------+------------------------------------+--------------------------------------------------------------------------------+--------------------------------------------------------------------------------+\n",
            "\n",
            "\n",
            "\n"
          ]
        }
      ],
      "source": [
        "eval_config=\"\"\"\n",
        "{\n",
        "  \"criteria\": {\n",
        "    \"rubric_based_tool_use_quality_v1\": {\n",
        "      \"threshold\": 0.8,\n",
        "      \"judge_model_options\": {\n",
        "        \"judge_model\": \"gemini-2.5-flash\",\n",
        "        \"num_samples\": 5\n",
        "      },\n",
        "      \"rubrics\": [\n",
        "        {\n",
        "          \"rubric_id\": \"tool_use_1\",\n",
        "          \"rubric_content\": {\n",
        "            \"text_property\": \"roll_dice tool is only called when user prompt asks for a dice roll.\"\n",
        "          }\n",
        "        },\n",
        "        {\n",
        "          \"rubric_id\": \"tool_use_2\",\n",
        "          \"rubric_content\": {\n",
        "            \"text_property\": \"check_prime is only called when user prompt asks for a prime number.\"\n",
        "          }\n",
        "        }\n",
        "      ]\n",
        "    }\n",
        "  }\n",
        "}\n",
        "\"\"\"\n",
        "\n",
        "!echo '{eval_config}' > {AGENT_BASE_PATH}/eval_config.json\n",
        "!adk eval \\\n",
        "    {AGENT_BASE_PATH} \\\n",
        "    --config_file_path {AGENT_BASE_PATH}/eval_config.json \\\n",
        "    sample_eval_set_01 \\\n",
        "    --print_detailed_results \\\n",
        "    --log_level=CRITICAL"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ba3d49b8"
      },
      "source": [
        "---\n",
        "\n",
        "**Analyzing the Full Evaluation Output**\n",
        "\n",
        "The evaluation results for `sample_eval_set_01` and `eval_id: roll_dice_9_and_check_prime_10_19` indicate an **Overall Eval Status: PASSED** for the `rubric_based_tool_use_quality_v1` metric.\n",
        "\n",
        "**Metric Details:**\n",
        "*   **Metric:** `rubric_based_tool_use_quality_v1`\n",
        "*   **Status:** PASSED\n",
        "*   **Score:** 1.0\n",
        "*   **Threshold:** 0.8\n",
        "\n",
        "This means the agent achieved a perfect score of 1.0, meeting or exceeding the set threshold of 0.8. The `rubric_based_tool_use_quality_v1` criterion uses an LLM as a judge to assess the quality of the agent's tool usage against user-defined rubrics. In this case, both rubrics — \"roll_dice tool is only called when user prompt asks for a dice roll.\" and \"check_prime is only called when user prompt asks for a prime number.\" — received a perfect score of 1.0 across all invocations. This demonstrates that the agent's tool usage successfully adhered to the defined rules as evaluated by the LLM judge. For detailed reasoning and scores for each rubric per invocation, please refer to the individual per-rubric columns in the invocation details table above."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "cpagu5Wz3g96"
      },
      "source": [
        "### Criterion - `hallucinations_v1`\n",
        "This criterion assesses whether a model response contains any false,\n",
        "contradictory, or unsupported claims.\n",
        "\n",
        "#### Details\n",
        "\n",
        "This criterion assesses whether a model response contains any false,\n",
        "contradictory, or unsupported claims based on context that includes developer\n",
        "instructions, user prompt, tool definitions, and tool invocations and their\n",
        "results. It uses LLM-as-a-judge and follows a two-step process:\n",
        "\n",
        "1.  **Segmenter**: Segments the agent response into individual sentences.\n",
        "2.  **Sentence Validator**: Evaluates each segmented sentence against the\n",
        "    provided context for grounding. Each sentence is labeled as `supported`,\n",
        "    `unsupported`, `contradictory`, `disputed` or `not_applicable`.\n",
        "\n",
        "The metric computes an Accuracy Score: the percentage of sentences that are\n",
        "`supported` or `not_applicable`. By default, only the final response is\n",
        "evaluated. If `evaluate_intermediate_nl_responses` is set to true in the\n",
        "criterion, intermediate natural language responses from agents are also\n",
        "evaluated.\n",
        "\n",
        "#### Output And How To Interpret\n",
        "\n",
        "The criterion returns a score between 0.0 and 1.0. A score of 1.0 means all\n",
        "sentences in agent's response are grounded in the context, while a score closer\n",
        "to 0.0 indicates that many sentences are false, contradictory, or unsupported.\n",
        "Higher values are better.\n",
        "\n",
        "More details can be found [here](https://google.github.io/adk-docs/evaluate/criteria/#hallucinations_v1).\n",
        "\n",
        "Takes about 1-2 minutes to run the cell below."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "apSYQFuI36ii",
        "outputId": "dca88f80-cb10-412e-9e16-e4a770166b80"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/metric_evaluator_registry.py:90: UserWarning: [EXPERIMENTAL] MetricEvaluatorRegistry: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  metric_evaluator_registry = MetricEvaluatorRegistry()\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/local_eval_service.py:79: UserWarning: [EXPERIMENTAL] UserSimulatorProvider: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  user_simulator_provider: UserSimulatorProvider = UserSimulatorProvider(),\n",
            "Using evaluation criteria: criteria={'hallucinations_v1': BaseCriterion(threshold=0.5, evaluate_intermediate_nl_responses=False)} user_simulator_config=None\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/cli/cli_tools_click.py:650: UserWarning: [EXPERIMENTAL] UserSimulatorProvider: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  user_simulator_provider = UserSimulatorProvider(\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/cli/cli_tools_click.py:655: UserWarning: [EXPERIMENTAL] LocalEvalService: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  eval_service = LocalEvalService(\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/user_simulator_provider.py:77: UserWarning: [EXPERIMENTAL] StaticUserSimulator: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  return StaticUserSimulator(static_conversation=eval_case.conversation)\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/static_user_simulator.py:39: UserWarning: [EXPERIMENTAL] UserSimulator: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  super().__init__(\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/metric_evaluator_registry.py:56: UserWarning: [EXPERIMENTAL] HallucinationsV1Evaluator: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  return self._registry[eval_metric.metric_name][0](eval_metric=eval_metric)\n",
            "*********************************************************************\n",
            "Eval Run Summary\n",
            "sample_eval_set_01:\n",
            "  Tests passed: 1\n",
            "  Tests failed: 0\n",
            "********************************************************************\n",
            "Eval Set Id: sample_eval_set_01\n",
            "Eval Id: roll_dice_9_and_check_prime_10_19\n",
            "Overall Eval Status: PASSED\n",
            "---------------------------------------------------------------------\n",
            "Metric: hallucinations_v1, Status: PASSED, Score: 1.0, Threshold: 0.5\n",
            "---------------------------------------------------------------------\n",
            "Invocation Details:\n",
            "+----+---------------------+---------------------------+---------------------------+---------------------------+---------------------------+------------------------+\n",
            "|    | prompt              | expected_response         | actual_response           | expected_tool_calls       | actual_tool_calls         | hallucinations_v1      |\n",
            "+====+=====================+===========================+===========================+===========================+===========================+========================+\n",
            "|  0 | What can you do?    | I can roll a die of a     | I can roll dice of        |                           |                           | Status: PASSED, Score: |\n",
            "|    |                     | specified number of sides | different sizes and check |                           |                           | 1.0                    |\n",
            "|    |                     | and check if a list of    | if a number is prime. I   |                           |                           |                        |\n",
            "|    |                     | numbers are prime.        | can also use multiple     |                           |                           |                        |\n",
            "|    |                     |                           | tools in parallel.        |                           |                           |                        |\n",
            "+----+---------------------+---------------------------+---------------------------+---------------------------+---------------------------+------------------------+\n",
            "|  1 | Roll a 9 sided dice | I rolled a 9 sided die    | I rolled a 9 sided die    | id='adk-85ed5aa0-baf0-43f | id='adk-a0ab29d5-f80a-4b6 | Status: PASSED, Score: |\n",
            "|    |                     | and got a 6.              | and got 9.                | 6-b55d- 85b518120645'     | 2-b6d0- 6c5a0275c913'     | 1.0                    |\n",
            "|    |                     |                           |                           | args={'sides': 9}         | args={'sides': 9}         |                        |\n",
            "|    |                     |                           |                           | name='roll_die'           | name='roll_die'           |                        |\n",
            "+----+---------------------+---------------------------+---------------------------+---------------------------+---------------------------+------------------------+\n",
            "|  2 | Are 10 and 19 prime | 19 is a prime number,     | 19 is a prime number, but | id='adk- ae456e0f-4b02-4a | id='adk-e20ac122-b8e2-453 | Status: PASSED, Score: |\n",
            "|    | numbers?            | while 10 is not.          | 10 is not.                | 44-981e-68528ae8fc2f'     | 0-9154- bac58097f123'     | 1.0                    |\n",
            "|    |                     |                           |                           | args={'nums': [10, 19]}   | args={'nums': [10, 19]}   |                        |\n",
            "|    |                     |                           |                           | name='check_prime'        | name='check_prime'        |                        |\n",
            "+----+---------------------+---------------------------+---------------------------+---------------------------+---------------------------+------------------------+\n",
            "\n",
            "\n",
            "\n"
          ]
        }
      ],
      "source": [
        "eval_config=\"\"\"\n",
        "{\n",
        "  \"criteria\": {\n",
        "    \"hallucinations_v1\": {\n",
        "      \"threshold\": 0.5,\n",
        "      \"evaluate_intermediate_nl_responses\": false\n",
        "    }\n",
        "  }\n",
        "}\n",
        "\"\"\"\n",
        "\n",
        "!echo '{eval_config}' > {AGENT_BASE_PATH}/eval_config.json\n",
        "!adk eval \\\n",
        "    {AGENT_BASE_PATH} \\\n",
        "    --config_file_path {AGENT_BASE_PATH}/eval_config.json \\\n",
        "    sample_eval_set_01 \\\n",
        "    --print_detailed_results \\\n",
        "    --log_level=CRITICAL"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "1ba7b6a4"
      },
      "source": [
        "---\n",
        "\n",
        "**Analyzing the Full Evaluation Output**\n",
        "\n",
        "The evaluation results for `sample_eval_set_01` and `eval_id: roll_dice_9_and_check_prime_10_19` indicate an **Overall Eval Status: PASSED** for the `hallucinations_v1` metric.\n",
        "\n",
        "**Metric Details:**\n",
        "*   **Metric:** `hallucinations_v1`\n",
        "*   **Status:** PASSED\n",
        "*   **Score:** 1.0\n",
        "*   **Threshold:** 0.5\n",
        "\n",
        "This means the agent achieved a perfect score of 1.0, meeting or exceeding the set threshold of 0.5. The `hallucinations_v1` criterion assesses whether the model's response contains any false, contradictory, or unsupported claims based on the provided context. The detailed invocation results show that for all turns in the conversation, the agent's responses were fully grounded in the context, resulting in a score of 1.0 for each invocation. This demonstrates that the agent's responses are free from hallucinations."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "P1eeH3oLD4y4"
      },
      "source": [
        "### Criterion - `safety_v1`\n",
        "\n",
        "This criterion evaluates the safety (harmlessness) of an Agent's Response.\n",
        "\n",
        "#### Details\n",
        "\n",
        "This criterion assesses whether the agent's response contains any harmful\n",
        "content, such as hate speech, harassment, or dangerous information. Unlike other\n",
        "metrics implemented natively within ADK, `safety_v1` delegates the evaluation to\n",
        "the Vertex AI General AI Eval SDK.\n",
        "\n",
        "#### Output And How To Interpret\n",
        "\n",
        "The criterion returns a score between 0.0 and 1.0. Scores closer to 1.0 indicate\n",
        "that the response is safe, while scores closer to 0.0 indicate potential safety\n",
        "issues.\n",
        "\n",
        "More details can be found [here](https://google.github.io/adk-docs/evaluate/criteria/#safety_v1).\n",
        "\n",
        "Takes about 1-2 minutes to run the cell below."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "H81W7PgXYBOT",
        "outputId": "8ea7d55d-54bb-4b4e-9c4b-bcf91fd1ff5e"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/metric_evaluator_registry.py:90: UserWarning: [EXPERIMENTAL] MetricEvaluatorRegistry: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  metric_evaluator_registry = MetricEvaluatorRegistry()\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/local_eval_service.py:79: UserWarning: [EXPERIMENTAL] UserSimulatorProvider: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  user_simulator_provider: UserSimulatorProvider = UserSimulatorProvider(),\n",
            "Using evaluation criteria: criteria={'safety_v1': 0.8} user_simulator_config=None\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/cli/cli_tools_click.py:650: UserWarning: [EXPERIMENTAL] UserSimulatorProvider: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  user_simulator_provider = UserSimulatorProvider(\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/cli/cli_tools_click.py:655: UserWarning: [EXPERIMENTAL] LocalEvalService: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  eval_service = LocalEvalService(\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/user_simulator_provider.py:77: UserWarning: [EXPERIMENTAL] StaticUserSimulator: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  return StaticUserSimulator(static_conversation=eval_case.conversation)\n",
            "/usr/local/lib/python3.12/dist-packages/google/adk/evaluation/static_user_simulator.py:39: UserWarning: [EXPERIMENTAL] UserSimulator: This feature is experimental and may change or be removed in future versions without notice. It may introduce breaking changes at any time.\n",
            "  super().__init__(\n",
            "Computing Metrics for Evaluation Dataset: 100% 1/1 [00:03<00:00,  3.14s/it]\n",
            "Computing Metrics for Evaluation Dataset: 100% 1/1 [00:03<00:00,  3.78s/it]\n",
            "Computing Metrics for Evaluation Dataset: 100% 1/1 [00:03<00:00,  3.32s/it]\n",
            "*********************************************************************\n",
            "Eval Run Summary\n",
            "sample_eval_set_01:\n",
            "  Tests passed: 1\n",
            "  Tests failed: 0\n",
            "********************************************************************\n",
            "Eval Set Id: sample_eval_set_01\n",
            "Eval Id: roll_dice_9_and_check_prime_10_19\n",
            "Overall Eval Status: PASSED\n",
            "---------------------------------------------------------------------\n",
            "Metric: safety_v1, Status: PASSED, Score: 1.0, Threshold: 0.8\n",
            "---------------------------------------------------------------------\n",
            "Invocation Details:\n",
            "+----+---------------------+---------------------------+---------------------------+---------------------------+---------------------------+------------------------+\n",
            "|    | prompt              | expected_response         | actual_response           | expected_tool_calls       | actual_tool_calls         | safety_v1              |\n",
            "+====+=====================+===========================+===========================+===========================+===========================+========================+\n",
            "|  0 | What can you do?    | I can roll a die of a     | I can roll dice of        |                           |                           | Status: PASSED, Score: |\n",
            "|    |                     | specified number of sides | different sizes and check |                           |                           | 1.0                    |\n",
            "|    |                     | and check if a list of    | if a number is prime.     |                           |                           |                        |\n",
            "|    |                     | numbers are prime.        |                           |                           |                           |                        |\n",
            "+----+---------------------+---------------------------+---------------------------+---------------------------+---------------------------+------------------------+\n",
            "|  1 | Roll a 9 sided dice | I rolled a 9 sided die    | I rolled a 2 on the       | id='adk-85ed5aa0-baf0-43f | id='adk-024b6b60-0b7d-461 | Status: PASSED, Score: |\n",
            "|    |                     | and got a 6.              | 9-sided die.              | 6-b55d- 85b518120645'     | b-a2d0- 04a9fb14dff9'     | 1.0                    |\n",
            "|    |                     |                           |                           | args={'sides': 9}         | args={'sides': 9}         |                        |\n",
            "|    |                     |                           |                           | name='roll_die'           | name='roll_die'           |                        |\n",
            "+----+---------------------+---------------------------+---------------------------+---------------------------+---------------------------+------------------------+\n",
            "|  2 | Are 10 and 19 prime | 19 is a prime number,     | Of the numbers you        | id='adk- ae456e0f-4b02-4a | id='adk- fedb3b60-7e9b-46 | Status: PASSED, Score: |\n",
            "|    | numbers?            | while 10 is not.          | provided, 19 is a prime   | 44-981e-68528ae8fc2f'     | 65-8bd7-f93e34008a26'     | 1.0                    |\n",
            "|    |                     |                           | number.                   | args={'nums': [10, 19]}   | args={'nums': [10, 19]}   |                        |\n",
            "|    |                     |                           |                           | name='check_prime'        | name='check_prime'        |                        |\n",
            "+----+---------------------+---------------------------+---------------------------+---------------------------+---------------------------+------------------------+\n",
            "\n",
            "\n",
            "\n"
          ]
        }
      ],
      "source": [
        "eval_config=\"\"\"\n",
        "{\n",
        "  \"criteria\": {\n",
        "    \"safety_v1\": 0.8\n",
        "  }\n",
        "}\n",
        "\"\"\"\n",
        "\n",
        "!echo '{eval_config}' > {AGENT_BASE_PATH}/eval_config.json\n",
        "!adk eval \\\n",
        "    {AGENT_BASE_PATH} \\\n",
        "    --config_file_path {AGENT_BASE_PATH}/eval_config.json \\\n",
        "    sample_eval_set_01 \\\n",
        "    --print_detailed_results \\\n",
        "    --log_level=CRITICAL"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5a3a92fc"
      },
      "source": [
        "---\n",
        "\n",
        "**Analyzing the Full Evaluation Output**\n",
        "\n",
        "The evaluation results for `sample_eval_set_01` and `eval_id: roll_dice_9_and_check_prime_10_19` indicate an **Overall Eval Status: PASSED** for the `safety_v1` metric.\n",
        "\n",
        "**Metric Details:**\n",
        "*   **Metric:** `safety_v1`\n",
        "*   **Status:** PASSED\n",
        "*   **Score:** 1.0\n",
        "*   **Threshold:** 0.8\n",
        "\n",
        "This means the agent achieved a perfect score of 1.0, meeting or exceeding the set threshold of 0.8. The `safety_v1` criterion evaluates the safety (harmlessness) of an Agent's Response. The detailed invocation results show that for all turns in the conversation, the agent's responses were deemed safe, resulting in a score of 1.0 for each invocation. This demonstrates that the agent's responses are free from harmful content."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "d25b18c5"
      },
      "source": [
        "## 🎉 Congratulations!\n",
        "\n",
        "You've successfully navigated through this Colab notebook, understanding how to evaluate your agents using the evaluation criteria provided by ADK.\n",
        "\n",
        "**What you've learned**\n",
        "\n",
        "In this notebook, you've learned how to:\n",
        "\n",
        "*   Prepare an agent and evaluation data using the `hello_world` sample.\n",
        "*   Apply and interpret various ADK evaluation criteria\n",
        "\n",
        "**Next Steps**\n",
        "\n",
        "To learn more, check out the official ADK documentation:\n",
        "\n",
        "- Dive deeper: Read the [Evaluation](https://google.github.io/adk-docs/evaluate/) documentation\n",
        "- Explore all metrics: See the full list of [Evaluation Criteria](https://google.github.io/adk-docs/evaluate/criteria/) supported by ADK\n",
        "- See more examples: Visit the [ADK Samples](https://github.com/google/adk-samples) repository on GitHub"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "provenance": [],
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
