{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "f705f4be70e9"
      },
      "outputs": [],
      "source": [
        "# Copyright 2025 Google LLC\n",
        "#\n",
        "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "#     https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ouFs9L8c5cx5"
      },
      "source": [
        "## End-to-End Evaluation of Multi-Agent Systems on Vertex AI with Cloud Run Deployment"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "d6581a815af6"
      },
      "source": [
        "<table align=\"left\">\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://colab.research.google.com/github/a2aproject/a2a-samples/blob/main/notebooks/multi_agents_eval_with_cloud_run_deployment.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://www.gstatic.com/pantheon/images/bigquery/welcome_page/colab-logo.svg\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2Fa2aproject%2Fa2a-samples%2Fmain%2Fnotebooks%2Fmulti_agents_eval_with_cloud_run_deployment.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/a2aproject/a2a-samples/main/notebooks/multi_agents_eval_with_cloud_run_deployment.ipynb\">\n",
        "      <img src=\"https://www.gstatic.com/images/branding/gcpiconscolors/vertexai/v1/32px.svg\" alt=\"Vertex AI logo\"><br> Open in Vertex AI Workbench\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/bigquery/import?url=https://github.com/a2aproject/a2a-samples/blob/main/notebooks/multi_agents_eval_with_cloud_run_deployment.ipynb\">\n",
        "      <img src=\"https://www.gstatic.com/images/branding/gcpiconscolors/bigquery/v1/32px.svg\" alt=\"BigQuery Studio logo\"><br> Open in BigQuery Studio\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://github.com/a2aproject/a2a-samples/blob/main/notebooks/multi_agents_eval_with_cloud_run_deployment.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://www.svgrepo.com/download/217753/github.svg\" alt=\"GitHub logo\"><br> View on GitHub\n",
        "    </a>\n",
        "  </td>\n",
        "</table>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "l1NpzAqE5cx6"
      },
      "source": [
        "This notebook demonstrates how to quickly deploy A2A Agents into Cloud run and  evaluate **A2A+ADK Multi-Agents** using Vertex AI Evaluation services.\n",
        "\n",
        "**Summary**:\n",
        "1. **Deploying A2A Agents to Cloud Run**: Learn how to containerize and deploy your Python-based A2A agents to Cloud Run, enabling them to communicate with each other through a secure and scalable architecture.\n",
        "2. **Orchestration with a Hosting Agent**: See how to create a central \"hosting\" agent that orchestrates the interactions between the deployed A2A agents, routing user requests to the appropriate specialized agent.\n",
        "3. Leveraging Vertex AI for Evaluation: Discover how to use Vertex AI's evaluation services to rigorously assess the performance of your multi-agent system. We'll cover how to:\n",
        "  - Define evaluation datasets with prompts and expected tool calls (trajectories).\n",
        "  - Run evaluation tasks to measure trajectory-based metrics like *trajectory_exact_match, trajectory_precision*, and *trajectory_recall*.\n",
        "  - Evaluate the final generated responses for coherence and safety.\n",
        "4. **Custom Evaluation Metrics**: Learn how to create custom metrics to evaluate specific aspects of your agent's behavior, such as whether the final response logically follows from the sequence of tool calls.\n",
        "\n",
        "\n",
        "\n",
        "**Prerequisites:**\n",
        "1.  **Google Cloud Project:** You need a Google Cloud Project with the Vertex AI API enabled.\n",
        "2.  **Authentication:** You need to be authenticated to Google Cloud. In a Colab environment, this is usually handled by running `from google.colab import auth` and `auth.authenticate_user()`.\n",
        "3.  **Agent Logic:** The Airbnb A2A Agent and Weather A2A Agent are imported from github into this colab and deployed to Cloud run directly. The logic for the Hosting/Routing Agent (e.g., a `HostingAgentExecutor` class) are defined or importable within this notebook. This executor should have a method like `async def execute(self, message_payload: a2a.types.MessagePayload) -> a2a.types.Message:`."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5HABRB6L5cx6"
      },
      "source": [
        "## Preparation\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "G3zYsOfjiwk9"
      },
      "source": [
        "### Setup and Installs"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "2o7_ic4K5cx6"
      },
      "outputs": [],
      "source": [
        "%pip install google-cloud-aiplatform httpx \"a2a-sdk==0.3.0\" --quiet\n",
        "%pip install --upgrade --quiet  'google-adk'\n",
        "%pip install \"langchain-google-genai==2.1.5\" --quiet\n",
        "%pip install \"langchain-mcp-adapters==0.1.0\" --quiet\n",
        "%pip install \"langchain-google-vertexai==2.0.24\" --quiet\n",
        "%pip install \"langgraph==0.4.5\" --quiet"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "qxQocCV05cx6"
      },
      "outputs": [],
      "source": [
        "import asyncio\n",
        "import json\n",
        "import logging\n",
        "import os\n",
        "import random\n",
        "import string\n",
        "import subprocess\n",
        "import uuid\n",
        "\n",
        "from collections.abc import Callable\n",
        "from typing import Any, TypeAlias\n",
        "\n",
        "import httpx\n",
        "import pandas as pd\n",
        "import plotly.graph_objects as go\n",
        "\n",
        "from IPython.display import HTML, Markdown, display\n",
        "from a2a.client import A2ACardResolver, A2AClient\n",
        "from a2a.types import (\n",
        "    AgentCard,\n",
        "    MessageSendParams,\n",
        "    Part,\n",
        "    SendMessageRequest,\n",
        "    SendMessageResponse,\n",
        "    SendMessageSuccessResponse,\n",
        "    Task,\n",
        "    TaskArtifactUpdateEvent,\n",
        "    TaskStatusUpdateEvent,\n",
        ")\n",
        "from a2a.utils.constants import AGENT_CARD_WELL_KNOWN_PATH\n",
        "from dotenv import load_dotenv\n",
        "from google.adk import Agent\n",
        "from google.adk.agents.callback_context import CallbackContext\n",
        "from google.adk.agents.readonly_context import ReadonlyContext\n",
        "\n",
        "# Build agent with adk\n",
        "from google.adk.events import Event\n",
        "from google.adk.runners import Runner\n",
        "from google.adk.sessions import InMemorySessionService\n",
        "from google.adk.tools.tool_context import ToolContext\n",
        "from google.cloud import aiplatform\n",
        "from google.colab import auth\n",
        "\n",
        "# Evaluate agent\n",
        "from google.genai import types\n",
        "from vertexai.preview.evaluation import EvalTask\n",
        "from vertexai.preview.evaluation.metrics import (\n",
        "    PointwiseMetric,\n",
        "    PointwiseMetricPromptTemplate,\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ogSu_6Pe5cx7"
      },
      "source": [
        "### 2. Configuration"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "_c3_Me9D5cx7"
      },
      "outputs": [],
      "source": [
        "\"\"\"Notebook for evaluating multi-agent systems on Vertex AI with Cloud Run.\"\"\"\n",
        "# --- Google Cloud Configuration ---\n",
        "# You might need to authenticate gcloud first if you haven't already\n",
        "\n",
        "PROJECT_ID = ''  # @param {type:\"string\"}\n",
        "PROJECT_NUM = ''  # @param {type:\"string\"}\n",
        "LOCATION = 'us-central1'  # @param {type:\"string\"}\n",
        "\n",
        "# --- Authentication (for Colab) ---\n",
        "if not PROJECT_ID:\n",
        "    raise ValueError('Please set your PROJECT_ID.')\n",
        "\n",
        "try:\n",
        "    auth.authenticate_user()\n",
        "    print('Colab user authenticated.')\n",
        "except Exception as e:\n",
        "    print(\n",
        "        f'Not in a Colab environment or auth failed: {e}. Assuming local gcloud auth.'\n",
        "    )\n",
        "\n",
        "aiplatform.init(project=PROJECT_ID, location=LOCATION)\n",
        "os.environ['GOOGLE_CLOUD_PROJECT'] = PROJECT_ID\n",
        "os.environ['GOOGLE_CLOUD_LOCATION'] = LOCATION\n",
        "os.environ['GOOGLE_GENAI_USE_VERTEXAI'] = 'True'"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "obMo8ht8-b0J"
      },
      "outputs": [],
      "source": [
        "EXPERIMENT_NAME = '[Your experiement name]'  # @param {type:\"string\"}\n",
        "BUCKET_NAME = '[Your bucket name]'  # @param {type: \"string\"}\n",
        "BUCKET_URI = f'gs://{BUCKET_NAME}'"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "IiDcHqDqid6Q"
      },
      "source": [
        "## Deploy A2A Agents to Cloud Run"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "hYCieF0IxIjC"
      },
      "outputs": [],
      "source": [
        "# @title github pull to get A2A samples\n",
        "# Download the a2a-samples from github so we deploy the A2A airbnb and weather agent samples\n",
        "!git clone https://github.com/a2aproject/a2a-samples.git --depth 1 -b main"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "VrX5UlCHz-vY"
      },
      "outputs": [],
      "source": [
        "# Basic logging setup (helpful for seeing what the handler does)\n",
        "logging.basicConfig(level=logging.INFO)\n",
        "logger = logging.getLogger(__name__)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "MEmXjfSQbNag"
      },
      "source": [
        "### Build Airbnb Agent and Deploy to Cloud Run"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "n1DYCYG82rnV"
      },
      "outputs": [],
      "source": [
        "#@title Prepare the docker files\n",
        "%%writefile a2a-samples/samples/python/Dockerfile\n",
        "FROM python:3.13-slim\n",
        "COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/\n",
        "\n",
        "# Download the latest LTS version of Node.js\n",
        "# This is required for the airbnb_agent to work\n",
        "RUN apt-get update && apt-get install -y --no-install-recommends curl && \\\n",
        "    curl -fsSL https://deb.nodesource.com/setup_lts.x | bash - && \\\n",
        "    apt-get install -y --no-install-recommends nodejs && \\\n",
        "    apt-get clean && rm -rf /var/lib/apt/lists/*\n",
        "\n",
        "EXPOSE 10002\n",
        "WORKDIR /app\n",
        "\n",
        "COPY . /app\n",
        "\n",
        "RUN uv sync\n",
        "\n",
        "WORKDIR /app/agents/airbnb_planner_multiagent/airbnb_agent/\n",
        "\n",
        "ENTRYPOINT [\"uv\", \"run\", \".\", \"--host\", \"0.0.0.0\", \"--port\", \"10002\"]"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "o6PCB40XadRk"
      },
      "outputs": [],
      "source": [
        "#@title Build the docker image for airbnb A2A Agent\n",
        "# Replace [PROJECT_ID] with your Google Cloud Project ID\n",
        "# Replace [IMAGE_NAME] with the desired name for your Docker image.\n",
        "# Replace [TAG] with a tag for your image (e.g., latest)\n",
        "# Replace [PATH_TO_YOUR_SOURCE_CODE] with the path to the source directory.\n",
        "# If your source code is in the current directory, you can use '.'\n",
        "\n",
        "IMAGE_NAME = \"airbnb-a2a-sample-agent\" # @param {type:\"string\"}\n",
        "# LOCATION = \"us-central1\" # @param {type:\"string\"}\n",
        "TAG = \"latest\" # @param {type:\"string\"}\n",
        "SOURCE_PATH = \"a2a-samples/samples/python/\" # @param {type:\"string\"}\n",
        "# Using Google Container Registry (GCR)\n",
        "IMAGE_URL = f\"gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{TAG}\"\n",
        "\n",
        "print(f\"Building and pushing image to: {IMAGE_URL}\")\n",
        "\n",
        "!gcloud builds submit {SOURCE_PATH} \\\n",
        "  --project={PROJECT_ID} \\\n",
        "  --tag={IMAGE_URL}"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "wAhLVdJCyflK"
      },
      "outputs": [],
      "source": [
        "# Replace [SERVICE-NAME] with the desired name for your A2A Agent.\n",
        "# Replace [REGION] with the Google Cloud region where you want to deploy.\n",
        "# Replace [IMAGE_URL] with the full path to your container image.\n",
        "\n",
        "# Replace with your actual service name, region, and image URL\n",
        "SERVICE_NAME = 'airbnb-a2a-sample-agent'  # @param {type:\"string\"}\n",
        "# Correctly format the IMAGE_URL string\n",
        "IMAGE_URL = f'gcr.io/{PROJECT_ID}/{SERVICE_NAME}:latest'\n",
        "AIRBNB_APP_URL = f'https://{SERVICE_NAME}-{PROJECT_NUM}.{LOCATION}.run.app'"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "71YlW3emat33"
      },
      "outputs": [],
      "source": [
        "#@title Run the airbnb A2A Agent in Cloud Run\n",
        "!gcloud run deploy {SERVICE_NAME} \\\n",
        "    --verbosity=debug \\\n",
        "    --memory=1.5G \\\n",
        "    --image={IMAGE_URL} \\\n",
        "    --region={LOCATION} \\\n",
        "    --port=10002 \\\n",
        "    --project={PROJECT_ID} \\\n",
        "    --no-allow-unauthenticated \\\n",
        "    --set-env-vars=GOOGLE_GENAI_USE_VERTEXAI=TRUE,GOOGLE_GENAI_MODEL=\"gemini-2.5-flash\",PROJECT_ID={PROJECT_ID},LOCATION={LOCATION},APP_URL={AIRBNB_APP_URL}"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "P14LOENGbSh2"
      },
      "source": [
        "### Build Weather Agent and Deploy to Cloud Run"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "oz13DQrtbLTZ"
      },
      "outputs": [],
      "source": [
        "#@title Prepare the docker file\n",
        "%%writefile a2a-samples/samples/python/Dockerfile\n",
        "FROM python:3.13-slim\n",
        "COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/\n",
        "\n",
        "# Add Node.js and npm\n",
        "# Required for airbnb_agent to work\n",
        "# RUN apt-get update && apt-get install -y nodejs npm\n",
        "\n",
        "EXPOSE 10001\n",
        "WORKDIR /app\n",
        "\n",
        "COPY . /app\n",
        "\n",
        "RUN uv sync\n",
        "\n",
        "WORKDIR /app/agents/airbnb_planner_multiagent/weather_agent/\n",
        "\n",
        "ENTRYPOINT [\"uv\", \"run\", \".\", \"--host\", \"0.0.0.0\", \"--port\", \"10001\"]"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "YCFB0gB2bcIG"
      },
      "outputs": [],
      "source": [
        "#@title Build the docker image for Weather A2A Agent\n",
        "\n",
        "# Replace [PROJECT_ID] with your Google Cloud Project ID\n",
        "# Replace [IMAGE_NAME] with the desired name for your Docker image.\n",
        "# Replace [TAG] with a tag for your image (e.g., latest)\n",
        "# Replace [SOURCE_PATH] with the path to the source directory.\n",
        "# If your source code is in the current directory, you can use '.'\n",
        "IMAGE_NAME = \"weather-a2a-sample-agent\" # @param {type:\"string\"}\n",
        "TAG = \"latest\" # @param {type:\"string\"}\n",
        "SOURCE_PATH = \"a2a-samples/samples/python/\" # @param {type:\"string\"}\n",
        "# Using Google Container Registry (GCR)\n",
        "IMAGE_URL = f\"gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{TAG}\"\n",
        "\n",
        "print(f\"Building and pushing image to: {IMAGE_URL}\")\n",
        "!gcloud builds submit {SOURCE_PATH} \\\n",
        "  --verbosity=debug \\\n",
        "  --project={PROJECT_ID} \\\n",
        "  --tag={IMAGE_URL}"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "1-ymQAQcycuO"
      },
      "outputs": [],
      "source": [
        "# Replace [SERVICE-NAME] with the desired name for your Cloud Run service\n",
        "# Replace [REGION] with the Google Cloud region where you want to deploy (e.g., us-central1)\n",
        "# Replace [IMAGE_URL] with the full path to your container image in GCR or Artifact Registry\n",
        "\n",
        "# Replace with your actual service name, region, and image URL\n",
        "SERVICE_NAME = 'weather-a2a-sample-agent'  # @param {type:\"string\"}\n",
        "IMAGE_URL = f'gcr.io/{PROJECT_ID}/{SERVICE_NAME}:latest'\n",
        "# The agent service can either run using API_KEY or Vertex AI directly.\n",
        "API_KEY = ''  # @param {type:\"string\"}\n",
        "WEATHER_APP_URL = f'https://{SERVICE_NAME}-{PROJECT_NUM}.{LOCATION}.run.app'"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "268oeup7bmxA"
      },
      "outputs": [],
      "source": [
        "#@title Run the Weather A2A Agent in Cloud Run\n",
        "!gcloud run deploy {SERVICE_NAME} \\\n",
        "    --verbosity=debug \\\n",
        "    --memory=2.5G \\\n",
        "    --image={IMAGE_URL} \\\n",
        "    --region={LOCATION} \\\n",
        "    --port=10001 \\\n",
        "    --project={PROJECT_ID} \\\n",
        "    --no-allow-unauthenticated \\\n",
        "    --set-env-vars=GOOGLE_GENAI_USE_VERTEXAI=False,GOOGLE_API_KEY={API_KEY},APP_URL={WEATHER_APP_URL}"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Bn2-ZYkLjncT"
      },
      "source": [
        "## Command line to quick test the Agent servers on Cloud Run"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "4WUIAPSgv28w"
      },
      "source": [
        "If the above gcloud run, if \"--allow-unauthenticated\" paramater is not set as disabled, the generated A2A endpoints is a public URL which can be accessed by anyone. If it's set as disabled as \"--no-allow-unauthenticated\", the URL is not a public URL and it's based IAM based auth (private). In this case, you'll need to set the cloud identity token in the auth header when talking to this A2A endpints.\n",
        "\n",
        "The below will show how you can obtain the identity token and used in the Hosting Agent defined below. For public URL, you don't have to obtain this token and remove that auth header parts.\n",
        "\n",
        "The token can be obtained by gcloud cli. Please follow the steps below in this notebook or you can run gcloud command in the shell and copy the token value to here."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "S6WlYRmUibt2"
      },
      "outputs": [],
      "source": [
        "!gcloud auth login\n",
        "!gcloud config set project {PROJECT_ID}\n",
        "!gcloud auth print-identity-token"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "8XWSSWJXwaI9"
      },
      "outputs": [],
      "source": [
        "try:\n",
        "    # Run the gcloud command and capture its output\n",
        "    token_bytes = subprocess.check_output(\n",
        "        ['gcloud', 'auth', 'print-identity-token']\n",
        "    )\n",
        "\n",
        "    # Decode the bytes to a string and remove any leading/trailing whitespace\n",
        "    TOKEN = token_bytes.decode('utf-8').strip()\n",
        "\n",
        "    print('Captured token:')\n",
        "    print(TOKEN)\n",
        "\n",
        "except subprocess.CalledProcessError as e:\n",
        "    print(f'Error running gcloud command: {e}')\n",
        "    print(f'Stderr: {e.stderr.decode(\"utf-8\")}')\n",
        "except FileNotFoundError:\n",
        "    print('Error: gcloud command not found. Make sure gcloud SDK is installed.')"
      ]
    },
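    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As an alternative to the `curl` checks below, the same authenticated request can be made from Python. This is a minimal sketch; the `httpx` usage in the comments is illustrative, and `make_auth_headers` is a hypothetical helper:\n",
        "\n",
        "```python\n",
        "# The identity token captured above goes into the Authorization header\n",
        "# of every request to a private (IAM-protected) Cloud Run endpoint.\n",
        "def make_auth_headers(token: str) -> dict[str, str]:\n",
        "    return {'Authorization': f'Bearer {token}'}\n",
        "\n",
        "# Example usage (not executed here; TOKEN comes from the cell above):\n",
        "# import httpx\n",
        "# with httpx.Client(headers=make_auth_headers(TOKEN)) as client:\n",
        "#     card = client.get(\n",
        "#         f'{AIRBNB_APP_URL}{AGENT_CARD_WELL_KNOWN_PATH}'\n",
        "#     ).json()\n",
        "```"
      ]
    },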
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "uP25oDnHiQGs"
      },
      "outputs": [],
      "source": [
        "HOST = f'{AIRBNB_APP_URL}{AGENT_CARD_WELL_KNOWN_PATH}'\n",
        "!curl -H \"Authorization: Bearer {TOKEN}\" {HOST}"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "0_FF3KEei_sO"
      },
      "outputs": [],
      "source": [
        "HOST = f'{WEATHER_APP_URL}{AGENT_CARD_WELL_KNOWN_PATH}'\n",
        "!curl -H \"Authorization: Bearer {TOKEN}\" {HOST}"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "PyMx0qgJ1QSX"
      },
      "source": [
        "## Define Eval helper functions\n",
        "\n",
        "Initiate a set of helper functions to print tutorial results."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "form",
        "id": "uSgWjMD_g1_v"
      },
      "outputs": [],
      "source": [
        "# @title eval helper functions\n",
        "\n",
        "\n",
        "def get_id(length: int = 8) -> str:\n",
        "    \"\"\"Generate a uuid of a specified length (default=8).\"\"\"\n",
        "    return ''.join(\n",
        "        random.choices(string.ascii_lowercase + string.digits, k=length)\n",
        "    )\n",
        "\n",
        "\n",
        "def parse_adk_output_to_dictionary(\n",
        "    events: list[Event], *, as_json: bool = False\n",
        ") -> dict[str, Any]:\n",
        "    \"\"\"Parse ADK event output into a structured dictionary format.\"\"\"\n",
        "    final_response = ''\n",
        "    trajectory = []\n",
        "\n",
        "    for event in events:\n",
        "        if not getattr(event, 'content', None) or not getattr(\n",
        "            event.content, 'parts', None\n",
        "        ):\n",
        "            continue\n",
        "        for part in event.content.parts:\n",
        "            if getattr(part, 'function_call', None):\n",
        "                info = {\n",
        "                    'tool_name': part.function_call.name,\n",
        "                    'tool_input': dict(part.function_call.args),\n",
        "                }\n",
        "                if info not in trajectory:\n",
        "                    trajectory.append(info)\n",
        "            if event.content.role == 'model' and getattr(part, 'text', None):\n",
        "                final_response = part.text.strip()\n",
        "\n",
        "    trajectory_out = json.dumps(trajectory) if as_json else trajectory\n",
        "    return {'response': final_response, 'predicted_trajectory': trajectory_out}\n",
        "\n",
        "def format_output_as_markdown(output: dict) -> str:\n",
        "    \"\"\"Convert the output dictionary to a formatted markdown string.\"\"\"\n",
        "    markdown = '### AI Response\\n' + output['response'] + '\\n\\n'\n",
        "    if output['predicted_trajectory']:\n",
        "        markdown += '### Function Calls\\n'\n",
        "        for call in output['predicted_trajectory']:\n",
        "            markdown += f'- **Function**: `{call[\"tool_name\"]}`\\n'\n",
        "            markdown += '  - **Arguments**\\n'\n",
        "            for key, value in call['tool_input'].items():\n",
        "                markdown += f'    - `{key}`: `{value}`\\n'\n",
        "    return markdown\n",
        "\n",
        "\n",
        "def display_eval_report(eval_result: pd.DataFrame) -> None:\n",
        "    \"\"\"Display the evaluation results.\"\"\"\n",
        "    display(Markdown('### Summary Metrics'))\n",
        "    display(\n",
        "        pd.DataFrame(\n",
        "            eval_result.summary_metrics.items(), columns=['metric', 'value']\n",
        "        )\n",
        "    )\n",
        "    if getattr(eval_result, 'metrics_table', None) is not None:\n",
        "        display(Markdown('### Rowwise Metrics'))\n",
        "        display(eval_result.metrics_table.head())\n",
        "\n",
        "\n",
        "def display_drilldown(row: pd.Series) -> None:\n",
        "    \"\"\"Displays a drill-down view for trajectory data within a row.\"\"\"\n",
        "    style = 'white-space: pre-wrap; width: 800px; overflow-x: auto;'\n",
        "\n",
        "    if not (\n",
        "        isinstance(row['predicted_trajectory'], list)\n",
        "        and isinstance(row['reference_trajectory'], list)\n",
        "    ):\n",
        "        return\n",
        "\n",
        "    for predicted_trajectory, reference_trajectory in zip(\n",
        "        row['predicted_trajectory'], row['reference_trajectory'], strict=False\n",
        "    ):\n",
        "        display(\n",
        "            HTML(\n",
        "                f\"<h3>Tool Names:</h3><div style='{style}'>\"\n",
        "                f'{predicted_trajectory[\"tool_name\"], reference_trajectory[\"tool_name\"]}</div>'\n",
        "            )\n",
        "        )\n",
        "\n",
        "        if not (\n",
        "            isinstance(predicted_trajectory.get('tool_input'), dict)\n",
        "            and isinstance(reference_trajectory.get('tool_input'), dict)\n",
        "        ):\n",
        "            continue\n",
        "\n",
        "        for tool_input_key in predicted_trajectory['tool_input']:\n",
        "            print('Tool Input Key: ', tool_input_key)\n",
        "\n",
        "            if tool_input_key in reference_trajectory['tool_input']:\n",
        "                print(\n",
        "                    'Tool Values: ',\n",
        "                    predicted_trajectory['tool_input'][tool_input_key],\n",
        "                    reference_trajectory['tool_input'][tool_input_key],\n",
        "                )\n",
        "            else:\n",
        "                print(\n",
        "                    'Tool Values: ',\n",
        "                    predicted_trajectory['tool_input'][tool_input_key],\n",
        "                    'N/A',\n",
        "                )\n",
        "        print('\\n')\n",
        "    display(HTML('<hr>'))\n",
        "\n",
        "\n",
        "def display_dataframe_rows(\n",
        "    df: pd.DataFrame,\n",
        "    columns: list[str] | None = None,\n",
        "    num_rows: int = 3,\n",
        "    allow_display_drilldown: bool = False,\n",
        ") -> None:\n",
        "    \"\"\"Displays a subset of rows from a DataFrame.\"\"\"\n",
        "    if columns:\n",
        "        df = df[columns]\n",
        "\n",
        "    base_style = 'font-family: monospace; font-size: 14px; '\n",
        "    'white-space: pre-wrap; width: auto; overflow-x: auto;'\n",
        "    header_style = base_style + 'font-weight: bold;'\n",
        "\n",
        "    for _, row in df.head(num_rows).iterrows():\n",
        "        for column in df.columns:\n",
        "            display(\n",
        "                HTML(\n",
        "                    f\"<span style='{header_style}'>{column.replace('_', ' ').title()}: </span>\"\n",
        "                )\n",
        "            )\n",
        "            display(\n",
        "                HTML(f\"<span style='{base_style}'>{row[column]}</span><br>\")\n",
        "            )\n",
        "\n",
        "        display(HTML('<hr>'))\n",
        "\n",
        "        if (\n",
        "            allow_display_drilldown\n",
        "            and 'predicted_trajectory' in df.columns\n",
        "            and 'reference_trajectory' in df.columns\n",
        "        ):\n",
        "            display_drilldown(row)\n",
        "\n",
        "\n",
        "def plot_bar_plot(\n",
        "    eval_result: pd.DataFrame, title: str, metrics: list[str] | None = None\n",
        ") -> None:\n",
        "    \"\"\"Plot the bar plot for summary metrics.\"\"\"\n",
        "    fig = go.Figure()\n",
        "    data = []\n",
        "\n",
        "    summary_metrics = eval_result.summary_metrics\n",
        "    if metrics:\n",
        "        summary_metrics = {\n",
        "            k: v\n",
        "            for k, v in summary_metrics.items()\n",
        "            if any(selected_metric in k for selected_metric in metrics)\n",
        "        }\n",
        "\n",
        "    data.append(\n",
        "        go.Bar(\n",
        "            x=list(summary_metrics.keys()),\n",
        "            y=list(summary_metrics.values()),\n",
        "            name=title,\n",
        "        )\n",
        "    )\n",
        "\n",
        "    fig = go.Figure(data=data)\n",
        "\n",
        "    # Change the bar mode\n",
        "    fig.update_layout(barmode='group')\n",
        "    fig.show()\n",
        "\n",
        "\n",
        "def display_radar_plot(\n",
        "    eval_results: pd.DataFrame, title: str, metrics: list[str] | None = None\n",
        ") -> None:\n",
        "    \"\"\"Plot the radar plot.\"\"\"\n",
        "    fig = go.Figure()\n",
        "    summary_metrics = eval_results.summary_metrics\n",
        "    if metrics:\n",
        "        summary_metrics = {\n",
        "            k: v\n",
        "            for k, v in summary_metrics.items()\n",
        "            if any(selected_metric in k for selected_metric in metrics)\n",
        "        }\n",
        "\n",
        "    min_val = min(summary_metrics.values())\n",
        "    max_val = max(summary_metrics.values())\n",
        "\n",
        "    fig.add_trace(\n",
        "        go.Scatterpolar(\n",
        "            r=list(summary_metrics.values()),\n",
        "            theta=list(summary_metrics.keys()),\n",
        "            fill='toself',\n",
        "            name=title,\n",
        "        )\n",
        "    )\n",
        "    fig.update_layout(\n",
        "        title=title,\n",
        "        polar_radialaxis_range=[min_val, max_val],\n",
        "        showlegend=True,\n",
        "    )\n",
        "    fig.show()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "3TjM3Sheh8sB"
      },
      "source": [
        "## Assemble the Hosting (ADK) Agent\n",
        "\n",
        "The Vertex AI Gen AI Evaluation service works directly with 'Queryable' agents, and also lets you plug in your own custom function with a specific structure (signature).\n",
        "\n",
        "In this case, you assemble the agent using a custom function. The function runs the agent on a given input and parses the agent outcome to extract the final response and the tools that were called."
      ]
    },
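    {
      "cell_type": "markdown",
      "metadata": {
        "id": "runnable-signature-md"
      },
      "source": [
        "A custom runnable for the evaluation service is simply a function that takes a prompt string and returns a dictionary containing the generated response and the predicted trajectory. The minimal mock below is a sketch of that contract (`mock_runnable` and its outputs are illustrative only); the real runnable is assembled later in this notebook."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "runnable-signature-code"
      },
      "outputs": [],
      "source": [
        "from typing import Any\n",
        "\n",
        "\n",
        "# Sketch: a hypothetical minimal runnable showing the structure expected\n",
        "# from a custom function: prompt in, dict with 'response' and\n",
        "# 'predicted_trajectory' out.\n",
        "def mock_runnable(prompt: str) -> dict[str, Any]:\n",
        "    return {\n",
        "        'response': f'Echo: {prompt}',\n",
        "        'predicted_trajectory': [\n",
        "            {'tool_name': 'send_message', 'tool_input': {'task': prompt}}\n",
        "        ],\n",
        "    }\n",
        "\n",
        "\n",
        "print(mock_runnable('Hello'))"
      ]
    },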
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "YpQj91aWZHWP"
      },
      "source": [
        "### Defining the `RemoteAgentConnection` helper class\n",
        "This class attaches a bearer token to each request so it can talk to the remote A2A endpoints, which are under Cloud IAM permission control."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "bVrP5kgHLe4d"
      },
      "outputs": [],
      "source": [
        "load_dotenv()\n",
        "\n",
        "TaskCallbackArg: TypeAlias = (\n",
        "    Task | TaskStatusUpdateEvent | TaskArtifactUpdateEvent\n",
        ")\n",
        "TaskUpdateCallback = Callable[[TaskCallbackArg, AgentCard], Task]\n",
        "\n",
        "\n",
        "class RemoteAgentConnections:\n",
        "    \"\"\"A class to hold the connections to the remote agents.\"\"\"\n",
        "\n",
        "    def __init__(self, agent_card: AgentCard, agent_url: str):\n",
        "        print(f'agent_card: {agent_card}')\n",
        "        print(f'agent_url: {agent_url}')\n",
        "        headers = {'Authorization': f'Bearer {TOKEN}'}\n",
        "        self._httpx_client = httpx.AsyncClient(timeout=30, headers=headers)\n",
        "        self.agent_client = A2AClient(\n",
        "            self._httpx_client, agent_card, url=agent_url\n",
        "        )\n",
        "        self.card = agent_card\n",
        "\n",
        "    def get_agent(self) -> AgentCard:\n",
        "        \"\"\"Get the agent card.\"\"\"\n",
        "        return self.card\n",
        "\n",
        "    async def send_message(\n",
        "        self, message_request: SendMessageRequest\n",
        "    ) -> SendMessageResponse:\n",
        "        \"\"\"Send a message to the agent.\"\"\"\n",
        "        return await self.agent_client.send_message(message_request)"
      ]
    },
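    {
      "cell_type": "markdown",
      "metadata": {
        "id": "bearer-token-helper-md"
      },
      "source": [
        "The `TOKEN` used above must be an identity token accepted by the Cloud Run services. As an illustrative assumption (not part of the sample), it could be minted with `google.oauth2.id_token.fetch_id_token`. The sketch below keeps the token fetcher injectable so the header construction can be tried without live credentials."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "bearer-token-helper-code"
      },
      "outputs": [],
      "source": [
        "from typing import Callable\n",
        "\n",
        "\n",
        "# Sketch: build the Authorization header used by RemoteAgentConnections.\n",
        "# 'fetch' is injectable; in practice it could be, for example:\n",
        "#   lambda aud: google.oauth2.id_token.fetch_id_token(Request(), aud)\n",
        "def bearer_headers(\n",
        "    audience: str, fetch: Callable[[str], str]\n",
        ") -> dict[str, str]:\n",
        "    return {'Authorization': f'Bearer {fetch(audience)}'}"
      ]
    },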
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2HI6iq3NZfQo"
      },
      "source": [
        "### Defining the Hosting Agent\n",
        "This hosting agent performs orchestration and routing across the A2A agents deployed on Cloud Run.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "OklN-9dmKPxS"
      },
      "outputs": [],
      "source": [
        "load_dotenv()\n",
        "\n",
        "\n",
        "def convert_part(part: Part, _tool_context: ToolContext) -> str:\n",
        "    \"\"\"Convert a part to text. Only text parts are supported.\"\"\"\n",
        "    if part.type == 'text':\n",
        "        return part.text\n",
        "\n",
        "    return f'Unknown type: {part.type}'\n",
        "\n",
        "\n",
        "def convert_parts(parts: list[Part], tool_context: ToolContext) -> list[str]:\n",
        "    \"\"\"Convert parts to text.\"\"\"\n",
        "    rval = []\n",
        "    for p in parts:\n",
        "        rval.append(convert_part(p, tool_context))\n",
        "    return rval\n",
        "\n",
        "\n",
        "def create_send_message_payload(\n",
        "    text: str, task_id: str | None = None, context_id: str | None = None\n",
        ") -> dict[str, Any]:\n",
        "    \"\"\"Helper function to create the payload for sending a task.\"\"\"\n",
        "    payload: dict[str, Any] = {\n",
        "        'message': {\n",
        "            'role': 'user',\n",
        "            'parts': [{'type': 'text', 'text': text}],\n",
        "            'messageId': uuid.uuid4().hex,\n",
        "        },\n",
        "    }\n",
        "\n",
        "    if task_id:\n",
        "        payload['message']['taskId'] = task_id\n",
        "\n",
        "    if context_id:\n",
        "        payload['message']['contextId'] = context_id\n",
        "    return payload\n",
        "\n",
        "\n",
        "class RoutingAgent:\n",
        "    \"\"\"The Routing agent.\n",
        "\n",
        "    This is the agent responsible for choosing which remote agents to\n",
        "    send tasks to and for coordinating their work.\n",
        "    \"\"\"\n",
        "\n",
        "    def __init__(\n",
        "        self,\n",
        "        task_callback: TaskUpdateCallback | None = None,\n",
        "    ):\n",
        "        self.task_callback = task_callback\n",
        "        self.remote_agent_connections: dict[str, RemoteAgentConnections] = {}\n",
        "        self.cards: dict[str, AgentCard] = {}\n",
        "        self.agents: str = ''\n",
        "\n",
        "    async def _async_init_components(\n",
        "        self, remote_agent_addresses: list[str]\n",
        "    ) -> None:\n",
        "        \"\"\"Asynchronous part of initialization.\"\"\"\n",
        "        # Use a single httpx.AsyncClient for all card resolutions for efficiency\n",
        "        headers = {'Authorization': f'Bearer {TOKEN}'}\n",
        "        print('Use auth headers')\n",
        "\n",
        "        async with httpx.AsyncClient(timeout=30, headers=headers) as client:\n",
        "            for address in remote_agent_addresses:\n",
        "                card_resolver = A2ACardResolver(\n",
        "                    client, address\n",
        "                )  # Constructor is sync\n",
        "                try:\n",
        "                    card = (\n",
        "                        await card_resolver.get_agent_card()\n",
        "                    )  # get_agent_card is async\n",
        "\n",
        "                    remote_connection = RemoteAgentConnections(\n",
        "                        agent_card=card, agent_url=address\n",
        "                    )\n",
        "                    self.remote_agent_connections[card.name] = remote_connection\n",
        "                    self.cards[card.name] = card\n",
        "                except httpx.ConnectError as e:\n",
        "                    print(\n",
        "                        f'ERROR: Failed to get agent card from {address}: {e}'\n",
        "                    )\n",
        "                except Exception as e:  # Catch other potential errors\n",
        "                    print(\n",
        "                        f'ERROR: Failed to initialize connection for {address}: {e}'\n",
        "                    )  # noqa: E501, RUF100\n",
        "\n",
        "        # Populate self.agents using the logic from original __init__ (via list_remote_agents)\n",
        "        agent_info = []\n",
        "        for agent_detail_dict in self.list_remote_agents():\n",
        "            agent_info.append(json.dumps(agent_detail_dict))\n",
        "        self.agents = '\\n'.join(agent_info)\n",
        "\n",
        "    @classmethod\n",
        "    async def create(\n",
        "        cls,\n",
        "        remote_agent_addresses: list[str],\n",
        "        task_callback: TaskUpdateCallback | None = None,\n",
        "    ) -> 'RoutingAgent':\n",
        "        \"\"\"Create and asynchronously initialize an instance of the RoutingAgent.\"\"\"\n",
        "        instance = cls(task_callback)\n",
        "        await instance._async_init_components(remote_agent_addresses)\n",
        "        return instance\n",
        "\n",
        "    def create_agent(self) -> Agent:\n",
        "        \"\"\"Create an instance of the RoutingAgent.\"\"\"\n",
        "        model_id = 'gemini-2.5-flash'\n",
        "        print(f'Using hardcoded model: {model_id}')\n",
        "        return Agent(\n",
        "            model=model_id,\n",
        "            name='Routing_agent',\n",
        "            instruction=self.root_instruction,\n",
        "            before_model_callback=self.before_model_callback,\n",
        "            description=(\n",
        "                'This Routing agent orchestrates the decomposition '\n",
        "                'of the user asking for weather forecast or airbnb accommodation'\n",
        "            ),\n",
        "            tools=[\n",
        "                self.send_message,\n",
        "            ],\n",
        "        )\n",
        "\n",
        "    def root_instruction(self, context: ReadonlyContext) -> str:\n",
        "        \"\"\"Generate the root instruction for the RoutingAgent.\"\"\"\n",
        "        current_agent = self.check_active_agent(context)\n",
        "        return f\"\"\"\n",
        "        **Role:** You are an expert Routing Delegator. Your primary function is\n",
        "        to accurately delegate user inquiries regarding weather or\n",
        "        accommodations to the appropriate specialized remote agents.\n",
        "\n",
        "        **Core Directives:**\n",
        "\n",
        "        * **Task Delegation:** Utilize the `send_message` function to assign\n",
        "        actionable tasks to remote agents.\n",
        "        * **Contextual Awareness for Remote Agents:** If a remote agent\n",
        "        repeatedly requests user confirmation, assume it lacks access to the\n",
        "        full conversation history. In such cases, enrich the task description\n",
        "        with all necessary contextual information relevant to that specific\n",
        "        agent.\n",
        "        * **Autonomous Agent Engagement:** Never seek user permission before\n",
        "        engaging with remote agents. If multiple agents are required to fulfill\n",
        "        a request, connect with them directly without requesting user preference\n",
        "        or confirmation.\n",
        "        * **Transparent Communication:** Always present the complete and\n",
        "        detailed response from the remote agent to the user.\n",
        "        * **User Confirmation Relay:** If a remote agent asks for confirmation,\n",
        "        and the user has not already provided it, relay this confirmation\n",
        "        request to the user.\n",
        "        * **Focused Information Sharing:** Provide remote agents with only\n",
        "        relevant contextual information. Avoid extraneous details.\n",
        "        * **No Redundant Confirmations:** Do not ask remote agents for\n",
        "        confirmation of information or actions.\n",
        "        * **Tool Reliance:** Strictly rely on available tools to address user\n",
        "        requests. Do not generate responses based on assumptions. If information\n",
        "        is insufficient, request clarification from the user.\n",
        "        * **Prioritize Recent Interaction:** Focus primarily on the most recent\n",
        "        parts of the conversation when processing requests.\n",
        "        * **Active Agent Prioritization:** If an active agent is already\n",
        "        engaged, route subsequent related requests to that agent using the\n",
        "        appropriate task update tool.\n",
        "\n",
        "        **Agent Roster:**\n",
        "\n",
        "        * Available Agents: `{self.agents}`\n",
        "        * Currently Active Agent: `{current_agent['active_agent']}`\n",
        "                \"\"\"\n",
        "\n",
        "    def check_active_agent(self, context: ReadonlyContext) -> dict[str, str]:\n",
        "        \"\"\"Check if there is an active agent in the current context.\"\"\"\n",
        "        state = context.state\n",
        "        if (\n",
        "            'session_id' in state\n",
        "            and 'session_active' in state\n",
        "            and state['session_active']\n",
        "            and 'active_agent' in state\n",
        "        ):\n",
        "            return {'active_agent': f'{state[\"active_agent\"]}'}\n",
        "        return {'active_agent': 'None'}\n",
        "\n",
        "    def before_model_callback(\n",
        "        self, callback_context: CallbackContext, _llm_request: Any\n",
        "    ) -> None:\n",
        "        \"\"\"Callback to set up the session state before the model is called.\"\"\"\n",
        "        state = callback_context.state\n",
        "        if 'session_active' not in state or not state['session_active']:\n",
        "            if 'session_id' not in state:\n",
        "                state['session_id'] = str(uuid.uuid4())\n",
        "            state['session_active'] = True\n",
        "\n",
        "    def list_remote_agents(self) -> list[dict[str, str]]:\n",
        "        \"\"\"List the available remote agents you can use to delegate the task.\"\"\"\n",
        "        if not self.cards:\n",
        "            return []\n",
        "\n",
        "        remote_agent_info = []\n",
        "        for card in self.cards.values():\n",
        "            print(f'Found agent card: {card.model_dump(exclude_none=True)}')\n",
        "            print('=' * 100)\n",
        "            remote_agent_info.append(\n",
        "                {'name': card.name, 'description': card.description}\n",
        "            )\n",
        "        return remote_agent_info\n",
        "\n",
        "    async def send_message(\n",
        "        self, agent_name: str, task: str, tool_context: ToolContext\n",
        "    ) -> Any:\n",
        "        \"\"\"Sends a task to a remote agent.\n",
        "\n",
        "        This will send a message to the remote agent named agent_name.\n",
        "\n",
        "        Args:\n",
        "            agent_name: The name of the agent to send the task to.\n",
        "            task: The comprehensive conversation context summary\n",
        "                and goal to be achieved regarding the user inquiry.\n",
        "            tool_context: The tool context this method runs in.\n",
        "\n",
        "        Returns:\n",
        "            The remote agent's `Task` result, or None if the response is\n",
        "            not a successful task.\n",
        "        \"\"\"\n",
        "        if agent_name not in self.remote_agent_connections:\n",
        "            raise ValueError(f'Agent {agent_name} not found')\n",
        "        state = tool_context.state\n",
        "        state['active_agent'] = agent_name\n",
        "        client = self.remote_agent_connections[agent_name]\n",
        "\n",
        "        if not client:\n",
        "            raise ValueError(f'Client not available for {agent_name}')\n",
        "        task_id = state['task_id'] if 'task_id' in state else str(uuid.uuid4())\n",
        "\n",
        "        if 'context_id' in state:\n",
        "            context_id = state['context_id']\n",
        "        else:\n",
        "            context_id = str(uuid.uuid4())\n",
        "\n",
        "        message_id = ''\n",
        "        metadata: dict[str, str] = {}\n",
        "        if 'input_message_metadata' in state:\n",
        "            metadata.update(**state['input_message_metadata'])\n",
        "            if 'message_id' in state['input_message_metadata']:\n",
        "                message_id = state['input_message_metadata']['message_id']\n",
        "        if not message_id:\n",
        "            message_id = str(uuid.uuid4())\n",
        "\n",
        "        payload = {\n",
        "            'message': {\n",
        "                'role': 'user',\n",
        "                'parts': [\n",
        "                    {'type': 'text', 'text': task}\n",
        "                ],  # Use the 'task' argument here\n",
        "                'messageId': message_id,\n",
        "            },\n",
        "        }\n",
        "\n",
        "        if task_id:\n",
        "            payload['message']['taskId'] = task_id\n",
        "\n",
        "        if context_id:\n",
        "            payload['message']['contextId'] = context_id\n",
        "\n",
        "        message_request = SendMessageRequest(\n",
        "            id=message_id, params=MessageSendParams.model_validate(payload)\n",
        "        )\n",
        "        send_response: SendMessageResponse = await client.send_message(\n",
        "            message_request=message_request\n",
        "        )\n",
        "\n",
        "        if not isinstance(send_response.root, SendMessageSuccessResponse):\n",
        "            print('received non-success response. Aborting get task ')\n",
        "            return None\n",
        "\n",
        "        if not isinstance(send_response.root.result, Task):\n",
        "            print('received non-task response. Aborting get task ')\n",
        "            return None\n",
        "\n",
        "        return send_response.root.result\n",
        "\n",
        "\n",
        "async def create_routing_agent() -> Agent:\n",
        "    \"\"\"Creates and asynchronously initializes the RoutingAgent.\"\"\"\n",
        "    routing_agent_instance = await RoutingAgent.create(\n",
        "        remote_agent_addresses=[\n",
        "            AIRBNB_APP_URL,\n",
        "            WEATHER_APP_URL,\n",
        "        ]\n",
        "    )\n",
        "    return routing_agent_instance.create_agent()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "t_K_PRCpZqq6"
      },
      "source": [
        "### Define the Agent helper"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "6_MTQ-T5L1mu"
      },
      "outputs": [],
      "source": [
        "async def agent_parsed_outcome(query: str) -> dict[str, Any]:\n",
        "    \"\"\"Runs the routing agent with the provided query and returns the parsed outcome.\"\"\"\n",
        "    app_name = 'airbnb_weather_app'\n",
        "    user_id = 'user1'\n",
        "    session_id = 'session_one'\n",
        "\n",
        "    routing_agent = await create_routing_agent()  # Await the async function\n",
        "\n",
        "    session_service = InMemorySessionService()\n",
        "    await session_service.create_session(\n",
        "        app_name=app_name, user_id=user_id, session_id=session_id\n",
        "    )\n",
        "\n",
        "    runner = Runner(\n",
        "        agent=routing_agent, app_name=app_name, session_service=session_service\n",
        "    )\n",
        "\n",
        "    content = types.Content(role='user', parts=[types.Part(text=query)])\n",
        "    events = [\n",
        "        event\n",
        "        async for event in runner.run_async(\n",
        "            user_id=user_id, session_id=session_id, new_message=content\n",
        "        )\n",
        "    ]\n",
        "\n",
        "    return parse_adk_output_to_dictionary(events)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "qlG8aYRd4AJF"
      },
      "outputs": [],
      "source": [
        "# --- Sync wrapper for Vertex AI evaluation\n",
        "\n",
        "\n",
        "def agent_parsed_outcome_sync(prompt: str) -> dict[str, Any]:\n",
        "    \"\"\"Synchronous wrapper for the async agent_parsed_outcome function.\"\"\"\n",
        "    result = asyncio.run(agent_parsed_outcome(prompt))\n",
        "    result['predicted_trajectory'] = json.dumps(result['predicted_trajectory'])\n",
        "    return result"
      ]
    },
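    {
      "cell_type": "markdown",
      "metadata": {
        "id": "loop-aware-runner-md"
      },
      "source": [
        "Note that `asyncio.run` raises `RuntimeError` when called from a thread that already has a running event loop, which is the case in some notebook frontends. The helper below is a sketch of a loop-aware fallback you could substitute into the wrapper above; applying `nest_asyncio` is another common workaround in Colab/Jupyter."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "loop-aware-runner-code"
      },
      "outputs": [],
      "source": [
        "import asyncio\n",
        "import concurrent.futures\n",
        "\n",
        "\n",
        "# Sketch: a loop-aware alternative to asyncio.run. If an event loop is\n",
        "# already running (as in some notebook frontends), run the coroutine on a\n",
        "# fresh loop in a worker thread instead.\n",
        "def run_coro(coro):\n",
        "    try:\n",
        "        asyncio.get_running_loop()\n",
        "    except RuntimeError:\n",
        "        return asyncio.run(coro)  # no loop running: the normal path\n",
        "    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:\n",
        "        return pool.submit(asyncio.run, coro).result()"
      ]
    },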
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "hBsXbvk4Zvyp"
      },
      "source": [
        "### Quick test with Agent Runner"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "M8HRLEcEMBRd"
      },
      "outputs": [],
      "source": [
        "response = agent_parsed_outcome_sync(prompt='Get product details for shoes')\n",
        "display(Markdown(format_output_as_markdown(response)))\n",
        "\n",
        "response = agent_parsed_outcome_sync(\n",
        "    prompt=\"What's the weather in Yosemite Valley, CA\"\n",
        ")\n",
        "display(Markdown(format_output_as_markdown(response)))\n",
        "\n",
        "response = agent_parsed_outcome_sync(\n",
        "    prompt='Looking for Airbnb in Yosemite for August 1 to 6, 2025'\n",
        ")\n",
        "display(Markdown(format_output_as_markdown(response)))\n",
        "\n",
        "response = agent_parsed_outcome_sync(\n",
        "    prompt=\"What's the weather in San Francisco, CA\"\n",
        ")\n",
        "display(Markdown(format_output_as_markdown(response)))\n",
        "\n",
        "response = agent_parsed_outcome_sync(\n",
        "    prompt='Looking for Airbnb in Paris, France for August 10 to 12, 2025'\n",
        ")\n",
        "display(Markdown(format_output_as_markdown(response)))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "udUTRcdmWl6N"
      },
      "source": [
        "## Evaluation"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "kVeNrhBXUjYl"
      },
      "source": [
        "### Prepare Agent Evaluation dataset\n",
        "\n",
        "To evaluate your AI agent using the Vertex AI Gen AI Evaluation service, you need a dataset whose contents depend on which aspects of your agent you want to evaluate.\n",
        "\n",
        "This dataset must include the prompts given to the agent. It can also contain the ideal or expected response (ground truth) and the intended sequence of tool calls (reference trajectory) you expect the agent to make for each prompt.\n",
        "\n",
        "> Optionally, you can provide both generated responses and predicted trajectories (**Bring-Your-Own-Dataset scenario**).\n",
        "\n",
        "Below is an example dataset for the routing agent, with user prompts and the corresponding trajectories."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "UIrEeCRxUjYl"
      },
      "outputs": [],
      "source": [
        "# @title Define eval datasets\n",
        "# The reference trajectories are empty in this example.\n",
        "eval_data_a2a = {\n",
        "    'prompt': [\n",
        "        \"What's the weather in Yosemite Valley, CA\",\n",
        "        'Looking for Airbnb in Yosemite for August 1 to 6, 2025',\n",
        "        \"What's the weather in San Francisco, CA\",\n",
        "        'Looking for Airbnb in Paris, France for August 10 to 12, 2025',\n",
        "    ],\n",
        "    'predicted_trajectory': [\n",
        "        [\n",
        "            {\n",
        "                'tool_name': 'send_message',\n",
        "                'tool_input': {\n",
        "                    'task': \"What's the weather in Yosemite Valley, CA\",\n",
        "                    'agent_name': 'Weather Agent',\n",
        "                },\n",
        "            }\n",
        "        ],\n",
        "        [\n",
        "            {\n",
        "                'tool_name': 'send_message',\n",
        "                'tool_input': {\n",
        "                    'task': 'Find Airbnb in Yosemite for August 1 to 6, 2025',\n",
        "                    'agent_name': 'Airbnb Agent',\n",
        "                },\n",
        "            }\n",
        "        ],\n",
        "        [\n",
        "            {\n",
        "                'tool_name': 'send_message',\n",
        "                'tool_input': {\n",
        "                    'task': \"What's the weather in San Francisco, CA\",\n",
        "                    'agent_name': 'Weather Agent',\n",
        "                },\n",
        "            }\n",
        "        ],\n",
        "        [\n",
        "            {\n",
        "                'tool_name': 'send_message',\n",
        "                'tool_input': {\n",
        "                    'task': 'Find Airbnb in Paris, France for August 10 to 12, 2025',\n",
        "                    'agent_name': 'Airbnb Agent',\n",
        "                },\n",
        "            }\n",
        "        ],\n",
        "    ],\n",
        "}\n",
        "\n",
        "eval_sample_dataset = pd.DataFrame(eval_data_a2a)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "fvKARu8vUjYl"
      },
      "outputs": [],
      "source": [
        "display_dataframe_rows(eval_sample_dataset, num_rows=30)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Xb_klTKR6Rg9"
      },
      "source": [
        "### Trajectory Evaluation\n",
        "\n",
        "Beyond checking whether the agent selects the single most appropriate tool for a given task, you can generalize the evaluation by analyzing the sequence of tool choices with respect to the user input (the trajectory). This assesses whether the agent not only chooses the right tools but also uses them in a rational and effective order."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vfHE2djq6UX4"
      },
      "source": [
        "#### Set trajectory metrics\n",
        "\n",
        "To evaluate an agent's trajectory, Vertex AI Gen AI Evaluation provides several ground-truth-based metrics:\n",
        "\n",
        "* `trajectory_exact_match`: identical trajectories (same actions, same order)\n",
        "\n",
        "* `trajectory_in_order_match`: reference actions present in predicted trajectory, in order (extras allowed)\n",
        "\n",
        "* `trajectory_any_order_match`: all reference actions present in predicted trajectory (order and extras don't matter)\n",
        "\n",
        "* `trajectory_precision`: proportion of predicted actions present in reference\n",
        "\n",
        "* `trajectory_recall`: proportion of reference actions present in predicted.  \n",
        "\n",
        "All metrics score 0 or 1, except `trajectory_precision` and `trajectory_recall`, which range from 0 to 1."
      ]
    },
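    {
      "cell_type": "markdown",
      "metadata": {
        "id": "trajectory-metrics-example-md"
      },
      "source": [
        "To make the definitions concrete, the sketch below hand-computes precision and recall for a hypothetical predicted/reference trajectory pair, comparing tool names only for brevity (in the dataset above, each action also carries its `tool_input`)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "trajectory-metrics-example-code"
      },
      "outputs": [],
      "source": [
        "# Illustration only (not the service implementation): hypothetical\n",
        "# trajectories, compared by tool name.\n",
        "reference = ['get_weather', 'send_message']\n",
        "predicted = ['get_weather', 'lookup_city']  # one correct, one spurious\n",
        "\n",
        "# precision: fraction of predicted actions that appear in the reference\n",
        "precision = sum(p in reference for p in predicted) / len(predicted)\n",
        "# recall: fraction of reference actions that appear in the prediction\n",
        "recall = sum(r in predicted for r in reference) / len(reference)\n",
        "\n",
        "print(f'precision={precision}, recall={recall}')  # 0.5 and 0.5"
      ]
    },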
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "dCnuxwwv6Wzn"
      },
      "outputs": [],
      "source": [
        "trajectory_metrics = [\n",
        "    'trajectory_exact_match',\n",
        "    'trajectory_in_order_match',\n",
        "    'trajectory_any_order_match',\n",
        "    'trajectory_precision',\n",
        "    'trajectory_recall',\n",
        "]"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2NyQmI-n6ZhW"
      },
      "source": [
        "#### Run an evaluation task\n",
        "\n",
        "Submit an evaluation by running the `evaluate` method of a new `EvalTask`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "KnkQfOrU6mWN"
      },
      "outputs": [],
      "source": [
        "EXPERIMENT_RUN = f'trajectory-{get_id()}'\n",
        "\n",
        "trajectory_eval_task = EvalTask(\n",
        "    dataset=eval_sample_dataset,\n",
        "    metrics=trajectory_metrics,\n",
        "    experiment=EXPERIMENT_NAME,\n",
        "    output_uri_prefix=BUCKET_URI + '/multiple-metric-eval',\n",
        ")\n",
        "\n",
        "trajectory_eval_result = trajectory_eval_task.evaluate(\n",
        "    runnable=agent_parsed_outcome_sync, experiment_run_name=EXPERIMENT_RUN\n",
        ")\n",
        "\n",
        "display_eval_report(trajectory_eval_result)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "VDg1ZzFU6osW"
      },
      "source": [
        "#### Visualize evaluation results\n",
        "\n",
        "Print and visualize a sample of evaluation results."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "3PERHO666s-v"
      },
      "outputs": [],
      "source": [
        "display_dataframe_rows(trajectory_eval_result.metrics_table, num_rows=3)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "4a0ML8hc6uu1"
      },
      "outputs": [],
      "source": [
        "plot_bar_plot(\n",
        "    trajectory_eval_result,\n",
        "    title='Trajectory Metrics',\n",
        "    metrics=[f'{metric}/mean' for metric in trajectory_metrics],\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2BHVvsnEUpUH"
      },
      "source": [
        "### Evaluate final response\n",
        "\n",
        "Similar to model evaluation, you can evaluate the final response of the agent using Vertex AI Gen AI Evaluation."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-nIUJqXUUpUH"
      },
      "source": [
        "#### Set response metrics\n",
        "\n",
        "After agent inference, Vertex AI Gen AI Evaluation provides several metrics to evaluate generated responses. You can use computation-based metrics to compare the response to a reference (when available) and existing or custom model-based metrics to judge the quality of the final response.\n",
        "\n",
        "Check out the [documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/models/determine-eval) to learn more.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "jiw2UK_eUpUH"
      },
      "outputs": [],
      "source": [
        "response_metrics = ['safety', 'coherence']"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "XMGwuylbUpUH"
      },
      "outputs": [],
      "source": [
        "EXPERIMENT_RUN = f'response-{get_id()}'\n",
        "\n",
        "response_eval_task = EvalTask(\n",
        "    dataset=eval_sample_dataset,\n",
        "    metrics=response_metrics,\n",
        "    experiment=EXPERIMENT_NAME,\n",
        "    output_uri_prefix=BUCKET_URI + '/response-metric-eval',\n",
        ")\n",
        "\n",
        "response_eval_result = response_eval_task.evaluate(\n",
        "    runnable=agent_parsed_outcome_sync, experiment_run_name=EXPERIMENT_RUN\n",
        ")\n",
        "\n",
        "display_eval_report(response_eval_result)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "l1zFso8NUpUH"
      },
      "source": [
        "#### Visualize evaluation results\n",
        "\n",
        "\n",
        "Print new evaluation result sample."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "m2BsqdlJUpUH"
      },
      "outputs": [],
      "source": [
        "display_dataframe_rows(response_eval_result.metrics_table, num_rows=5)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ntRBK3Te6PEc"
      },
      "source": [
        "### Evaluate the generated response conditioned on tool choice\n",
        "\n",
        "When evaluating AI agents that interact with environments, standard text generation metrics like coherence may not be sufficient. This is because these metrics primarily focus on text structure, while agent responses should be assessed based on their effectiveness within the environment.\n",
        "\n",
        "Instead, use custom metrics that assess whether the agent's response logically follows from its tool choices, such as the custom metric defined in this section."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "4bENwFcd6prX"
      },
      "source": [
        "#### Define a custom metric\n",
        "\n",
        "According to the [documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/models/determine-eval#model-based-metrics), you can define a prompt template for evaluating whether an AI agent's response follows logically from its actions by setting up criteria and a rating system for this evaluation.\n",
        "\n",
        "Define a `criteria` dictionary to set the evaluation guidelines and a `pointwise_rating_rubric` to provide a binary scoring system (1 or 0). Then use `PointwiseMetricPromptTemplate` to build the template from these components.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "txGEHcg76riI"
      },
      "outputs": [],
      "source": [
        "criteria = {\n",
        "    'Follows trajectory': (\n",
        "        \"Evaluate whether the agent's response logically follows from the \"\n",
        "        'sequence of actions it took. Consider these sub-points:\\n'\n",
        "        '  - Does the response reflect the information gathered during the trajectory?\\n'\n",
        "        '  - Is the response consistent with the goals and constraints of the task?\\n'\n",
        "        '  - Are there any unexpected or illogical jumps in reasoning?\\n'\n",
        "        'Provide specific examples from the trajectory and response to support your evaluation.'\n",
        "    )\n",
        "}\n",
        "\n",
        "pointwise_rating_rubric = {\n",
        "    '1': 'Follows trajectory',\n",
        "    '0': 'Does not follow trajectory',\n",
        "}\n",
        "\n",
        "response_follows_trajectory_prompt_template = PointwiseMetricPromptTemplate(\n",
        "    criteria=criteria,\n",
        "    rating_rubric=pointwise_rating_rubric,\n",
        "    input_variables=['prompt', 'predicted_trajectory'],\n",
        ")"
      ]
    },
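    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Under the hood, the template combines the criteria, rubric, and input variables into a single judge prompt. The `build_judge_prompt` function below is a hypothetical, simplified reconstruction of that assembly; the real `PointwiseMetricPromptTemplate` rendering is more elaborate.\n",
        "\n",
        "```python\n",
        "# Hypothetical, simplified sketch of how criteria, a rubric, and input\n",
        "# variables can be combined into one judge prompt. The real\n",
        "# PointwiseMetricPromptTemplate rendering is more elaborate.\n",
        "def build_judge_prompt(criteria, rubric, input_variables):\n",
        "    lines = ['# Criteria']\n",
        "    for name, definition in criteria.items():\n",
        "        lines.append(f'{name}: {definition}')\n",
        "    lines.append('# Rating rubric')\n",
        "    for score, label in rubric.items():\n",
        "        lines.append(f'{score}: {label}')\n",
        "    lines.append('# Inputs')\n",
        "    lines.extend('{' + var + '}' for var in input_variables)\n",
        "    return '\\n'.join(lines)\n",
        "\n",
        "\n",
        "print(\n",
        "    build_judge_prompt(\n",
        "        {'Follows trajectory': 'Does the response follow from the actions taken?'},\n",
        "        {'1': 'Follows trajectory', '0': 'Does not follow trajectory'},\n",
        "        ['prompt', 'predicted_trajectory'],\n",
        "    )\n",
        ")\n",
        "```"
      ]
    },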
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "8MJqXu0kikxd"
      },
      "source": [
        "Print the `prompt_data` attribute of this template, which contains the combined criteria and rubric information ready for use in an evaluation."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "5EL7iEDMikNQ"
      },
      "outputs": [],
      "source": [
        "print(response_follows_trajectory_prompt_template.prompt_data)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "e1djVp7Fi4Yy"
      },
      "source": [
        "After you define the evaluation prompt template, set up the associated metric to evaluate how well a response follows a specific trajectory. The `PointwiseMetric` constructor creates a metric where `response_follows_trajectory` is the metric's name and `response_follows_trajectory_prompt_template` supplies the evaluation instructions you set up earlier.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Nx1xbZD87iMj"
      },
      "outputs": [],
      "source": [
        "response_follows_trajectory_metric = PointwiseMetric(\n",
        "    metric='response_follows_trajectory',\n",
        "    metric_prompt_template=response_follows_trajectory_prompt_template,\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "1pmxLwTe7Ywv"
      },
      "source": [
        "#### Set response metrics\n",
        "\n",
        "Define a new set of response evaluation metrics that includes the custom `response_follows_trajectory` metric.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "wrsbVFDd7Ywv"
      },
      "outputs": [],
      "source": [
        "response_tool_metrics = [\n",
        "    'trajectory_exact_match',\n",
        "    'trajectory_in_order_match',\n",
        "    'safety',\n",
        "    response_follows_trajectory_metric,\n",
        "]"
      ]
    },
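    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a local illustration of what the two computation-based trajectory metrics measure, the hypothetical functions below (not the Vertex AI implementation; the service computes the real scores) show that exact match requires the predicted tool-call list to be identical to the reference, while in-order match only requires the reference tools to appear in the prediction in the same relative order.\n",
        "\n",
        "```python\n",
        "# Hypothetical sketches of the two computation-based trajectory\n",
        "# metrics; the real scores are computed by the evaluation service.\n",
        "def trajectory_exact_match(predicted, reference):\n",
        "    # 1.0 only when the tool-call lists are identical, in order.\n",
        "    return 1.0 if predicted == reference else 0.0\n",
        "\n",
        "\n",
        "def trajectory_in_order_match(predicted, reference):\n",
        "    # 1.0 when every reference tool appears in the prediction in the\n",
        "    # same relative order (extra tool calls are allowed).\n",
        "    it = iter(predicted)\n",
        "    return 1.0 if all(tool in it for tool in reference) else 0.0\n",
        "\n",
        "\n",
        "reference = ['set_device_info', 'get_user_permissions']\n",
        "predicted = ['set_device_info', 'check_network', 'get_user_permissions']\n",
        "print(trajectory_exact_match(predicted, reference))  # 0.0\n",
        "print(trajectory_in_order_match(predicted, reference))  # 1.0\n",
        "```"
      ]
    },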
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Lo-Sza807Ywv"
      },
      "source": [
        "#### Run an evaluation task\n",
        "\n",
        "Run a new evaluation of the agent."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "_dkb4gSn7Ywv"
      },
      "outputs": [],
      "source": [
        "EXPERIMENT_RUN = f'response-over-tools-{get_id()}'\n",
        "\n",
        "response_eval_tool_task = EvalTask(\n",
        "    dataset=eval_sample_dataset,\n",
        "    metrics=response_tool_metrics,\n",
        "    experiment=EXPERIMENT_NAME,\n",
        "    output_uri_prefix=BUCKET_URI + '/reasoning-metric-eval',\n",
        ")\n",
        "\n",
        "response_eval_tool_result = response_eval_tool_task.evaluate(\n",
        "    runnable=agent_parsed_outcome_sync,\n",
        "    experiment_run_name=EXPERIMENT_RUN,\n",
        ")\n",
        "\n",
        "display_eval_report(response_eval_tool_result)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "AtOfIFi2j88g"
      },
      "source": [
        "#### Visualize evaluation results\n",
        "\n",
        "Visualize a sample of the evaluation results."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "GH2YvXgLlLH7"
      },
      "outputs": [],
      "source": [
        "display_dataframe_rows(response_eval_tool_result.metrics_table, num_rows=3)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "tdVhCURXMdLG"
      },
      "outputs": [],
      "source": [
        "plot_bar_plot(\n",
        "    response_eval_tool_result,\n",
        "    title='Response Metrics',\n",
        "    metrics=[f'{metric}/mean' for metric in response_tool_metrics],\n",
        ")"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "name": "multi_agents_eval_with_cloud_run_deployment.ipynb",
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
