{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ur8xi4C7S06n"
      },
      "outputs": [],
      "source": [
        "# Copyright 2025 Google LLC\n",
        "#\n",
        "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "#     https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "JAPoU8Sm5E6e"
      },
      "source": [
        "# Get started with A2A on Agent Engine\n",
        "\n",
        "<table align=\"left\">\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/agents/agent_engine/tutorial_a2a_on_agent_engine.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://www.gstatic.com/pantheon/images/bigquery/welcome_page/colab-logo.svg\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fagents%2Fagent_engine%2Ftutorial_a2a_on_agent_engine.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/agents/agent_engine/tutorial_a2a_on_agent_engine.ipynb\">\n",
        "      <img src=\"https://www.gstatic.com/images/branding/gcpiconscolors/vertexai/v1/32px.svg\" alt=\"Vertex AI logo\"><br> Open in Vertex AI Workbench\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/agents/agent_engine/tutorial_a2a_on_agent_engine.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://raw.githubusercontent.com/primer/octicons/refs/heads/main/icons/mark-github-24.svg\" alt=\"GitHub logo\"><br> View on GitHub\n",
        "    </a>\n",
        "  </td>\n",
        "</table>\n",
        "\n",
        "<div style=\"clear: both;\"></div>\n",
        "\n",
        "<b>Share to:</b>\n",
        "\n",
        "<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/agents/agent_engine/tutorial_a2a_on_agent_engine.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/8/81/LinkedIn_icon.svg\" alt=\"LinkedIn logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/agents/agent_engine/tutorial_a2a_on_agent_engine.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/7/7a/Bluesky_Logo.svg\" alt=\"Bluesky logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/agents/agent_engine/tutorial_a2a_on_agent_engine.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/5a/X_icon_2.svg\" alt=\"X logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/agents/agent_engine/tutorial_a2a_on_agent_engine.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://redditinc.com/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" alt=\"Reddit logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/agents/agent_engine/tutorial_a2a_on_agent_engine.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/51/Facebook_f_logo_%282019%29.svg\" alt=\"Facebook logo\">\n",
        "</a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "84f0f73a0f76"
      },
      "source": [
        "| Authors |\n",
        "| --- |\n",
        "| [Joyce Liu](https://github.com/joycel-github) |\n",
        "| [Rajesh Velicheti](https://github.com/rvelicheti) |\n",
        "| [Ivan Nardini](https://github.com/inardini) |"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "tvgnzT1CKxrO"
      },
      "source": [
        "## Overview\n",
        "\n",
        "This notebook shows how to build, deploy, and interact with **[Agent2Agent (A2A) protocol](https://a2aprotocol.org)** agents hosted on the fully-managed, serverless **Vertex AI Agent Engine**.\n",
        "\n",
        "A2A is an open standard, like HTTP for AI agents, enabling communication and collaboration between diverse AI agents by standardizing capability discovery (via Agent Cards) and interaction for complex tasks, thereby eliminating custom integrations.\n",
        "\n",
        "Vertex AI Agent Engine is fully-managed, serverless platform for running A2A agents. It handles all the infrastructure, scaling, security, and monitoring so you can focus on your agent's logic.\n",
        "\n",
        "In this tutorial, you will:\n",
        "\n",
        "* **Build** a simple, A2A-compliant agent using the Vertex AI SDK.  \n",
        "* **Test** the agent locally to ensure it works as expected.  \n",
        "* **Deploy** the agent to Agent Engine with a single command.  \n",
        "* **Query** the managed agent endpoint using three different methods (Vertex AI SDK, A2A SDK, and direct HTTP requests). \n",
        "* **Clean up** the resources you've created."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "61RBz8LLbxCR"
      },
      "source": [
        "## Get started"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "No17Cw5hgx12"
      },
      "source": [
        "### Install required packages\n",
        "\n",
        "First, we'll install the necessary packages.\n",
        "\n",
        "- `a2a-sdk` is the foundational open-source SDK for building A2A-compliant agents.\n",
        "- `google-cloud-aiplatform` is the Vertex AI SDK, containing the new Agent Engine template we'll use for deployment."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "tFy3H3aPgx12"
      },
      "outputs": [],
      "source": [
        "%pip install --upgrade --quiet \"a2a-sdk>=0.3.4\" --force-reinstall --quiet\n",
        "%pip install --upgrade --quiet \"google-cloud-aiplatform[agent_engines, adk]>=1.112.0\" --force-reinstall --quiet"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dmWOrTJ3gx13"
      },
      "source": [
        "### Authenticate your notebook environment (Colab only)\n",
        "\n",
        "If you're running this notebook on Google Colab, run the cell below to authenticate your environment."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "NyKGtVQjgx13"
      },
      "outputs": [],
      "source": [
        "import sys\n",
        "\n",
        "if \"google.colab\" in sys.modules:\n",
        "    from google.colab import auth\n",
        "\n",
        "    auth.authenticate_user()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DF4l8DTdWgPY"
      },
      "source": [
        "### Set Google Cloud project information\n",
        "\n",
        "To get started using Vertex AI, you must have an existing Google Cloud project and [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).\n",
        "\n",
        "Learn more about [setting up a project and a development environment](https://cloud.google.com/vertex-ai/docs/start/cloud-environment)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Nqwi-5ufWp_B"
      },
      "outputs": [],
      "source": [
        "# Use the environment variable if the user doesn't provide Project ID.\n",
        "import os\n",
        "\n",
        "import vertexai\n",
        "from google.genai import types\n",
        "\n",
        "# fmt: off\n",
        "PROJECT_ID = \"[your-project-id]\"  # @param {type: \"string\", placeholder: \"[your-project-id]\", isTemplate: true}\n",
        "if not PROJECT_ID or PROJECT_ID == \"[your-project-id]\":\n",
        "    PROJECT_ID = str(os.environ.get(\"GOOGLE_CLOUD_PROJECT\"))\n",
        "\n",
        "LOCATION = \"[your-location]\"  # @param {type: \"string\", placeholder: \"[your-location]\", isTemplate: true}\n",
        "if not LOCATION or LOCATION == \"[your-location]\":\n",
        "    LOCATION = str(os.environ.get(\"GOOGLE_CLOUD_REGION\"))\n",
        "\n",
        "BUCKET_NAME = \"[your-bucket-name]\"  # @param {type: \"string\", placeholder: \"[your-bucket-name]\", isTemplate: true}\n",
        "# fmt: on\n",
        "if not BUCKET_NAME or BUCKET_NAME == \"[your-bucket-name]\":\n",
        "    BUCKET_NAME = PROJECT_ID\n",
        "\n",
        "BUCKET_URI = f\"gs://{BUCKET_NAME}\"\n",
        "\n",
        "# !gsutil mb -l $LOCATION -p $PROJECT_ID $BUCKET_URI\n",
        "\n",
        "# Initialize Vertex AI session\n",
        "vertexai.init(project=PROJECT_ID, location=LOCATION, staging_bucket=BUCKET_URI)\n",
        "\n",
        "# Initialize the Gen AI client using http_options\n",
        "# The parameter customizes how the Vertex AI client communicates with Google Cloud's backend services.\n",
        "# It's used here to access new, pre-release features.\n",
        "client = vertexai.Client(\n",
        "    project=PROJECT_ID,\n",
        "    location=LOCATION,\n",
        "    http_options=types.HttpOptions(\n",
        "        api_version=\"v1beta1\", base_url=f\"https://{LOCATION}-aiplatform.googleapis.com/\"\n",
        "    ),\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5303c05f7aa6"
      },
      "source": [
        "### Import libraries\n",
        "\n",
        "Here, we're importing all the necessary Python classes and functions we'll use throughout the notebook."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "6fc324893334"
      },
      "outputs": [],
      "source": [
        "# Helpers\n",
        "import json\n",
        "import logging\n",
        "import time\n",
        "from collections.abc import Awaitable, Callable\n",
        "from datetime import datetime\n",
        "from pprint import pprint\n",
        "from typing import Any, NoReturn\n",
        "\n",
        "import httpx\n",
        "from IPython.display import Markdown, display\n",
        "from google.auth import default\n",
        "from google.auth.transport.requests import Request as req\n",
        "from starlette.requests import Request\n",
        "\n",
        "logging.getLogger().setLevel(logging.INFO)\n",
        "\n",
        "\n",
        "# A2A\n",
        "from a2a.client import ClientConfig, ClientFactory\n",
        "from a2a.server.agent_execution import AgentExecutor, RequestContext\n",
        "from a2a.server.events import EventQueue\n",
        "from a2a.server.tasks import TaskUpdater\n",
        "from a2a.types import (\n",
        "    AgentSkill,\n",
        "    Message,\n",
        "    Part,\n",
        "    Role,\n",
        "    TaskState,\n",
        "    TextPart,\n",
        "    TransportProtocol,\n",
        "    UnsupportedOperationError,\n",
        ")\n",
        "from a2a.utils import new_agent_text_message\n",
        "from a2a.utils.errors import ServerError\n",
        "\n",
        "# ADK\n",
        "from google.adk import Runner\n",
        "from google.adk.agents import LlmAgent\n",
        "from google.adk.artifacts import InMemoryArtifactService\n",
        "from google.adk.memory.in_memory_memory_service import InMemoryMemoryService\n",
        "from google.adk.sessions import InMemorySessionService\n",
        "from google.adk.tools import google_search_tool\n",
        "from google.genai import types\n",
        "\n",
        "# Agent Engine\n",
        "from vertexai.preview.reasoning_engines import A2aAgent\n",
        "from vertexai.preview.reasoning_engines.templates.a2a import create_agent_card"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Ra6lH9ad3Vkx"
      },
      "source": [
        "### Helpers\n",
        "\n",
        "These are simple utility functions to make our lives easier, especially for local testing. They help create mock HTTP requests (`build_post_request`, `build_get_request`) and fetch authentication tokens (`get_bearer_token`)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "1zH-NZDQ5n9e"
      },
      "outputs": [],
      "source": [
        "def receive_wrapper(data: dict) -> Callable[[], Awaitable[dict]]:\n",
        "    \"\"\"Creates a mock ASGI receive callable for testing.\"\"\"\n",
        "\n",
        "    async def receive():\n",
        "        byte_data = json.dumps(data).encode(\"utf-8\")\n",
        "        return {\"type\": \"http.request\", \"body\": byte_data, \"more_body\": False}\n",
        "\n",
        "    return receive\n",
        "\n",
        "\n",
        "def build_post_request(\n",
        "    data: dict[str, Any] | None = None, path_params: dict[str, str] | None = None\n",
        ") -> Request:\n",
        "    \"\"\"Builds a mock Starlette Request object for a POST request with JSON data.\"\"\"\n",
        "    scope = {\n",
        "        \"type\": \"http\",\n",
        "        \"http_version\": \"1.1\",\n",
        "        \"headers\": [(b\"content-type\", b\"application/json\")],\n",
        "        \"app\": None,\n",
        "    }\n",
        "    if path_params:\n",
        "        scope[\"path_params\"] = path_params\n",
        "    receiver = receive_wrapper(data)\n",
        "    return Request(scope, receiver)\n",
        "\n",
        "\n",
        "def build_get_request(path_params: dict[str, str]) -> Request:\n",
        "    \"\"\"Builds a mock Starlette Request object for a GET request.\"\"\"\n",
        "    scope = {\n",
        "        \"type\": \"http\",\n",
        "        \"http_version\": \"1.1\",\n",
        "        \"query_string\": b\"\",\n",
        "        \"app\": None,\n",
        "    }\n",
        "    if path_params:\n",
        "        scope[\"path_params\"] = path_params\n",
        "\n",
        "    async def receive():\n",
        "        return {\"type\": \"http.disconnect\"}\n",
        "\n",
        "    return Request(scope, receive)\n",
        "\n",
        "\n",
        "def get_bearer_token() -> str | None:\n",
        "    \"\"\"Fetches a Google Cloud bearer token using Application Default Credentials.\"\"\"\n",
        "    try:\n",
        "        # Use an alias to avoid name collision with starlette.requests.Request\n",
        "        credentials, project = default(\n",
        "            scopes=[\"https://www.googleapis.com/auth/cloud-platform\"]\n",
        "        )\n",
        "        request = req()\n",
        "        credentials.refresh(request)\n",
        "        return credentials.token\n",
        "    except Exception as e:\n",
        "        print(f\"Error getting credentials: {e}\")\n",
        "        print(\n",
        "            \"Please ensure you have authenticated with 'gcloud auth application-default login'.\"\n",
        "        )\n",
        "    return None"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "RuFUUkq53kdo"
      },
      "source": [
        "### Build a simple ADK agent\n",
        "\n",
        "Before we can build an A2A agent, we need an agent. We create an agent using the Agent Development Kit (ADK).\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "dxG9BjjWub2S"
      },
      "outputs": [],
      "source": [
        "qna_agent = LlmAgent(\n",
        "    # The LLM model to use\n",
        "    model=\"gemini-2.5-flash\",\n",
        "    # Internal name for the agent (used in logging and sessions)\n",
        "    name=\"qa_assistant\",\n",
        "    # Human-readable description\n",
        "    description=\"I answer questions using web search.\",\n",
        "    # The system instruction that guides the agent's behavior\n",
        "    # This is crucial for getting good results\n",
        "    instruction=\"\"\"You are a helpful Q&A assistant.\n",
        "        When asked a question:\n",
        "        1. Use Google Search to find current, accurate information\n",
        "        2. Synthesize the search results into a clear answer\n",
        "        3. Cite your sources when possible\n",
        "        4. If you can't find a good answer, say so honestly\n",
        "\n",
        "        Always aim for accuracy over speculation.\"\"\",\n",
        "    # Tools available to the agent\n",
        "    # The agent will automatically use these when needed\n",
        "    tools=[google_search_tool.google_search],\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "xdoPfLSR6dN1"
      },
      "source": [
        "### Define the agent card\n",
        "\n",
        "AgentCard is an important component of the A2A protocol. Think of it as a digital business card for your agent. It's a structured JSON document that tells other agents everything they need to know to interact with yours: its name, what it does, the skills it offers, and how to call its API endpoint.\n",
        "\n",
        "We define an `AgentSkill` to describe our agent's Q&A capability. Then, we use the `create_agent_card` helper function to assemble the full card, including the agent's name, description, and the skill we just defined.\n",
        "\n",
 Note: The utility builds the card">
        "> Note: The utility builds the card within the current integration's limitations: streaming is turned off, and support for the authenticated extended card is turned on. `create_agent_card` also accepts an `agent_card` argument, which lets you supply the card as a dictionary; if the supplied dictionary doesn't meet these limitations, validation errors may appear."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "c7ibkgJe6eu-"
      },
      "outputs": [],
      "source": [
        "# Define a skill - a specific capability your agent offers\n",
        "# Agents can have multiple skills for different tasks\n",
        "qna_agent_skill = AgentSkill(\n",
        "    # Unique identifier for this skill\n",
        "    id=\"web_qa\",\n",
        "    # Human-friendly name\n",
        "    name=\"Web Q&A\",\n",
        "    # Detailed description helps clients understand when to use this skill\n",
        "    description=\"Answer questions using current web search results\",\n",
        "    # Tags for categorization and discovery\n",
        "    # These help in agent marketplaces or registries\n",
        "    tags=[\"question-answering\", \"search\", \"research\"],\n",
        "    # Examples show clients what kinds of requests work well\n",
        "    # This is especially helpful for LLM-based clients\n",
        "    examples=[\n",
        "        \"What is the current weather in Tokyo?\",\n",
        "        \"Who won the latest Nobel Prize in Physics?\",\n",
        "        \"What are the symptoms of the flu?\",\n",
        "        \"How do I make sourdough bread?\",\n",
        "    ],\n",
        "    # Optional: specify input/output modes\n",
        "    # Default is text, but could include images, files, etc.\n",
        "    input_modes=[\"text/plain\"],\n",
        "    output_modes=[\"text/plain\"],\n",
        ")\n",
        "\n",
        "# Use the helper function to create a complete Agent Card\n",
        "qna_agent_card = create_agent_card(\n",
        "    agent_name=\"Q&A Agent\",\n",
        "    description=\"A helpful assistant agent that can answer questions.\",\n",
        "    skills=[qna_agent_skill],\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "_qPvhFCAVpRz"
      },
      "source": [
        "Let's print the AgentCard we just created.\n",
        "\n",
        "Take a look at the structure. You can see key fields like name, description, skills, and the url. For now, the URL points to localhost, which is perfect for local testing. When we deploy to Agent Engine, this URL will be automatically updated to point to the managed endpoint.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "FiH7cekl7M4j"
      },
      "outputs": [],
      "source": [
        "qna_agent_card.model_dump()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "R-d-VhZi7RIM"
      },
      "source": [
        "### Define the agent executor\n",
        "\n",
        "The AgentExecutor is the bridge between the A2A protocol and our agent's internal logic. It's a class that you implement to handle incoming A2A requests. It has two main methods:\n",
        "\n",
        "*   `execute`: This is the main entry point. When a message arrives, this method gets the user's query from the RequestContext, creates a TaskUpdater - a handy A2A SDK tool for managing the task's lifecycle (e.g., setting its state to working), calls the ADK Runner to process the query with the Gemini model and Google Search tool, asynchronously waits for the final response from the agent, packages the text response into an A2A Artifact—the official output of a task and finally, marks the task as completed.\n",
        "*   `cancel`: Our simple agent doesn't support long-running, cancelable jobs, so we simply state that the operation is unsupported.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "QyOCEG8o7VNH"
      },
      "outputs": [],
      "source": [
        "class QnAAgentExecutor(AgentExecutor):\n",
        "    \"\"\"Agent Executor that bridges A2A protocol with our ADK agent.\n",
        "\n",
        "    The executor handles:\n",
        "    1. Protocol translation (A2A messages to/from agent format)\n",
        "    2. Task lifecycle management (submitted -> working -> completed)\n",
        "    3. Session management for multi-turn conversations\n",
        "    4. Error handling and recovery\n",
        "    \"\"\"\n",
        "\n",
        "    def __init__(self) -> None:\n",
        "        \"\"\"Initialize with lazy loading pattern.\"\"\"\n",
        "        self.agent = None\n",
        "        self.runner = None\n",
        "\n",
        "    def _init_agent(self) -> None:\n",
        "        \"\"\"Lazy initialization of agent resources.\"\"\"\n",
        "        if self.agent is None:\n",
        "            # Create the actual agent\n",
        "            self.agent = qna_agent\n",
        "\n",
        "            # The Runner orchestrates the agent execution\n",
        "            # It manages the LLM calls, tool execution, and state\n",
        "            self.runner = Runner(\n",
        "                app_name=self.agent.name,\n",
        "                agent=self.agent,\n",
        "                # In-memory services for simplicity\n",
        "                # In production, you might use persistent storage\n",
        "                artifact_service=InMemoryArtifactService(),\n",
        "                session_service=InMemorySessionService(),\n",
        "                memory_service=InMemoryMemoryService(),\n",
        "            )\n",
        "\n",
        "    async def execute(\n",
        "        self,\n",
        "        context: RequestContext,\n",
        "        event_queue: EventQueue,\n",
        "    ) -> None:\n",
        "        \"\"\"Process a user query and return the answer.\n",
        "\n",
        "        This method is called by the A2A protocol handler when:\n",
        "        1. A new message arrives (message/send)\n",
        "        2. A streaming request is made (message/stream)\n",
        "        \"\"\"\n",
        "        # Initialize agent\n",
        "        if self.agent is None:\n",
        "            self._init_agent()\n",
        "\n",
        "        # Extract the user's question from the protocol message\n",
        "        query = context.get_user_input()\n",
        "\n",
        "        # Create a TaskUpdater for managing task state\n",
        "        updater = TaskUpdater(event_queue, context.task_id, context.context_id)\n",
        "\n",
        "        # Update task status through its lifecycle\n",
        "        # submitted -> working -> completed/failed\n",
        "        if not context.current_task:\n",
        "            # New task - mark as submitted\n",
        "            await updater.submit()\n",
        "\n",
        "        # Mark task as working (processing)\n",
        "        await updater.start_work()\n",
        "\n",
        "        try:\n",
        "            # Get or create a session for this conversation\n",
        "            session = await self._get_or_create_session(context.context_id)\n",
        "\n",
        "            # Prepare the user message in ADK format\n",
        "            content = types.Content(role=Role.user, parts=[types.Part(text=query)])\n",
        "\n",
        "            # Run the agent asynchronously\n",
        "            # This may involve multiple LLM calls and tool uses\n",
        "            async for event in self.runner.run_async(\n",
        "                session_id=session.id,\n",
        "                user_id=\"user\",  # In production, use actual user ID\n",
        "                new_message=content,\n",
        "            ):\n",
        "                # The agent may produce multiple events\n",
        "                # We're interested in the final response\n",
        "                if event.is_final_response():\n",
        "                    # Extract the answer text from the response\n",
        "                    answer = self._extract_answer(event)\n",
        "\n",
        "                    # Add the answer as an artifact\n",
        "                    # Artifacts are the \"outputs\" or \"results\" of a task\n",
        "                    # They're separate from status messages\n",
        "                    await updater.add_artifact(\n",
        "                        [TextPart(text=answer)],\n",
        "                        name=\"answer\",  # Name helps clients identify artifacts\n",
        "                    )\n",
        "\n",
        "                    # Mark task as completed successfully\n",
        "                    await updater.complete()\n",
        "                    break\n",
        "\n",
        "                    # For intermediate events, we could send status updates\n",
        "                    # This is useful for long-running tasks\n",
        "                    # Example:\n",
        "                    # await updater.update_status(\n",
        "                    #     TaskState.working,\n",
        "                    #     message=new_agent_text_message(\"Searching the web...\")\n",
        "                    # )\n",
        "\n",
        "        except Exception as e:\n",
        "            # Errors should never pass silently (Zen of Python)\n",
        "            # Always inform the client when something goes wrong\n",
        "            await updater.update_status(\n",
        "                TaskState.failed, message=new_agent_text_message(f\"Error: {e!s}\")\n",
        "            )\n",
        "            # Re-raise for proper error handling up the stack\n",
        "            raise\n",
        "\n",
        "    async def _get_or_create_session(self, context_id: str):\n",
        "        \"\"\"Get existing session or create new one.\"\"\"\n",
        "        session = await self.runner.session_service.get_session(\n",
        "            app_name=self.runner.app_name,\n",
        "            user_id=\"user\",\n",
        "            session_id=context_id,\n",
        "        )\n",
        "\n",
        "        if not session:\n",
        "            session = await self.runner.session_service.create_session(\n",
        "                app_name=self.runner.app_name,\n",
        "                user_id=\"user\",\n",
        "                session_id=context_id,\n",
        "            )\n",
        "\n",
        "        return session\n",
        "\n",
        "    def _extract_answer(self, event) -> str:\n",
        "        \"\"\"Extract text answer from agent response.\"\"\"\n",
        "        parts = event.content.parts\n",
        "        text_parts = [part.text for part in parts if part.text]\n",
        "\n",
        "        # Join all text parts with space\n",
        "        return \" \".join(text_parts) if text_parts else \"No answer found.\"\n",
        "\n",
        "    async def cancel(\n",
        "        self, context: RequestContext, event_queue: EventQueue\n",
        "    ) -> NoReturn:\n",
        "        \"\"\"Handle task cancellation requests.\n",
        "\n",
        "        For long-running agents, this would:\n",
        "        1. Stop any ongoing processing\n",
        "        2. Clean up resources\n",
        "        3. Update task state to 'cancelled'\n",
        "        \"\"\"\n",
        "        # Inform client that cancellation isn't supported\n",
        "        raise ServerError(error=UnsupportedOperationError())"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "_sOxOLNU-Vz6"
      },
      "source": [
        "### Test the agent locally\n",
        "\n",
        "Before deploying anything to the cloud, a crucial step is to test locally. This allows for rapid iteration and debugging.\n",
        "\n",
        "The A2aAgent class from the Vertex AI SDK is our deployable unit. It wraps our AgentCard and AgentExecutor together. Calling set_up() prepares an in-memory server, allowing us to simulate calls to the agent as if it were deployed.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "o3i7QWuX-ZLI"
      },
      "outputs": [],
      "source": [
        "a2a_agent = A2aAgent(agent_card=qna_agent_card, agent_executor_builder=QnAAgentExecutor)\n",
        "a2a_agent.set_up()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ipiSUpCpfwNA"
      },
      "source": [
        "#### Get the agent card\n",
        "\n",
        "At this point, we can call the `handle_authenticated_agent_card` method on our local agent instance to simulate a client discovering our agent by requesting its \"business card.\" It returns the agent's capabilities, skills, and endpoint URL, confirming our local server is set up correctly."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "y0hvJyJbf0rK"
      },
      "outputs": [],
      "source": [
        "request = build_get_request(None)\n",
        "response = await a2a_agent.handle_authenticated_agent_card(\n",
        "    request=request, context=None\n",
        ")\n",
        "pprint(response)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "OBe-1LvofdOq"
      },
      "source": [
        "#### Send a query\n",
        "\n",
        "Finally, let's call the `on_message_send` method, which is the A2A endpoint for starting a new task.\n",
        "\n",
        "The agent immediately responds with a `Task` object in the `TASK_STATE_SUBMITTED` state. This is standard asynchronous behavior: the system acknowledges the request and gives us a `task_id` to track its progress.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "KfiPCQldfQuD"
      },
      "outputs": [],
      "source": [
        "message_data = {\n",
        "    \"message\": {\n",
        "        \"messageId\": f\"msg-{os.urandom(8).hex()}\",\n",
        "        \"content\": [{\"text\": \"What is the capital of France?\"}],\n",
        "        \"role\": \"ROLE_USER\",\n",
        "    },\n",
        "}\n",
        "request = build_post_request(message_data)\n",
        "response = await a2a_agent.on_message_send(request=request, context=None)\n",
        "pprint(response)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "XEsAZMNSmOVa"
      },
      "source": [
        "We simply extract the task_id from the previous response and print it. We'll use this ID in the next step to fetch our answer.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "33WYXxRa_AFN"
      },
      "outputs": [],
      "source": [
        "task_id = response[\"task\"][\"id\"]\n",
        "print(f\"The Task ID is: {task_id}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "FxRnBO9rfq4x"
      },
      "source": [
        "#### Get the response\n",
        "\n",
        "With the `task_id` in hand, we can now poll for the result. We call the `on_get_task` method, which retrieves the current status of our task.\n",
        "\n",
        "Since our ADK agent is fast, the task should have already moved to the `TASK_STATE_COMPLETED` state. Notice the `artifacts` field in the response. This contains the answer to our question, neatly packaged as a text part."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "dqwFGCDRfp-0"
      },
      "outputs": [],
      "source": [
        "task_data = {\"id\": task_id}\n",
        "request = build_get_request(task_data)\n",
        "response = await a2a_agent.on_get_task(request=request, context=None)\n",
        "\n",
        "for artifact in response[\"artifacts\"]:\n",
        "    # Check the first part of the artifact for a text field\n",
        "    if artifact[\"parts\"] and \"text\" in artifact[\"parts\"][0]:\n",
        "        display(Markdown(f\"**Answer**:\\n {artifact['parts'][0]['text']}\"))\n",
        "    else:\n",
        "        print(\"Could not extract text from artifact parts.\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vX2k34USO-q3"
      },
      "source": [
        "#### (Optional) Cancel a task\n",
        "\n",
        "If your agent executes long-running operations, you can always cancel the associated task as shown below:\n",
        "\n",
        "```py\n",
        "# Local agent\n",
        "task_id = response['task']['id']\n",
        "task_data = {\"id\": task_id}\n",
        "request = build_post_request(path_params=task_data)\n",
        "response = await long_running_agent.on_cancel_task(request=request, context=None)\n",
        "```"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "NmhoohK980lI"
      },
      "source": [
        "### Deploy on Agent Engine\n",
        "\n",
        "Now it's time to deploy the agent to Vertex AI Agent Engine, a fully managed, scalable platform.\n",
        "\n",
        "With a single `agent_engines.create()` call, the Vertex AI SDK performs a series of actions behind the scenes that allow you to scale your A2A agent. In order:\n",
        "\n",
        "*   It takes our local `a2a_agent` object.\n",
        "*   It serializes (pickles) the agent's code and its configuration.\n",
        "*   It inspects the environment to determine the necessary Python package requirements.\n",
        "*   It packages everything up and uploads it to the Cloud Storage bucket we configured earlier.\n",
        "*   It provisions a secure, scalable, and fully-managed serverless endpoint on Agent Engine to host our agent."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "OgrQOTEy82ku"
      },
      "outputs": [],
      "source": [
        "remote_a2a_agent = client.agent_engines.create(\n",
        "    # The actual agent to deploy\n",
        "    agent=a2a_agent,\n",
        "    config={\n",
        "        # Display name shown in the console\n",
        "        \"display_name\": a2a_agent.agent_card.name,\n",
        "        # Description for documentation\n",
        "        \"description\": a2a_agent.agent_card.description,\n",
        "        # Python dependencies needed in Agent Engine\n",
        "        \"requirements\": [\n",
        "            \"google-cloud-aiplatform[agent_engines,adk]>=1.112.0\",\n",
        "            \"a2a-sdk >= 0.3.4\",\n",
        "        ],\n",
        "        # Http options\n",
        "        \"http_options\": {\n",
        "            \"base_url\": f\"https://{LOCATION}-aiplatform.googleapis.com\",\n",
        "            \"api_version\": \"v1beta1\",\n",
        "        },\n",
        "        # Staging bucket\n",
        "        \"staging_bucket\": BUCKET_URI,\n",
        "    },\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "JH8TDYjcAoRv"
      },
      "source": [
        "### Get the remote agent card\n",
        "\n",
        "The SDK handles the authentication and API call to our deployed endpoint, and we get back the `AgentCard`. The `get` method allows you to reconnect to an existing, already-deployed agent in a new session just by using its resource name.\n",
        "\n",
        "Notice that the url field in the card now points to the public `aiplatform.googleapis.com` endpoint, not localhost."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "7aNZospgxvMc"
      },
      "outputs": [],
      "source": [
        "remote_a2a_agent_resource_name = remote_a2a_agent.api_resource.name\n",
        "config = {\"http_options\": {\"base_url\": f\"https://{LOCATION}-aiplatform.googleapis.com\"}}\n",
        "\n",
        "remote_a2a_agent = client.agent_engines.get(\n",
        "    name=remote_a2a_agent_resource_name,\n",
        "    config=config,\n",
        ")\n",
        "\n",
        "remote_a2a_agent_card = await remote_a2a_agent.handle_authenticated_agent_card()\n",
        "print(f\"Agent: {remote_a2a_agent_card.name}\")\n",
        "print(f\"URL: {remote_a2a_agent_card.url}\")\n",
        "print(f\"Skills: {[s.description for s in remote_a2a_agent_card.skills]}\")\n",
        "print(f\"Examples: {[s.examples for s in remote_a2a_agent_card.skills][0]}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "MCC26PxHAg29"
      },
      "source": [
        "### Query the remote A2A agent\n",
        "\n",
        "Our agent is now live on Vertex AI! Let's interact with it.\n",
        "\n",
        "Agent Engine and its A2A integration provide multiple ways to connect, catering to different developer needs and use cases. We'll explore three common methods:\n",
        "\n",
        "*   Via Vertex AI SDK for Python\n",
        "*   Via A2A Client\n",
        "*   Via HTTP request\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7roiTtsjEZNf"
      },
      "source": [
        "#### Via Vertex AI SDK for Python\n",
        "\n",
        "For Python developers, this is the simplest method. The `remote_a2a_agent` acts as a smart client or proxy that knows how to communicate with the deployed endpoint. This allows you to use the same methods you used for local testing to interact with the remote agent.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "SJTLCRknBDhQ"
      },
      "source": [
        "##### Send a message to start a task\n",
        "\n",
        "Again, the code is nearly identical to our local test. We call `on_message_send` with our question. The SDK sends the request to the deployed agent, which kicks off the task on Agent Engine. The response contains the `task_id` for our remote job.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Xq9tp5KZCUIv"
      },
      "outputs": [],
      "source": [
        "# Create a message\n",
        "message_data = {\n",
        "    # Unique ID for this message (for tracking)\n",
        "    \"messageId\": f\"msg-{os.urandom(8).hex()}\",\n",
        "    # Role identifies the sender (user vs agent)\n",
        "    \"role\": \"user\",\n",
        "    # The actual message content\n",
        "    # Parts can include text, files, or structured data\n",
        "    \"parts\": [{\"kind\": \"text\", \"text\": \"What is the capital of Italy?\"}],\n",
        "}\n",
        "\n",
        "# Invoke the agent\n",
        "response = await remote_a2a_agent.on_message_send(**message_data)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "cmlZutLBJWjA"
      },
      "outputs": [],
      "source": [
        "# The response contains a Task object with status and ID\n",
        "task_object = None\n",
        "for chunk in response:\n",
        "    # Look for the first chunk that carries a Task object\n",
        "    if isinstance(chunk, tuple) and len(chunk) > 0 and hasattr(chunk[0], \"id\"):\n",
        "        task_object = chunk[0]\n",
        "        break\n",
        "\n",
        "if task_object:\n",
        "    task_id = task_object.id\n",
        "    print(f\"Task started: {task_id}\")\n",
        "    print(f\"Status: {task_object.status.state}\")\n",
        "else:\n",
        "    print(\"Could not retrieve the task object from the response.\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "xNF6bIO03OaT"
      },
      "source": [
        "##### Get the response\n",
        "\n",
        "Using the `task_id` from the previous step, we call `on_get_task`.\n",
        "\n",
        "The SDK polls the Agent Engine endpoint and retrieves the completed task, including the final answer in the artifacts field. We have successfully communicated with our deployed A2A agent.\n",
        "\n",
        "> **Note**: Running this cell might take a few seconds depending on the use case."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "fc4tKDTTG8YW"
      },
      "outputs": [],
      "source": [
        "task_data = {\n",
        "    \"id\": task_id,\n",
        "    \"historyLength\": 1,  # Limit the returned conversation history to one message\n",
        "}\n",
        "\n",
        "result = None\n",
        "retries = 0\n",
        "max_retries = 30\n",
        "\n",
        "while True:\n",
        "    try:\n",
        "        # Get the task result\n",
        "        result = await remote_a2a_agent.on_get_task(**task_data)\n",
        "\n",
        "        if result.status.state in [TaskState.completed, TaskState.failed]:\n",
        "            break\n",
        "\n",
        "        print(f\"Task state: {result.status.state}. Waiting 1s...\")\n",
        "        time.sleep(1)\n",
        "\n",
        "    except Exception as e:\n",
        "        status_code = getattr(e, \"status_code\", None)\n",
        "        if status_code == 400:\n",
        "            retries += 1\n",
        "            if retries <= max_retries:\n",
        "                print(f\"Received HTTP 400. Retrying in 1s ({retries}/{max_retries})...\")\n",
        "                time.sleep(1)\n",
        "                continue\n",
        "            else:\n",
        "                print(\"Max retries reached.\")\n",
        "                raise\n",
        "        else:\n",
        "            raise"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "82WxxCLB3QGF"
      },
      "outputs": [],
      "source": [
        "# Artifacts contain the actual results\n",
        "for artifact in result.artifacts:\n",
        "    # Access the text through the 'root' attribute of the Part object\n",
        "    if (\n",
        "        artifact.parts\n",
        "        and hasattr(artifact.parts[0], \"root\")\n",
        "        and hasattr(artifact.parts[0].root, \"text\")\n",
        "    ):\n",
        "        display(Markdown(f\"**Answer**:\\n {artifact.parts[0].root.text}\"))\n",
        "    else:\n",
        "        print(\"Could not extract text from artifact parts.\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "hqLepgzAN6XB"
      },
      "source": [
        "##### (Optional) Cancel a task\n",
        "\n",
        "If your remote agent executes long-running operations, you can always cancel the associated task as shown below:\n",
        "\n",
        "```py\n",
        "# Remote agent\n",
        "task_data = {\n",
        "    \"id\": task_id,\n",
        "}\n",
        "response = await remote_a2a_agent.on_cancel_task(**task_data)\n",
        "```\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "gHS-KYKkEj34"
      },
      "source": [
        "#### Via A2A Client\n",
        "\n",
        "This method is for developers who want to use the standard, open-source a2a-sdk client directly. This is useful if you're building an application that needs to talk to various A2A agents, not just those hosted on Agent Engine, or if you prefer to work directly with the protocol's native objects.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Ax7PS6DUH69z"
      },
      "source": [
        "##### Initialize A2A Client\n",
        "\n",
        "Here, we set up the A2A SDK `ClientFactory`.\n",
        "\n",
        "We start from the `AgentCard` we fetched from our deployed agent. This is crucial because the client needs the card (especially the url) to know where to send requests.\n",
        "\n",
        "Next, we get standard Google Cloud authentication credentials. Then, we create a `ClientConfig` object, telling it to use standard HTTP transport and providing an `httpx` client pre-configured with our authentication headers.\n",
        "\n",
        "Finally, the `factory.create(remote_a2a_agent_card)` call gives us a client instance ready to communicate with our specific agent endpoint.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "oB2XJk-rJu4W"
      },
      "outputs": [],
      "source": [
        "# Get authentication token for Google Cloud\n",
        "bearer_token = get_bearer_token()\n",
        "headers = {\n",
        "    \"Authorization\": f\"Bearer {bearer_token}\",\n",
        "    \"Content-Type\": \"application/json\",\n",
        "}\n",
        "\n",
        "# Configure the A2A client factory\n",
        "# This handles the protocol implementation details\n",
        "factory = ClientFactory(\n",
        "    ClientConfig(\n",
        "        # Specify supported transport mechanisms\n",
        "        supported_transports=[TransportProtocol.http_json],\n",
        "        # Use client preferences for protocol negotiation\n",
        "        use_client_preference=True,\n",
        "        # Configure HTTP client with authentication\n",
        "        httpx_client=httpx.AsyncClient(headers=headers),\n",
        "    )\n",
        ")\n",
        "\n",
        "\n",
        "# Create a client for our specific agent\n",
        "# The client uses the Agent Card to understand capabilities\n",
        "a2a_client = factory.create(remote_a2a_agent_card)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "n3baJYxC3sV8"
      },
      "source": [
        "##### Get the agent card\n",
        "\n",
        "This is a simple call to the A2A client's `get_card()` method to verify that our connection and authentication are configured correctly. It should return the same agent card we've seen before.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "szAk6PMX3uQg"
      },
      "outputs": [],
      "source": [
        "response = await a2a_client.get_card()\n",
        "print(f\"Agent: {response.name}\")\n",
        "print(f\"URL: {response.url}\")\n",
        "print(f\"Skills: {[s.description for s in response.skills]}\")\n",
        "print(f\"Examples: {[s.examples for s in response.skills][0]}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "_7ARxhG9ER3h"
      },
      "source": [
        "##### Send a message to start a task\n",
        "\n",
        "Once again, we manually construct a `Message` object with a role, parts, and a unique `message_id`. We then call `a2a_client.send_message()`. This method returns an async generator, so we loop through it to get the resulting chunks, which contain the submitted `Task` object. From there, we extract the `task_id`.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "tDKynwAoNeCh"
      },
      "outputs": [],
      "source": [
        "# Send a message using A2A protocol objects\n",
        "message = Message(\n",
        "    message_id=f\"message-{os.urandom(8).hex()}\",\n",
        "    role=Role.user,\n",
        "    parts=[Part(root=TextPart(text=\"What is the weather in Paris today?\"))],\n",
        ")\n",
        "\n",
        "# Get response\n",
        "response = a2a_client.send_message(message)\n",
        "\n",
        "# The response is an async generator\n",
        "async for response_chunk in response:\n",
        "    # The response is often a tuple, with the Task as the first element\n",
        "    task_object = response_chunk[0]\n",
        "    task_id = task_object.id\n",
        "print(f\"Task started: {task_id}\")\n",
        "print(f\"Status: {task_object.status.state}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "SVZvep3x3j61"
      },
      "source": [
        "##### Get the response\n",
        "\n",
        "Using the `task_id`, we create a `TaskQueryParams` object and pass it to the `a2a_client.get_task()` method. This fetches the final result from the Agent Engine endpoint, demonstrating a successful interaction using the generic A2A client SDK.\n",
        "\n",
        "> **Note**: Running this cell might take a few seconds depending on the use case."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "m2cH3TaO3nUu"
      },
      "outputs": [],
      "source": [
        "# Poll for completion\n",
        "task_data = {\n",
        "    \"id\": task_id,\n",
        "    \"history_length\": 1,\n",
        "}\n",
        "response = None\n",
        "retries = 0\n",
        "max_retries = 30  # Set a reasonable maximum number of retries\n",
        "\n",
        "while True:\n",
        "    try:\n",
        "        print(f\"Attempting to get task {task_id} (Retry {retries}/{max_retries})...\")\n",
        "        response = await a2a_client.get_task(TaskQueryParams(**task_data))\n",
        "\n",
        "        # If we get a response, check the task state\n",
        "        if response.status.state == TaskState.completed:\n",
        "            print(f\"Task {task_id} completed successfully.\")\n",
        "            break  # Exit loop if task is completed\n",
        "        elif response.status.state == TaskState.failed:\n",
        "            print(f\"Task {task_id} failed with state: {response.status.state}.\")\n",
        "            break  # Exit loop if task is failed\n",
        "        else:\n",
        "            # If still in progress, wait and retry (though A2A get_task usually returns terminal state)\n",
        "            print(\n",
        "                f\"Task {task_id} is still in state: {response.status.state}. Waiting 1 second...\"\n",
        "            )\n",
        "            # Wait for a second before checking again to avoid spamming the API.\n",
        "            time.sleep(1)\n",
        "            continue\n",
        "\n",
        "    except Exception as e:\n",
        "        # Check if the exception acts like an HTTP 400 error\n",
        "        # We use getattr because we are catching the generic Exception class\n",
        "        status_code = getattr(e, \"status_code\", None)\n",
        "\n",
        "        if status_code == 400:\n",
        "            retries += 1\n",
        "            if retries < max_retries:\n",
        "                # Wait for a second before checking again to avoid spamming the API.\n",
        "                time.sleep(1)\n",
        "                continue  # Retry\n",
        "            else:\n",
        "                print(\n",
        "                    f\"Max retries ({max_retries}) reached for HTTP 400 Bad Request for task {task_id}.\"\n",
        "                )\n",
        "                raise  # Re-raise if max retries reached\n",
        "        else:\n",
        "            # For other errors, we re-raise\n",
        "            print(f\"An error occurred for task {task_id}: {e}\")\n",
        "            raise\n",
        "\n",
        "# Check if it has artifacts only if the task completed successfully\n",
        "if (\n",
        "    response\n",
        "    and response.status.state == TaskState.completed\n",
        "    and hasattr(response, \"artifacts\")\n",
        "    and response.artifacts\n",
        "):\n",
        "    for artifact in response.artifacts:\n",
        "        # Access the text through the 'root' attribute of the Part object\n",
        "        if (\n",
        "            artifact.parts\n",
        "            and hasattr(artifact.parts[0], \"root\")\n",
        "            and hasattr(artifact.parts[0].root, \"text\")\n",
        "        ):\n",
        "            result_text = artifact.parts[0].root.text\n",
        "            display(Markdown(f\"**Result**:\\n {result_text}\"))\n",
        "        else:\n",
        "            print(\"Could not extract text from artifact parts.\")\n",
        "elif response and response.status.state == TaskState.failed:\n",
        "    print(f\"Task {task_id} failed. No artifacts to display.\")\n",
        "else:\n",
        "    print(\"No artifacts found or task not completed successfully.\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "fWxu3lwPMJot"
      },
      "source": [
        "#### Via HTTP request\n",
        "\n",
        "This is the most fundamental way to interact with our agent: making direct HTTP requests.\n",
        "\n",
        "This approach is perfect for debugging with tools like `curl`, or for integrating from languages that don't have a dedicated A2A or Vertex AI SDK."
      ]
    },
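    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a sketch of what this looks like from the command line, the agent card endpoint can be fetched with `curl`. This example is illustrative only: `AGENT_URL` is a placeholder for the `url` field of the agent card fetched earlier, and it assumes the `gcloud` CLI is installed and authenticated.\n",
        "\n",
        "```bash\n",
        "# Placeholder: set this to the `url` field from your agent card\n",
        "AGENT_URL=\"<url-from-agent-card>\"\n",
        "\n",
        "# Fetch the agent card using a bearer token from gcloud\n",
        "curl -s \"${AGENT_URL}/v1/card\" \\\n",
        "  -H \"Authorization: Bearer $(gcloud auth print-access-token)\" \\\n",
        "  -H \"Content-Type: application/json\"\n",
        "```"
      ]
    },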
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "EdbbXLPX3FKO"
      },
      "source": [
        "##### Get the agent card\n",
        "\n",
        "To start, we get the endpoint URL from the agent card we fetched earlier. Next, we obtain a bearer token for authentication. Then we set the necessary `Authorization` and `Content-Type` headers. Finally, we use the `httpx` library to make the `GET` request and print the resulting JSON response.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "uv4QKLE23CdV"
      },
      "outputs": [],
      "source": [
        "# Prepare authentication headers\n",
        "headers = {\n",
        "    \"Authorization\": f\"Bearer {get_bearer_token()}\",\n",
        "    \"Content-Type\": \"application/json\",\n",
        "}\n",
        "\n",
        "# Get the agent card endpoint\n",
        "remote_agent_card_url = remote_a2a_agent_card.url\n",
        "remote_agent_card_endpoint = f\"{remote_agent_card_url}/v1/card\"\n",
        "\n",
        "try:\n",
        "    # Send the HTTP request\n",
        "    response = httpx.get(remote_agent_card_endpoint, headers=headers)\n",
        "    response.raise_for_status()\n",
        "    # Parse the response\n",
        "    result = response.json()\n",
        "    print(json.dumps(result, indent=2))\n",
        "except httpx.HTTPStatusError as e:\n",
        "    print(f\"HTTP error occurred: {e}\")\n",
        "    print(f\"Response body: {e.response.text}\")\n",
        "except httpx.RequestError as e:\n",
        "    print(f\"An error occurred while trying to send the request: {e}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "IL8jBXqZ24vz"
      },
      "source": [
        "##### Send a message to start a task\n",
        "\n",
        "Now you can make a POST request to the `/v1/message:send` endpoint. We construct the JSON payload manually, following the structure defined by the A2A protocol. We then send the request using `httpx.post` with the same headers as before. The response is the JSON object for the submitted task, from which we extract the `task_id`.\n",
        "\n",
        "> **Note**: Running this cell might take a few seconds depending on the use case."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Ty7lDm8iTsBa"
      },
      "outputs": [],
      "source": [
        "# Construct the A2A message payload\n",
        "payload = {\n",
        "    \"message\": {\n",
        "        \"messageId\": f\"msg-{os.urandom(8).hex()}\",\n",
        "        \"role\": \"1\",  # \"1\" = user, \"2\" = agent (in HTTP encoding)\n",
        "        \"content\": [{\"text\": \"Who is the current UN Secretary-General?\"}],\n",
        "    },\n",
        "    \"metadata\": {\n",
        "        # Optional metadata for tracking/debugging\n",
        "        \"source\": \"tutorial\",\n",
        "        \"timestamp\": datetime.now().isoformat(),\n",
        "        \"user_agent\": \"test_script\",\n",
        "    },\n",
        "}\n",
        "\n",
        "try:\n",
        "    # Send the HTTP request\n",
        "    response = httpx.post(\n",
        "        f\"{remote_agent_card_url}/v1/message:send\", json=payload, headers=headers\n",
        "    )\n",
        "    response.raise_for_status()\n",
        "    # Parse the response\n",
        "    result = response.json()\n",
        "    print(json.dumps(result, indent=2))\n",
        "except httpx.HTTPStatusError as e:\n",
        "    print(f\"HTTP error occurred: {e}\")\n",
        "    print(f\"Response body: {e.response.text}\")\n",
        "except httpx.RequestError as e:\n",
        "    print(f\"An error occurred while trying to send the request: {e}\")\n",
        "\n",
        "# The response contains a task\n",
        "task_id = result[\"task\"][\"id\"]\n",
        "task_status = result[\"task\"][\"status\"][\"state\"]\n",
        "print(f\"Task started: {task_id}\")\n",
        "print(f\"Status: {task_status}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "jBMR0XuT295K"
      },
      "source": [
        "##### Get the response\n",
        "\n",
        "Finally, we construct the URL for the specific task using the task_id and make a `GET` request to the `/v1/tasks/{task_id}` endpoint. The response is the full JSON object for the completed task, containing the final answer. This confirms we can interact with our agent using nothing but standard HTTP calls."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "BQRxK3SMU8u2"
      },
      "outputs": [],
      "source": [
        "# Poll for completion\n",
        "task_url = f\"{remote_agent_card_url}/v1/tasks/{task_id}\"\n",
        "print(f\"Polling for results at: {task_url}\")\n",
        "\n",
        "task_data = {}\n",
        "state = None\n",
        "retries = 0\n",
        "max_retries = 30\n",
        "\n",
        "while True:\n",
        "    try:\n",
        "        # Poll the task endpoint until it reaches a terminal state.\n",
        "        response = httpx.get(task_url, headers=headers, params={\"historyLength\": 1})\n",
        "        response.raise_for_status()\n",
        "        task_data = response.json()\n",
        "        state = task_data[\"status\"][\"state\"]\n",
        "\n",
        "        if state in [\"TASK_STATE_COMPLETED\", \"TASK_STATE_FAILED\"]:\n",
        "            print(f\"Task finished with state: {state}\")\n",
        "            break\n",
        "\n",
        "        # Wait for a second before checking again to avoid spamming the API.\n",
        "        time.sleep(1)\n",
        "\n",
        "    except httpx.HTTPStatusError as e:\n",
        "        if e.response.status_code == 400:\n",
        "            retries += 1\n",
        "            if retries <= max_retries:\n",
        "                print(\n",
        "                    f\"Received HTTP 400 Bad Request. Retrying in 1s ({retries}/{max_retries})...\"\n",
        "                )\n",
        "                time.sleep(1)\n",
        "                continue\n",
        "            else:\n",
        "                print(\"Max retries reached.\")\n",
        "                raise\n",
        "        else:\n",
        "            raise\n",
        "\n",
        "# Extract the result\n",
        "if state == \"TASK_STATE_COMPLETED\":\n",
        "    artifacts = task_data.get(\"artifacts\", [])\n",
        "    for artifact in artifacts:\n",
        "        for part in artifact[\"parts\"]:\n",
        "            if \"text\" in part:\n",
        "                display(Markdown(f\"**Result**:\\n {part['text']}\"))\n",
        "                break"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2a4e033321ad"
      },
      "source": [
        "## Cleaning up\n",
        "\n",
        "Time to clean up. Running the cell below prevents you from incurring ongoing costs for the services you've provisioned during this tutorial."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "fzbvwHQMM7HK"
      },
      "outputs": [],
      "source": [
        "delete_agent_engine = True\n",
        "\n",
        "if delete_agent_engine:\n",
        "    remote_a2a_agent.delete(force=True)"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "name": "tutorial_a2a_on_agent_engine.ipynb",
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
