{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ur8xi4C7S06n"
      },
      "outputs": [],
      "source": [
        "# Copyright 2025 Google LLC\n",
        "#\n",
        "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "#     https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "JAPoU8Sm5E6e"
      },
      "source": [
        "# Building Multi-Agent Systems with Vertex AI and Llama model\n",
        "\n",
        "<table align=\"left\">\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/agents/agent_engine/tutorial_multi_agent_systems_vertexai_llama_arize.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://www.gstatic.com/pantheon/images/bigquery/welcome_page/colab-logo.svg\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fagents%2Fagent_engine%2Ftutorial_multi_agent_systems_vertexai_llama_arize.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/agents/agent_engine/tutorial_multi_agent_systems_vertexai_llama_arize.ipynb\">\n",
        "      <img src=\"https://www.gstatic.com/images/branding/gcpiconscolors/vertexai/v1/32px.svg\" alt=\"Vertex AI logo\"><br> Open in Vertex AI Workbench\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/agents/agent_engine/tutorial_multi_agent_systems_vertexai_llama_arize.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://storage.googleapis.com/github-repo/generative-ai/logos/GitHub_Invertocat_Dark.svg\" alt=\"GitHub logo\"><br> View on GitHub\n",
        "    </a>\n",
        "  </td>\n",
        "</table>\n",
        "\n",
        "<div style=\"clear: both;\"></div>\n",
        "\n",
        "<p>\n",
        "<b>Share to:</b>\n",
        "\n",
        "<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/agents/agent_engine/tutorial_multi_agent_systems_vertexai_llama_arize.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/8/81/LinkedIn_icon.svg\" alt=\"LinkedIn logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/agents/agent_engine/tutorial_multi_agent_systems_vertexai_llama_arize.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/7/7a/Bluesky_Logo.svg\" alt=\"Bluesky logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/agents/agent_engine/tutorial_multi_agent_systems_vertexai_llama_arize.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/5a/X_icon_2.svg\" alt=\"X logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/agents/agent_engine/tutorial_multi_agent_systems_vertexai_llama_arize.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://redditinc.com/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" alt=\"Reddit logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/agents/agent_engine/tutorial_multi_agent_systems_vertexai_llama_arize.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/51/Facebook_f_logo_%282019%29.svg\" alt=\"Facebook logo\">\n",
        "</a>\n",
        "</p>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "MNJ75m_94swN"
      },
      "source": [
        "| Author |\n",
        "| --- |\n",
        "| [Ivan Nardini](https://github.com/inardini) |"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "tvgnzT1CKxrO"
      },
      "source": [
        "## Overview\n",
        "\n",
        "This tutorial demonstrates how to build a **multi-agent system** using Google Cloud's Agent Development Kit (ADK), the Agent2Agent (A2A) protocol, and the Model Context Protocol (MCP) together with Meta's Llama model.\n",
        "\n",
        "You'll create a trading analysis platform where specialized AI agents collaborate to provide balanced market insights. The system features two specialized agents: a Bear Agent (risk-focused) built with Pydantic AI, and a Bull Agent (opportunity-focused) built with Google ADK. Both agents communicate using the A2A protocol and are enhanced with custom tools through MCP.\n",
        "\n",
        "## Architecture Overview\n",
        "\n",
        "The multi-agent system consists of three main components:\n",
        "\n",
        "1. **Bear Agent** (Pydantic AI + MCP): Focuses on risk analysis, identifying downside catalysts and warning signals\n",
        "2. **Bull Agent** (ADK + MCP): Focuses on growth opportunities, bullish patterns, and upside potential\n",
        "3. **Orchestrator Agent** (ADK): Coordinates both agents to provide balanced market analysis\n",
        "\n",
        "The agents communicate using the A2A protocol, which enables standardized agent-to-agent communication with capabilities for:\n",
        "\n",
        "- Agent discovery through agent cards\n",
        "- Asynchronous task execution\n",
        "- Structured message passing\n",
        "- Transport protocol negotiation\n",
        "\n",
        "Observability is provided through Arize tracing, allowing you to monitor agent behavior, tool usage, and performance.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "61RBz8LLbxCR"
      },
      "source": [
        "## Get started"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "sfZkhtTDT94p"
      },
      "source": [
        "### Prerequisites\n",
        "\n",
        "Before starting this tutorial, ensure you have:\n",
        "\n",
        "- Arize Phoenix Cloud account ( [sign up for free](https://phoenix.arize.com/))\n",
        "- A Google Cloud project with [Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com) enabled\n",
        "- Appropriate permissions to deploy agents to Vertex AI Agent Engine\n",
        "- Basic understanding of async Python programming\n",
        "- Familiarity with AI/LLM concepts"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "No17Cw5hgx12"
      },
      "source": [
        "### Install Google Gen AI SDK and other required packages\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "tFy3H3aPgx12"
      },
      "outputs": [],
      "source": [
        "%pip install --upgrade --quiet google-cloud-aiplatform[agent_engines,adk] a2a-sdk a2a-sdk[http-server] litellm pydantic pydantic-ai fastmcp numpy python-dotenv nest-asyncio arize-phoenix openinference-instrumentation-google-adk arize-phoenix-otel openinference-instrumentation-pydantic-ai opentelemetry-sdk opentelemetry-exporter-otlp opentelemetry-api"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "lkwlKUxFkxVW"
      },
      "source": [
        "### Restart runtime (Colab only)\n",
        "\n",
        "To use the newly installed packages, you must restart the runtime on Google Colab."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "myqkvrNgk3f3"
      },
      "outputs": [],
      "source": [
        "import sys\n",
        "\n",
        "if \"google.colab\" in sys.modules:\n",
        "    import IPython\n",
        "\n",
        "    app = IPython.Application.instance()\n",
        "    app.kernel.do_shutdown(True)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dmWOrTJ3gx13"
      },
      "source": [
        "### Authenticate your notebook environment\n",
        "\n",
        "If you are running this notebook in **Google Colab**, run the cell below to authenticate your account."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "NyKGtVQjgx13"
      },
      "outputs": [],
      "source": [
        "import sys\n",
        "\n",
        "if \"google.colab\" in sys.modules:\n",
        "    from google.colab import auth\n",
        "\n",
        "    auth.authenticate_user()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DF4l8DTdWgPY"
      },
      "source": [
        "### Set Google Cloud project information\n",
        "\n",
        "Set up your Google Cloud project configuration. This establishes the environment variables needed for Vertex AI initialization and defines the Cloud Storage bucket for agent deployment artifacts. The nest_asyncio configuration enables running async code in Jupyter notebooks."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Nqwi-5ufWp_B"
      },
      "outputs": [],
      "source": [
        "import os\n",
        "\n",
        "import nest_asyncio\n",
        "import vertexai\n",
        "\n",
        "# fmt: off\n",
        "PROJECT_ID = \"[your-project-id]\"  # @param {type: \"string\", placeholder: \"[your-project-id]\", isTemplate: true}\n",
        "LOCATION = \"us-central1\" # @param {type: \"string\", placeholder: \"[your-location]\", isTemplate: true}\n",
        "# fmt: on\n",
        "\n",
        "# Create the bucket\n",
        "BUCKET_NAME = f\"{PROJECT_ID}-agent\"\n",
        "BUCKET_URI = f\"gs://{BUCKET_NAME}\"\n",
        "\n",
        "# Set environment variables for ADK\n",
        "os.environ[\"GOOGLE_CLOUD_PROJECT\"] = PROJECT_ID\n",
        "os.environ[\"GOOGLE_CLOUD_LOCATION\"] = LOCATION\n",
        "os.environ[\"GOOGLE_GENAI_USE_VERTEXAI\"] = \"TRUE\"\n",
        "\n",
        "# For notebook async support\n",
        "nest_asyncio.apply()\n",
        "\n",
        "# Initiate the client\n",
        "client = vertexai.Client(project=PROJECT_ID, location=LOCATION)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5303c05f7aa6"
      },
      "source": [
        "### Import libraries\n",
        "\n",
        "Import all necessary libraries for building the multi-agent system. These imports are organized into logical groups: standard library utilities, async and HTTP clients, data handling, MCP and A2A protocol components, Pydantic AI for the Bear agent, Google ADK for the Bull agent and orchestrator, and Vertex AI deployment utilities.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "6fc324893334"
      },
      "outputs": [],
      "source": [
        "import os\n",
        "from pathlib import Path\n",
        "import random\n",
        "import uvicorn\n",
        "import threading\n",
        "import time\n",
        "import asyncio\n",
        "import httpx\n",
        "from datetime import datetime, timedelta\n",
        "from textwrap import dedent\n",
        "import numpy as np\n",
        "import warnings\n",
        "\n",
        "warnings.filterwarnings(\"ignore\")\n",
        "\n",
        "# Pydantic agent\n",
        "from mcp.server.fastmcp import FastMCP\n",
        "from pydantic_ai import Agent\n",
        "from pydantic_ai.models.google import GoogleModel\n",
        "from pydantic_ai.providers.google import GoogleProvider\n",
        "from pydantic_ai.mcp import MCPServerStdio\n",
        "from a2a.types import AgentSkill\n",
        "from vertexai.preview.reasoning_engines.templates.a2a import create_agent_card\n",
        "from a2a.server.agent_execution import AgentExecutor, RequestContext\n",
        "from a2a.server.events import EventQueue\n",
        "from a2a.server.tasks import TaskUpdater\n",
        "from a2a.types import TaskState, TextPart, UnsupportedOperationError\n",
        "from a2a.utils import new_agent_text_message\n",
        "from a2a.utils.errors import ServerError\n",
        "\n",
        "# ADK agent\n",
        "from google.adk.models.lite_llm import litellm\n",
        "from google.adk.models.lite_llm import LiteLlm\n",
        "from google.adk.agents import LlmAgent, SequentialAgent\n",
        "from google.adk import Runner\n",
        "from google.adk.memory.in_memory_memory_service import InMemoryMemoryService\n",
        "from google.adk.sessions import InMemorySessionService\n",
        "from google.genai import types\n",
        "from google.adk.a2a.executor.a2a_agent_executor import (\n",
        "    A2aAgentExecutor,\n",
        "    A2aAgentExecutorConfig,\n",
        ")\n",
        "from google.adk.agents.remote_a2a_agent import RemoteA2aAgent\n",
        "from a2a.server.apps import A2AStarletteApplication\n",
        "from a2a.server.request_handlers import DefaultRequestHandler\n",
        "from a2a.server.tasks import InMemoryTaskStore\n",
        "from a2a.types import TransportProtocol\n",
        "from a2a.utils.constants import AGENT_CARD_WELL_KNOWN_PATH\n",
        "from google.adk.tools.agent_tool import AgentTool\n",
        "\n",
        "# Agent deployment\n",
        "import vertexai\n",
        "from vertexai import agent_engines\n",
        "from vertexai.preview.reasoning_engines import A2aAgent\n",
        "from google.auth import default\n",
        "from google.auth.credentials import Credentials\n",
        "from google.auth.transport.requests import Request as AuthRequest\n",
        "from a2a.client.client import ClientConfig as A2AClientConfig\n",
        "from a2a.client.client_factory import ClientFactory as A2AClientFactory\n",
        "from a2a.types import TransportProtocol as A2ATransport\n",
        "\n",
        "# Observability\n",
        "from phoenix.otel import register\n",
        "from opentelemetry import trace\n",
        "from opentelemetry.sdk.trace import TracerProvider\n",
        "from openinference.instrumentation.pydantic_ai import OpenInferenceSpanProcessor\n",
        "from opentelemetry.sdk.trace.export import SimpleSpanProcessor\n",
        "from openinference.instrumentation.google_adk import GoogleADKInstrumentor\n",
        "from opentelemetry.sdk.trace import TracerProvider\n",
        "from opentelemetry.sdk.trace.export import BatchSpanProcessor\n",
        "from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "HKxUedWkVKsR"
      },
      "source": [
        "### Observability Setup\n",
        "\n",
        "Configure Arize Phoenix tracing to monitor your agents' behavior, tool usage, and performance. This helps you debug issues and optimize agent interactions."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "RYBgsjR2VP57"
      },
      "outputs": [],
      "source": [
        "# Phoenix configuration\n",
        "os.environ[\"PHOENIX_API_KEY\"] = \"\"  # <---- UPDATE with your PHOENIX API Key\n",
        "os.environ[\"PHOENIX_BASE_URL\"] = (\n",
        "    \"https://app.phoenix.arize.com/s/ryoung-meta\"  # <---- UPDATE with your Phoenix hostname\n",
        ")\n",
        "os.environ[\"PHOENIX_COLLECTOR_ENDPOINT\"] = (\n",
        "    \"https://app.phoenix.arize.com/s/ryoung-meta/v1/trace\"  # <---- UPDATE with your (append /v1/trace to hostname)\n",
        ")\n",
        "os.environ[\"PHOENIX_PROJECT_NAME\"] = \"trading-agent\"\n",
        "\n",
        "# Configure the Phoenix tracer\n",
        "tracer_provider = register(\n",
        "    project_name=os.environ[\"PHOENIX_PROJECT_NAME\"],  # Default is 'default'\n",
        "    auto_instrument=True,  # Auto-instrument your app based on installed OI dependencies\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ye91SloVXFtl"
      },
      "source": [
        "## Building the Market Data Generator\n",
        "\n",
        "Before creating our agents, we need a utility to generate synthetic market data for testing.\n",
        "\n",
        "This class simulates realistic stock price movements using random walk with drift and technical indicators. The generator produces OHLCV (Open, High, Low, Close, Volume) data series and calculates common technical indicators like RSI and MACD."
      ]
    },
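    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The random walk with drift can be sketched in isolation before reading the full class. This is a minimal, self-contained illustration (the parameter values mirror the generator below but are otherwise arbitrary):\n",
        "\n",
        "```python\n",
        "import random\n",
        "\n",
        "random.seed(0)\n",
        "\n",
        "# One-step rule: price_{t+1} = price_t * (1 + drift + shock)\n",
        "price = 100.0\n",
        "series = [price]\n",
        "for _ in range(4):\n",
        "    drift = random.uniform(-0.005, 0.01)  # slight upward bias\n",
        "    shock = random.gauss(0, 0.02)  # daily volatility\n",
        "    price = max(price * (1 + drift + shock), 1.0)  # floor at $1\n",
        "    series.append(round(price, 2))\n",
        "\n",
        "print(series)\n",
        "```"
      ]
    },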
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "NMoaItJKXP1T"
      },
      "outputs": [],
      "source": [
        "class MarketDataGenerator:\n",
        "    \"\"\"Generate realistic synthetic market data.\"\"\"\n",
        "\n",
        "    def __init__(self, seed: int = 42):\n",
        "        random.seed(seed)\n",
        "        np.random.seed(seed)\n",
        "\n",
        "        # Base prices for common symbols\n",
        "        self.base_prices = {\n",
        "            \"NVDA\": 850.0,\n",
        "            \"AAPL\": 185.0,\n",
        "            \"GOOGL\": 155.0,\n",
        "            \"MSFT\": 420.0,\n",
        "            \"TSLA\": 245.0,\n",
        "        }\n",
        "\n",
        "    def generate_price_series(self, symbol: str, days: int = 30) -> list[dict]:\n",
        "        \"\"\"Generate realistic OHLCV price series.\"\"\"\n",
        "        base_price = self.base_prices.get(symbol, 100.0)\n",
        "\n",
        "        prices = [base_price]\n",
        "        for _ in range(days - 1):\n",
        "            drift = random.uniform(-0.005, 0.01)\n",
        "            shock = random.gauss(0, 0.02)\n",
        "            new_price = prices[-1] * (1 + drift + shock)\n",
        "            prices.append(max(new_price, 1.0))\n",
        "\n",
        "        # Generate OHLCV data\n",
        "        ohlcv_data = []\n",
        "        start_date = datetime.now() - timedelta(days=days)\n",
        "\n",
        "        for i, close in enumerate(prices):\n",
        "            date = start_date + timedelta(days=i)\n",
        "            intraday_range = close * random.uniform(0.01, 0.03)\n",
        "            open_price = close + random.uniform(-intraday_range / 2, intraday_range / 2)\n",
        "            high = max(open_price, close) + random.uniform(0, intraday_range)\n",
        "            low = min(open_price, close) - random.uniform(0, intraday_range)\n",
        "            volume = int(random.uniform(50_000_000, 150_000_000))\n",
        "\n",
        "            ohlcv_data.append(\n",
        "                {\n",
        "                    \"date\": date.strftime(\"%Y-%m-%d\"),\n",
        "                    \"open\": round(open_price, 2),\n",
        "                    \"high\": round(high, 2),\n",
        "                    \"low\": round(low, 2),\n",
        "                    \"close\": round(close, 2),\n",
        "                    \"volume\": volume,\n",
        "                }\n",
        "            )\n",
        "\n",
        "        return ohlcv_data\n",
        "\n",
        "    def _calculate_rsi(self, prices: list[float], period: int = 14) -> float:\n",
        "        \"\"\"Calculate RSI indicator.\"\"\"\n",
        "        if len(prices) < period + 1:\n",
        "            return 50.0\n",
        "\n",
        "        deltas = np.diff(prices[-period - 1 :])\n",
        "        gains = deltas.copy()\n",
        "        losses = deltas.copy()\n",
        "        gains[gains < 0] = 0\n",
        "        losses[losses > 0] = 0\n",
        "        losses = abs(losses)\n",
        "\n",
        "        avg_gain = np.mean(gains) if len(gains) > 0 else 0\n",
        "        avg_loss = np.mean(losses) if len(losses) > 0 else 0.01\n",
        "\n",
        "        rs = avg_gain / avg_loss if avg_loss != 0 else 100\n",
        "        rsi = 100 - (100 / (1 + rs))\n",
        "\n",
        "        return rsi\n",
        "\n",
        "    def _calculate_macd(self, prices: list[float]) -> tuple:\n",
        "        \"\"\"Calculate MACD and signal line.\"\"\"\n",
        "        if len(prices) < 26:\n",
        "            return (0.0, 0.0)\n",
        "\n",
        "        # Simplified MACD calculation\n",
        "        fast_ema = np.mean(prices[-12:])\n",
        "        slow_ema = np.mean(prices[-26:])\n",
        "        macd = fast_ema - slow_ema\n",
        "        signal = macd * 0.9\n",
        "\n",
        "        return (macd, signal)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "JHFWaZXfXWGA"
      },
      "outputs": [],
      "source": [
        "market_generator = MarketDataGenerator()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "jS2nsEYZMcKY"
      },
      "source": [
        "## Building agents"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Fn8SFxgvXYJu"
      },
      "source": [
        "### Building the Bear Agent (Risk Analysis)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "kH_exfXzXaKh"
      },
      "source": [
        "#### Creating MCP Tools for Risk Analysis\n",
        "\n",
        "MCP (Model Context Protocol) tools extend the agent's capabilities beyond basic LLM functionality. These tools enable the agent to perform specialized market analysis tasks."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "TcF2J_FbXeeZ"
      },
      "outputs": [],
      "source": [
        "# Constants for risk analysis\n",
        "RSI_OVERBOUGHT_THRESHOLD = 70\n",
        "RISK_HIGH_THRESHOLD = 60\n",
        "\n",
        "# Initialize MCP server for Bear Agent tools\n",
        "bear_mcp = FastMCP(\"bear-agent-tools\")\n",
        "\n",
        "\n",
        "@bear_mcp.tool()\n",
        "async def risk_scanner(symbol: str) -> str:\n",
        "    \"\"\"Scan for potential downside risks and warning signals.\n",
        "\n",
        "    Args:\n",
        "        symbol: Stock symbol to analyze (e.g., NVDA, AAPL)\n",
        "\n",
        "    Returns:\n",
        "        Risk analysis report\n",
        "    \"\"\"\n",
        "    # Generate market data and calculate technical indicators\n",
        "    prices = market_generator.generate_price_series(symbol, days=30)\n",
        "    current_price = prices[-1][\"close\"]\n",
        "    closes = [p[\"close\"] for p in prices]\n",
        "    rsi = market_generator._calculate_rsi(closes)\n",
        "\n",
        "    # Calculate overall risk score\n",
        "    risk_score = np.random.uniform(40, 75)\n",
        "\n",
        "    # Identify specific risks based on technical indicators\n",
        "    risks = []\n",
        "\n",
        "    if rsi > RSI_OVERBOUGHT_THRESHOLD:\n",
        "        risks.append(\n",
        "            {\n",
        "                \"risk\": \"Overbought Conditions\",\n",
        "                \"severity\": \"HIGH\",\n",
        "                \"description\": f\"RSI at {rsi:.1f} indicates potential pullback\",\n",
        "                \"impact\": \"-5% to -10%\",\n",
        "            }\n",
        "        )\n",
        "\n",
        "    if len(risks) == 0:\n",
        "        risks.append(\n",
        "            {\n",
        "                \"risk\": \"Valuation Concerns\",\n",
        "                \"severity\": \"MEDIUM\",\n",
        "                \"description\": \"P/E ratio elevated vs historical average\",\n",
        "                \"impact\": \"-10% to -15%\",\n",
        "            }\n",
        "        )\n",
        "\n",
        "    # Format comprehensive risk report\n",
        "    separator = \"=\" * 40\n",
        "    risk_level = \"HIGH\" if risk_score > RISK_HIGH_THRESHOLD else \"MEDIUM\"\n",
        "\n",
        "    result = dedent(f\"\"\"\\\n",
        "        RISK ANALYSIS FOR {symbol}\n",
        "        {separator}\n",
        "        Current Price: ${current_price}\n",
        "        Risk Score: {risk_score:.1f}/100\n",
        "        Risk Level: {risk_level}\n",
        "\n",
        "        Identified Risks:\n",
        "    \"\"\")\n",
        "\n",
        "    for risk in risks:\n",
        "        result += f\"\\n[{risk['severity']}] {risk['risk']}\"\n",
        "        result += f\"\\n   {risk['description']}\"\n",
        "        result += f\"\\n   Potential Impact: {risk['impact']}\\n\"\n",
        "\n",
        "    return result\n",
        "\n",
        "\n",
        "@bear_mcp.tool()\n",
        "async def divergence_detector(symbol: str) -> str:\n",
        "    \"\"\"Detect bearish divergences and technical weakness.\n",
        "\n",
        "    Args:\n",
        "        symbol: Stock symbol to analyze\n",
        "\n",
        "    Returns:\n",
        "        Divergence analysis report\n",
        "    \"\"\"\n",
        "    prices = market_generator.generate_price_series(symbol, days=30)\n",
        "    closes = [p[\"close\"] for p in prices]\n",
        "    rsi = market_generator._calculate_rsi(closes)\n",
        "\n",
        "    divergence_score = np.random.uniform(30, 70)\n",
        "    separator = \"=\" * 40\n",
        "\n",
        "    result = dedent(f\"\"\"\\\n",
        "        DIVERGENCE ANALYSIS FOR {symbol}\n",
        "        {separator}\n",
        "        Divergence Score: {divergence_score:.1f}/100\n",
        "        RSI: {rsi:.1f}\n",
        "\n",
        "        Detected Divergences:\n",
        "        • RSI Bearish Divergence\n",
        "           Price making highs but RSI not confirming\n",
        "           Confidence: 75%\n",
        "\n",
        "        • Volume Divergence\n",
        "           Declining volume on advances\n",
        "           Confidence: 70%\n",
        "    \"\"\")\n",
        "\n",
        "    return result\n",
        "\n",
        "\n",
        "@bear_mcp.tool()\n",
        "async def exit_signal_monitor(symbol: str) -> str:\n",
        "    \"\"\"Monitor for distribution patterns and exit signals.\n",
        "\n",
        "    Args:\n",
        "        symbol: Stock symbol to analyze\n",
        "\n",
        "    Returns:\n",
        "        Exit signal analysis\n",
        "    \"\"\"\n",
        "    prices = market_generator.generate_price_series(symbol, days=30)\n",
        "    current_price = prices[-1][\"close\"]\n",
        "\n",
        "    # Stop loss levels\n",
        "    stop_aggressive = round(current_price * 0.95, 2)\n",
        "    stop_moderate = round(current_price * 0.93, 2)\n",
        "    separator = \"=\" * 40\n",
        "\n",
        "    result = dedent(f\"\"\"\\\n",
        "        EXIT SIGNAL MONITOR FOR {symbol}\n",
        "        {separator}\n",
        "        Current Price: ${current_price}\n",
        "\n",
        "        Exit Signals:\n",
        "        [MED] Distribution Pattern\n",
        "           Heavy selling on up days\n",
        "           Action: Reduce position size\n",
        "\n",
        "        Stop Loss Recommendations:\n",
        "           Aggressive: ${stop_aggressive} (-5%)\n",
        "           Moderate: ${stop_moderate} (-7%)\n",
        "    \"\"\")\n",
        "\n",
        "    return result"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "iOyQNRr8XiKh"
      },
      "source": [
        "#### Creating the Bear Agent with Pydantic AI\n",
        "\n",
        "Now we instantiate the Bear Agent using Pydantic AI. The agent is configured with a system prompt that establishes its personality as a cautious risk analyst focused on capital preservation.\n",
        "\n",
        "The `instrument=True` parameter enables automatic tracing to Phoenix, allowing you to monitor the agent's behavior."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "F5mTGIfgXmiX"
      },
      "outputs": [],
      "source": [
        "# Define the Bear Agent's personality and role\n",
        "bear_system_prompt = (\n",
        "    \"You are a cautious risk analyst focused on identifying potential downside catalysts, \"\n",
        "    \"warning signals, and protective strategies. You prioritize capital preservation. \"\n",
        "    \"Use the available MCP tools to analyze market risks comprehensively.\"\n",
        ")\n",
        "\n",
        "# Configure Gemini model for Vertex AI\n",
        "provider = GoogleProvider(vertexai=True)\n",
        "model = GoogleModel(\"gemini-2.5-flash\", provider=provider)\n",
        "\n",
        "# Create Pydantic AI agent with MCP tools and tracing enabled\n",
        "bear_agent = Agent(\n",
        "    model=model,\n",
        "    system_prompt=bear_system_prompt,\n",
        "    tools=[risk_scanner, divergence_detector, exit_signal_monitor],\n",
        "    retries=2,\n",
        "    instrument=True,  # Enable automatic tracing\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "EMcCKRMbXqK2"
      },
      "source": [
        "#### Testing the Bear Agent Locally\n",
        "\n",
        "Before deploying, test the agent locally to verify it works correctly."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "KKF9gx51Xs6J"
      },
      "outputs": [],
      "source": [
        "async def test_bear_agent():\n",
        "    # Test query for risk analysis\n",
        "    query = \"Analyze the risks for NVDA stock\"\n",
        "    print(f\"Query: {query}\")\n",
        "    print(\"-\" * 60)\n",
        "\n",
        "    # Run agent and get response\n",
        "    result = await bear_agent.run(query)\n",
        "    print(\"Agent Response:\\n\")\n",
        "    print(result.output)\n",
        "\n",
        "    # Give Phoenix a moment to receive the data\n",
        "    await asyncio.sleep(2)\n",
        "\n",
        "\n",
        "# Execute the test\n",
        "await test_bear_agent()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "nv4N1pruugXN"
      },
      "source": [
        "### Building the Bull Agent (Opportunity Analysis)\n",
        "\n",
        "The Bull Agent focuses on identifying growth opportunities and bullish signals. Built with Google ADK and Llama models through LiteLLM routing, this agent provides analysis of breakout patterns, momentum signals, and optimal entry points."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "VhVfeNNeumaL"
      },
      "source": [
        "#### Creating MCP Tools for Opportunity Analysis\n",
        "\n",
        "These tools enable the Bull Agent to identify bullish market conditions. Each tool focuses on a different aspect of opportunity analysis. In order, you have:\n",
        "\n",
        "- The breakout pattern finder identifies bullish technical patterns like resistance breakouts and ascending triangles. These patterns suggest potential upward price movement with specific price targets based on pattern characteristics.\n",
        "\n",
        "- The momentum screener evaluates trend strength and identifies stocks with strong upward momentum. It considers multiple factors including RSI levels, MACD crossovers, volume patterns, and overall trend structure to assess momentum quality.\n",
        "\n",
        "- The entry signal detector identifies optimal entry points for long positions. It evaluates support levels, calculates appropriate stop-loss placement, and determines position sizing based on entry quality.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "M9Y5xZnWukcp"
      },
      "outputs": [],
      "source": [
        "# Initialize MCP server for Bull agent\n",
        "bull_mcp = FastMCP(\"bull-agent-tools\")\n",
        "\n",
        "\n",
        "@bull_mcp.tool()\n",
        "async def find_breakout_patterns(symbol: str) -> str:\n",
        "    \"\"\"Identify bullish breakout patterns and technical setups.\n",
        "\n",
        "    Args:\n",
        "        symbol: Stock symbol to analyze\n",
        "\n",
        "    Returns:\n",
        "        Breakout analysis report\n",
        "    \"\"\"\n",
        "    # Generate prices\n",
        "    prices = market_generator.generate_price_series(symbol, days=30)\n",
        "    current_price = prices[-1][\"close\"]\n",
        "\n",
        "    # Calculate the score\n",
        "    breakout_score = np.random.uniform(55, 85)\n",
        "\n",
        "    result = f\"\"\"\n",
        "BREAKOUT PATTERN ANALYSIS FOR {symbol}\n",
        "{\"=\" * 40}\n",
        "Current Price: ${current_price}\n",
        "Breakout Score: {breakout_score:.1f}/100\n",
        "Momentum: {\"STRONG\" if breakout_score > 70 else \"MODERATE\"}\n",
        "\n",
        "Bullish Patterns:\n",
        "[HIGH] Resistance Breakout\n",
        "   Price breaking above key resistance\n",
        "   Target: ${round(current_price * 1.08, 2)} (+8%)\n",
        "\n",
        "[MED] Ascending Triangle\n",
        "   Higher lows with resistance test\n",
        "   Target: ${round(current_price * 1.10, 2)} (+10%)\n",
        "\"\"\"\n",
        "\n",
        "    return result\n",
        "\n",
        "\n",
        "@bull_mcp.tool()\n",
        "async def momentum_screener(symbol: str) -> str:\n",
        "    \"\"\"Screen for stocks with strong upward momentum.\n",
        "\n",
        "    Args:\n",
        "        symbol: Stock symbol to analyze\n",
        "\n",
        "    Returns:\n",
        "        Momentum analysis report\n",
        "    \"\"\"\n",
        "    # Generate prices\n",
        "    prices = market_generator.generate_price_series(symbol, days=30)\n",
        "    closes = [p[\"close\"] for p in prices]\n",
        "\n",
        "    # Calculate kpis\n",
        "    rsi = market_generator._calculate_rsi(closes)\n",
        "    momentum_score = np.random.uniform(60, 90)\n",
        "\n",
        "    result = f\"\"\"\n",
        "MOMENTUM ANALYSIS FOR {symbol}\n",
        "{\"=\" * 40}\n",
        "Momentum Score: {momentum_score:.1f}/100\n",
        "Rating: {\"VERY STRONG\" if momentum_score > 80 else \"STRONG\"}\n",
        "Trend: BULLISH\n",
        "\n",
        "Momentum Factors:\n",
        "• Healthy RSI at {rsi:.1f} - room to run\n",
        "• MACD bullish crossover confirmed\n",
        "• Volume surge - institutions accumulating\n",
        "• Uptrend pattern intact\n",
        "\"\"\"\n",
        "\n",
        "    return result\n",
        "\n",
        "\n",
        "@bull_mcp.tool()\n",
        "async def entry_signal_detector(symbol: str) -> str:\n",
        "    \"\"\"Detect optimal entry points for long positions.\n",
        "\n",
        "    Args:\n",
        "        symbol: Stock symbol to analyze\n",
        "\n",
        "    Returns:\n",
        "        Entry signal analysis\n",
        "    \"\"\"\n",
        "    # Generate prices\n",
        "    prices = market_generator.generate_price_series(symbol, days=30)\n",
        "\n",
        "    current_price = prices[-1][\"close\"]\n",
        "    entry_quality = np.random.uniform(60, 90)\n",
        "\n",
        "    result = f\"\"\"\n",
        "ENTRY SIGNAL ANALYSIS FOR {symbol}\n",
        "{\"=\" * 40}\n",
        "Current Price: ${current_price}\n",
        "Entry Quality: {entry_quality:.1f}/100\n",
        "\n",
        "Entry Signals:\n",
        "[HIGH] Pullback to Support\n",
        "   Quality entry at ${round(current_price * 0.98, 2)}\n",
        "   Stop Loss: ${round(current_price * 0.95, 2)}\n",
        "   Risk/Reward: 1:3\n",
        "\n",
        "Position Sizing:\n",
        "   Suggested: {\"75-100%\" if entry_quality > 80 else \"50-75%\"} of planned position\n",
        "\"\"\"\n",
        "\n",
        "    return result"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "YsWQgTbwuzp5"
      },
      "source": [
        "#### Creating the Bull Agent with Google ADK\n",
        "\n",
        "The Bull Agent is created using Google ADK's LlmAgent class. We configure LiteLLM to route requests to Llama model on Vertex AI, demonstrating how to use non-Google models with the ADK framework. The agent's system instruction establishes its optimistic personality focused on growth opportunities."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "fBFgswNIuuCR"
      },
      "outputs": [],
      "source": [
        "# Model configuration\n",
        "llama_model = \"vertex_ai/meta/llama-3.3-70b-instruct-maas\"\n",
        "\n",
        "# Configure LiteLLM to route requests to Vertex AI\n",
        "litellm.vertex_project = os.environ.get(\"GOOGLE_CLOUD_PROJECT\")\n",
        "litellm.vertex_location = os.environ.get(\"GOOGLE_CLOUD_REGION\")\n",
        "\n",
        "# Define the Bull Agent's personality and role\n",
        "bull_system_instruction = (\n",
        "    \"You are an optimistic market analyst focused on identifying growth opportunities, \"\n",
        "    \"bullish patterns, and upside catalysts. You emphasize potential gains and momentum. \"\n",
        "    \"Use the available tools to analyze market opportunities comprehensively.\"\n",
        ")\n",
        "\n",
        "# Create ADK agent\n",
        "bull_agent = LlmAgent(\n",
        "    name=\"bull_agent\",\n",
        "    model=LiteLlm(llama_model),  # Using Llama 3.3 for opportunity analysis\n",
        "    description=\"Optimistic analyst focused on growth opportunities and bullish signals.\",\n",
        "    instruction=bull_system_instruction,\n",
        "    tools=[find_breakout_patterns, momentum_screener, entry_signal_detector],\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "AXtZ2kKivLVs"
      },
      "source": [
        "#### Testing the Bull Agent Locally\n",
        "\n",
        "Test the Bull Agent using the ADK Runner, which manages the agent's execution lifecycle. The Runner handles session management, enabling conversation context to persist across multiple messages. The test demonstrates the full execution flow including tool calls and response formatting."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "B0WWHGHZvSs9"
      },
      "outputs": [],
      "source": [
        "async def test_bull_agent():\n",
        "    \"\"\"Test the Bull Agent locally using ADK Runner.\"\"\"\n",
        "    # Create ADK Runner to manage agent execution\n",
        "    runner = Runner(\n",
        "        app_name=bull_agent.name,\n",
        "        agent=bull_agent,\n",
        "        session_service=InMemorySessionService(),  # Manages conversations\n",
        "    )\n",
        "\n",
        "    # Create a session for the conversation\n",
        "    session = await runner.session_service.create_session(\n",
        "        app_name=bull_agent.name,\n",
        "        user_id=\"test_user\",\n",
        "        session_id=\"test_session\",\n",
        "    )\n",
        "\n",
        "    # Test query for opportunity analysis\n",
        "    query = \"What are the growth opportunities for AAPL stock?\"\n",
        "    print(f\"Query: {query}\")\n",
        "    print(\"-\" * 60)\n",
        "\n",
        "    # Format message in ADK/Gemini format\n",
        "    content = types.Content(role=\"user\", parts=[types.Part(text=query)])\n",
        "\n",
        "    # Run agent and capture final response\n",
        "    final_response = None\n",
        "    async for event in runner.run_async(\n",
        "        session_id=session.id, user_id=\"test_user\", new_message=content\n",
        "    ):\n",
        "        # Look for the final response event\n",
        "        if event.is_final_response():\n",
        "            final_response = event\n",
        "            break\n",
        "\n",
        "    # Extract and display the response\n",
        "    if final_response and final_response.content:\n",
        "        print(\"Agent Response:\\n\")\n",
        "        for part in final_response.content.parts:\n",
        "            if hasattr(part, \"text\") and part.text:\n",
        "                print(part.text)\n",
        "\n",
        "\n",
        "# Execute the test\n",
        "await test_bull_agent()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "d0dCPKaoKoBw"
      },
      "source": [
        "## Packaging Agents for A2A Deployment on Agent Engine\n",
        "\n",
        "To deploy the Bear Agent to Vertex AI Agent Engine, we need to package our agent."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "SvMdWsQsSTL9"
      },
      "source": [
        "### Package the Bear Agent (Risk Analysis)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Sgh6iAitq3QH"
      },
      "source": [
        "#### Packaging MCP tools\n",
        "\n",
        "We start with preparing the MCP tools as a Python module. This involves creating a directory structure with the market data generator, tool definitions, and an MCP server that can be spawned as a subprocess.\n",
        "\n",
        "First, create the package directory structure.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "12FsyKN-1PK6"
      },
      "outputs": [],
      "source": [
        "# Create directory structure for MCP tools\n",
        "mcp_tools_dir = Path(\"mcp_tools\")\n",
        "mcp_tools_dir.mkdir(exist_ok=True)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "pbZpjQWeesYM"
      },
      "source": [
        "Create the package initialization file to make it importable.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "2Pes2mPt3AN0"
      },
      "outputs": [],
      "source": [
        "%%writefile $mcp_tools_dir/__init__.py\n",
        "\"\"\"MCP Tools package for trading agents.\"\"\"\n",
        "\n",
        "from .market_data import MarketDataGenerator\n",
        "\n",
        "__all__ = [\"MarketDataGenerator\"]"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "t4W2CB_ze1mR"
      },
      "source": [
        "Write the market data generator as a standalone module.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "hMKEXqDa1cne"
      },
      "outputs": [],
      "source": [
        "%%writefile $mcp_tools_dir/market_data.py\n",
        "\"\"\"Market Data Generator - Creates synthetic market data for testing.\"\"\"\n",
        "\n",
        "import random\n",
        "import numpy as np\n",
        "from datetime import datetime, timedelta\n",
        "from typing import List, Dict\n",
        "\n",
        "\n",
        "class MarketDataGenerator:\n",
        "    \"\"\"Generate realistic synthetic market data.\"\"\"\n",
        "\n",
        "    def __init__(self, seed: int = 42):\n",
        "        random.seed(seed)\n",
        "        np.random.seed(seed)\n",
        "\n",
        "        # Base prices for common symbols\n",
        "        self.base_prices = {\n",
        "            \"NVDA\": 850.0,\n",
        "            \"AAPL\": 185.0,\n",
        "            \"GOOGL\": 155.0,\n",
        "            \"MSFT\": 420.0,\n",
        "            \"TSLA\": 245.0,\n",
        "        }\n",
        "\n",
        "    def generate_price_series(self, symbol: str, days: int = 30) -> List[Dict]:\n",
        "        \"\"\"Generate realistic OHLCV price series.\"\"\"\n",
        "        base_price = self.base_prices.get(symbol, 100.0)\n",
        "\n",
        "        prices = [base_price]\n",
        "        for _ in range(days - 1):\n",
        "            drift = random.uniform(-0.005, 0.01)\n",
        "            shock = random.gauss(0, 0.02)\n",
        "            new_price = prices[-1] * (1 + drift + shock)\n",
        "            prices.append(max(new_price, 1.0))\n",
        "\n",
        "        # Generate OHLCV data\n",
        "        ohlcv_data = []\n",
        "        start_date = datetime.now() - timedelta(days=days)\n",
        "\n",
        "        for i, close in enumerate(prices):\n",
        "            date = start_date + timedelta(days=i)\n",
        "            intraday_range = close * random.uniform(0.01, 0.03)\n",
        "            open_price = close + random.uniform(-intraday_range/2, intraday_range/2)\n",
        "            high = max(open_price, close) + random.uniform(0, intraday_range)\n",
        "            low = min(open_price, close) - random.uniform(0, intraday_range)\n",
        "            volume = int(random.uniform(50_000_000, 150_000_000))\n",
        "\n",
        "            ohlcv_data.append({\n",
        "                \"date\": date.strftime(\"%Y-%m-%d\"),\n",
        "                \"open\": round(open_price, 2),\n",
        "                \"high\": round(high, 2),\n",
        "                \"low\": round(low, 2),\n",
        "                \"close\": round(close, 2),\n",
        "                \"volume\": volume\n",
        "            })\n",
        "\n",
        "        return ohlcv_data\n",
        "\n",
        "    def _calculate_rsi(self, prices: List[float], period: int = 14) -> float:\n",
        "        \"\"\"Calculate RSI indicator.\"\"\"\n",
        "        if len(prices) < period + 1:\n",
        "            return 50.0\n",
        "\n",
        "        deltas = np.diff(prices[-period-1:])\n",
        "        gains = deltas.copy()\n",
        "        losses = deltas.copy()\n",
        "        gains[gains < 0] = 0\n",
        "        losses[losses > 0] = 0\n",
        "        losses = abs(losses)\n",
        "\n",
        "        avg_gain = np.mean(gains) if len(gains) > 0 else 0\n",
        "        avg_loss = np.mean(losses) if len(losses) > 0 else 0.01\n",
        "\n",
        "        rs = avg_gain / avg_loss if avg_loss != 0 else 100\n",
        "        rsi = 100 - (100 / (1 + rs))\n",
        "\n",
        "        return rsi\n",
        "\n",
        "    def _calculate_macd(self, prices: List[float]) -> tuple:\n",
        "        \"\"\"Calculate MACD and signal line.\"\"\"\n",
        "        if len(prices) < 26:\n",
        "            return (0.0, 0.0)\n",
        "\n",
        "        # Simplified MACD calculation\n",
        "        fast_ema = np.mean(prices[-12:])\n",
        "        slow_ema = np.mean(prices[-26:])\n",
        "        macd = fast_ema - slow_ema\n",
        "        signal = macd * 0.9  # Simplified signal\n",
        "\n",
        "        return (macd, signal)"
      ]
    },
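    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As an optional sanity check, import the packaged module and confirm it produces plausible data. This sketch assumes the `mcp_tools` directory was written to the current working directory so it is importable as a package."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Sanity check: the packaged module should behave like the in-notebook\n",
        "# generator (deterministic, since it is seeded with 42 by default)\n",
        "from mcp_tools.market_data import MarketDataGenerator\n",
        "\n",
        "gen = MarketDataGenerator()\n",
        "series = gen.generate_price_series(\"NVDA\", days=30)\n",
        "print(f\"Generated {len(series)} daily bars; last close: ${series[-1]['close']}\")\n",
        "print(f\"RSI(14): {gen._calculate_rsi([p['close'] for p in series]):.1f}\")"
      ]
    },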
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "lIZtn4Dge6Tz"
      },
      "source": [
        "Create the Bear MCP tools module that will be used by the deployed agent."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "s9Olil692t30"
      },
      "outputs": [],
      "source": [
        "%%writefile $mcp_tools_dir/bear_tools.py\n",
        "\"\"\"Bear Agent MCP Tools - Risk analysis tools.\"\"\"\n",
        "\n",
        "import numpy as np\n",
        "from mcp.server.fastmcp import FastMCP\n",
        "from market_data import MarketDataGenerator\n",
        "\n",
        "# Initialize MCP server\n",
        "mcp = FastMCP(\"bear-agent-tools\")\n",
        "\n",
        "# Create global market data generator\n",
        "market_generator = MarketDataGenerator()\n",
        "\n",
        "\n",
        "@mcp.tool()\n",
        "async def risk_scanner(symbol: str) -> str:\n",
        "    \"\"\"Scan for potential downside risks and warning signals.\"\"\"\n",
        "    prices = market_generator.generate_price_series(symbol, days=30)\n",
        "    current_price = prices[-1][\"close\"]\n",
        "    closes = [p[\"close\"] for p in prices]\n",
        "    rsi = market_generator._calculate_rsi(closes)\n",
        "\n",
        "    risk_score = np.random.uniform(40, 75)\n",
        "\n",
        "    risks = []\n",
        "    if rsi > 70:\n",
        "        risks.append({\n",
        "            \"risk\": \"Overbought Conditions\",\n",
        "            \"severity\": \"HIGH\",\n",
        "            \"description\": f\"RSI at {rsi:.1f} indicates potential pullback\",\n",
        "            \"impact\": \"-5% to -10%\",\n",
        "        })\n",
        "\n",
        "    if len(risks) == 0:\n",
        "        risks.append({\n",
        "            \"risk\": \"Valuation Concerns\",\n",
        "            \"severity\": \"MEDIUM\",\n",
        "            \"description\": \"P/E ratio elevated vs historical average\",\n",
        "            \"impact\": \"-10% to -15%\",\n",
        "        })\n",
        "\n",
        "    result = f\"\"\"\n",
        "RISK ANALYSIS FOR {symbol}\n",
        "{'='*40}\n",
        "Current Price: ${current_price}\n",
        "Risk Score: {risk_score:.1f}/100\n",
        "Risk Level: {\"HIGH\" if risk_score > 60 else \"MEDIUM\"}\n",
        "\n",
        "Identified Risks:\n",
        "\"\"\"\n",
        "\n",
        "    for risk in risks:\n",
        "        result += f\"\\\\n[{risk['severity']}] {risk['risk']}\"\n",
        "        result += f\"\\\\n   {risk['description']}\"\n",
        "        result += f\"\\\\n   Potential Impact: {risk['impact']}\\\\n\"\n",
        "\n",
        "    return result\n",
        "\n",
        "\n",
        "@mcp.tool()\n",
        "async def divergence_detector(symbol: str) -> str:\n",
        "    \"\"\"Detect bearish divergences and technical weakness.\"\"\"\n",
        "    prices = market_generator.generate_price_series(symbol, days=30)\n",
        "    closes = [p[\"close\"] for p in prices]\n",
        "    rsi = market_generator._calculate_rsi(closes)\n",
        "\n",
        "    divergence_score = np.random.uniform(30, 70)\n",
        "\n",
        "    result = f\"\"\"\n",
        "DIVERGENCE ANALYSIS FOR {symbol}\n",
        "{'='*40}\n",
        "Divergence Score: {divergence_score:.1f}/100\n",
        "RSI: {rsi:.1f}\n",
        "\n",
        "Detected Divergences:\n",
        "• RSI Bearish Divergence\n",
        "   Price making highs but RSI not confirming\n",
        "   Confidence: 75%\n",
        "\n",
        "• Volume Divergence\n",
        "   Declining volume on advances\n",
        "   Confidence: 70%\n",
        "\"\"\"\n",
        "\n",
        "    return result\n",
        "\n",
        "\n",
        "@mcp.tool()\n",
        "async def exit_signal_monitor(symbol: str) -> str:\n",
        "    \"\"\"Monitor for distribution patterns and exit signals.\"\"\"\n",
        "    prices = market_generator.generate_price_series(symbol, days=30)\n",
        "    current_price = prices[-1][\"close\"]\n",
        "\n",
        "    stop_aggressive = round(current_price * 0.95, 2)\n",
        "    stop_moderate = round(current_price * 0.93, 2)\n",
        "\n",
        "    result = f\"\"\"\n",
        "EXIT SIGNAL MONITOR FOR {symbol}\n",
        "{'='*40}\n",
        "Current Price: ${current_price}\n",
        "\n",
        "Exit Signals:\n",
        "[MED] Distribution Pattern\n",
        "   Heavy selling on up days\n",
        "   Action: Reduce position size\n",
        "\n",
        "Stop Loss Recommendations:\n",
        "   Aggressive: ${stop_aggressive} (-5%)\n",
        "   Moderate: ${stop_moderate} (-7%)\n",
        "\"\"\"\n",
        "\n",
        "    return result"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "hTtTTpK0e9og"
      },
      "source": [
        "Create the MCP server entry point that will be spawned as a subprocess."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "43Pmj4SIol-G"
      },
      "outputs": [],
      "source": [
        "%%writefile $mcp_tools_dir/bear_mcp_server.py\n",
        "\"\"\"Bear Agent MCP Server - Risk-focused market analysis tools.\"\"\"\n",
        "\n",
        "# Import the mcp instance with all registered tools\n",
        "from bear_tools import mcp\n",
        "\n",
        "if __name__ == \"__main__\":\n",
        "    # Run the MCP server with STDIO transport\n",
        "    mcp.run(transport=\"stdio\")"
      ]
    },
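    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Optionally, verify the server starts and registers its tools by connecting to it over STDIO. This is a minimal sketch using the MCP Python SDK client: it spawns `bear_mcp_server.py` as a subprocess, initializes a session, and lists the advertised tools."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "from mcp import ClientSession, StdioServerParameters\n",
        "from mcp.client.stdio import stdio_client\n",
        "\n",
        "\n",
        "async def list_bear_tools():\n",
        "    \"\"\"Spawn the Bear MCP server and print its registered tools.\"\"\"\n",
        "    params = StdioServerParameters(\n",
        "        command=\"python\", args=[\"mcp_tools/bear_mcp_server.py\"]\n",
        "    )\n",
        "    async with stdio_client(params) as (read, write):\n",
        "        async with ClientSession(read, write) as session:\n",
        "            await session.initialize()\n",
        "            tools = await session.list_tools()\n",
        "            for tool in tools.tools:\n",
        "                print(f\"- {tool.name}\")\n",
        "\n",
        "\n",
        "await list_bear_tools()"
      ]
    },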
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "QgniWaTLtZY3"
      },
      "source": [
        "#### Creating the Bear Agent Card\n",
        "\n",
        "An Agent Card is a standardized descriptor in the A2A protocol that advertises an agent's capabilities. The card defines the agent's skills, which are discrete capabilities with examples and tags for discovery. Other agents can query this card to understand what the Bear Agent can do before sending requests."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "7DAS4YAotsY6"
      },
      "outputs": [],
      "source": [
        "def create_bear_agent_card():\n",
        "    \"\"\"Create A2A Agent Card for Bear Risk Analyst.\"\"\"\n",
        "    # Define the agent's capabilities as A2A skills\n",
        "    skills = [\n",
        "        AgentSkill(\n",
        "            id=\"risk_analysis\",\n",
        "            name=\"Risk Factor Scanner\",\n",
        "            description=\"Identifies potential downside catalysts and risk factors\",\n",
        "            tags=[\"Risk-Analysis\", \"Market-Analysis\"],\n",
        "            examples=[\n",
        "                \"What are the key risks for NVDA?\",\n",
        "                \"Analyze downside catalysts for tech stocks\",\n",
        "            ],\n",
        "        ),\n",
        "        AgentSkill(\n",
        "            id=\"divergence_detection\",\n",
        "            name=\"Divergence Detection\",\n",
        "            description=\"Finds bearish divergences and technical weakness signals\",\n",
        "            tags=[\"Technical-Analysis\", \"Divergence\"],\n",
        "            examples=[\n",
        "                \"Find bearish divergences in AAPL\",\n",
        "            ],\n",
        "        ),\n",
        "        AgentSkill(\n",
        "            id=\"exit_signals\",\n",
        "            name=\"Exit Signal Monitoring\",\n",
        "            description=\"Tracks distribution patterns and exit signals\",\n",
        "            tags=[\"Exit-Strategy\", \"Risk-Management\"],\n",
        "            examples=[\n",
        "                \"Monitor exit signals for NVDA\",\n",
        "            ],\n",
        "        ),\n",
        "    ]\n",
        "\n",
        "    # Create A2A agent card for capability advertisement\n",
        "    return create_agent_card(\n",
        "        agent_name=\"Bear Risk Analyst (Pydantic AI + MCP)\",\n",
        "        description=(\n",
        "            \"A cautious risk analyst powered by Pydantic AI, \"\n",
        "            \"focused on identifying downside catalysts and warning signals.\"\n",
        "        ),\n",
        "        skills=skills,\n",
        "    )\n",
        "\n",
        "\n",
        "# Generate the agent card\n",
        "bear_agent_card = create_bear_agent_card()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "nRQf_gCrfPrH"
      },
      "source": [
        "You can check your agent card as shown below."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Sms-ypdVt7hy"
      },
      "outputs": [],
      "source": [
        "print(\"Bear Agent Card:\")\n",
        "print(f\"   Name: {bear_agent_card.name}\")\n",
        "print(f\"   Skills: {len(bear_agent_card.skills)}\")"
      ]
    },
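    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "For a fuller view, the card is a Pydantic model, so you can dump it as JSON to inspect every advertised skill, tag, and example (this assumes the A2A SDK's Pydantic-based AgentCard)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Dump the full card as JSON; AgentCard is a Pydantic model in the A2A SDK\n",
        "print(bear_agent_card.model_dump_json(indent=2, exclude_none=True))"
      ]
    },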
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "I7UBb-l3uG4o"
      },
      "source": [
        "#### Creating the Bear Agent Executor\n",
        "\n",
        "The Agent Executor bridges the Pydantic AI agent with the A2A protocol. This class handles incoming A2A requests, executes the agent, and formats responses according to the A2A specification. Lazy initialization ensures the agent is only created when needed on the deployed infrastructure, not during the pickling process."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "zk6tVyNZuISN"
      },
      "outputs": [],
      "source": [
        "class BearAgentExecutor(AgentExecutor):\n",
        "    \"\"\"Agent executor for A2A integration with Bear Agent.\"\"\"\n",
        "\n",
        "    def __init__(self):\n",
        "        # Agent initialized lazily to avoid pickling issues\n",
        "        self.agent = None\n",
        "\n",
        "        # Initiate Phoenix register\n",
        "        self.register = None\n",
        "\n",
        "    def _init_agent(self):\n",
        "        \"\"\"Initialize Pydantic AI agent with MCP tools on deployment.\"\"\"\n",
        "        if self.register is None:\n",
        "            import os\n",
        "\n",
        "            from phoenix.otel import register\n",
        "\n",
        "            phoenix_project_name = os.environ.get(\"PHOENIX_PROJECT_NAME\", \"default\")\n",
        "\n",
        "            # Configure the Phoenix tracer\n",
        "            tracer_provider = register(\n",
        "                project_name=phoenix_project_name, auto_instrument=True\n",
        "            )\n",
        "\n",
        "        if self.agent is None:\n",
        "            import os\n",
        "\n",
        "            import vertexai\n",
        "            from pydantic_ai import Agent\n",
        "            from pydantic_ai.mcp import MCPServerStdio\n",
        "            from pydantic_ai.models.google import GoogleModel\n",
        "            from pydantic_ai.providers.google import GoogleProvider\n",
        "\n",
        "            # Get configuration from environment\n",
        "            project_id = os.environ.get(\"GOOGLE_CLOUD_PROJECT\")\n",
        "            location = os.environ.get(\"GOOGLE_CLOUD_LOCATION\", \"us-central1\")\n",
        "\n",
        "            # Initialize Vertex AI\n",
        "            vertexai.init(project=project_id, location=location)\n",
        "\n",
        "            # System prompt for Bear Agent\n",
        "            bear_system_prompt = (\n",
        "                \"You are a cautious risk analyst focused on identifying potential downside catalysts, \"\n",
        "                \"warning signals, and protective strategies. You prioritize capital preservation. \"\n",
        "                \"Use the available MCP tools to analyze market risks comprehensively.\"\n",
        "            )\n",
        "\n",
        "            # Create provider and model\n",
        "            provider = GoogleProvider(vertexai=True)\n",
        "            model = GoogleModel(\"gemini-2.5-flash\", provider=provider)\n",
        "\n",
        "            # Configure MCP server connection\n",
        "            mcp_server = MCPServerStdio(\n",
        "                \"python\", args=[\"mcp_tools/bear_mcp_server.py\"], timeout=60\n",
        "            )\n",
        "\n",
        "            # Create Bear Agent\n",
        "            self.agent = Agent(\n",
        "                model=model,\n",
        "                system_prompt=bear_system_prompt,\n",
        "                toolsets=[mcp_server],\n",
        "                retries=3,\n",
        "            )\n",
        "\n",
        "    async def cancel(self, context: RequestContext, event_queue: EventQueue):\n",
        "        # Cancellation not supported\n",
        "        raise ServerError(error=UnsupportedOperationError())\n",
        "\n",
        "    async def execute(self, context: RequestContext, event_queue: EventQueue) -> None:\n",
        "        \"\"\"Execute Bear Agent analysis.\"\"\"\n",
        "        # Initialize agent if needed\n",
        "        if self.agent is None:\n",
        "            self._init_agent()\n",
        "\n",
        "        # Extract user query from A2A context\n",
        "        query = context.get_user_input()\n",
        "        updater = TaskUpdater(event_queue, context.task_id, context.context_id)\n",
        "\n",
        "        # Submit task if not already submitted\n",
        "        if not hasattr(context, \"current_task\") or not context.current_task:\n",
        "            await updater.submit()\n",
        "\n",
        "        # Mark task as actively working\n",
        "        await updater.start_work()\n",
        "\n",
        "        try:\n",
        "            # Update status to show progress\n",
        "            await updater.update_status(\n",
        "                TaskState.working, message=new_agent_text_message(\"Analyzing risks...\")\n",
        "            )\n",
        "\n",
        "            # Run agent with user query\n",
        "            result = await self.agent.run(query)\n",
        "\n",
        "            # Extract result text from Pydantic AI response\n",
        "            if hasattr(result, \"output\"):\n",
        "                result_text = result.output\n",
        "            else:\n",
        "                result_text = str(result)\n",
        "\n",
        "            # Format response\n",
        "            response = f\"\"\"\n",
        "BEAR RISK ANALYSIS\n",
        "{\"=\" * 50}\n",
        "\n",
        "{result_text}\n",
        "\n",
        "Analysis completed\n",
        "\"\"\"\n",
        "            # Add result as artifact and complete task\n",
        "            await updater.add_artifact([TextPart(text=response)], name=\"risk_analysis\")\n",
        "            await updater.complete()\n",
        "\n",
        "        except Exception as e:\n",
        "            # Mark task as failed on error\n",
        "            await updater.update_status(\n",
        "                TaskState.failed,\n",
        "                message=new_agent_text_message(f\"Analysis failed: {e!s}\"),\n",
        "                final=True,\n",
        "            )"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "OxOKiVezrLwP"
      },
      "source": [
        "### Packaging the Bull Agent (Opportunity Analysis)\n",
        "\n",
        "As with the Bear Agent, we need to package the Bull Agent for deployment on Vertex AI Agent Engine."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0xUznwy8796B"
      },
      "source": [
        "##### Packaging Bull MCP Tools\n",
        "\n",
        "Package the Bull Agent's MCP tools as a Python module."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ZpLqHBzg8CWx"
      },
      "outputs": [],
      "source": [
        "%%writefile $mcp_tools_dir/bull_tools.py\n",
        "\"\"\"Bull Agent MCP Tools - Opportunity analysis tools.\"\"\"\n",
        "\n",
        "import numpy as np\n",
        "from mcp.server.fastmcp import FastMCP\n",
        "from market_data import MarketDataGenerator\n",
        "\n",
        "# Initialize MCP server\n",
        "mcp = FastMCP(\"bull-agent-tools\")\n",
        "\n",
        "# Create global market data generator\n",
        "market_generator = MarketDataGenerator()\n",
        "\n",
        "\n",
        "@mcp.tool()\n",
        "async def find_breakout_patterns(symbol: str) -> str:\n",
        "    \"\"\"Identify bullish breakout patterns and technical setups.\"\"\"\n",
        "    prices = market_generator.generate_price_series(symbol, days=30)\n",
        "    current_price = prices[-1][\"close\"]\n",
        "\n",
        "    breakout_score = np.random.uniform(55, 85)\n",
        "\n",
        "    result = f\"\"\"\n",
        "BREAKOUT PATTERN ANALYSIS FOR {symbol}\n",
        "{'='*40}\n",
        "Current Price: ${current_price:.2f}\n",
        "Breakout Score: {breakout_score:.1f}/100\n",
        "Momentum: {\"STRONG\" if breakout_score > 70 else \"MODERATE\"}\n",
        "\n",
        "Bullish Patterns:\n",
        "[HIGH] Resistance Breakout\n",
        "   Price breaking above key resistance\n",
        "   Target: ${round(current_price * 1.08, 2)} (+8%)\n",
        "\n",
        "[MED] Ascending Triangle\n",
        "   Higher lows with resistance test\n",
        "   Target: ${round(current_price * 1.10, 2)} (+10%)\n",
        "\"\"\"\n",
        "\n",
        "    return result\n",
        "\n",
        "\n",
        "@mcp.tool()\n",
        "async def momentum_screener(symbol: str) -> str:\n",
        "    \"\"\"Screen for stocks with strong upward momentum.\"\"\"\n",
        "    prices = market_generator.generate_price_series(symbol, days=30)\n",
        "    closes = [p[\"close\"] for p in prices]\n",
        "    rsi = market_generator._calculate_rsi(closes)\n",
        "\n",
        "    momentum_score = np.random.uniform(60, 90)\n",
        "\n",
        "    result = f\"\"\"\n",
        "MOMENTUM ANALYSIS FOR {symbol}\n",
        "{'='*40}\n",
        "Momentum Score: {momentum_score:.1f}/100\n",
        "Rating: {\"VERY STRONG\" if momentum_score > 80 else \"STRONG\"}\n",
        "Trend: BULLISH\n",
        "\n",
        "Momentum Factors:\n",
        "• Healthy RSI at {rsi:.1f} - room to run\n",
        "• MACD bullish crossover confirmed\n",
        "• Volume surge - institutions accumulating\n",
        "• Uptrend pattern intact\n",
        "\"\"\"\n",
        "\n",
        "    return result\n",
        "\n",
        "\n",
        "@mcp.tool()\n",
        "async def entry_signal_detector(symbol: str) -> str:\n",
        "    \"\"\"Detect optimal entry points for long positions.\"\"\"\n",
        "    prices = market_generator.generate_price_series(symbol, days=30)\n",
        "    current_price = prices[-1][\"close\"]\n",
        "\n",
        "    entry_quality = np.random.uniform(60, 90)\n",
        "\n",
        "    result = f\"\"\"\n",
        "ENTRY SIGNAL ANALYSIS FOR {symbol}\n",
        "{'='*40}\n",
        "Current Price: ${current_price:.2f}\n",
        "Entry Quality: {entry_quality:.1f}/100\n",
        "\n",
        "Entry Signals:\n",
        "[HIGH] Pullback to Support\n",
        "   Quality entry at ${round(current_price * 0.98, 2)}\n",
        "   Stop Loss: ${round(current_price * 0.95, 2)}\n",
        "   Risk/Reward: 1:3\n",
        "\n",
        "Position Sizing:\n",
        "   Suggested: {\"75-100%\" if entry_quality > 80 else \"50-75%\"} of planned position\n",
        "\"\"\"\n",
        "\n",
        "    return result"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "wB9g__grsGjb"
      },
      "source": [
        "Create the Bull MCP server:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Cnf6X8xr8IkR"
      },
      "outputs": [],
      "source": [
        "%%writefile $mcp_tools_dir/bull_mcp_server.py\n",
        "\"\"\"Bull Agent MCP Server - Opportunity-focused market analysis tools.\"\"\"\n",
        "\n",
        "# Import the mcp instance with all registered tools\n",
        "from bull_tools import mcp\n",
        "\n",
        "if __name__ == \"__main__\":\n",
        "    # Run the MCP server with STDIO transport\n",
        "    mcp.run(transport=\"stdio\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "1BGZ7NSMvEPZ"
      },
      "source": [
        "##### Creating the Bull Agent Card and Executor\n",
        "\n",
        "Define the Bull Agent's capabilities through an Agent Card."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "BA9qbdp5vGNw"
      },
      "outputs": [],
      "source": [
        "def create_bull_agent_card():\n",
        "    \"\"\"Create A2A Agent Card for Bull Analyst.\"\"\"\n",
        "    skills = [\n",
        "        AgentSkill(\n",
        "            id=\"breakout_detection\",\n",
        "            name=\"Breakout Pattern Detection\",\n",
        "            description=\"Identify bullish breakout patterns\",\n",
        "            tags=[\"technical-analysis\", \"breakouts\"],\n",
        "            examples=[\"Find breakout patterns for NVDA\"],\n",
        "        ),\n",
        "        AgentSkill(\n",
        "            id=\"momentum_screening\",\n",
        "            name=\"Momentum Screening\",\n",
        "            description=\"Screen for stocks with strong momentum\",\n",
        "            tags=[\"momentum\", \"screening\"],\n",
        "            examples=[\"Find high momentum tech stocks\"],\n",
        "        ),\n",
        "        AgentSkill(\n",
        "            id=\"entry_signals\",\n",
        "            name=\"Entry Signal Detection\",\n",
        "            description=\"Detect optimal entry points\",\n",
        "            tags=[\"entry-points\", \"timing\"],\n",
        "            examples=[\"When should I buy AAPL?\"],\n",
        "        ),\n",
        "    ]\n",
        "\n",
        "    return create_agent_card(\n",
        "        agent_name=\"Bull Market Analyst (ADK + MCP)\",\n",
        "        description=(\n",
        "            \"An optimistic analyst powered by Google ADK, \"\n",
        "            \"focused on growth opportunities and bullish patterns.\"\n",
        "        ),\n",
        "        skills=skills,\n",
        "    )\n",
        "\n",
        "\n",
        "bull_agent_card = create_bull_agent_card()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Hnow8fcospPX"
      },
      "source": [
        "Inspect the agent card as before:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "of-7DAdgxfDg"
      },
      "outputs": [],
      "source": [
        "print(\"Bull Agent Card:\")\n",
        "print(f\"   Name: {bull_agent_card.name}\")\n",
        "print(f\"   Skills: {len(bull_agent_card.skills)}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "n3udKxcFxSa3"
      },
      "source": [
        "##### Creating the Bull Agent Executor\n",
        "\n",
        "The Bull Agent Executor follows a similar pattern to the Bear Agent Executor but uses ADK's native execution model. It creates both the agent and a Runner for execution management, handling session creation and response streaming.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "PFOqnhanUzbB"
      },
      "outputs": [],
      "source": [
        "class BullAgentExecutor(AgentExecutor):\n",
        "    \"\"\"Agent executor for Bull Agent.\"\"\"\n",
        "\n",
        "    def __init__(self):\n",
        "        self.agent = None\n",
        "        self.runner = None\n",
        "        self.register = None\n",
        "\n",
        "    def _init_agent(self):\n",
        "        \"\"\"Lazy initialization of the Bull Agent and ADK Runner.\n",
        "        Creates the agent and runner when first needed.\n",
        "        This happens on Agent Engine after deployment, not during pickling.\n",
        "        \"\"\"\n",
        "        if self.register is None:\n",
        "            import os\n",
        "\n",
        "            from phoenix.otel import register\n",
        "\n",
        "            phoenix_project_name = os.environ.get(\"PHOENIX_PROJECT_NAME\", \"default\")\n",
        "\n",
        "            # Configure the Phoenix tracer (store it so registration runs only once)\n",
        "            self.register = register(\n",
        "                project_name=phoenix_project_name, auto_instrument=True\n",
        "            )\n",
        "\n",
        "        if self.agent is None:\n",
        "            import os\n",
        "\n",
        "            from google.adk.agents import LlmAgent\n",
        "            from google.adk.models.lite_llm import LiteLlm, litellm\n",
        "            from google.adk.tools.mcp_tool import StdioConnectionParams\n",
        "            from google.adk.tools.mcp_tool.mcp_toolset import (\n",
        "                MCPToolset,\n",
        "                StdioServerParameters,\n",
        "            )\n",
        "\n",
        "            # Configure LiteLLM to route requests to Vertex AI\n",
        "            litellm.vertex_project = os.environ.get(\"GOOGLE_CLOUD_PROJECT\")\n",
        "            litellm.vertex_location = os.environ.get(\"GOOGLE_CLOUD_REGION\")\n",
        "\n",
        "            # Set project and location\n",
        "            project_id = os.environ.get(\"GOOGLE_CLOUD_PROJECT\")\n",
        "            location = os.environ.get(\"GOOGLE_CLOUD_REGION\")\n",
        "\n",
        "            # Initialize Vertex AI\n",
        "            import vertexai\n",
        "\n",
        "            vertexai.init(project=project_id, location=location)\n",
        "\n",
        "            # Create Bull Agent with MCP tools\n",
        "            self.agent = LlmAgent(\n",
        "                model=LiteLlm(\"vertex_ai/meta/llama-3.3-70b-instruct-maas\"),\n",
        "                name=\"bull_market_analyst\",\n",
        "                instruction=\"\"\"You are an optimistic market analyst focused on identifying growth\n",
        "                opportunities, bullish catalysts, and upside potential. Use the available MCP\n",
        "                tools to analyze market opportunities comprehensively.\"\"\",\n",
        "                tools=[\n",
        "                    MCPToolset(\n",
        "                        connection_params=StdioConnectionParams(\n",
        "                            server_params=StdioServerParameters(\n",
        "                                command=\"python\",\n",
        "                                args=[\"mcp_tools/bull_mcp_server.py\"],\n",
        "                            ),\n",
        "                            timeout=60,\n",
        "                        ),\n",
        "                    )\n",
        "                ],\n",
        "            )\n",
        "\n",
        "        if self.runner is None:\n",
        "            from google.adk import Runner\n",
        "            from google.adk.sessions import InMemorySessionService\n",
        "\n",
        "            self.runner = Runner(\n",
        "                app_name=self.agent.name,\n",
        "                agent=self.agent,\n",
        "                session_service=InMemorySessionService(),\n",
        "            )\n",
        "\n",
        "    async def cancel(self, context: RequestContext, event_queue: EventQueue):\n",
        "        raise ServerError(error=UnsupportedOperationError())\n",
        "\n",
        "    async def execute(self, context: RequestContext, event_queue: EventQueue) -> None:\n",
        "        \"\"\"Execute Bull Agent analysis.\"\"\"\n",
        "        # Initialize agent and runner if needed\n",
        "        if self.agent is None or self.runner is None:\n",
        "            self._init_agent()\n",
        "\n",
        "        if not context.message:\n",
        "            return\n",
        "\n",
        "        user_id = (\n",
        "            context.message.metadata.get(\"user_id\")\n",
        "            if context.message and context.message.metadata\n",
        "            else \"a2a_user\"\n",
        "        )\n",
        "\n",
        "        updater = TaskUpdater(event_queue, context.task_id, context.context_id)\n",
        "\n",
        "        if not hasattr(context, \"current_task\") or not context.current_task:\n",
        "            await updater.submit()\n",
        "\n",
        "        await updater.start_work()\n",
        "\n",
        "        query = context.get_user_input()\n",
        "\n",
        "        try:\n",
        "            await updater.update_status(\n",
        "                TaskState.working,\n",
        "                message=new_agent_text_message(\"Analyzing opportunities...\"),\n",
        "            )\n",
        "\n",
        "            # Get or create session\n",
        "            from google.genai import types\n",
        "\n",
        "            session = await self.runner.session_service.get_session(\n",
        "                app_name=self.runner.app_name,\n",
        "                user_id=user_id,\n",
        "                session_id=context.context_id,\n",
        "            ) or await self.runner.session_service.create_session(\n",
        "                app_name=self.runner.app_name,\n",
        "                user_id=user_id,\n",
        "                session_id=context.context_id,\n",
        "            )\n",
        "\n",
        "            content = types.Content(role=\"user\", parts=[types.Part(text=query)])\n",
        "\n",
        "            # Run ADK agent\n",
        "            final_event = None\n",
        "            async for event in self.runner.run_async(\n",
        "                session_id=session.id, user_id=user_id, new_message=content\n",
        "            ):\n",
        "                if event.is_final_response():\n",
        "                    final_event = event\n",
        "\n",
        "            # Extract response\n",
        "            if final_event and final_event.content and final_event.content.parts:\n",
        "                response_text = \"\".join(\n",
        "                    part.text\n",
        "                    for part in final_event.content.parts\n",
        "                    if hasattr(part, \"text\") and part.text\n",
        "                )\n",
        "                if response_text:\n",
        "                    await updater.add_artifact(\n",
        "                        [TextPart(text=response_text)],\n",
        "                        name=\"opportunity_analysis\",\n",
        "                    )\n",
        "                    await updater.complete()\n",
        "                    return\n",
        "\n",
        "            await updater.update_status(\n",
        "                TaskState.failed,\n",
        "                message=new_agent_text_message(\"Failed to generate response.\"),\n",
        "                final=True,\n",
        "            )\n",
        "\n",
        "        except Exception as e:\n",
        "            await updater.update_status(\n",
        "                TaskState.failed,\n",
        "                message=new_agent_text_message(f\"Analysis failed: {e!s}\"),\n",
        "                final=True,\n",
        "            )"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "TLfRa-ZD9fXq"
      },
      "source": [
        "### Testing the Multi-Agent System Locally\n",
        "\n",
        "Before deploying to production, test the complete multi-agent system locally. This involves running both agents as A2A servers and creating an orchestrator to coordinate them."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "TxhT7Ldbs_w_"
      },
      "source": [
        "#### Setting Up Local A2A Servers\n",
        "\n",
        "Configure the agent cards to point to local endpoints and set the transport protocol to JSON-RPC for local testing.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Md1Yc2x6E892"
      },
      "outputs": [],
      "source": [
        "# Update Bear Agent card\n",
        "bear_agent_card.url = \"http://localhost:8001\"\n",
        "bear_agent_card.preferred_transport = TransportProtocol.jsonrpc\n",
        "\n",
        "# Update Bull Agent card\n",
        "bull_agent_card.url = \"http://localhost:8002\"\n",
        "bull_agent_card.preferred_transport = TransportProtocol.jsonrpc"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "lenm6J_WtCry"
      },
      "source": [
        "Create helper functions to wrap agents with A2A server functionality. These functions create the necessary infrastructure to expose agents via HTTP endpoints following the A2A specification.\n",
        "\n",
        "We start with the functions for the ADK-based Bull Agent.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "7pTgEl9wA2_x"
      },
      "outputs": [],
      "source": [
        "def create_bull_agent_a2a_server(agent, agent_card):\n",
        "    \"\"\"Create an A2A server for an ADK agent.\n",
        "\n",
        "    This wraps an ADK agent with A2A protocol handling, making it\n",
        "    accessible via HTTP endpoints that follow the A2A specification.\n",
        "\n",
        "    Args:\n",
        "        agent: The ADK agent instance (LlmAgent, Agent, etc.)\n",
        "        agent_card: The A2A AgentCard describing the agent's capabilities\n",
        "\n",
        "    Returns:\n",
        "        A2AStarletteApplication instance ready to serve via uvicorn\n",
        "    \"\"\"\n",
        "    # Create ADK Runner for the agent\n",
        "    # The Runner manages agent execution, sessions, and artifacts\n",
        "    runner = Runner(\n",
        "        app_name=agent.name,\n",
        "        agent=agent,\n",
        "        session_service=InMemorySessionService(),  # Manages conversation state\n",
        "    )\n",
        "\n",
        "    # Configure A2A agent executor\n",
        "    # This bridges ADK agents with the A2A protocol\n",
        "    config = A2aAgentExecutorConfig()\n",
        "    executor = A2aAgentExecutor(runner=runner, config=config)\n",
        "\n",
        "    # Create A2A request handler\n",
        "    # Handles incoming A2A protocol requests (message:send, get_task, etc.)\n",
        "    request_handler = DefaultRequestHandler(\n",
        "        agent_executor=executor,\n",
        "        task_store=InMemoryTaskStore(),  # Stores task state\n",
        "    )\n",
        "\n",
        "    # Create and return A2A Starlette application\n",
        "    # This is the ASGI app that uvicorn will serve\n",
        "    return A2AStarletteApplication(agent_card=agent_card, http_handler=request_handler)\n",
        "\n",
        "\n",
        "async def run_bull_server(agent, agent_card, port):\n",
        "    \"\"\"Run a single agent as an A2A server on the specified port.\"\"\"\n",
        "    app = create_bull_agent_a2a_server(agent, agent_card)\n",
        "\n",
        "    # Configure uvicorn server\n",
        "    config = uvicorn.Config(\n",
        "        app.build(),  # Build the ASGI application\n",
        "        host=\"127.0.0.1\",  # localhost\n",
        "        port=port,\n",
        "        log_level=\"warning\",  # Quiet output\n",
        "        loop=\"none\",  # Use the current event loop\n",
        "    )\n",
        "\n",
        "    server = uvicorn.Server(config)\n",
        "    await server.serve()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "wMcqTkeWtHDg"
      },
      "source": [
        "Next, create the server functions for the Bear Agent (built with Pydantic AI).\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "4joucdZkBwPj"
      },
      "outputs": [],
      "source": [
        "def create_bear_a2a_server(agent_card):\n",
        "    \"\"\"Create A2A server for Pydantic AI Bear Agent.\n",
        "\n",
        "    Since Bear Agent uses Pydantic AI (not ADK), we create the A2A server\n",
        "    directly using the BearAgentExecutor we defined earlier.\n",
        "    \"\"\"\n",
        "    request_handler = DefaultRequestHandler(\n",
        "        agent_executor=BearAgentExecutor(),\n",
        "        task_store=InMemoryTaskStore(),\n",
        "    )\n",
        "\n",
        "    return A2AStarletteApplication(agent_card=agent_card, http_handler=request_handler)\n",
        "\n",
        "\n",
        "async def run_bear_server(agent_card, port):\n",
        "    \"\"\"Run Bear Agent A2A server (Pydantic AI).\"\"\"\n",
        "    app = create_bear_a2a_server(agent_card)\n",
        "\n",
        "    config = uvicorn.Config(\n",
        "        app.build(),\n",
        "        host=\"127.0.0.1\",\n",
        "        port=port,\n",
        "        log_level=\"warning\",\n",
        "        loop=\"none\",\n",
        "    )\n",
        "\n",
        "    server = uvicorn.Server(config)\n",
        "    await server.serve()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dNS94ULuudVf"
      },
      "source": [
        "Create a function to start both servers concurrently."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "pBDwaCwXBBIz"
      },
      "outputs": [],
      "source": [
        "async def start_a2a_servers():\n",
        "    \"\"\"Start both Bear and Bull agents as A2A servers.\"\"\"\n",
        "    # Create tasks for both servers\n",
        "    # Bear Agent uses Pydantic AI, so it needs custom A2A server\n",
        "    # Bull Agent uses ADK, so it uses the standard ADK A2A pattern\n",
        "    tasks = [\n",
        "        asyncio.create_task(run_bear_server(bear_agent_card, 8001)),\n",
        "        asyncio.create_task(run_bull_server(bull_agent, bull_agent_card, 8002)),\n",
        "    ]\n",
        "\n",
        "    # Give servers time to start\n",
        "    await asyncio.sleep(2)\n",
        "\n",
        "    print(\"   ✓ Bear Agent A2A server: http://127.0.0.1:8001 (Pydantic AI)\")\n",
        "    print(\"   ✓ Bull Agent A2A server: http://127.0.0.1:8002 (ADK)\")\n",
        "\n",
        "    # Keep servers running\n",
        "    try:\n",
        "        await asyncio.gather(*tasks)\n",
        "    except KeyboardInterrupt:\n",
        "        print(\"Shutting down A2A servers...\")\n",
        "\n",
        "\n",
        "def run_servers_in_background():\n",
        "    \"\"\"Run A2A servers in a background thread.\"\"\"\n",
        "    loop = asyncio.new_event_loop()\n",
        "    asyncio.set_event_loop(loop)\n",
        "    loop.run_until_complete(start_a2a_servers())"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "V_SKP19LusWr"
      },
      "source": [
        "Start the servers in a background thread."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "63EbthMoBGq9"
      },
      "outputs": [],
      "source": [
        "# Start the A2A servers in background thread\n",
        "server_thread = threading.Thread(target=run_servers_in_background, daemon=True)\n",
        "server_thread.start()\n",
        "\n",
        "# Wait for servers to be ready\n",
        "time.sleep(3)"
      ]
    },
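    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Optionally, confirm both servers are reachable before continuing. This is a minimal sketch that assumes the agent card is served at `/.well-known/agent.json` (substitute `AGENT_CARD_WELL_KNOWN_PATH` if your a2a-sdk version uses a different path):\n",
        "\n",
        "```python\n",
        "import urllib.request\n",
        "\n",
        "\n",
        "def card_url(port: int, well_known_path: str = \"/.well-known/agent.json\") -> str:\n",
        "    \"\"\"Build the well-known agent card URL for a local A2A server.\"\"\"\n",
        "    return f\"http://127.0.0.1:{port}{well_known_path}\"\n",
        "\n",
        "\n",
        "for port in (8001, 8002):\n",
        "    try:\n",
        "        # A 200 response with the card JSON means the server is ready\n",
        "        with urllib.request.urlopen(card_url(port), timeout=5) as resp:\n",
        "            print(f\"Port {port}: HTTP {resp.status}\")\n",
        "    except OSError as e:  # URLError subclasses OSError\n",
        "        print(f\"Port {port}: not reachable ({e})\")\n",
        "```"
      ]
    },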
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "krDLxraduv0b"
      },
      "source": [
        "#### Creating the Orchestrator\n",
        "\n",
        "The orchestrator coordinates the Bear and Bull agents using RemoteA2aAgent proxies. These proxies discover agent capabilities through their agent cards and handle A2A protocol communication transparently. The orchestrator itself is an ADK agent that has both remote agents as tools."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "hSqs1PBzB7h_"
      },
      "outputs": [],
      "source": [
        "# Create remote proxy for Bear Agent\n",
        "# RemoteA2aAgent discovers capabilities via the agent card endpoint\n",
        "remote_bear = RemoteA2aAgent(\n",
        "    name=\"bear_risk_analyst\",\n",
        "    description=\"Analyzes risks and warning signals\",\n",
        "    agent_card=f\"http://localhost:8001{AGENT_CARD_WELL_KNOWN_PATH}\",\n",
        ")\n",
        "\n",
        "# Create remote proxy for Bull Agent\n",
        "remote_bull = RemoteA2aAgent(\n",
        "    name=\"bull_market_analyst\",\n",
        "    description=\"Identifies growth opportunities and bullish patterns\",\n",
        "    agent_card=f\"http://localhost:8002{AGENT_CARD_WELL_KNOWN_PATH}\",\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "n5C4Z-4vCIcU"
      },
      "outputs": [],
      "source": [
        "# Create orchestrator that coordinates both agents\n",
        "trading_orchestrator = LlmAgent(\n",
        "    name=\"trading_strategy_orchestrator\",\n",
        "    model=\"gemini-2.5-flash\",\n",
        "    tools=[\n",
        "        AgentTool(\n",
        "            agent=remote_bear,  # Wrap remote agents as tools\n",
        "        ),\n",
        "        AgentTool(\n",
        "            agent=remote_bull,\n",
        "        ),\n",
        "    ],\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "tXAmXwp4vIjg"
      },
      "source": [
        "#### Testing End-to-End Integration\n",
        "\n",
        "Test the complete system by sending a query to the orchestrator. The orchestrator will determine which agents to invoke based on the query and aggregate their responses."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "SGG0uE4nCP4r"
      },
      "outputs": [],
      "source": [
        "# Create Runner for the orchestrator\n",
        "orchestrator_runner = Runner(\n",
        "    app_name=trading_orchestrator.name,\n",
        "    agent=trading_orchestrator,\n",
        "    session_service=InMemorySessionService(),\n",
        ")\n",
        "\n",
        "# Create session\n",
        "session = await orchestrator_runner.session_service.create_session(\n",
        "    app_name=trading_orchestrator.name,\n",
        "    user_id=\"test_user\",\n",
        "    session_id=\"orchestrator_test_session\",\n",
        ")\n",
        "\n",
        "# Test query\n",
        "test_query = \"Should I buy NVDA stock? Analyze opportunity only.\"\n",
        "\n",
        "print(f\"\\nQuery: {test_query}\")\n",
        "\n",
        "# Run orchestrator\n",
        "content = types.Content(role=\"user\", parts=[types.Part(text=test_query)])\n",
        "\n",
        "final_result = None\n",
        "async for event in orchestrator_runner.run_async(\n",
        "    session_id=session.id, user_id=\"test_user\", new_message=content\n",
        "):\n",
        "    if event.is_final_response():\n",
        "        if event.content and event.content.parts:\n",
        "            final_result = \"\".join(\n",
        "                part.text\n",
        "                for part in event.content.parts\n",
        "                if hasattr(part, \"text\") and part.text\n",
        "            )\n",
        "        break\n",
        "\n",
        "print(f\"\\nFinal Result:\\n{final_result}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "lA5GDpcvOvT7"
      },
      "source": [
        "## Deploying to Vertex AI Agent Engine\n",
        "\n",
        "After validating locally, deploy the agents to Vertex AI Agent Engine for production use. Agent Engine provides managed infrastructure with automatic scaling, monitoring, and authentication."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2fKBpb9C3P-Z"
      },
      "source": [
        "### Deploying the Bear Agent\n",
        "\n",
        "Configure the Bear Agent card for production use with HTTP JSON transport and deploy it with all required dependencies.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "5twfZA2E3Jcf"
      },
      "outputs": [],
      "source": [
        "# Configure transport for production deployment\n",
        "bear_agent_card.preferred_transport = TransportProtocol.http_json\n",
        "\n",
        "# Wrap agent card and executor in A2A agent\n",
        "bear_a2a_agent = A2aAgent(\n",
        "    agent_card=bear_agent_card, agent_executor_builder=BearAgentExecutor\n",
        ")\n",
        "\n",
        "# Deploy to Vertex AI Agent Engine\n",
        "deployed_bear = client.agent_engines.create(\n",
        "    agent=bear_a2a_agent,\n",
        "    config={\n",
        "        \"display_name\": \"Bear Risk Analyst\",\n",
        "        \"description\": bear_agent_card.description,\n",
        "        \"requirements\": [\n",
        "            \"a2a-sdk\",\n",
        "            \"google-cloud-aiplatform[agent_engines,adk]\",\n",
        "            \"fastmcp\",  # Required for MCP tools\n",
        "            \"pydantic\",\n",
        "            \"pydantic-ai\",  # Required for Bear Agent\n",
        "            \"numpy\",\n",
        "            \"arize-phoenix-otel\",\n",
        "            \"openinference-instrumentation-pydantic-ai\",\n",
        "            \"opentelemetry-sdk\",\n",
        "            \"opentelemetry-exporter-otlp\",\n",
        "            \"opentelemetry-api\",\n",
        "        ],\n",
        "        \"extra_packages\": [\"mcp_tools\"],  # Include our MCP tools package\n",
        "        \"env_vars\": {\n",
        "            \"PHOENIX_API_KEY\": os.environ.get(\"PHOENIX_API_KEY\"),\n",
        "            \"PHOENIX_COLLECTOR_ENDPOINT\": os.environ.get(\"PHOENIX_COLLECTOR_ENDPOINT\"),\n",
        "        },\n",
        "        \"staging_bucket\": BUCKET_URI,\n",
        "    },\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "q-Zvb5q9v_oh"
      },
      "source": [
        "### Deploying the Bull Agent\n",
        "\n",
        "Deploy the Bull Agent with its specific dependencies, including LiteLLM for Llama routing."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "mnUUFFhmsjCw"
      },
      "outputs": [],
      "source": [
        "# Configure transport for production deployment\n",
        "bull_agent_card.preferred_transport = TransportProtocol.http_json\n",
        "\n",
        "# Create A2A agent\n",
        "bull_a2a_agent = A2aAgent(\n",
        "    agent_card=bull_agent_card, agent_executor_builder=BullAgentExecutor\n",
        ")\n",
        "\n",
        "# Deploy to Vertex AI Agent Engine\n",
        "deployed_bull = client.agent_engines.create(\n",
        "    agent=bull_a2a_agent,\n",
        "    config={\n",
        "        \"display_name\": \"Bull Market Analyst\",\n",
        "        \"description\": bull_agent_card.description,\n",
        "        \"requirements\": [\n",
        "            \"a2a-sdk\",\n",
        "            \"google-cloud-aiplatform[agent_engines,adk]\",\n",
        "            \"fastmcp\",  # Required for MCP tools\n",
        "            \"numpy\",\n",
        "            \"litellm\",\n",
        "            \"arize-phoenix-otel\",\n",
        "            \"openinference-instrumentation-google-adk\",\n",
        "        ],\n",
        "        \"env_vars\": {\n",
        "            \"PHOENIX_API_KEY\": os.environ.get(\"PHOENIX_API_KEY\"),\n",
        "            \"PHOENIX_COLLECTOR_ENDPOINT\": os.environ.get(\"PHOENIX_COLLECTOR_ENDPOINT\"),\n",
        "        },\n",
        "        \"extra_packages\": [\"mcp_tools\"],\n",
        "        \"staging_bucket\": BUCKET_URI,\n",
        "    },\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-JbNbA5nzGAl"
      },
      "source": [
        "### Testing Deployed Agents\n",
        "\n",
        "To interact with deployed agents, create an authenticated HTTP client and configure the A2A client factory.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Y9b_zxMsL0xb"
      },
      "outputs": [],
      "source": [
        "# Create GoogleAuth class for httpx authentication\n",
        "class GoogleAuth(httpx.Auth):\n",
        "    \"\"\"Custom httpx Auth class for Google Cloud authentication.\"\"\"\n",
        "\n",
        "    def __init__(self) -> None:\n",
        "        # Get default credentials for the current environment\n",
        "        self.credentials: Credentials\n",
        "        self.project: str | None\n",
        "        self.credentials, self.project = default(\n",
        "            scopes=[\"https://www.googleapis.com/auth/cloud-platform\"]\n",
        "        )\n",
        "        self.auth_request = AuthRequest()\n",
        "\n",
        "    def auth_flow(self, request: httpx.Request):\n",
        "        \"\"\"Add Authorization header to request.\"\"\"\n",
        "        # Refresh credentials if expired\n",
        "        if not self.credentials.valid:\n",
        "            self.credentials.refresh(self.auth_request)\n",
        "\n",
        "        # Add Authorization header\n",
        "        request.headers[\"Authorization\"] = f\"Bearer {self.credentials.token}\"\n",
        "        yield request\n",
        "\n",
        "\n",
        "# Create authenticated httpx client\n",
        "authenticated_client = httpx.AsyncClient(\n",
        "    timeout=120,\n",
        "    auth=GoogleAuth(),  # This adds authentication to ALL requests!\n",
        ")\n",
        "\n",
        "# Create client factory for A2A communication\n",
        "client_config = A2AClientConfig(\n",
        "    httpx_client=authenticated_client,\n",
        "    streaming=False,\n",
        "    polling=False,\n",
        "    supported_transports=[\n",
        "        A2ATransport.http_json,\n",
        "    ],\n",
        ")\n",
        "\n",
        "a2a_client_factory = A2AClientFactory(config=client_config)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6cCBjOblwhMY"
      },
      "source": [
        "Construct the agent endpoints and create remote proxies."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "VIEiHHTG0KSu"
      },
      "outputs": [],
      "source": [
        "# Construct Vertex AI Agent Engine API endpoint\n",
        "api_endpoint = f\"https://{LOCATION}-aiplatform.googleapis.com\"\n",
        "\n",
        "# Get resource names from deployed agents\n",
        "bear_agent_resource_name = deployed_bear.api_resource.name\n",
        "bull_agent_resource_name = deployed_bull.api_resource.name\n",
        "\n",
        "# Build A2A endpoint URLs\n",
        "bear_endpoint = f\"{api_endpoint}/v1beta1/{bear_agent_resource_name}/a2a\"\n",
        "bull_endpoint = f\"{api_endpoint}/v1beta1/{bull_agent_resource_name}/a2a\"\n",
        "\n",
        "# Create remote agent proxies pointing to deployed endpoints\n",
        "remote_bear = RemoteA2aAgent(\n",
        "    name=\"bear_risk_analyst\",\n",
        "    description=\"Analyzes risks and warning signals\",\n",
        "    agent_card=f\"{bear_endpoint}/v1/card\",\n",
        "    httpx_client=authenticated_client,\n",
        "    a2a_client_factory=a2a_client_factory,\n",
        ")\n",
        "\n",
        "remote_bull = RemoteA2aAgent(\n",
        "    name=\"bull_market_analyst\",\n",
        "    description=\"Identifies growth opportunities and bullish patterns\",\n",
        "    agent_card=f\"{bull_endpoint}/v1/card\",\n",
        "    httpx_client=authenticated_client,\n",
        "    a2a_client_factory=a2a_client_factory,\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "QvkB9wuowtTK"
      },
      "source": [
        "Create an orchestrator using the deployed agents and test it."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "x9SIGAl01LaL"
      },
      "outputs": [],
      "source": [
        "trading_orchestrator = LlmAgent(\n",
        "    name=\"trading_strategy_orchestrator\",\n",
        "    model=\"gemini-2.5-flash\",\n",
        "    tools=[\n",
        "        AgentTool(\n",
        "            agent=remote_bear,\n",
        "        ),\n",
        "        AgentTool(\n",
        "            agent=remote_bull,\n",
        "        ),\n",
        "    ],\n",
        ")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "pEx4EGC21LaM"
      },
      "outputs": [],
      "source": [
        "# Create Runner for the orchestrator\n",
        "orchestrator_runner = Runner(\n",
        "    app_name=trading_orchestrator.name,\n",
        "    agent=trading_orchestrator,\n",
        "    session_service=InMemorySessionService(),\n",
        ")\n",
        "\n",
        "# Create session\n",
        "session = await orchestrator_runner.session_service.create_session(\n",
        "    app_name=trading_orchestrator.name,\n",
        "    user_id=\"test_user\",\n",
        "    session_id=\"orchestrator_test_session\",\n",
        ")\n",
        "\n",
        "# Test query\n",
        "test_query = \"Analyze the risks for NVDA stock\"\n",
        "\n",
        "print(f\"\\n📊 Query: {test_query}\")\n",
        "\n",
        "# Run orchestrator\n",
        "content = types.Content(role=\"user\", parts=[types.Part(text=test_query)])\n",
        "\n",
        "final_result = None\n",
        "async for event in orchestrator_runner.run_async(\n",
        "    session_id=session.id, user_id=\"test_user\", new_message=content\n",
        "):\n",
        "    if event.is_final_response():\n",
        "        if event.content and event.content.parts:\n",
        "            final_result = \"\".join(\n",
        "                part.text\n",
        "                for part in event.content.parts\n",
        "                if hasattr(part, \"text\") and part.text\n",
        "            )\n",
        "        break\n",
        "\n",
        "print(f\"\\n🤖 Final Result:\\n{final_result}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "xjFxmAHb5C8B"
      },
      "source": [
        "# Agent Observability + Evaluation\n",
        "\n",
        "Now that we have created and deployed our trading agents, we have been collecting traces from our test runs and sending them to Phoenix Cloud. Next, we'll run evaluators on those traces to get feedback on our agents' behavior.\n",
        "\n",
        "Agents can go awry for a variety of reasons. For example:\n",
        "\n",
        "- Agent/Tool call accuracy - did our agent choose the right tool with the right arguments?\n",
        "\n",
        "- Tool call results - did the tool execute properly and respond with the right results?\n",
        "\n",
        "- Agent goal accuracy - did our agent accomplish the stated goal and get to the right outcome?"
      ]
    },
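    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Illustrative sketch only: a hypothetical span record with the fields the\n",
        "# evaluators below inspect. The keys mirror the OpenInference attributes we\n",
        "# query from Phoenix (input.value, output.value, llm.tools); the values are\n",
        "# invented for illustration.\n",
        "example_span = {\n",
        "    \"input.value\": \"Analyze the risks for NVDA stock\",  # the user query\n",
        "    \"output.value\": \"tool_call: bear_risk_analyst\",  # the tool the agent chose\n",
        "    \"llm.tools\": [\"bear_risk_analyst\", \"bull_market_analyst\"],  # available tools\n",
        "}\n",
        "\n",
        "# Evaluator 1 judges whether output.value is the right choice for input.value;\n",
        "# Evaluator 2 judges whether the chosen tool executed and returned good results;\n",
        "# Evaluator 3 judges whether the full sequence of calls achieved the goal.\n",
        "print(example_span)"
      ]
    },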
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "nH0HJ8Lc5MeH"
      },
      "source": [
        "## Evaluator 1: Agent/Tool Call Accuracy\n",
        "\n",
        "Based on the user query, did the agent select the correct sub-agent or tool from those available to it?"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "9eNgOBBF5P5w"
      },
      "outputs": [],
      "source": [
        "import os\n",
        "\n",
        "from phoenix.client import Client\n",
        "from phoenix.client.types.spans import SpanQuery\n",
        "\n",
        "# Phoenix configuration\n",
        "api_key = os.environ[\"PHOENIX_API_KEY\"]\n",
        "base_url = os.environ[\"PHOENIX_BASE_URL\"]\n",
        "\n",
        "# Initialize the Phoenix client with your base URL and API key\n",
        "client = Client(base_url=base_url, api_key=api_key)\n",
        "\n",
        "# Get LLM spans for our evaluator\n",
        "query = (\n",
        "    SpanQuery()\n",
        "    .where(\"span_kind == 'LLM'\")\n",
        "    .select(\"input.value\", \"output.value\", \"llm.tools\")\n",
        ")\n",
        "orchestrator_df = client.spans.get_spans_dataframe(\n",
        "    query=query,\n",
        "    project_identifier=os.environ[\"PHOENIX_PROJECT_NAME\"],\n",
        "    limit=50,\n",
        "    timeout=120,\n",
        ")\n",
        "\n",
        "TOOL_CALL_PROMPT_TEMPLATE = \"\"\"\n",
        "You are an evaluation assistant evaluating user queries and an AI agent's chosen tool calls to\n",
        "determine whether the tool called would correctly address the user query. The tool\n",
        "calls have been generated by an AI agent, and chosen from the list of\n",
        "tools provided below. It is your job to decide whether that agent chose\n",
        "the right tool to call for the given user query.\n",
        "\n",
        "    [BEGIN DATA]\n",
        "    ************\n",
        "    [User Query]: {input.value}\n",
        "    ************\n",
        "    [Tool Called]: {output.value}\n",
        "    ************\n",
        "    [Tool Definitions]: {llm.tools}\n",
        "    ************\n",
        "    [END DATA]\n",
        "\n",
        "\n",
        "Your response must be a single word, either \"correct\" or \"incorrect\",\n",
        "and should not contain any text or characters aside from that word.\n",
        "\"incorrect\" means that the chosen tool would not answer the question,\n",
        "the tool includes information that is not presented in the question,\n",
        "or that the tool signature includes parameter values that don't match\n",
        "the formats specified in the tool signatures below.\n",
        "\"correct\" means the correct tool call was chosen, the correct parameters\n",
        "were extracted from the question, the tool call generated is runnable and correct,\n",
        "and that no outside information not present in the question was used\n",
        "in the generated call.\n",
        "\n",
        "Then write out in a step-by-step manner an EXPLANATION to show how you determined if the tool selection was correct or incorrect.\n",
        "\n",
        "EXPLANATION\n",
        "\"\"\"\n",
        "\n",
        "# Set up and run evaluator\n",
        "from phoenix.evals import (\n",
        "    LiteLLMModel,\n",
        "    llm_classify,\n",
        ")\n",
        "\n",
        "os.environ[\"VERTEXAI_PROJECT\"] = os.environ[\"GOOGLE_CLOUD_PROJECT\"]\n",
        "os.environ[\"VERTEXAI_LOCATION\"] = os.environ[\"GOOGLE_CLOUD_LOCATION\"]\n",
        "model = LiteLLMModel(model=\"vertex_ai/gemini-2.5-pro\")\n",
        "\n",
        "# Loop through the dataframe and evaluate each row\n",
        "evaluations_df = llm_classify(\n",
        "    dataframe=orchestrator_df,\n",
        "    template=TOOL_CALL_PROMPT_TEMPLATE,\n",
        "    model=model,\n",
        "    rails=[\"correct\", \"incorrect\"],\n",
        "    provide_explanation=True,\n",
        "    concurrency=5,\n",
        ")\n",
        "\n",
        "# Prepare evaluation results for upload\n",
        "eval_df = evaluations_df.copy()\n",
        "eval_df[\"score\"] = eval_df[\"label\"].apply(\n",
        "    lambda x: 1 if x == \"correct\" else 0\n",
        ")  # Create score column\n",
        "eval_df = eval_df.reset_index()  # Reset the index to make context.span_id a column\n",
        "eval_df = eval_df.rename(\n",
        "    columns={\"context.span_id\": \"span_id\"}\n",
        ")  # Rename context.span_id to span_id\n",
        "eval_df[\"explanation\"] = eval_df[\"explanation\"].str.replace(\n",
        "    r\"^(correct|incorrect)\\s*\\n*EXPLANATION\\s*\\n*\", \"\", regex=True\n",
        ")  # Optional: Clean up explanation text\n",
        "eval_df = eval_df[\n",
        "    [\"span_id\", \"label\", \"score\", \"explanation\"]\n",
        "]  # Select only the columns you need\n",
        "\n",
        "# Upload the eval results to Phoenix as annotations.\n",
        "\n",
        "client.spans.log_span_annotations_dataframe(\n",
        "    dataframe=eval_df, annotation_name=\"tool_call_correctness\", annotator_kind=\"LLM\"\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6HP1_tFq5YmR"
      },
      "source": [
        "## Evaluator 2: Tool Execution Correctness\n",
        "\n",
        "Did the tool itself execute properly and respond with the right results?"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Af7sQS3c5db7"
      },
      "outputs": [],
      "source": [
        "# Get TOOL spans for our evaluator\n",
        "query = (\n",
        "    SpanQuery()\n",
        "    .where(\"span_kind == 'TOOL'\")\n",
        "    .select(\"input.value\", \"output.value\", \"tool\")\n",
        ")\n",
        "tools_df = client.spans.get_spans_dataframe(\n",
        "    query=query,\n",
        "    project_identifier=os.environ[\"PHOENIX_PROJECT_NAME\"],\n",
        "    limit=100,\n",
        "    timeout=120,\n",
        ")\n",
        "\n",
        "\n",
        "TOOL_EXECUTION_PROMPT_TEMPLATE = \"\"\"\n",
        "\n",
        "You are comparing a function call response to its input and function definition and trying to determine if the generated call response has provided a correct and intended response based on the input. Here is the data:\n",
        "    [BEGIN DATA]\n",
        "    ************\n",
        "    [Input]: {input.value}\n",
        "    ************\n",
        "    [Function answer]: {output.value}\n",
        "    ************\n",
        "    [END DATA]\n",
        "\n",
        "\n",
        "\n",
        "Your response must be a single word, either \"correct\" or \"incorrect\",\n",
        "and should not contain any text or characters aside from that word.\n",
        "Compare the input parameters in the generated function against the JSON provided below.\n",
        "The parameters extracted from the input must match the JSON below exactly.\n",
        "The function answer should be a correct response to the input.\n",
        "\n",
        "\n",
        "\"correct\" means the function call parameters match the JSON below and function answer provides only relevant information.\n",
        "\"incorrect\" means that the parameters in the function do not match the JSON schema below exactly, or the function answer does not correctly address the input. You should also respond with \"incorrect\" if the response makes up information that is not in the JSON schema.\n",
        "\n",
        "Here are details on the function call:\n",
        "{tool}\n",
        "\n",
        "Then write out in a step-by-step manner an EXPLANATION to show how you determined if the tool execution was correct or incorrect.\n",
        "\n",
        "EXPLANATION\n",
        "\"\"\"\n",
        "\n",
        "# Loop through the dataframe and evaluate each row\n",
        "evaluations_df = llm_classify(\n",
        "    dataframe=tools_df,\n",
        "    template=TOOL_EXECUTION_PROMPT_TEMPLATE,\n",
        "    model=model,\n",
        "    rails=[\"correct\", \"incorrect\"],\n",
        "    provide_explanation=True,\n",
        "    concurrency=2,\n",
        ")\n",
        "\n",
        "# Prepare evaluation results for upload\n",
        "eval_df = evaluations_df.copy()\n",
        "eval_df[\"score\"] = eval_df[\"label\"].apply(\n",
        "    lambda x: 1 if x == \"correct\" else 0\n",
        ")  # Create score column\n",
        "eval_df = eval_df.reset_index()  # Reset the index to make context.span_id a column\n",
        "eval_df = eval_df.rename(\n",
        "    columns={\"context.span_id\": \"span_id\"}\n",
        ")  # Rename context.span_id to span_id\n",
        "eval_df[\"explanation\"] = eval_df[\"explanation\"].str.replace(\n",
        "    r\"^(correct|incorrect)\\s*\\n*EXPLANATION\\s*\\n*\", \"\", regex=True\n",
        ")  # Optional: Clean up explanation text\n",
        "eval_df = eval_df[\n",
        "    [\"span_id\", \"label\", \"score\", \"explanation\"]\n",
        "]  # Select only the columns you need\n",
        "\n",
        "# Upload the eval results to Phoenix as annotations.\n",
        "\n",
        "client.spans.log_span_annotations_dataframe(\n",
        "    dataframe=eval_df,\n",
        "    annotation_name=\"tool_execution_correctness\",\n",
        "    annotator_kind=\"LLM\",\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7zimSp0U5q8g"
      },
      "source": [
        "## Evaluator 3: Agent Goal Trajectory\n",
        "\n",
        "Was the agent's overall goal achieved? Was its trajectory correct? Agent trajectory evaluations measure the entire sequence of tool calls an agent takes to solve a task."
      ]
    },
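    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Illustrative sketch only: the per-trace shape the data-prep helpers below\n",
        "# build -- an ordered, step-indexed list of the tool calls made in one trace.\n",
        "# The tool names and arguments here are invented for illustration.\n",
        "example_trajectory = [\n",
        "    {\"1\": [{\"name\": \"bear_risk_analyst\", \"arguments\": '{\"ticker\": \"NVDA\"}'}]},\n",
        "    {\"2\": [{\"name\": \"bull_market_analyst\", \"arguments\": '{\"ticker\": \"NVDA\"}'}]},\n",
        "]\n",
        "\n",
        "# The trajectory evaluator sees this whole sequence at once, so it can judge\n",
        "# ordering and efficiency, not just individual calls.\n",
        "print(example_trajectory)"
      ]
    },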
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "uith7gEeS67u"
      },
      "source": [
        "Set the system prompt for our LLM-as-a-Judge evaluator and export the spans."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "9vy5Ww4z5uYj"
      },
      "outputs": [],
      "source": [
        "TRAJECTORY_ACCURACY_PROMPT_WITHOUT_REFERENCE = \"\"\"\n",
        "You are a helpful AI bot that checks whether an AI agent’s internal trajectory is accurate and effective.\n",
        "\n",
        "You will be given:\n",
        "1. The agent’s actual trajectory of tool calls\n",
        "2. The user input the agent used to make its decisions\n",
        "3. The tool definitions the agent used to make its tool calls\n",
        "\n",
        "An accurate trajectory:\n",
        "- Progresses logically from step to step\n",
        "- Uses the available tools appropriately\n",
        "- Shows a clear path toward completing a goal\n",
        "- Is reasonably efficient (doesn’t take unnecessary detours)\n",
        "\n",
        "##\n",
        "\n",
        "Actual Trajectory:\n",
        "{tool_calls}\n",
        "\n",
        "User Inputs:\n",
        "{attributes.input.value}\n",
        "\n",
        "Tool Definitions:\n",
        "{attributes.llm.tools}\n",
        "\n",
        "##\n",
        "\n",
        "Your response must be a single string, either `correct` or `incorrect`, and must not include any additional text.\n",
        "\n",
        "- Respond with `correct` if the agent’s trajectory adheres to the rubric and accomplishes the task effectively.\n",
        "- Respond with `incorrect` if the trajectory is confusing, misaligned with the goal, inefficient, or does not accomplish the task.\n",
        "\n",
        "Then write out in a step-by-step manner an EXPLANATION to show how you determined if the trajectory was correct or incorrect.\n",
        "\n",
        "EXPLANATION\n",
        "\"\"\"\n",
        "\n",
        "\n",
        "# Get spans for our evaluator\n",
        "trajectory_df = client.spans.get_spans_dataframe(\n",
        "    project_identifier=os.environ[\"PHOENIX_PROJECT_NAME\"], timeout=120\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7khPmZjETIqt"
      },
      "source": [
        "Create helper functions for data prep"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "rUZg12PV507-"
      },
      "outputs": [],
      "source": [
        "# Helper functions for data prep\n",
        "\n",
        "from typing import Any\n",
        "\n",
        "import pandas as pd\n",
        "\n",
        "def filter_spans_by_trace_criteria(\n",
        "    df: pd.DataFrame,\n",
        "    trace_filters: dict[str, dict[str, Any]],\n",
        "    span_filters: dict[str, dict[str, Any]],\n",
        ") -> pd.DataFrame:\n",
        "    \"\"\"Filter spans based on trace-level and span-level criteria.\n",
        "\n",
        "    Args:\n",
        "        df: DataFrame with trace data\n",
        "        trace_filters: Dictionary of column names and filtering criteria for traces\n",
        "                      Format: {\"column_name\": {\"operator\": value}}\n",
        "                      Supported operators: \">=\", \"<=\", \"==\", \"!=\", \"contains\", \"notna\", \"isna\"\n",
        "        span_filters: Dictionary of column names and filtering criteria for spans\n",
        "                     Format: {\"column_name\": {\"operator\": value}}\n",
        "                     Same supported operators as trace_filters\n",
        "\n",
        "    Returns:\n",
        "        DataFrame with filtered spans from traces that match trace_filters\n",
        "    \"\"\"\n",
        "    # Get all unique trace_ids\n",
        "    all_trace_ids = set(df[\"context.trace_id\"].unique())\n",
        "    print(f\"Total traces: {len(all_trace_ids)}\")\n",
        "\n",
        "    # Create a copy of the dataframe for filtering\n",
        "    df_copy = df.copy()\n",
        "\n",
        "    # Find traces matching the trace criteria\n",
        "    traces_df = df_copy.copy()\n",
        "    for column, criteria in trace_filters.items():\n",
        "        if column not in traces_df.columns:\n",
        "            print(f\"Warning: Column '{column}' not found in dataframe\")\n",
        "            continue\n",
        "\n",
        "        for operator, value in criteria.items():\n",
        "            if operator == \">=\":\n",
        "                matching_spans = traces_df[traces_df[column] >= value]\n",
        "            elif operator == \"<=\":\n",
        "                matching_spans = traces_df[traces_df[column] <= value]\n",
        "            elif operator == \"==\":\n",
        "                matching_spans = traces_df[traces_df[column] == value]\n",
        "            elif operator == \"!=\":\n",
        "                matching_spans = traces_df[traces_df[column] != value]\n",
        "            elif operator == \"contains\":\n",
        "                matching_spans = traces_df[\n",
        "                    traces_df[column].str.contains(value, case=False, na=False)\n",
        "                ]\n",
        "            elif operator == \"isna\":\n",
        "                matching_spans = traces_df[traces_df[column].isna()]\n",
        "            elif operator == \"notna\":\n",
        "                matching_spans = traces_df[traces_df[column].notna()]\n",
        "            else:\n",
        "                print(f\"Warning: Unsupported operator '{operator}' - skipping\")\n",
        "                continue\n",
        "\n",
        "            traces_df = matching_spans\n",
        "\n",
        "    matching_trace_ids = set(traces_df[\"context.trace_id\"].unique())\n",
        "    print(f\"Found {len(matching_trace_ids)} traces matching trace criteria\")\n",
        "\n",
        "    if not matching_trace_ids:\n",
        "        print(\"No matching traces found\")\n",
        "        return pd.DataFrame()\n",
        "\n",
        "    # Filter to keep only rows from matching traces\n",
        "    result_df = df[df[\"context.trace_id\"].isin(matching_trace_ids)].copy()\n",
        "\n",
        "    # Apply span filters\n",
        "    for column, criteria in span_filters.items():\n",
        "        if column not in result_df.columns:\n",
        "            print(f\"Warning: Column '{column}' not found in dataframe\")\n",
        "            continue\n",
        "\n",
        "        for operator, value in criteria.items():\n",
        "            if operator == \">=\":\n",
        "                result_df = result_df[result_df[column] >= value]\n",
        "            elif operator == \"<=\":\n",
        "                result_df = result_df[result_df[column] <= value]\n",
        "            elif operator == \"==\":\n",
        "                result_df = result_df[result_df[column] == value]\n",
        "            elif operator == \"!=\":\n",
        "                result_df = result_df[result_df[column] != value]\n",
        "            elif operator == \"contains\":\n",
        "                result_df = result_df[\n",
        "                    result_df[column].str.contains(value, case=False, na=False)\n",
        "                ]\n",
        "            elif operator == \"isna\":\n",
        "                result_df = result_df[result_df[column].isna()]\n",
        "            elif operator == \"notna\":\n",
        "                result_df = result_df[result_df[column].notna()]\n",
        "            else:\n",
        "                print(f\"Warning: Unsupported operator '{operator}' - skipping\")\n",
        "                continue\n",
        "\n",
        "    print(f\"Final result: {len(result_df)} spans from {len(matching_trace_ids)} traces\")\n",
        "    return result_df\n",
        "\n",
        "\n",
        "def prepare_trace_data_for_evaluation(\n",
        "    df,\n",
        "    group_by_col=\"context.trace_id\",\n",
        "    extract_cols=None,\n",
        "    additional_data=None,\n",
        "    filter_empty=True,\n",
        "):\n",
        "    \"\"\"Prepare trace data for evaluation by grouping, sorting by start_time, and extracting specified columns.\n",
        "\n",
        "    Args:\n",
        "        df: DataFrame containing trace data\n",
        "        group_by_col: Column to group traces by (default: \"context.trace_id\")\n",
        "        extract_cols: Dict mapping {output_key: source_column} to extract from each row\n",
        "                     (default: {\"tool_calls\": \"tool_calls\"}). Can contain multiple columns.\n",
        "        additional_data: Dict of additional data to include with each trace (default: None)\n",
        "        filter_empty: Whether to filter out empty values (default: True)\n",
        "\n",
        "    Returns:\n",
        "        DataFrame with processed trace data ready for evaluation\n",
        "    \"\"\"\n",
        "    # Avoid a mutable default argument\n",
        "    if extract_cols is None:\n",
        "        extract_cols = {\"tool_calls\": \"tool_calls\"}\n",
        "\n",
        "    # Group by specified column\n",
        "    grouped = df.groupby(group_by_col)\n",
        "\n",
        "    # Prepare results list\n",
        "    results = []\n",
        "\n",
        "    for group_id, group in grouped:\n",
        "        # Always sort by start_time to ensure correct order\n",
        "        group = group.sort_values(\"start_time\")\n",
        "\n",
        "        # Initialize a dict to store extracted data\n",
        "        trace_data = {group_by_col: group[group_by_col].iloc[0]}\n",
        "\n",
        "        # Extract and process each requested column\n",
        "        for output_key, source_col in extract_cols.items():\n",
        "            ordered_extracts = []\n",
        "            # Iterate through rows as dictionaries to handle column names with dots\n",
        "            for i, (_, row_data) in enumerate(group.reset_index(drop=True).iterrows()):\n",
        "                # Convert row to dictionary for easier access\n",
        "                row_dict = row_data.to_dict()\n",
        "                value = row_dict.get(source_col)\n",
        "                if not filter_empty or (value is not None and value):\n",
        "                    ordered_extracts.append({str(i + 1): value})\n",
        "            trace_data[output_key] = ordered_extracts\n",
        "\n",
        "        # Add any additional data\n",
        "        if additional_data:\n",
        "            trace_data.update(additional_data)\n",
        "\n",
        "        # Add to results\n",
        "        results.append(trace_data)\n",
        "\n",
        "    # Convert to DataFrame\n",
        "    return pd.DataFrame(results)\n",
        "\n",
        "\n",
        "def extract_tool_calls(output_messages):\n",
        "    if not output_messages:\n",
        "        return []\n",
        "\n",
        "    tool_calls = []\n",
        "    for message in output_messages:\n",
        "        if \"message.tool_calls\" in message:\n",
        "            for tool_call in message[\"message.tool_calls\"]:\n",
        "                tool_calls.append(\n",
        "                    {\n",
        "                        \"name\": tool_call[\"tool_call.function.name\"],\n",
        "                        \"arguments\": tool_call[\"tool_call.function.arguments\"],\n",
        "                    }\n",
        "                )\n",
        "    return tool_calls"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "CO2xdABHTTGN"
      },
      "source": [
        "Data prep - filter traces for each agent"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "SvsSXZYF54AD"
      },
      "outputs": [],
      "source": [
        "# Data prep - filter traces for each agent\n",
        "eval_traces = filter_spans_by_trace_criteria(\n",
        "    df=trajectory_df,\n",
        "    trace_filters={\n",
        "        \"name\": {\"contains\": \"bull_agent|bear_agent|trading_strategy_orchestrator\"}\n",
        "    },\n",
        "    span_filters={\"attributes.openinference.span.kind\": {\"==\": \"LLM\"}},\n",
        ")\n",
        "\n",
        "eval_traces[\"tool_calls\"] = eval_traces[\"attributes.llm.output_messages\"].apply(\n",
        "    extract_tool_calls\n",
        ")"
      ]
    },
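    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "An optional sanity check on the filtered spans, assuming the `tool_calls` column created above:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Optional sanity check on the filtered spans\n",
        "print(f\"Filtered LLM spans: {len(eval_traces)}\")\n",
        "print(\n",
        "    f\"Spans with at least one tool call: {(eval_traces['tool_calls'].str.len() > 0).sum()}\"\n",
        ")"
      ]
    },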
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "gAvZG8rSTWg9"
      },
      "source": [
        "Aggregate tool calls by trace ID\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ru7DoIQf56y2"
      },
      "outputs": [],
      "source": [
        "# Aggregate tool calls by trace ID\n",
        "tool_calls_df = prepare_trace_data_for_evaluation(\n",
        "    df=eval_traces,\n",
        "    extract_cols={\n",
        "        \"tool_calls\": \"tool_calls\",\n",
        "        \"attributes.llm.tools\": \"attributes.llm.tools\",\n",
        "        \"attributes.input.value\": \"attributes.input.value\",\n",
        "    },  # can also add any additional columns to the dataframe\n",
        "    # additional_data={\"reference_outputs\": reference_outputs},\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "J4dAFCQtTdMG"
      },
      "source": [
        "Run evaluations at the trace level on the aggregated tool calls (the agent trajectory)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "_KH5gGtI59I9"
      },
      "outputs": [],
      "source": [
        "# Run evaluations at the trace level - aggregated tool calls / trajectory\n",
        "import nest_asyncio\n",
        "\n",
        "nest_asyncio.apply()\n",
        "\n",
        "evaluations_df = llm_classify(\n",
        "    dataframe=tool_calls_df,\n",
        "    template=TRAJECTORY_ACCURACY_PROMPT_WITHOUT_REFERENCE,\n",
        "    model=model,\n",
        "    rails=[\"correct\", \"incorrect\"],\n",
        "    provide_explanation=True,\n",
        "    verbose=False,\n",
        "    concurrency=5,\n",
        ")"
      ]
    },
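    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Before uploading, it can help to glance at the label distribution (this assumes `llm_classify` returned a `label` column constrained to the rails configured above):"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Summarize the evaluation labels (assumes a 'label' column, per the rails above)\n",
        "print(evaluations_df[\"label\"].value_counts())\n",
        "print(f\"Trajectory accuracy: {(evaluations_df['label'] == 'correct').mean():.0%}\")"
      ]
    },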
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "F-NO-bRwTgvM"
      },
      "source": [
        "Prepare the evaluation results and upload them to Phoenix"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "hZswpaLo5_nZ"
      },
      "outputs": [],
      "source": [
        "# Prep data for upload\n",
        "# Copy evaluations and add trace_id from tool_calls_df (they're in same order)\n",
        "eval_df = evaluations_df.copy()\n",
        "eval_df[\"context.trace_id\"] = tool_calls_df[\"context.trace_id\"].values\n",
        "\n",
        "# Get the root span_id for each trace_id\n",
        "root_spans = trajectory_df[trajectory_df[\"parent_id\"].isnull()][\n",
        "    [\"context.trace_id\", \"context.span_id\"]\n",
        "]\n",
        "\n",
        "# Merge evaluations with root spans to get the span_id\n",
        "eval_df = pd.merge(eval_df, root_spans, on=\"context.trace_id\", how=\"left\")\n",
        "\n",
        "# Rename context.span_id to span_id for upload\n",
        "eval_df = eval_df.rename(columns={\"context.span_id\": \"span_id\"})\n",
        "\n",
        "# Create score column\n",
        "eval_df[\"score\"] = eval_df[\"label\"].apply(lambda x: 1 if x == \"correct\" else 0)\n",
        "\n",
        "# Clean up explanation\n",
        "eval_df[\"explanation\"] = eval_df[\"explanation\"].str.replace(\n",
        "    r\"^(correct|incorrect)\\s*\\n*EXPLANATION\\s*\\n*\", \"\", regex=True\n",
        ")\n",
        "\n",
        "# Select columns for upload\n",
        "eval_df = eval_df[[\"span_id\", \"label\", \"score\", \"explanation\"]]\n",
        "\n",
        "print(\"\\nFinal eval_df for upload:\")\n",
        "print(eval_df.head())\n",
        "\n",
        "# Upload to Phoenix\n",
        "client.spans.log_span_annotations_dataframe(\n",
        "    dataframe=eval_df, annotation_name=\"agent_trajectory\", annotator_kind=\"LLM\"\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2vbUEp0T6FuP"
      },
      "source": [
        "## Conclusion & next steps\n",
        "\n",
        "You've built a sophisticated multi-agent system that combines different AI frameworks (Pydantic AI and Google ADK), models (Gemini and Llama), and protocols (A2A and MCP). The system demonstrates how specialized agents can collaborate to provide balanced analysis through standardized communication.\n",
        "\n",
        "Key takeaways:\n",
        "- MCP tools extend agent capabilities with custom functionality\n",
        "- A2A protocol enables standardized agent communication and discovery\n",
        "- Different agent frameworks can interoperate through common protocols\n",
        "- Vertex AI Agent Engine provides production-ready infrastructure for multi-agent systems\n",
        "\n",
        "Next steps:\n",
        "\n",
        "- Explore Agent Observability and Evaluations in Phoenix\n",
        "- Implement additional agent specializations (fundamental analysis, sentiment analysis)\n",
        "- Add real market data sources instead of synthetic data\n",
        "- Implement more sophisticated orchestration strategies\n",
        "- Add sessions and memory for production\n",
        "- Explore other Agent Engine and A2A features, such as streaming or the Live API."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "MUrNr4Nr6LpN"
      },
      "source": [
        "## Cleaning Up\n",
        "\n",
        "To avoid incurring unnecessary charges, delete the deployed agents and associated resources or delete the entire Google Cloud project if you're done experimenting."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "5Uq7SMxP6SyQ"
      },
      "outputs": [],
      "source": [
        "delete_bear_agent = False\n",
        "delete_bull_agent = False\n",
        "\n",
        "if delete_bear_agent:\n",
        "    client.agent_engines.delete(deployed_bear.api_resource.name, force=True)\n",
        "if delete_bull_agent:\n",
        "    client.agent_engines.delete(deployed_bull.api_resource.name, force=True)"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "name": "tutorial_multi_agent_systems_vertexai_llama_arize.ipynb",
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
