{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ur8xi4C7S06n"
      },
      "outputs": [],
      "source": [
        "# Copyright 2025 Google LLC\n",
        "#\n",
        "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "#     https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "JAPoU8Sm5E6e"
      },
      "source": [
        "# Get started with Code Execution on Vertex AI Agent Engine\n",
        "\n",
        "<table align=\"left\">\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/agents/agent_engine/tutorial_get_started_with_code_execution.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://www.gstatic.com/pantheon/images/bigquery/welcome_page/colab-logo.svg\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fagents%2Fagent_engine%2Ftutorial_get_started_with_code_execution.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/agents/agent_engine/tutorial_get_started_with_code_execution.ipynb\">\n",
        "      <img src=\"https://www.gstatic.com/images/branding/gcpiconscolors/vertexai/v1/32px.svg\" alt=\"Vertex AI logo\"><br> Open in Vertex AI Workbench\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/agents/agent_engine/tutorial_get_started_with_code_execution.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://raw.githubusercontent.com/primer/octicons/refs/heads/main/icons/mark-github-24.svg\" alt=\"GitHub logo\"><br> View on GitHub\n",
        "    </a>\n",
        "  </td>\n",
        "</table>\n",
        "\n",
        "<div style=\"clear: both;\"></div>\n",
        "\n",
        "<b>Share to:</b>\n",
        "\n",
        "<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/agents/agent_engine/tutorial_get_started_with_code_execution.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/8/81/LinkedIn_icon.svg\" alt=\"LinkedIn logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/agents/agent_engine/tutorial_get_started_with_code_execution.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/7/7a/Bluesky_Logo.svg\" alt=\"Bluesky logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/agents/agent_engine/tutorial_get_started_with_code_execution.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/5a/X_icon_2.svg\" alt=\"X logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/agents/agent_engine/tutorial_get_started_with_code_execution.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://redditinc.com/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" alt=\"Reddit logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/agents/agent_engine/tutorial_get_started_with_code_execution.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/51/Facebook_f_logo_%282019%29.svg\" alt=\"Facebook logo\">\n",
        "</a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "84f0f73a0f76"
      },
      "source": [
        "| Authors |\n",
        "| --- |\n",
        "| Shaoxiong Zhang |\n",
        "| [Ivan Nardini](https://github.com/inardini) |"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "tvgnzT1CKxrO"
      },
      "source": [
        "## Overview\n",
        "\n",
        "This notebook is your comprehensive guide to the **Code Execution** feature on Vertex AI Agent Engine. We'll show you how to give your AI agents the ability to run code in a secure, managed environment, transforming them from simple conversationalists into capable problem-solvers.\n",
        "\n",
        "In this tutorial, you'll learn how to:\n",
        "\n",
        "* Create and manage a secure **Agent Engine Sandbox** for code execution.\n",
        "* Execute Python code **directly** using the Vertex AI SDK.\n",
        "* Integrate the sandbox with Large Language Models like **Gemini** and **Claude** for dynamic code generation and execution.\n",
        "* Build robust, stateful agents with the **Agent Development Kit (ADK)** that leverage code execution.\n",
        "* Manage the lifecycle of your sandboxes, from creation to cleanup.\n",
        "\n",
        "### What is Agent Engine Sandbox?\n",
        "\n",
        "**Agent Engine Sandbox** is Google's managed service for securely executing code generated by AI models. Think of it as a secure, isolated environment where your AI agents can run Python or JavaScript code without any risk to your underlying infrastructure. It's stateful, fast, and framework-agnostic, meaning you can integrate it with any agent framework and any LLM.\n",
        "\n",
        "### Why Use Agent Engine Sandbox?\n",
        "\n",
        "Key Benefits:\n",
        "\n",
        "1. **Security**: Code runs in an isolated sandbox, preventing interference with your host system's resources, files, or network.\n",
        "2. **Scalability**: Designed to handle production workloads with low latency for sandbox creation and execution.\n",
        "3. **Model-agnostic**: Works with any LLM, not just Gemini.\n",
        "4. **Managed**: No infrastructure to maintain. Google handles the environment, letting you focus on building great agents."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "61RBz8LLbxCR"
      },
      "source": [
        "## Get started\n",
        "\n",
        "Let's begin by setting up our environment.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "No17Cw5hgx12"
      },
      "source": [
        "### Install the Vertex AI SDK and other required packages\n",
        "\n",
        "Installing the Python libraries needed to interact with Vertex AI Agent Engine and build AI agents. Installation typically completes in about 30 seconds; you'll see package names and version numbers scroll by."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "tFy3H3aPgx12"
      },
      "outputs": [],
      "source": [
        "%pip install --upgrade --quiet --force-reinstall \"google-cloud-aiplatform>=1.112.0\" anthropic google-adk\n",
        "\n",
        "# ✅ Installation complete!\n",
        "print(\"✅ Packages installed successfully!\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dmWOrTJ3gx13"
      },
      "source": [
        "### Authenticate your notebook environment (Colab only)\n",
        "\n",
        "Authenticating your Google account so this notebook can access Vertex AI services on your behalf. If you're on Google Colab, a pop-up will ask you to sign in and grant permissions. This is a one-time setup per session."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "NyKGtVQjgx13"
      },
      "outputs": [],
      "source": [
        "import sys\n",
        "\n",
        "if \"google.colab\" in sys.modules:\n",
        "    from google.colab import auth\n",
        "\n",
        "    auth.authenticate_user()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DF4l8DTdWgPY"
      },
      "source": [
        "### Set Google Cloud project information\n",
        "\n",
        "Configuring the notebook to use your specific Google Cloud project and region. All Agent Engine resources will be created in this project and region. After running this cell, you'll see confirmation that Vertex AI is initialized."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Nqwi-5ufWp_B"
      },
      "outputs": [],
      "source": [
        "# Use the environment variable if the user doesn't provide Project ID.\n",
        "import os\n",
        "\n",
        "import vertexai\n",
        "\n",
        "# fmt: off\n",
        "PROJECT_ID = \"[your-project-id]\"  # @param {type: \"string\", placeholder: \"[your-project-id]\", isTemplate: true}\n",
        "# fmt: on\n",
        "if not PROJECT_ID or PROJECT_ID == \"[your-project-id]\":\n",
        "    PROJECT_ID = str(os.environ.get(\"GOOGLE_CLOUD_PROJECT\"))\n",
        "\n",
        "LOCATION = os.environ.get(\"GOOGLE_CLOUD_REGION\", \"us-central1\")\n",
        "\n",
        "# ADK env variables\n",
        "GOOGLE_GENAI_USE_VERTEXAI = 1\n",
        "\n",
        "os.environ[\"GOOGLE_CLOUD_PROJECT\"] = PROJECT_ID\n",
        "os.environ[\"GOOGLE_CLOUD_LOCATION\"] = LOCATION\n",
        "os.environ[\"GOOGLE_GENAI_USE_VERTEXAI\"] = str(GOOGLE_GENAI_USE_VERTEXAI)\n",
        "\n",
        "# Initialize Vertex AI\n",
        "vertexai.init(project=PROJECT_ID, location=LOCATION)\n",
        "client = vertexai.Client(project=PROJECT_ID, location=LOCATION)\n",
        "\n",
        "# ✅ Confirmation\n",
        "print(\"✅ Vertex AI initialized successfully!\")\n",
        "print(f\"   Project: {PROJECT_ID}\")\n",
        "print(f\"   Location: {LOCATION}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5303c05f7aa6"
      },
      "source": [
        "### Import libraries\n",
        "\n",
        "Import all the necessary classes and types from the installed SDKs that we'll use throughout the tutorial."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "6fc324893334"
      },
      "outputs": [],
      "source": [
        "import json\n",
        "import logging\n",
        "import re\n",
        "from io import BytesIO\n",
        "\n",
        "import matplotlib.pyplot as plt\n",
        "from anthropic import AnthropicVertex\n",
        "from vertexai import types\n",
        "from vertexai.generative_models import (\n",
        "    Content,\n",
        "    FunctionDeclaration,\n",
        "    GenerationConfig,\n",
        "    GenerativeModel,\n",
        "    Part,\n",
        "    Tool,\n",
        ")\n",
        "\n",
        "from google.adk.agents import LlmAgent\n",
        "from google.adk.artifacts import InMemoryArtifactService\n",
        "from google.adk.code_executors import BuiltInCodeExecutor\n",
        "from google.adk.code_executors.agent_engine_sandbox_code_executor import (\n",
        "    AgentEngineSandboxCodeExecutor,\n",
        ")\n",
        "from google.adk.events import Event\n",
        "from google.adk.memory import InMemoryMemoryService\n",
        "from google.adk.runners import Runner\n",
        "from google.adk.sessions import InMemorySessionService\n",
        "from google.genai import types as genai_types\n",
        "from pydantic import BaseModel, Field\n",
        "\n",
        "logging.getLogger().setLevel(logging.INFO)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "jyMI-T-9Biga"
      },
      "source": [
        "### Helpers\n",
        "\n",
        "To make the output of our agent interactions easier to read, we'll use this helper function. It parses the event stream from the ADK Runner and prints the important parts—like tool calls, code execution steps, and final responses—in a clear, structured way."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "YR2_pxrjBkjF"
      },
      "outputs": [],
      "source": [
        "def parse_event(event: Event):\n",
        "    \"\"\"Parse agent events to highlight function calls and code execution.\"\"\"\n",
        "    if event.content and event.content.parts:\n",
        "        for part in event.content.parts:\n",
        "            # Check for function call (agent using tool)\n",
        "            if hasattr(part, \"function_call\") and part.function_call:\n",
        "                print(f\"\\nTOOL CALL: {part.function_call.name}\")\n",
        "                if \"code\" in part.function_call.args:\n",
        "                    print(\"Code to execute:\")\n",
        "                    print(part.function_call.args[\"code\"].strip())\n",
        "\n",
        "            # Check for function response (tool result)\n",
        "            elif hasattr(part, \"function_response\") and part.function_response:\n",
        "                resp = part.function_response.response\n",
        "                if isinstance(resp, dict) and resp.get(\"status\") == \"success\":\n",
        "                    print(\"\\nEXECUTION RESULT:\")\n",
        "                    print(resp.get(\"output\", \"\").strip())\n",
        "\n",
        "            # Check for Code Interpreter Extension executable code\n",
        "            elif hasattr(part, \"executable_code\") and part.executable_code:\n",
        "                print(\"\\nCODE INTERPRETER EXECUTION:\")\n",
        "                print(f\"Language: {part.executable_code.language}\")\n",
        "                print(\"Code:\")\n",
        "                print(part.executable_code.code.strip())\n",
        "\n",
        "            # Check for Code Interpreter Extension execution result\n",
        "            elif hasattr(part, \"code_execution_result\") and part.code_execution_result:\n",
        "                print(\"\\nCODE INTERPRETER RESULT:\")\n",
        "                print(f\"Status: {part.code_execution_result.outcome}\")\n",
        "                if part.code_execution_result.output:\n",
        "                    print(\"Output:\")\n",
        "                    print(part.code_execution_result.output.strip())\n",
        "                if (\n",
        "                    hasattr(part.code_execution_result, \"error\")\n",
        "                    and part.code_execution_result.error\n",
        "                ):\n",
        "                    print(f\"Error: {part.code_execution_result.error}\")\n",
        "\n",
        "            # Check for text responses (explanatory text or final response)\n",
        "            elif hasattr(part, \"text\") and part.text:\n",
        "                # For final responses, show with special formatting\n",
        "                if event.is_final_response():\n",
        "                    print(\"\\nAGENT RESPONSE:\")\n",
        "                    print(part.text.strip())\n",
        "                # For intermediate text (explanations before code)\n",
        "                elif len(part.text.strip()) > 0:\n",
        "                    print(\"\\nEXPLANATION:\")\n",
        "                    print(part.text.strip())"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "CDm0an9O8dfN"
      },
      "source": [
        "# Your First Code Execution\n",
        "\n",
        "Let's start with the simplest possible example: executing code directly in an Agent Engine Sandbox. We'll create a secure, isolated environment that runs Python code safely, protecting your infrastructure by keeping untrusted code in isolation."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "tS-m5UVd8l4V"
      },
      "outputs": [],
      "source": [
        "# Create an AgentEngine resource (top-level container)\n",
        "agent_engine = client.agent_engines.create()\n",
        "\n",
        "print(\"✅ Agent Engine created successfully!\")\n",
        "print(f\"   Resource name: {agent_engine.api_resource.name}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0nf0b5n8HXci"
      },
      "source": [
        "We're creating a sandbox within the Agent Engine. This secure runtime environment will execute our code."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "wSnSo_fB9GaL"
      },
      "outputs": [],
      "source": [
        "sandbox_operation = client.agent_engines.sandboxes.create(\n",
        "    name=agent_engine.api_resource.name,\n",
        "    config=types.CreateAgentEngineSandboxConfig(display_name=\"my_first_sandbox\"),\n",
        "    spec={\"code_execution_environment\": {}},\n",
        ")\n",
        "\n",
        "# Save the sandbox resource name for later use\n",
        "sandbox_resource_name = sandbox_operation.response.name\n",
        "\n",
        "print(\"✅ Sandbox created successfully!\")\n",
        "print(\"   Display name: my_first_sandbox\")\n",
        "print(f\"   Resource name: {sandbox_resource_name}\")\n",
        "print(f\"   State: {sandbox_operation.response.state}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "cHJzqS0UHb8-"
      },
      "source": [
        "Now we'll send Python code to the sandbox for execution. The sandbox will run it and return the output."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "C8q-cnZm8uA4"
      },
      "outputs": [],
      "source": [
        "# Execute simple Python code in the sandbox\n",
        "response = client.agent_engines.sandboxes.execute_code(\n",
        "    name=sandbox_resource_name,\n",
        "    input_data={\n",
        "        \"code\": \"import math\\nprint(f'Square root of 15376: {math.sqrt(15376)}')\"\n",
        "    },\n",
        ")\n",
        "\n",
        "print(\"✅ Code executed successfully!\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "C3HtiPkIHfRo"
      },
      "source": [
        "The sandbox returns output in a specific format. Let's parse it to see the results.\n",
        "\n",
        "The response contains an `outputs` array. Each output has:\n",
        "- `mime_type`: Indicates the type of content (JSON for stdout/stderr, or file types)\n",
        "- `data`: The actual content (encoded bytes)\n",
        "- `metadata`: Additional information (like file names for generated files)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "-ILv6TxV8vQV"
      },
      "outputs": [],
      "source": [
        "# Parse the response to extract stdout/stderr\n",
        "# The JSON output (with stdout/stderr) has mime_type=\"application/json\" and no metadata\n",
        "for output in response.outputs:\n",
        "    if output.mime_type == \"application/json\" and output.metadata is None:\n",
        "        # Decode the bytes to string\n",
        "        result = json.loads(output.data.decode(\"utf-8\"))\n",
        "\n",
        "        # Display stdout (standard output)\n",
        "        if result.get(\"msg_out\"):\n",
        "            print(\"📤 Output:\")\n",
        "            print(result.get(\"msg_out\"))\n",
        "\n",
        "        # Display stderr (errors) if any\n",
        "        if result.get(\"msg_err\"):\n",
        "            print(\"❌ Errors:\")\n",
        "            print(result.get(\"msg_err\"))"
      ]
    },
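    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Since we'll repeat this parsing pattern throughout the tutorial, here's an optional helper that wraps it into a single function. (This helper is a convenience sketch, not part of the SDK; it assumes the stdout/stderr payload is always the `application/json` output entry with no metadata, as described above.)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "def extract_text_output(exec_response) -> str:\n",
        "    \"\"\"Return stdout (or stderr) from a sandbox execute_code response.\n",
        "\n",
        "    Illustrative helper: assumes the stdout/stderr payload is the\n",
        "    application/json output entry with no metadata attached.\n",
        "    \"\"\"\n",
        "    for output in exec_response.outputs:\n",
        "        if output.mime_type == \"application/json\" and output.metadata is None:\n",
        "            result = json.loads(output.data.decode(\"utf-8\"))\n",
        "            if result.get(\"msg_err\"):\n",
        "                return f\"Error: {result.get('msg_err')}\"\n",
        "            return result.get(\"msg_out\", \"\")\n",
        "    return \"\"\n",
        "\n",
        "\n",
        "# Try it on the response from the previous execution\n",
        "print(extract_text_output(response))"
      ]
    },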
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "y5rFVnl9-qZv"
      },
      "source": [
        "# Integration with LLMs\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "pMnRiIyITJSj"
      },
      "source": [
        "## Create a sandbox with customized configs\n",
        "\n",
        "Different tasks require different resources:\n",
        "- **Language:** Python for data science, JavaScript for web-related tasks\n",
        "- **Machine Config:** More vCPUs and RAM for computationally intensive operations\n",
        "\n",
        "You can customize the sandbox environment to fit specific needs.\n",
        "\n",
        "| Configuration Option | Values | Description |\n",
        "|---------------------|--------|-------------|\n",
        "| `code_language` | `LANGUAGE_PYTHON`, `LANGUAGE_JAVASCRIPT`, `LANGUAGE_UNSPECIFIED` | Programming language runtime |\n",
        "| `machine_config` | `MACHINE_CONFIG_VCPU4_RAM4GIB`, `MACHINE_CONFIG_UNSPECIFIED` | Compute resources (4 vCPUs, 4GB RAM) |\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "VED9E8FrTfYm"
      },
      "outputs": [],
      "source": [
        "# Define the configuration parameters right before use\n",
        "# fmt: off\n",
        "language_config = \"LANGUAGE_PYTHON\"  # @param [\"LANGUAGE_UNSPECIFIED\", \"LANGUAGE_PYTHON\", \"LANGUAGE_JAVASCRIPT\"] {type:\"string\"}\n",
        "machine_config = \"MACHINE_CONFIG_VCPU4_RAM4GIB\"  # @param [\"MACHINE_CONFIG_UNSPECIFIED\", \"MACHINE_CONFIG_VCPU4_RAM4GIB\"] {type:\"string\"}\n",
        "# fmt: on\n",
        "\n",
        "# Create the custom sandbox\n",
        "sandbox_operation = client.agent_engines.sandboxes.create(\n",
        "    name=agent_engine.api_resource.name,\n",
        "    config=types.CreateAgentEngineSandboxConfig(display_name=\"my_custom_sandbox\"),\n",
        "    spec={\n",
        "        \"code_execution_environment\": {\n",
        "            \"code_language\": language_config,\n",
        "            \"machine_config\": machine_config,\n",
        "        }\n",
        "    },\n",
        ")\n",
        "\n",
        "# Update our sandbox resource name to use this new sandbox\n",
        "sandbox_resource_name = sandbox_operation.response.name\n",
        "\n",
        "print(\"✅ Custom sandbox created successfully!\")\n",
        "print(f\"   Language: {language_config}\")\n",
        "print(f\"   Machine: {machine_config}\")\n",
        "print(f\"   Resource name: {sandbox_resource_name}\")"
      ]
    },
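    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "If you selected `LANGUAGE_JAVASCRIPT` above, the execution call looks exactly the same: only the code payload changes. (Illustrative sketch; run this only if the sandbox was created with the JavaScript runtime, since a Python sandbox won't understand the snippet.)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Only meaningful when the sandbox was created with LANGUAGE_JAVASCRIPT\n",
        "if language_config == \"LANGUAGE_JAVASCRIPT\":\n",
        "    js_response = client.agent_engines.sandboxes.execute_code(\n",
        "        name=sandbox_resource_name,\n",
        "        input_data={\"code\": \"console.log('2^10 =', Math.pow(2, 10));\"},\n",
        "    )\n",
        "    # Same parsing pattern as before: stdout arrives as the JSON output entry\n",
        "    for output in js_response.outputs:\n",
        "        if output.mime_type == \"application/json\" and output.metadata is None:\n",
        "            result = json.loads(output.data.decode(\"utf-8\"))\n",
        "            print(result.get(\"msg_out\", \"\"))"
      ]
    },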
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "_A7zchrh6x6f"
      },
      "source": [
        "## Use Code Execution with Gemini\n",
        "\n",
        "You can combine Gemini's code generation with the Agent Engine Sandbox's secure execution.\n",
        "\n",
        "**Two approaches:**\n",
        "\n",
        "1. **Direct:** Ask Gemini to generate code, then execute it manually\n",
        "2. **Tool calling:** Give Gemini the sandbox as a tool to use autonomously\n",
        "\n",
        "Let's explore both."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "8CTnvW-u68Ch"
      },
      "source": [
        "### Gemini Integration - Direct Approach\n",
        "\n",
        "The most straightforward way to use the sandbox with an LLM is a two-step process: first, ask the LLM to generate code, and second, execute that code in the sandbox.\n",
        "\n",
        "This is perfect for one-off tasks where you want full control."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "asl9_6gc7Odd"
      },
      "outputs": [],
      "source": [
        "# Initialize Gemini model\n",
        "model = GenerativeModel(\"gemini-2.5-flash\")\n",
        "\n",
        "# Ask Gemini to generate code for a calculation\n",
        "prompt = \"\"\"\n",
        "Write Python code to calculate the mean and standard deviation of these numbers:\n",
        "[23, 45, 67, 89, 12, 34, 56]\n",
        "\n",
        "Return only the Python code, no explanations.\n",
        "\"\"\"\n",
        "\n",
        "response = model.generate_content(prompt)\n",
        "generated_code = response.text.replace(\"```python\", \"\").replace(\"```\", \"\").strip()\n",
        "\n",
        "print(\"🤖 Gemini generated code:\")\n",
        "print(generated_code)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "sCd1ovI_7TQx"
      },
      "outputs": [],
      "source": [
        "# Execute the generated code in the Agent Engine Sandbox\n",
        "exec_response = client.agent_engines.sandboxes.execute_code(\n",
        "    name=sandbox_resource_name,\n",
        "    input_data={\"code\": generated_code},\n",
        ")\n",
        "\n",
        "print(\"✅ Code executed successfully!\")\n",
        "print(\"\\n📤 Results:\")\n",
        "\n",
        "# Parse the response outputs for stdout/stderr\n",
        "for output in exec_response.outputs:\n",
        "    if output.mime_type == \"application/json\" and output.metadata is None:\n",
        "        result = json.loads(output.data.decode(\"utf-8\"))\n",
        "        if result.get(\"msg_out\"):\n",
        "            print(result.get(\"msg_out\"))\n",
        "        if result.get(\"msg_err\"):\n",
        "            print(f\"❌ Error: {result.get('msg_err')}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "iiaex-Sz7XKh"
      },
      "source": [
        "### Gemini with Tool Calling\n",
        "\n",
        "We expose the sandbox as a \"tool\" that Gemini can call. This enables the model to:\n",
        "\n",
        "* Decide *when* to execute code\n",
        "* Generate appropriate code\n",
        "* Interpret results\n",
        "* Provide natural language responses\n",
        "\n",
        "This is important for building AI agents that can act autonomously.\n",
        "\n",
        "| Parameter | Purpose |\n",
        "|-----------|---------|\n",
        "| `tools` | List of tools the model can use |\n",
        "| `function_call` | Model's request to use a tool |\n",
        "| `function_response` | Result we send back to the model |"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Sl4W-P5P7ksM"
      },
      "outputs": [],
      "source": [
        "# Define the code execution as a function for Gemini\n",
        "def execute_python_code(code: str) -> str:\n",
        "    \"\"\"Execute Python code in a secure sandbox.\n",
        "\n",
        "    Args:\n",
        "        code: Python code to execute\n",
        "    Returns:\n",
        "        The output from code execution\n",
        "    \"\"\"\n",
        "    # Extract code block if wrapped in markdown\n",
        "    code_match = re.search(r\"```python\\n(.*?)\\n```\", code, re.DOTALL)\n",
        "    if code_match:\n",
        "        code_to_execute = code_match.group(1)\n",
        "    else:\n",
        "        code_to_execute = code\n",
        "\n",
        "    # Execute in sandbox\n",
        "    response = client.agent_engines.sandboxes.execute_code(\n",
        "        name=sandbox_resource_name, input_data={\"code\": code_to_execute}\n",
        "    )\n",
        "\n",
        "    # Parse the response outputs for stdout/stderr\n",
        "    for output in response.outputs:\n",
        "        if output.mime_type == \"application/json\" and output.metadata is None:\n",
        "            result = json.loads(output.data.decode(\"utf-8\"))\n",
        "\n",
        "            if result.get(\"msg_err\"):\n",
        "                return f\"Error: {result.get('msg_err')}\"\n",
        "            return result.get(\"msg_out\", \"Code executed successfully\")\n",
        "\n",
        "    return \"Code executed (no output)\""
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "zGRQ9DO_7lig"
      },
      "outputs": [],
      "source": [
        "# Create a tool from the function\n",
        "code_tool = Tool(\n",
        "    function_declarations=[FunctionDeclaration.from_func(execute_python_code)]\n",
        ")\n",
        "\n",
        "# Send a request that will trigger tool use\n",
        "response = model.generate_content(\n",
        "    contents=[\n",
        "        Content(\n",
        "            role=\"user\",\n",
        "            parts=[\n",
        "                Part.from_text(\n",
        "                    \"Calculate the factorial of 10 and check if it's divisible by 100\"\n",
        "                ),\n",
        "            ],\n",
        "        )\n",
        "    ],\n",
        "    generation_config=GenerationConfig(temperature=0),\n",
        "    tools=[code_tool],\n",
        ")\n",
        "\n",
        "response"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "MFBG34re7rig"
      },
      "outputs": [],
      "source": [
        "# Process the function call from Gemini\n",
        "function_response_parts = []\n",
        "\n",
        "for function_call in response.candidates[0].function_calls:\n",
        "    print(f\"🔧 Gemini wants to call: {function_call.name}\")\n",
        "    print(f\"📝 Generated code:\\n{function_call.args['code']}\\n\")\n",
        "\n",
        "    # Execute the code in the Agent Engine Sandbox\n",
        "    exec_response = client.agent_engines.sandboxes.execute_code(\n",
        "        name=sandbox_resource_name, input_data={\"code\": function_call.args[\"code\"]}\n",
        "    )\n",
        "\n",
        "    # Parse the response outputs for stdout/stderr\n",
        "    execution_output = \"\"\n",
        "    for output in exec_response.outputs:\n",
        "        if output.mime_type == \"application/json\" and output.metadata is None:\n",
        "            result = json.loads(output.data.decode(\"utf-8\"))\n",
        "\n",
        "            if result.get(\"msg_err\"):\n",
        "                execution_output = f\"Error: {result.get('msg_err')}\"\n",
        "            else:\n",
        "                execution_output = result.get(\"msg_out\", \"Code executed successfully\")\n",
        "\n",
        "    print(f\"✅ Execution result:\\n{execution_output}\\n\")\n",
        "\n",
        "    # Prepare the function response for Gemini\n",
        "    function_response_parts.append(\n",
        "        Part.from_function_response(\n",
        "            name=function_call.name, response={\"result\": execution_output}\n",
        "        )\n",
        "    )\n",
        "\n",
        "# Send the function response back to the model\n",
        "function_response_content = Content(role=\"function\", parts=function_response_parts)\n",
        "\n",
        "# Get the final response from Gemini\n",
        "final_response = model.generate_content(\n",
        "    [\n",
        "        Content(\n",
        "            role=\"user\",\n",
        "            parts=[\n",
        "                Part.from_text(\n",
        "                    \"Calculate the factorial of 10 and check if it's divisible by 100\"\n",
        "                )\n",
        "            ],\n",
        "        ),\n",
        "        response.candidates[0].content,  # Original function call\n",
        "        function_response_content,  # Function execution results\n",
        "    ],\n",
        "    tools=[code_tool],\n",
        ")\n",
        "\n",
        "print(\"🤖 Gemini's final response:\")\n",
        "print(final_response.text)"
      ]
    },
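    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick sanity check (independent of the sandbox), the result the model should arrive at is easy to verify locally:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import math\n",
        "\n",
        "# Factorial of 10, computed locally for comparison\n",
        "factorial_10 = math.factorial(10)\n",
        "print(f\"10! = {factorial_10}\")  # 10! = 3628800\n",
        "print(f\"Divisible by 100: {factorial_10 % 100 == 0}\")  # Divisible by 100: True"
      ]
    },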
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "TbkXs2eWH_sd"
      },
      "source": [
        "## Use Code Execution with Other Models\n",
        "\n",
        "One of the **key benefits of the Agent Engine Sandbox** is that it's model-agnostic! You can use it with any LLM on Vertex AI.\n",
        "\n",
        "Let's try it with Anthropic's Claude to demonstrate that the same sandbox works across different models.\n",
        "\n",
        "**Note:** Claude model availability varies by region. We'll use `us-east5` for this example.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "HyFGAaC08Huq"
      },
      "source": [
        "### Claude - Direct Approach\n",
        "\n",
        "The direct, two-step approach works seamlessly with Claude. First, we initialize the Claude client on Vertex AI.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "4xIvvVhv8FaY"
      },
      "outputs": [],
      "source": [
        "# Initialize Claude on Vertex\n",
        "claude = AnthropicVertex(\n",
        "    project_id=PROJECT_ID,\n",
        "    region=\"us-east5\",  # Claude availability varies by region\n",
        ")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "VECRZwTC8Lxi"
      },
      "source": [
        "Now, we ask Claude to generate Python code to perform a more complex task: calculating prime numbers and creating a data visualization with matplotlib.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "zXdHRPb68KD7"
      },
      "outputs": [],
      "source": [
        "# Ask Claude to generate code\n",
        "message = claude.messages.create(\n",
        "    model=\"claude-sonnet-4@20250514\",\n",
        "    max_tokens=1000,\n",
        "    messages=[\n",
        "        {\n",
        "            \"role\": \"user\",\n",
        "            \"content\": \"\"\"Generate Python code to:\n",
        "        1. Create a list of the first 10 prime numbers\n",
        "        2. Calculate their sum and average\n",
        "        3. Create a simple bar chart showing each prime number\n",
        "\n",
        "        Use matplotlib for the chart. Save the chart as 'primes_chart.png'.\n",
        "        Return only the Python code.\"\"\",\n",
        "        }\n",
        "    ],\n",
        ")\n",
        "\n",
        "# Extract code from Claude's response\n",
        "claude_response = message.content[0].text\n",
        "\n",
        "# Extract code block if wrapped in markdown\n",
        "code_match = re.search(r\"```python\\n(.*?)\\n```\", claude_response, re.DOTALL)\n",
        "if code_match:\n",
        "    code_to_execute = code_match.group(1)\n",
        "else:\n",
        "    code_to_execute = claude_response\n",
        "\n",
        "print(\"🤖 Claude generated code:\")\n",
        "print(code_to_execute)"
      ]
    },
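    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Before executing in the sandbox, it's worth knowing what to expect: the first 10 primes and their statistics are easy to verify locally (a sanity check, not part of the sandbox flow):\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "def first_primes(n: int) -> list[int]:\n",
        "    \"\"\"Return the first n primes by trial division.\"\"\"\n",
        "    primes = []\n",
        "    candidate = 2\n",
        "    while len(primes) < n:\n",
        "        # candidate is prime if no smaller prime divides it\n",
        "        if all(candidate % p for p in primes):\n",
        "            primes.append(candidate)\n",
        "        candidate += 1\n",
        "    return primes\n",
        "\n",
        "\n",
        "primes = first_primes(10)\n",
        "print(primes)  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]\n",
        "print(f\"Sum: {sum(primes)}, Average: {sum(primes) / len(primes)}\")  # Sum: 129, Average: 12.9"
      ]
    },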
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "91Q0pZvh8R91"
      },
      "source": [
        "We execute the code generated by Claude. The sandbox handles the matplotlib library and file I/O, generating the chart image.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "I2lStaxA8Uso"
      },
      "outputs": [],
      "source": [
        "# Execute Claude's generated code in the Agent Engine Sandbox\n",
        "exec_response = client.agent_engines.sandboxes.execute_code(\n",
        "    name=sandbox_resource_name, input_data={\"code\": code_to_execute}\n",
        ")\n",
        "\n",
        "print(\"✅ Execution finished. Parsing outputs...\\n\")\n",
        "\n",
        "# Parse response with new pattern\n",
        "for output in exec_response.outputs:\n",
        "    if output.mime_type == \"application/json\" and output.metadata is None:\n",
        "        result = json.loads(output.data.decode(\"utf-8\"))\n",
        "\n",
        "        if result.get(\"msg_out\"):\n",
        "            print(\"📤 Execution Output:\")\n",
        "            print(result.get(\"msg_out\"))\n",
        "\n",
        "        if result.get(\"msg_err\"):\n",
        "            print(\"❌ Errors:\")\n",
        "            print(result.get(\"msg_err\"))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5wvvD1uf8agN"
      },
      "source": [
        "The sandbox can return multiple types of outputs, and we need to handle each appropriately.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "WEU_nnbk8cMx"
      },
      "outputs": [],
      "source": [
        "# Process all outputs from the sandbox\n",
        "for output in exec_response.outputs:\n",
        "    # Handle JSON output (stdout/stderr)\n",
        "    if output.mime_type == \"application/json\" and output.metadata is None:\n",
        "        result = json.loads(output.data.decode(\"utf-8\"))\n",
        "\n",
        "        if result.get(\"msg_out\"):\n",
        "            print(\"📤 Output:\")\n",
        "            print(result.get(\"msg_out\"))\n",
        "\n",
        "    # Handle generated files (like the chart)\n",
        "    elif output.metadata and output.metadata.attributes:\n",
        "        # Extract file name from metadata\n",
        "        file_name = output.metadata.attributes.get(\"file_name\")\n",
        "        if isinstance(file_name, bytes):\n",
        "            file_name = file_name.decode(\"utf-8\")\n",
        "\n",
        "        print(f\"\\n📊 Generated file: {file_name}\")\n",
        "        print(f\"   MIME type: {output.mime_type}\")\n",
        "        print(f\"   Size: {len(output.data)} bytes\")\n",
        "\n",
        "        # If it's an image file, display it\n",
        "        if file_name.endswith((\".png\", \".jpg\", \".jpeg\")):\n",
        "            print(\"   Displaying chart...\")\n",
        "\n",
        "            # Decode and display the image\n",
        "            from io import BytesIO\n",
        "\n",
        "            import matplotlib.pyplot as plt\n",
        "\n",
        "            img = plt.imread(BytesIO(output.data))\n",
        "            fig, ax = plt.subplots(figsize=(8, 6))\n",
        "            ax.imshow(img)\n",
        "            ax.axis(\"off\")\n",
        "            plt.title(f\"Generated by Claude: {file_name}\")\n",
        "            plt.show()\n",
        "\n",
        "            # Optionally save locally\n",
        "            with open(file_name, \"wb\") as f:\n",
        "                f.write(output.data)\n",
        "            print(f\"   ✅ Saved locally as: {file_name}\")\n",
        "        else:\n",
        "            # For text files, display content\n",
        "            print(f\"   Content: {output.data.decode('utf-8', errors='ignore')}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "WYPSd0XE8pKZ"
      },
      "source": [
        "### Using Claude's Native Tool Support\n",
        "\n",
        "Claude on Vertex AI also supports native tool calling. We can define a tool schema that describes our code execution function.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "3rHmop-58pfW"
      },
      "outputs": [],
      "source": [
        "# Define the tool schema for Claude\n",
        "code_execution_tool = {\n",
        "    \"name\": \"execute_python\",\n",
        "    \"description\": \"Execute Python code in a secure Agent Engine Sandbox\",\n",
        "    \"input_schema\": {\n",
        "        \"type\": \"object\",\n",
        "        \"properties\": {\n",
        "            \"code\": {\"type\": \"string\", \"description\": \"Python code to execute\"}\n",
        "        },\n",
        "        \"required\": [\"code\"],\n",
        "    },\n",
        "}"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "_gURHGAH8vRp"
      },
      "source": [
        "This is the implementation of our tool. It will be called when the Claude model decides to use it."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "un70f3r18tbW"
      },
      "outputs": [],
      "source": [
        "# Function to handle tool execution with new file handling pattern\n",
        "def execute_code_tool(code: str) -> str:\n",
        "    \"\"\"Execute code when Claude calls the tool.\"\"\"\n",
        "    try:\n",
        "        response = client.agent_engines.sandboxes.execute_code(\n",
        "            name=sandbox_resource_name, input_data={\"code\": code}\n",
        "        )\n",
        "\n",
        "        # Parse response with new pattern\n",
        "        for output in response.outputs:\n",
        "            if output.mime_type == \"application/json\" and output.metadata is None:\n",
        "                result = json.loads(output.data.decode(\"utf-8\"))\n",
        "\n",
        "                if result.get(\"msg_err\"):\n",
        "                    return f\"Error: {result.get('msg_err')}\"\n",
        "                return result.get(\"msg_out\", \"Code executed successfully\")\n",
        "\n",
        "        return \"Code executed (no output)\"\n",
        "    except Exception as e:\n",
        "        return f\"Execution failed: {e!s}\""
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "CVjYYZMA8z1G"
      },
      "source": [
        "Now we orchestrate the multi-turn conversation. Claude responds with a tool_use request, we execute the tool, send the result back, and Claude generates the final answer.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "YrSZ-x128yKf"
      },
      "outputs": [],
      "source": [
        "# Orchestrate the multi-turn conversation with Claude\n",
        "message = claude.messages.create(\n",
        "    model=\"claude-sonnet-4@20250514\",\n",
        "    max_tokens=1000,\n",
        "    messages=[\n",
        "        {\"role\": \"user\", \"content\": \"Calculate the 10th Fibonacci number for me.\"}\n",
        "    ],\n",
        "    tools=[code_execution_tool],\n",
        ")\n",
        "\n",
        "# Handle tool use in the response\n",
        "if message.stop_reason == \"tool_use\":\n",
        "    tool_results = []\n",
        "\n",
        "    for content in message.content:\n",
        "        if content.type == \"tool_use\":\n",
        "            print(f\"🔧 Claude wants to use tool: {content.name}\")\n",
        "            print(f\"📝 With parameters: {content.input}\\n\")\n",
        "\n",
        "            # Execute the tool\n",
        "            if content.name == \"execute_python\":\n",
        "                result = execute_code_tool(content.input[\"code\"])\n",
        "                print(f\"✅ Execution result: {result}\\n\")\n",
        "\n",
        "                tool_results.append(\n",
        "                    {\n",
        "                        \"type\": \"tool_result\",\n",
        "                        \"tool_use_id\": content.id,\n",
        "                        \"content\": result,\n",
        "                    }\n",
        "                )\n",
        "\n",
        "    # Send tool results back to Claude for final response\n",
        "    final_response = claude.messages.create(\n",
        "        model=\"claude-sonnet-4@20250514\",\n",
        "        max_tokens=1000,\n",
        "        messages=[\n",
        "            {\"role\": \"user\", \"content\": \"Calculate the 10th Fibonacci number for me.\"},\n",
        "            {\"role\": \"assistant\", \"content\": message.content},\n",
        "            {\"role\": \"user\", \"content\": tool_results},\n",
        "        ],\n",
        "        tools=[code_execution_tool],\n",
        "    )\n",
        "\n",
        "    print(f\"🤖 Claude's final answer:\\n{final_response.content[0].text}\")\n",
        "else:\n",
        "    # Claude responded without using tools\n",
        "    print(f\"🤖 Claude's response:\\n{message.content[0].text}\")"
      ]
    },
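    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As with the earlier Gemini example, the expected answer can be verified locally. Note that Fibonacci indexing conventions vary; with F(1) = F(2) = 1, the 10th number is 55:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "def fib(n: int) -> int:\n",
        "    \"\"\"n-th Fibonacci number, with F(1) = F(2) = 1.\"\"\"\n",
        "    a, b = 0, 1\n",
        "    for _ in range(n):\n",
        "        a, b = b, a + b\n",
        "    return a\n",
        "\n",
        "\n",
        "print(fib(10))  # 55"
      ]
    },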
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "K0iG4G7F87Dk"
      },
      "source": [
        "# Building Agents with ADK for Code Execution\n",
        "\n",
        "At this point, you can build AI agents using the Agent Development Kit (ADK). While direct LLM integration works for simple tasks, ADK provides:\n",
        "\n",
        "* Structured agent architecture\n",
        "* Built-in session management\n",
        "* Artifact handling (file persistence)\n",
        "* Event streaming for real-time updates\n",
        "* Production-ready deployment capabilities"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "SKpTQECTDN1F"
      },
      "source": [
        "## Two Approaches to Code Execution\n",
        "\n",
        "The **two code execution options** are:\n",
        "\n",
        "1. **`AgentEngineSandboxCodeExecutor`** - Uses Vertex AI's managed sandbox (what we've been using)\n",
        "2. **`BuiltInCodeExecutor`** - ADK's native code execution (simpler, integrated)\n",
        "\n",
        "Let's compare both approaches so you can choose the right one for your use case.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7o8iixwh9Lzc"
      },
      "source": [
        "### Approach 1: Agent with AgentEngineSandboxCodeExecutor\n",
        "\n",
        "This executor connects your ADK agent to the Vertex AI Agent Engine Sandbox we created earlier.\n",
        "\n",
        "**Key features:**\n",
        "\n",
        "* Uses the managed, isolated sandbox environment\n",
        "* Artifacts automatically saved to Google Cloud Storage (GCS)\n",
        "* Stateful execution (variables persist across calls) for up to 14 days\n",
        "* Full control over sandbox configuration (CPU, memory, language)\n",
        "* Enterprise-grade security and isolation\n",
        "\n",
        "Use it for production agents that need secure isolation, artifact management, and scalability. (Note: artifacts saved to GCS cannot currently be managed directly by users; this capability may be added later.)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vCcx7Bx39eGW"
      },
      "source": [
        "Create an agent using AgentEngineSandboxCodeExecutor.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "aM1mirwI83Y7"
      },
      "outputs": [],
      "source": [
        "# Define the agent with Vertex AI sandbox executor\n",
        "vertex_agent = LlmAgent(\n",
        "    model=\"gemini-2.5-flash\",\n",
        "    name=\"vertex_code_executor_agent\",\n",
        "    description=\"An agent that uses Vertex AI Agent Engine Sandbox for code execution\",\n",
        "    instruction=\"\"\"You are a helpful coding assistant. When asked to perform calculations or data processing:\n",
        "\n",
        "    1. Write clear, well-commented Python code\n",
        "    2. Include print statements to show intermediate steps\n",
        "    3. Use the code executor to run your code\n",
        "    4. Explain the results in a user-friendly way\n",
        "\n",
        "    Always ensure your code is complete and executable.\n",
        "    \"\"\",\n",
        "    code_executor=AgentEngineSandboxCodeExecutor(\n",
        "        # Use the sandbox we created earlier\n",
        "        sandbox_resource_name=sandbox_resource_name\n",
        "    ),\n",
        ")\n",
        "\n",
        "print(\"✅ Agent with AgentEngineSandboxCodeExecutor created!\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "P13np5ux9cJW"
      },
      "source": [
        "Set up the session and runner. ADK requires a session service to manage conversation state.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "IYZBg79S9Xtr"
      },
      "outputs": [],
      "source": [
        "# Set up session management\n",
        "vertex_session_service = InMemorySessionService()\n",
        "await vertex_session_service.create_session(\n",
        "    app_name=\"vertex_code_app\", user_id=\"user1\", session_id=\"session1\"\n",
        ")\n",
        "\n",
        "artifact_session_service = InMemoryArtifactService()\n",
        "\n",
        "# Create the runner\n",
        "vertex_runner = Runner(\n",
        "    agent=vertex_agent,\n",
        "    app_name=\"vertex_code_app\",\n",
        "    session_service=vertex_session_service,\n",
        "    artifact_service=artifact_session_service,\n",
        ")\n",
        "\n",
        "print(\"✅ Session and runner configured!\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "873IENBr9mIj"
      },
      "source": [
        "Run the agent. Notice how the event stream shows the agent's step-by-step execution, including code generation and results.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "NeVpQWAE9isT"
      },
      "outputs": [],
      "source": [
        "# Run the agent with a computational task\n",
        "query = \"Calculate compound interest for $1000 at 5% annual rate for 10 years\"\n",
        "message = genai_types.Content(role=\"user\", parts=[genai_types.Part(text=query)])\n",
        "\n",
        "print(f\"🙋 User query: {query}\\n\")\n",
        "print(\"=\" * 60)\n",
        "\n",
        "async for event in vertex_runner.run_async(\n",
        "    user_id=\"user1\", session_id=\"session1\", new_message=message\n",
        "):\n",
        "    # Use our helper to parse events\n",
        "    parse_event(event=event)"
      ]
    },
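    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The agent's arithmetic can be cross-checked locally (assuming annual compounding, the usual reading of the prompt):\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "principal, rate, years = 1000, 0.05, 10\n",
        "\n",
        "# Compound interest with annual compounding: A = P * (1 + r)^t\n",
        "amount = principal * (1 + rate) ** years\n",
        "print(f\"Final amount: ${amount:,.2f}\")  # Final amount: $1,628.89\n",
        "print(f\"Interest earned: ${amount - principal:,.2f}\")  # Interest earned: $628.89"
      ]
    },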
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "hGqzyAZ-9-4s"
      },
      "source": [
        "### Approach 2: Agent with BuiltInCodeExecutor\n",
        "\n",
        "This executor is ADK's native code execution capability, tightly integrated with Gemini models.\n",
        "\n",
        "**Key features:**\n",
        "\n",
        "* No separate sandbox creation needed\n",
        "* Simpler setup (works out-of-the-box)\n",
        "* Direct integration with Gemini's Code Execution Extension\n",
        "* Good for rapid prototyping and development\n",
        "* Works only with Gemini models\n",
        "\n",
        "Use it for quick prototyping, demos, or when you don't need the full isolation of a managed sandbox.\n",
        "\n",
        "**Note:** This approach uses a different event structure (`executable_code` and `code_execution_result` parts).\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Q4xyfzLV-Js7"
      },
      "source": [
        "Create an agent using BuiltInCodeExecutor.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "B5d943wN9oKu"
      },
      "outputs": [],
      "source": [
        "builtin_agent = LlmAgent(\n",
        "    model=\"gemini-2.5-flash\",  # Must use Gemini for BuiltInCodeExecutor\n",
        "    name=\"builtin_code_executor_agent\",\n",
        "    description=\"An agent that uses ADK's built-in code execution\",\n",
        "    instruction=\"\"\"You are a helpful coding assistant. When asked to perform calculations or data processing:\n",
        "\n",
        "    1. Write clear, well-commented Python code\n",
        "    2. Include print statements to show intermediate steps\n",
        "    3. Use the code execution tool to run your code\n",
        "    4. Explain the results in a user-friendly way\n",
        "\n",
        "    Always ensure your code is complete and executable.\n",
        "    \"\"\",\n",
        "    code_executor=BuiltInCodeExecutor(),  # Much simpler setup!\n",
        ")\n",
        "\n",
        "print(\"✅ Agent with BuiltInCodeExecutor created!\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "LedV5fXr-P0E"
      },
      "source": [
        "Set up the session and runner for the built-in executor agent."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "5HNUQPTz-MjG"
      },
      "outputs": [],
      "source": [
        "builtin_session_service = InMemorySessionService()\n",
        "await builtin_session_service.create_session(\n",
        "    app_name=\"builtin_code_app\", user_id=\"user1\", session_id=\"session1\"\n",
        ")\n",
        "\n",
        "builtin_runner = Runner(\n",
        "    agent=builtin_agent,\n",
        "    app_name=\"builtin_code_app\",\n",
        "    session_service=builtin_session_service,\n",
        ")\n",
        "\n",
        "print(\"✅ Session and runner configured!\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "xuSMC3DZ-TK5"
      },
      "source": [
        "Run the same query with the built-in executor. Notice the event structure will show `CODE INTERPRETER EXECUTION` instead of `TOOL CALL`.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "b6hDjlsh-Qvr"
      },
      "outputs": [],
      "source": [
        "# Run the same query\n",
        "print(f\"🙋 User query: {query}\\n\")\n",
        "print(\"=\" * 60)\n",
        "\n",
        "async for event in builtin_runner.run_async(\n",
        "    user_id=\"user1\", session_id=\"session1\", new_message=message\n",
        "):\n",
        "    # Our helper handles both event types\n",
        "    parse_event(event)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "XNAI4Hmh-6Fb"
      },
      "source": [
        "### Comparison: Which Approach Should You Use?\n",
        "\n",
        "| Feature | AgentEngineSandboxCodeExecutor | BuiltInCodeExecutor |\n",
        "| ----- | ----- | ----- |\n",
        "| **Setup Complexity** | Moderate (requires sandbox and reasoning engine creation) | Simple (no extra setup) |\n",
        "| **Model Support** | Any LLM (Gemini, Claude, etc.) | Only Gemini |\n",
        "| **Artifact Management** | GCS auto-save | In-memory only |\n",
        "| **Resource Control** | Configurable (CPU, RAM) | Fixed resources |\n",
        "| **Statefulness** | Stateful (variables persist) for up to 14 days | Stateful only within a single chat session |\n",
        "| **Production Ready** | ✅ Yes (enterprise-grade) | ⚠️ Prototyping/demos |\n",
        "| **Input** | Executable code | Prompt |\n",
        "| **Output** | Code execution result | Code execution result |\n",
        "| **Best For** | Production agents, multi-model support, artifact handling, custom execution I/O | Quick prototypes, Gemini-only apps |\n",
        "\n",
        "### Recommendations\n",
        "\n",
        "**Choose `AgentEngineSandboxCodeExecutor` when:**\n",
        "\n",
        "* Building production agents  \n",
        "* Need artifact persistence (files saved to GCS for up to 14 days)  \n",
        "* Need configurable compute resources  \n",
        "* Need control over code execution input and output\n",
        "\n",
        "**Choose `BuiltInCodeExecutor` when:**\n",
        "\n",
        "* Rapid prototyping and experimentation  \n",
        "* Don't need artifact persistence  \n",
        "* Want the simplest possible setup  \n",
        "* Building demos or tutorials\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "GOrmHUvyCZi3"
      },
      "source": [
        "## Advanced Example: Data Analyst Agent\n",
        "\n",
        "Let's build a more sophisticated agent that demonstrates real-world usage. This Data Analyst agent will use the `AgentEngineSandboxCodeExecutor` with advanced instructions for analyzing data using the pandas library."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "u5oNwvBzCknr"
      },
      "source": [
        "### Define structured output for data analysis\n",
        "\n",
        "Pydantic models can be used to define a desired output schema for an agent. While we won't enforce it strictly in this example, it's a good practice for ensuring reliable, structured data from your agents.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "UCXhDew6CnTf"
      },
      "outputs": [],
      "source": [
        "class DataAnalysisResult(BaseModel):\n",
        "    \"\"\"Structured output for data analysis results.\"\"\"\n",
        "\n",
        "    total_sales: float = Field(description=\"Total sales amount\")\n",
        "    average_sales: float = Field(description=\"Average sales per product\")\n",
        "    top_product: str = Field(description=\"Product with highest sales\")\n",
        "    insights: str = Field(description=\"Key insights from the analysis\")"
      ]
    },
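    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "For example, if an agent returned its answer as JSON, the schema could validate and parse it. The payload below is hypothetical, and `model_validate` assumes Pydantic v2:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import json\n",
        "\n",
        "# Hypothetical agent output, for illustration only\n",
        "raw = '{\"total_sales\": 31000.0, \"average_sales\": 6200.0, \"top_product\": \"Widget D\", \"insights\": \"West region leads.\"}'\n",
        "\n",
        "# Validation raises if fields are missing or have the wrong type\n",
        "result = DataAnalysisResult.model_validate(json.loads(raw))\n",
        "print(result.top_product, result.total_sales)"
      ]
    },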
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "aLgFsBitCrfj"
      },
      "source": [
        "### Create a data analysis agent\n",
        "\n",
        "Note the more detailed instruction prompt, guiding the agent to act as an expert data analyst.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "xiN8RdanCuNm"
      },
      "outputs": [],
      "source": [
        "data_analyst = LlmAgent(\n",
        "    model=\"gemini-2.5-flash\",\n",
        "    name=\"data_analyst\",\n",
        "    description=\"Expert data analyst for sales and business metrics\",\n",
        "    instruction=\"\"\"You are an expert data analyst. When given data:\n",
        "\n",
        "    1. First, load and explore the data structure\n",
        "    2. Calculate key metrics (totals, averages, trends) using code executor\n",
        "    3. Identify top performers and outliers\n",
        "    4. Generate actionable insights\n",
        "\n",
        "    Always use pandas for data manipulation and include clear print statements.\n",
        "    Format numbers nicely (e.g., currency with commas).\n",
        "    \"\"\",\n",
        "    code_executor=AgentEngineSandboxCodeExecutor(\n",
        "        sandbox_resource_name=sandbox_resource_name\n",
        "    ),\n",
        "    output_key=\"analysis_result\",  # Store result in session state\n",
        ")\n",
        "\n",
        "print(\"✅ Data Analyst agent created!\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "rTETmD76CyKZ"
      },
      "source": [
        "### Run the agent\n",
        "\n",
        "We set up a new runner for our analyst agent with proper session, memory, and artifact services."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "CAL0spOiC1OS"
      },
      "outputs": [],
      "source": [
        "# Initialize session with all services\n",
        "analyst_session_service = InMemorySessionService()\n",
        "analyst_memory_service = InMemoryMemoryService()\n",
        "analyst_artifact_service = InMemoryArtifactService()\n",
        "\n",
        "analyst_session = await analyst_session_service.create_session(\n",
        "    app_name=\"sales_analysis\",\n",
        "    user_id=\"analyst_001\",\n",
        "    session_id=\"analysis_123\",\n",
        "    state={},  # Empty initial state\n",
        ")\n",
        "\n",
        "# Create runner\n",
        "analyst_runner = Runner(\n",
        "    agent=data_analyst,\n",
        "    app_name=\"sales_analysis\",\n",
        "    session_service=analyst_session_service,\n",
        "    memory_service=analyst_memory_service,\n",
        "    artifact_service=analyst_artifact_service,\n",
        ")\n",
        "\n",
        "print(\"✅ Data Analyst runner configured!\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "2cfbjWmmC4BD"
      },
      "outputs": [],
      "source": [
        "# Prepare the analysis request with CSV data\n",
        "analysis_request = genai_types.Content(\n",
        "    role=\"user\",\n",
        "    parts=[\n",
        "        genai_types.Part(\n",
        "            text=\"\"\"\n",
        "Analyze this sales data and provide insights:\n",
        "\n",
        "Product,Sales,Units,Region\n",
        "Widget A,5000,100,North\n",
        "Widget B,7500,150,South\n",
        "Widget C,3000,60,East\n",
        "Widget D,9000,180,West\n",
        "Widget E,6500,130,North\n",
        "\n",
        "Calculate:\n",
        "1. Total and average sales\n",
        "2. Best performing product\n",
        "3. Sales per unit for each product\n",
        "4. Regional performance summary\n",
        "\"\"\"\n",
        "        )\n",
        "    ],\n",
        ")\n",
        "\n",
        "print(\"🙋 User query: Sales data analysis\\n\")\n",
        "print(\"=\" * 60)\n",
        "\n",
        "# Run the analysis\n",
        "async for event in analyst_runner.run_async(\n",
        "    user_id=\"analyst_001\", session_id=\"analysis_123\", new_message=analysis_request\n",
        "):\n",
        "    parse_event(event)"
      ]
    },
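    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To sanity-check the agent's numbers, the same CSV can be parsed locally with the standard library (pure Python, so the check is dependency-free):\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import csv\n",
        "import io\n",
        "\n",
        "# The same CSV we sent to the agent, parsed as a local cross-check\n",
        "data = \"\"\"Product,Sales,Units,Region\n",
        "Widget A,5000,100,North\n",
        "Widget B,7500,150,South\n",
        "Widget C,3000,60,East\n",
        "Widget D,9000,180,West\n",
        "Widget E,6500,130,North\"\"\"\n",
        "\n",
        "rows = list(csv.DictReader(io.StringIO(data)))\n",
        "sales = [int(r[\"Sales\"]) for r in rows]\n",
        "\n",
        "print(f\"Total sales: ${sum(sales):,}\")  # Total sales: $31,000\n",
        "print(f\"Average sales: ${sum(sales) / len(sales):,.0f}\")  # Average sales: $6,200\n",
        "top = max(rows, key=lambda r: int(r[\"Sales\"]))\n",
        "print(f\"Top product: {top['Product']}\")  # Top product: Widget D"
      ]
    },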
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "PXOl6ZGVDGPK"
      },
      "source": [
        "**What makes this example advanced:**\n",
        "\n",
        "* **Real-world data processing**: Uses pandas for CSV data manipulation\n",
        "* **Complex instructions**: Multi-step analysis workflow\n",
        "* **Structured thinking**: Agent follows a systematic approach\n",
        "* **Output formatting**: Produces human-readable, actionable insights\n",
        "* **Session state management**: Can maintain context across multiple queries\n",
        "\n",
        "This demonstrates how you can build specialized agents for domain-specific tasks like data analysis, financial modeling, or scientific computing."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "59gCnkLU_I5o"
      },
      "source": [
        "# Sandbox Management and Operations\n",
        "\n",
        "Learn how to manage the lifecycle of your sandboxes and work with file I/O.\n",
        "\n",
        "**Why this matters:** Proper resource management and understanding file operations helps you:\n",
        "\n",
        "* Avoid unnecessary costs\n",
        "* Keep your project organized\n",
        "* Work with real-world data and outputs\n",
        "* Troubleshoot issues with specific sandboxes\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Dx-Ez_iB_Tcg"
      },
      "source": [
        "## Listing Sandboxes\n",
        "\n",
        "You can list all sandboxes created within a specific AgentEngine resource.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "cM7eIZL6-Waa"
      },
      "outputs": [],
      "source": [
        "# List all sandboxes in the Agent Engine\n",
        "# (materialize the results into a list so we can count and index them)\n",
        "sandboxes = list(\n",
        "    client.agent_engines.sandboxes.list(name=agent_engine.api_resource.name)\n",
        ")\n",
        "\n",
        "print(f\"✅ Found {len(sandboxes)} sandbox(es)\\n\")\n",
        "print(\"=\" * 60)\n",
        "\n",
        "for i, sandbox in enumerate(sandboxes, 1):\n",
        "    print(f\"\\n📦 Sandbox {i}:\")\n",
        "    print(f\"   Display name: {sandbox.display_name}\")\n",
        "    print(f\"   Resource name: {sandbox.name}\")\n",
        "    print(f\"   State: {sandbox.state}\")\n",
        "    print(f\"   Created: {sandbox.create_time}\")\n",
        "    if hasattr(sandbox, \"expire_time\") and sandbox.expire_time:\n",
        "        print(f\"   Expires: {sandbox.expire_time}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "WuKza2Cs_Z29"
      },
      "source": [
        "## Get details of a specific sandbox\n",
        "\n",
        "Retrieve comprehensive information about a single sandbox."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "-1X6yluC_V5Z"
      },
      "outputs": [],
      "source": [
        "# Get detailed information about a specific sandbox\n",
        "if sandboxes:\n",
        "    sandbox_name = sandboxes[0].name\n",
        "    sandbox = client.agent_engines.sandboxes.get(name=sandbox_name)\n",
        "\n",
        "    print(\"✅ Sandbox details retrieved!\\n\")\n",
        "    print(f\"📦 Sandbox: {sandbox.display_name}\")\n",
        "    print(f\"   State: {sandbox.state}\")\n",
        "    print(f\"   Created: {sandbox.create_time}\")\n",
        "    print(f\"   Spec: {sandbox.spec}\")\n",
        "else:\n",
        "    print(\"⚠️ No sandboxes found to inspect\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "jTSQChma_gcf"
      },
      "source": [
        "## Delete a specific sandbox\n",
        "\n",
        "Delete sandboxes you no longer need to avoid incurring costs. The sandbox and all of its resources are permanently removed."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "WV7-N4b5_cJH"
      },
      "outputs": [],
      "source": [
        "# Delete a specific sandbox (with error handling)\n",
        "if sandboxes and len(sandboxes) > 1:  # Only if we have more than one\n",
        "    try:\n",
        "        # Delete the first listed sandbox. In practice, double-check that\n",
        "        # it's not the sandbox you're actively using.\n",
        "        delete_operation = client.agent_engines.sandboxes.delete(name=sandboxes[0].name)\n",
        "\n",
        "        if delete_operation.done:\n",
        "            print(\"✅ Sandbox deleted successfully!\")\n",
        "            print(f\"   Deleted: {sandboxes[0].display_name}\")\n",
        "        else:\n",
        "            print(\"⏳ Deletion in progress...\")\n",
        "            print(f\"   Resource: {sandboxes[0].display_name}\")\n",
        "    except Exception as e:\n",
        "        print(f\"⚠️ Error during deletion: {e!s}\")\n",
        "else:\n",
        "    print(\"ℹ️ Skipping deletion (only one sandbox or none available)\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "UiUUAIET_pQU"
      },
      "source": [
        "## Working with Files in the Sandbox\n",
        "\n",
        "Learn how to send files to the sandbox and retrieve generated files.\n",
        "\n",
        "Real-world code often involves file I/O—reading CSV data, generating charts, creating reports.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "YgIOvENo_vYp"
      },
      "source": [
        "### Understanding Output Types\n",
        "\n",
        "The sandbox returns two types of outputs:\n",
        "\n",
        "1. **JSON output** (stdout/stderr): `mime_type=\"application/json\"` with `metadata=None`\n",
        "2. **Generated files**: Various mime_types (e.g., `image/png`, `text/plain`) with `metadata.attributes`\n",
        "\n",
        "**Input Files** are sent via the `files` array in `input_data`. **Output Files** are retrieved from `response.outputs` with specific mime_types.\n"
      ]
    },
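    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The two cases above can be folded into one small helper. This is a sketch, not part of the SDK: it assumes only the output fields shown in this tutorial (`mime_type`, `metadata.attributes`, and `data`).\n",
        "\n",
        "```python\n",
        "import json\n",
        "\n",
        "\n",
        "def split_outputs(outputs):\n",
        "    \"\"\"Split sandbox outputs into (stdout, stderr, files).\"\"\"\n",
        "    stdout, stderr, files = \"\", \"\", []\n",
        "    for out in outputs:\n",
        "        # Case 1: JSON output carrying stdout/stderr\n",
        "        if out.mime_type == \"application/json\" and out.metadata is None:\n",
        "            result = json.loads(out.data.decode(\"utf-8\"))\n",
        "            stdout += result.get(\"msg_out\") or \"\"\n",
        "            stderr += result.get(\"msg_err\") or \"\"\n",
        "        # Case 2: a generated file\n",
        "        elif out.metadata and out.metadata.attributes:\n",
        "            name = out.metadata.attributes.get(\"file_name\")\n",
        "            if isinstance(name, bytes):\n",
        "                name = name.decode(\"utf-8\")\n",
        "            files.append((name, out.mime_type, out.data))\n",
        "    return stdout, stderr, files\n",
        "```\n",
        "\n",
        "You could then call `split_outputs(response.outputs)` instead of repeating the branching logic each time."
      ]
    },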
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "pg00n7HdAEom"
      },
      "source": [
        "### Example 1: Text File I/O\n",
        "\n",
        "Let's start with a simple example: reading from an input file and writing to an output file.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "GnShqw_e_nCs"
      },
      "outputs": [],
      "source": [
        "# Define the code to read and write text files\n",
        "my_code = \"\"\"\n",
        "with open(\"input.txt\", \"r\") as input_file:\n",
        "    with open(\"output.txt\", \"w\") as output_file:\n",
        "        for line in input_file:\n",
        "            # Echo each line to stdout for visibility\n",
        "            print(f\"Processing: {line.strip()}\")\n",
        "            # Write to output file\n",
        "            output_file.write(line)\n",
        "\"\"\"\n",
        "\n",
        "# Prepare input data with a file\n",
        "input_data = {\n",
        "    \"code\": my_code,\n",
        "    \"files\": [\n",
        "        {\n",
        "            \"name\": \"input.txt\",\n",
        "            \"content\": b\"Hello, Agent Engine Sandbox!\\nThis is a test file.\\nFile I/O is working!\",\n",
        "        }\n",
        "    ],\n",
        "}\n",
        "\n",
        "# Execute the code\n",
        "response = client.agent_engines.sandboxes.execute_code(\n",
        "    name=sandbox_resource_name, input_data=input_data\n",
        ")\n",
        "\n",
        "print(\"✅ Code executed with file I/O!\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "fu_5ABowARz1"
      },
      "source": [
        "Now let's parse the response. We need to handle both stdout/stderr (JSON) and generated files separately.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "YQYHqzCbAK_g"
      },
      "outputs": [],
      "source": [
        "# Process the response outputs\n",
        "for output in response.outputs:\n",
        "    # Check if this is JSON output (stdout/stderr)\n",
        "    if output.mime_type == \"application/json\" and output.metadata is None:\n",
        "        # Decode the JSON response\n",
        "        result = json.loads(output.data.decode(\"utf-8\"))\n",
        "\n",
        "        # Display stdout\n",
        "        if result.get(\"msg_out\"):\n",
        "            print(\"📤 Standard Output:\")\n",
        "            print(result.get(\"msg_out\"))\n",
        "\n",
        "        # Display stderr if any\n",
        "        if result.get(\"msg_err\"):\n",
        "            print(\"❌ Errors:\")\n",
        "            print(result.get(\"msg_err\"))\n",
        "\n",
        "    # Check if this is a generated file\n",
        "    elif output.metadata and output.metadata.attributes:\n",
        "        # Extract the file name from metadata\n",
        "        file_name = output.metadata.attributes.get(\"file_name\")\n",
        "        if isinstance(file_name, bytes):\n",
        "            file_name = file_name.decode(\"utf-8\")\n",
        "\n",
        "        print(f\"\\n📁 Generated File: {file_name}\")\n",
        "        print(f\"   MIME Type: {output.mime_type}\")\n",
        "        print(f\"   Size: {len(output.data)} bytes\")\n",
        "        print(\n",
        "            f\"   Content preview: {output.data[:100].decode('utf-8', errors='ignore')}...\"\n",
        "        )"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "QiCEWPl2AbsI"
      },
      "source": [
        "### Example 2: Generating and Retrieving Image Files\n",
        "\n",
        "Now let's try something more visual—generating a chart with matplotlib and retrieving the PNG file.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "pXR3Kki8AWf6"
      },
      "outputs": [],
      "source": [
        "# Code to generate a matplotlib chart\n",
        "chart_code = \"\"\"\n",
        "import matplotlib.pyplot as plt\n",
        "\n",
        "# Create data\n",
        "x = [1, 2, 3, 4, 5]\n",
        "y = [2, 4, 6, 8, 10]\n",
        "\n",
        "# Create the plot\n",
        "plt.figure(figsize=(8, 6))\n",
        "plt.plot(x, y, marker='o', linewidth=2, markersize=8)\n",
        "plt.xlabel('X-axis')\n",
        "plt.ylabel('Y-axis')\n",
        "plt.title('Simple Line Plot')\n",
        "plt.grid(True, alpha=0.3)\n",
        "\n",
        "# Save the chart\n",
        "plt.savefig('chart_out.png', dpi=150, bbox_inches='tight')\n",
        "print(\"Chart saved to 'chart_out.png'\")\n",
        "\"\"\"\n",
        "\n",
        "# Execute the code (no input files needed this time)\n",
        "response = client.agent_engines.sandboxes.execute_code(\n",
        "    name=sandbox_resource_name, input_data={\"code\": chart_code}\n",
        ")\n",
        "\n",
        "print(\"✅ Chart generation code executed!\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "q3pbdwAXBFIY"
      },
      "source": [
        "Retrieve and display the generated image. Image files come back as binary data that we can decode and display.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "3gYIkfkJBDJA"
      },
      "outputs": [],
      "source": [
        "# Process outputs to find and display the image\n",
        "for output in response.outputs:\n",
        "    # Handle JSON output (stdout/stderr)\n",
        "    if output.mime_type == \"application/json\" and output.metadata is None:\n",
        "        result = json.loads(output.data.decode(\"utf-8\"))\n",
        "        if result.get(\"msg_out\"):\n",
        "            print(\"📤 Output:\")\n",
        "            print(result.get(\"msg_out\"))\n",
        "\n",
        "    # Handle image files\n",
        "    elif output.metadata and output.metadata.attributes:\n",
        "        file_name = output.metadata.attributes.get(\"file_name\")\n",
        "        if isinstance(file_name, bytes):\n",
        "            file_name = file_name.decode(\"utf-8\")\n",
        "\n",
        "        print(f\"\\n📊 Generated Image: {file_name}\")\n",
        "        print(f\"   MIME Type: {output.mime_type}\")\n",
        "        print(f\"   Size: {len(output.data)} bytes\")\n",
        "\n",
        "        # Display the image if it's a PNG/JPG\n",
        "        if file_name.endswith((\".png\", \".jpg\", \".jpeg\")):\n",
        "            # Decode the binary data and display\n",
        "            img = plt.imread(BytesIO(output.data))\n",
        "            fig, ax = plt.subplots(figsize=(8, 6))\n",
        "            ax.imshow(img)\n",
        "            ax.axis(\"off\")\n",
        "            plt.title(f\"Retrieved: {file_name}\")\n",
        "            plt.show()\n",
        "\n",
        "            # Optionally save locally\n",
        "            with open(file_name, \"wb\") as f:\n",
        "                f.write(output.data)\n",
        "            print(f\"   ✅ Saved locally as: {file_name}\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "9BCeGr7yBMnU"
      },
      "source": [
        "### Key Takeaways: File I/O Patterns\n",
        "\n",
        "**Pattern for parsing sandbox responses:**\n",
        "\n",
        "```python\n",
        "for output in response.outputs:\n",
        "    # Case 1: JSON output (stdout/stderr)\n",
        "    if output.mime_type == \"application/json\" and output.metadata is None:\n",
        "        result = json.loads(output.data.decode(\"utf-8\"))\n",
        "        stdout = result.get(\"msg_out\")\n",
        "        stderr = result.get(\"msg_err\")\n",
        "\n",
        "    # Case 2: Generated files\n",
        "    elif output.metadata and output.metadata.attributes:\n",
        "        file_name = output.metadata.attributes.get(\"file_name\")\n",
        "        file_data = output.data  # Binary content\n",
        "        mime_type = output.mime_type\n",
        "```\n",
        "\n",
        "**This pattern will work for all file types:** text files, images, PDFs, CSV files, etc."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "O2AZLNS7BTjF"
      },
      "source": [
        "# Cleaning up\n",
        "\n",
        "Finally, clean up the top-level AgentEngine resource. Using `force=True` will also delete any remaining child resources, like other sandboxes you may have created.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "X5rYHGdvBJAN"
      },
      "outputs": [],
      "source": [
        "# Clean up the Agent Engine and all child resources\n",
        "delete_agent_engine = True  # Set to False to keep resources\n",
        "\n",
        "if delete_agent_engine:\n",
        "    try:\n",
        "        # Using force=True will delete all child sandboxes automatically\n",
        "        agent_engine.delete(force=True)\n",
        "        print(\"✅ Agent Engine and all sandboxes deleted successfully!\")\n",
        "        print(\"   All resources have been cleaned up.\")\n",
        "    except Exception as e:\n",
        "        print(f\"⚠️ Error during cleanup: {e!s}\")\n",
        "else:\n",
        "    print(\"ℹ️ Keeping Agent Engine resources (delete_agent_engine = False)\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "9BWEjjIDBiaz"
      },
      "source": [
        "# Next Steps\n",
        "\n",
        "You've completed the Code Execution tutorial! You now know how to:\n",
        "\n",
        "* Create and manage Agent Engine Sandboxes\n",
        "* Execute code directly and handle file I/O\n",
        "* Integrate sandboxes with Gemini and Claude\n",
        "* Build production-ready agents with ADK\n",
        "* Choose between AgentEngineSandboxCodeExecutor and BuiltInCodeExecutor\n",
        "* Manage and clean up resources\n",
        "\n",
        "There is more to explore. Here are some ideas:\n",
        "\n",
        "* Build your own agent with custom tools\n",
        "* Deploy your agent to production with ADK\n",
        "* Experiment with multi-agent systems"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "name": "tutorial_get_started_with_code_execution.ipynb",
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
