{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "ur8xi4C7S06n"
   },
   "outputs": [],
   "source": [
    "# Copyright 2025 Google LLC\n",
    "#\n",
    "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
    "# you may not use this file except in compliance with the License.\n",
    "# You may obtain a copy of the License at\n",
    "#\n",
    "#     https://www.apache.org/licenses/LICENSE-2.0\n",
    "#\n",
    "# Unless required by applicable law or agreed to in writing, software\n",
    "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
    "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
    "# See the License for the specific language governing permissions and\n",
    "# limitations under the License."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "JAPoU8Sm5E6e"
   },
   "source": [
    "# Get Started with Vertex AI Prompt Optimizer - Tool usage\n",
    "\n",
    "<table align=\"left\">\n",
    "  <td style=\"text-align: center\">\n",
    "    <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/gemini/prompts/prompt_optimizer/get_started_with_vertex_ai_prompt_optimizer_tool_usage.ipynb\">\n",
    "      <img width=\"32px\" src=\"https://www.gstatic.com/pantheon/images/bigquery/welcome_page/colab-logo.svg\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n",
    "    </a>\n",
    "  </td>\n",
    "  <td style=\"text-align: center\">\n",
    "    <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fgemini%2Fprompts%2Fprompt_optimizer%2Fget_started_with_vertex_ai_prompt_optimizer_tool_usage.ipynb\">\n",
    "      <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n",
    "    </a>\n",
    "  </td>\n",
    "  <td style=\"text-align: center\">\n",
    "    <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/gemini/prompts/prompt_optimizer/get_started_with_vertex_ai_prompt_optimizer_tool_usage.ipynb\">\n",
    "      <img src=\"https://www.gstatic.com/images/branding/gcpiconscolors/vertexai/v1/32px.svg\" alt=\"Vertex AI logo\"><br> Open in Vertex AI Workbench\n",
    "    </a>\n",
    "  </td>\n",
    "  <td style=\"text-align: center\">\n",
    "    <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/prompts/prompt_optimizer/get_started_with_vertex_ai_prompt_optimizer_tool_usage.ipynb\">\n",
    "      <img width=\"32px\" src=\"https://raw.githubusercontent.com/primer/octicons/refs/heads/main/icons/mark-github-24.svg\" alt=\"GitHub logo\"><br> View on GitHub\n",
    "    </a>\n",
    "  </td>\n",
    "</table>\n",
    "\n",
    "<div style=\"clear: both;\"></div>\n",
    "\n",
    "<b>Share to:</b>\n",
    "\n",
    "<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/prompts/prompt_optimizer/get_started_with_vertex_ai_prompt_optimizer_tool_usage.ipynb\" target=\"_blank\">\n",
    "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/8/81/LinkedIn_icon.svg\" alt=\"LinkedIn logo\">\n",
    "</a>\n",
    "\n",
    "<a href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/prompts/prompt_optimizer/get_started_with_vertex_ai_prompt_optimizer_tool_usage.ipynb\" target=\"_blank\">\n",
    "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/7/7a/Bluesky_Logo.svg\" alt=\"Bluesky logo\">\n",
    "</a>\n",
    "\n",
    "<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/prompts/prompt_optimizer/get_started_with_vertex_ai_prompt_optimizer_tool_usage.ipynb\" target=\"_blank\">\n",
    "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/5a/X_icon_2.svg\" alt=\"X logo\">\n",
    "</a>\n",
    "\n",
    "<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/prompts/prompt_optimizer/get_started_with_vertex_ai_prompt_optimizer_tool_usage.ipynb\" target=\"_blank\">\n",
    "  <img width=\"20px\" src=\"https://redditinc.com/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" alt=\"Reddit logo\">\n",
    "</a>\n",
    "\n",
    "<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/prompts/prompt_optimizer/get_started_with_vertex_ai_prompt_optimizer_tool_usage.ipynb\" target=\"_blank\">\n",
    "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/51/Facebook_f_logo_%282019%29.svg\" alt=\"Facebook logo\">\n",
    "</a>  "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "84f0f73a0f76"
   },
   "source": [
    "| Author(s) |\n",
    "| --- |\n",
    "| [Ivan Nardini](https://github.com/inardini) |"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "tvgnzT1CKxrO"
   },
   "source": [
    "## Overview\n",
    "\n",
    "When developing with large language models, crafting the perfect prompt—a process known as prompt engineering—is both an art and a science. It can be time-consuming and challenging to write prompts that consistently produce the desired results. Furthermore, as new and improved models are released, prompts that worked well before may need to be updated.\n",
    "\n",
    "To address these challenges, Vertex AI offers the **Prompt Optimizer**, a prompt optimization tool to help you refine and enhance your prompts automatically. This notebook serves as a comprehensive guide to both of its  approaches: the **Zero-Shot Optimizer** and the **Data-Driven Optimizer**.\n",
    "\n",
    "### The two approaches to prompt optimization\n",
    "\n",
    "#### 1\\. Zero-Shot Optimizer\n",
    "\n",
    "This is your go-to tool for rapid prompt refinement and generation *without* needing an evaluation dataset.\n",
    "\n",
    "  * **Generate from Scratch**: Simply describe a task in plain language, and it will generate a complete, well-structured system instruction for you.\n",
    "  * **Refine Existing Prompts**: Provide an existing prompt, and it will rewrite it based on established best practices for clarity, structure, and effectiveness.\n",
    "\n",
    "#### 2\\. Data-Driven Optimizer\n",
    "\n",
    "This tool performs a deep, performance-based optimization that uses your data to measure success.\n",
    "\n",
    "  * **Tune for Performance**: You provide a dataset of sample inputs and expected outputs, and it systematically tests and rewrites your system instructions to find the version that scores highest on the evaluation metrics you define.\n",
    "  * **Task-Specific**: It's the ideal choice when you want to fine-tune a prompt for a specific task and have data to prove what \"better\" looks like.\n",
    "\n",
    "In this tutorial, we'll show how to leverage the **Data-Driven Optimizer** to optimize for tool usage with a Gemini model. The goal is to use Vertex AI prompt optimizer to find a new prompt template which improves the model's ability to predict valid tool (function) calls given user's request.\n"
   ]
  },
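  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the data requirement concrete: a Data-Driven Optimizer dataset simply pairs sample inputs with expected outputs. As a purely illustrative sketch (the field names `question` and `target` here are hypothetical, not a required schema), one labeled tool-usage example might look like:\n",
    "\n",
    "```json\n",
    "{\n",
    "  \"question\": \"What is the latest stock price for Alphabet?\",\n",
    "  \"target\": {\"name\": \"get_stock_price_api\", \"arguments\": {\"ticker\": \"GOOGL\"}}\n",
    "}\n",
    "```\n",
    "\n",
    "The optimizer scores candidate system instructions by how closely the model's predicted function calls match such targets under the evaluation metrics you choose."
   ]
  },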
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "61RBz8LLbxCR"
   },
   "source": [
    "## Get started"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "No17Cw5hgx12"
   },
   "source": [
    "### Install required packages\n",
    "\n",
    "This command installs the necessary Python libraries.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "tFy3H3aPgx12"
   },
   "outputs": [],
   "source": [
    "%pip install \"google-cloud-aiplatform>=1.108.0\" \"pydantic\" \"etils\" \"protobuf==4.25.3\" --force-reinstall --quiet"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "dmWOrTJ3gx13"
   },
   "source": [
    "### Authenticate your notebook environment (Colab only)\n",
    "\n",
    "If you are running this notebook in Google Colab, this cell handles authentication, allowing the notebook to securely access your Google Cloud resources."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "NyKGtVQjgx13"
   },
   "outputs": [],
   "source": [
    "import sys\n",
    "\n",
    "if \"google.colab\" in sys.modules:\n",
    "    from google.colab import auth\n",
    "\n",
    "    auth.authenticate_user()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "DF4l8DTdWgPY"
   },
   "source": [
    "### Set Google Cloud project information\n",
    "\n",
    "Here, we define essential variables for our Google Cloud project. The Prompt Optimizer job will run within a Google Cloud project. You need to [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com) and use the specified Cloud Storage bucket to read input data and write results.\n",
    "\n",
    "Learn more about [setting up a project and a development environment](https://cloud.google.com/vertex-ai/docs/start/cloud-environment)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "Nqwi-5ufWp_B"
   },
   "outputs": [],
   "source": [
    "# Use the environment variable if the user doesn't provide Project ID.\n",
    "import os\n",
    "\n",
    "PROJECT_ID = \"[your-project-id]\"  # @param {type: \"string\", placeholder: \"[your-project-id]\", isTemplate: true}\n",
    "if not PROJECT_ID or PROJECT_ID == \"[your-project-id]\":\n",
    "    PROJECT_ID = str(os.environ.get(\"GOOGLE_CLOUD_PROJECT\"))\n",
    "\n",
    "PROJECT_NUMBER = !gcloud projects describe {PROJECT_ID} --format=\"get(projectNumber)\"[0]\n",
    "PROJECT_NUMBER = PROJECT_NUMBER[0]\n",
    "\n",
    "LOCATION = os.environ.get(\"GOOGLE_CLOUD_REGION\", \"us-central1\")\n",
    "\n",
    "BUCKET_NAME = \"[your-bucket-name]\"  # @param {type: \"string\", placeholder: \"[your-bucket-name]\", isTemplate: true}\n",
    "BUCKET_URI = f\"gs://{BUCKET_NAME}\"\n",
    "\n",
    "! gsutil mb -l {LOCATION} -p {PROJECT_ID} {BUCKET_URI}\n",
    "\n",
    "import vertexai\n",
    "\n",
    "client = vertexai.Client(project=PROJECT_ID, location=LOCATION)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "AaksyUomxawt"
   },
   "source": [
    "### Service account and permissions\n",
    "\n",
    "The Prompt Optimizer runs as a backend job that needs permission to perform actions on your behalf. We grant the necessary IAM roles to the default Compute Engine service account, which the job uses to operate.\n",
    "\n",
    "  * `Vertex AI User`: Allows the job to call Vertex AI models.\n",
    "  * `Storage Object Admin`: Allows the job to read your dataset from and write results to your GCS bucket.\n",
    "  * `Artifact Registry Reader`: Allows the job to download necessary components.\n",
    "\n",
    "[Check out the documentation](https://cloud.google.com/iam/docs/manage-access-service-accounts#iam-view-access-sa-gcloud) to learn how to grant those permissions to a single service account."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "g7MNJEFP7-S9"
   },
   "outputs": [],
   "source": [
    "SERVICE_ACCOUNT = f\"{PROJECT_NUMBER}-compute@developer.gserviceaccount.com\"\n",
    "\n",
    "for role in [\"aiplatform.user\", \"storage.objectAdmin\", \"artifactregistry.reader\"]:\n",
    "    ! gcloud projects add-iam-policy-binding {PROJECT_ID} \\\n",
    "      --member=serviceAccount:{SERVICE_ACCOUNT} \\\n",
    "      --role=roles/{role} --condition=None"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "5303c05f7aa6"
   },
   "source": [
    "### Import libraries"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "6fc324893334"
   },
   "outputs": [],
   "source": [
    "import json\n",
    "import logging\n",
    "from typing import Any, Dict, List, Optional, Tuple\n",
    "\n",
    "from jsonschema import ValidationError, validate\n",
    "import pandas as pd\n",
    "from etils import epath\n",
    "from google.cloud import storage\n",
    "from vertexai.generative_models import FunctionDeclaration, Tool, ToolConfig\n",
    "from pydantic import BaseModel, Field\n",
    "\n",
    "logging.basicConfig(level=logging.INFO, force=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "9EfCy5RI19vt"
   },
   "source": [
    "### Helpers"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "PCIt8uAxYZKZ"
   },
   "outputs": [],
   "source": [
    "def get_company_information_api(content: dict[str, Any]) -> str:\n",
    "    \"A function to simulate an API call to collect company information.\"\n",
    "\n",
    "    company_overviews = {\n",
    "        \"AAPL\": \"Apple maintains a robust financial position with substantial cash reserves and consistent profitability, fueled by its strong brand and loyal customer base. However, growth is slowing and the company faces competition.\",\n",
    "        \"ADBE\": \"Adobe financials are robust, driven by its successful transition to a subscription-based model for its creative and document cloud software.  Profitability and revenue growth are strong.\",\n",
    "        \"AMD\": \"AMD exhibits strong financial performance, gaining market share in the CPU and GPU markets.  Revenue growth and profitability are healthy, driven by strong product offerings.\",\n",
    "        \"AMZN\": \"Amazon financials are mixed, with its e-commerce business facing margin pressure while its cloud computing division (AWS) delivers strong profitability and growth. Its overall revenue remains high but profitability is a concern.\",\n",
    "        \"ASML\": \"ASML boasts a strong financial position due to its monopoly in the extreme ultraviolet lithography market, essential for advanced semiconductor manufacturing.  High profitability and growth are key strengths.\",\n",
    "        \"AVGO\": \"Broadcom maintains healthy financials, driven by its semiconductor and infrastructure software solutions. Acquisitions have played a role in its growth strategy, with consistent profitability and cash flow.\",\n",
    "        \"BABA\": \"Alibaba financials are substantial but facing challenges from regulatory scrutiny in China and increased competition.  E-commerce revenue remains strong but growth is slowing.\",\n",
    "        \"BKNG\": \"Booking Holdings financials are closely tied to the travel industry.  Revenue growth is recovering post-pandemic but profitability can fluctuate based on global travel trends.\",\n",
    "        \"CRM\": \"Salesforce shows robust revenue growth from its cloud-based CRM solutions.  Profitability is improving but competition remains strong.\",\n",
    "        \"CSCO\": \"Cisco financials show moderate growth, transitioning from hardware to software and services.  Profitability is stable but the company faces competition in the networking market.\",\n",
    "        \"GOOGL\": \"Alphabet exhibits strong financials driven by advertising revenue, though facing regulatory scrutiny.  Diversification into other ventures provides growth opportunities but profitability varies.\",\n",
    "        \"IBM\": \"IBM financials are in a state of transformation, shifting focus to hybrid cloud and AI.  Revenue growth is modest, with profitability impacted by legacy businesses.\",\n",
    "        \"INTU\": \"Intuit showcases healthy financials, benefiting from its strong position in tax and financial management software.  Revenue growth and profitability are consistent, fueled by recurring subscription revenue.\",\n",
    "        \"META\": \"Meta Platforms financial performance is tied closely to advertising revenue, facing headwinds from competition and changing privacy regulations.  Investments in the metaverse represent a long-term, high-risk bet.\",\n",
    "        \"MSFT\": \"Microsoft demonstrates healthy financials, benefiting from diversified revenue streams including cloud computing (Azure), software, and hardware.  The company exhibits consistent growth and profitability.\",\n",
    "        \"NFLX\": \"Netflix exhibits strong revenue but faces challenges in maintaining subscriber growth and managing content costs. Profitability varies, and competition in the streaming market is intense.\",\n",
    "        \"NOW\": \"ServiceNow demonstrates strong financials, fueled by its cloud-based workflow automation platform.  Revenue growth and profitability are high, reflecting increased enterprise adoption.\",\n",
    "        \"NVDA\": \"NVIDIA boasts strong financials, driven by its dominance in the GPU market for gaming, AI, and data centers.  High revenue growth and profitability are key strengths.\",\n",
    "        \"ORCL\": \"Oracle financials are in transition, shifting towards cloud-based services. Revenue growth is moderate, and profitability remains stable.  Legacy businesses still contribute significantly.\",\n",
    "        \"QCOM\": \"QUALCOMM financials show strong performance driven by its leadership in mobile chipsets and licensing.  Profitability is high, and growth is tied to the mobile market and 5G adoption.\",\n",
    "        \"SAP\": \"SAP demonstrates steady financials with its enterprise software solutions.  Transition to the cloud is ongoing and impacting revenue growth and profitability.\",\n",
    "        \"SMSN\": \"Samsung financials are diverse, reflecting its presence in various sectors including mobile phones, consumer electronics, and semiconductors. Profitability varies across divisions but the company holds significant cash reserves.\",\n",
    "        \"TCEHY\": \"Tencent financials are driven by its dominant position in the Chinese gaming and social media market. Revenue growth is strong but regulatory risks in China impact its performance.\",\n",
    "        \"TSLA\": \"Tesla financials show strong revenue growth driven by electric vehicle demand, but profitability remains volatile due to production and investment costs. The company high valuation reflects market optimism for future growth.\",\n",
    "        \"TSM\": \"TSMC, a dominant player in semiconductor manufacturing, showcases robust financials fueled by high demand for its advanced chips. Profitability is strong and the company enjoys a technologically advanced position.\",\n",
    "    }\n",
    "    return company_overviews.get(content[\"ticker\"], \"No company overwiew found\")\n",
    "\n",
    "\n",
    "def get_stock_price_api(content: dict[str, Any]) -> str:\n",
    "    \"A function to simulate an API call to collect most recent stock price for a given company.\"\n",
    "    stock_prices = {\n",
    "        \"AAPL\": 225,\n",
    "        \"ADBE\": 503,\n",
    "        \"AMD\": 134,\n",
    "        \"AMZN\": 202,\n",
    "        \"ASML\": 658,\n",
    "        \"AVGO\": 164,\n",
    "        \"BABA\": 88,\n",
    "        \"BKNG\": 4000,\n",
    "        \"CRM\": 325,\n",
    "        \"CSCO\": 57,\n",
    "        \"GOOGL\": 173,\n",
    "        \"IBM\": 201,\n",
    "        \"INTU\": 607,\n",
    "        \"META\": 553,\n",
    "        \"MSFT\": 415,\n",
    "        \"NFLX\": 823,\n",
    "        \"NOW\": 1000,\n",
    "        \"NVDA\": 141,\n",
    "        \"ORCL\": 183,\n",
    "        \"QCOM\": 160,\n",
    "        \"SAP\": 228,\n",
    "        \"SMSN\": 38,\n",
    "        \"TCEHY\": 51,\n",
    "        \"TSLA\": 302,\n",
    "        \"TSM\": 186,\n",
    "    }\n",
    "    return stock_prices.get(str(content[\"ticker\"]), \"No stock price found\")\n",
    "\n",
    "\n",
    "def get_company_news_api(content: dict[str, Any]) -> str:\n",
    "    \"A function to simulate an API call to collect recent news for a given company.\"\n",
    "    news_data = {\n",
    "        \"AAPL\": \"Apple unveils new iPhone, market reaction muted amid concerns about slowing growth.\",\n",
    "        \"ADBE\": \"Adobe integrates AI features into Creative Suite, attracting creative professionals.\",\n",
    "        \"AMD\": \"AMD gains market share in server CPUs, competing with Intel.\",\n",
    "        \"AMZN\": \"Amazon stock dips after reporting lower-than-expected Q3 profits due to increased shipping costs.\",\n",
    "        \"ASML\": \"ASML benefits from high demand for advanced chip manufacturing equipment.\",\n",
    "        \"AVGO\": \"Broadcom announces new acquisition in the semiconductor space.\",\n",
    "        \"BABA\": \"Alibaba stock faces uncertainty amid ongoing regulatory scrutiny in China.\",\n",
    "        \"BKNG\": \"Booking Holdings stock recovers as travel demand rebounds post-pandemic.\",\n",
    "        \"CRM\": \"Salesforce launches new AI-powered CRM tools for enterprise customers.\",\n",
    "        \"CSCO\": \"Cisco stock rises after positive earnings report, focus on networking solutions.\",\n",
    "        \"GOOGL\": \"Alphabet announces new AI-powered search features, aiming to compete with Microsoft.\",\n",
    "        \"IBM\": \"IBM focuses on hybrid cloud solutions, showing steady growth in enterprise segment.\",\n",
    "        \"INTU\": \"Intuit stock dips after announcing price increases for its tax software.\",\n",
    "        \"META\": \"Meta shares rise after positive user growth figures in emerging markets.\",\n",
    "        \"MSFT\": \"Microsoft expands AI integration across its product suite, boosting investor confidence.\",\n",
    "        \"NFLX\": \"Netflix subscriber growth slows, competition heats up in streaming landscape.\",\n",
    "        \"NOW\": \"ServiceNow sees strong growth in its cloud-based workflow automation platform.\",\n",
    "        \"NVDA\": \"Nvidia stock jumps on strong earnings forecast, driven by AI demand.\",\n",
    "        \"ORCL\": \"Oracle cloud revenue continues strong growth, exceeding market expectations.\",\n",
    "        \"QCOM\": \"Qualcomm expands its 5G modem business, partnering with major smartphone manufacturers.\",\n",
    "        \"SAP\": \"SAP cloud transition continues, but faces challenges in attracting new clients.\",\n",
    "        \"SMSN\": \"Samsung unveils new foldable phones, looking to gain market share.\",\n",
    "        \"TCEHY\": \"Tencent faces regulatory pressure in China, impacting investor sentiment.\",\n",
    "        \"TSLA\": \"Tesla stock volatile after price cuts and production increases announced.\",\n",
    "        \"TSM\": \"TSMC reports record chip demand but warns of potential supply chain disruptions.\",\n",
    "    }\n",
    "    return news_data.get(content[\"ticker\"], \"No news available\")\n",
    "\n",
    "\n",
    "def get_company_sentiment_api(content: dict[str, Any]) -> str:\n",
    "    \"A function to simulate an API call to collect market company sentiment for a given company.\"\n",
    "\n",
    "    company_sentiment = {\n",
    "        \"AAPL\": \"Neutral\",\n",
    "        \"ADBE\": \"Neutral\",\n",
    "        \"AMD\": \"Neutral\",\n",
    "        \"AMZN\": \"Neutral\",\n",
    "        \"ASML\": \"Bearish/Undervalued\",\n",
    "        \"AVGO\": \"Neutral\",\n",
    "        \"BABA\": \"Neutral\",\n",
    "        \"BKNG\": \"Neutral\",\n",
    "        \"CRM\": \"Neutral\",\n",
    "        \"CSCO\": \"Neutral\",\n",
    "        \"GOOGL\": \"Neutral\",\n",
    "        \"IBM\": \"Neutral\",\n",
    "        \"INTU\": \"Mixed/Bullish\",\n",
    "        \"META\": \"Neutral\",\n",
    "        \"MSFT\": \"Neutral\",\n",
    "        \"NFLX\": \"Neutral\",\n",
    "        \"NOW\": \"Bullish/Overvalued\",\n",
    "        \"NVDA\": \"Neutral\",\n",
    "        \"ORCL\": \"Neutral\",\n",
    "        \"QCOM\": \"Neutral\",\n",
    "        \"SAP\": \"Neutral\",\n",
    "        \"SMSN\": \"Neutral\",\n",
    "        \"TCEHY\": \"Neutral\",\n",
    "        \"TSLA\": \"Slightly Overvalued\",\n",
    "        \"TSM\": \"Neutral\",\n",
    "    }\n",
    "    return company_sentiment.get(content[\"ticker\"], \"No sentiment available\")\n",
    "\n",
    "def replace_type_key(data: dict[str, Any]) -> dict[str, Any]:\n",
    "    \"\"\"Recursively replaces \"type_\" with \"type\" in a dictionary or list.\"\"\"\n",
    "\n",
    "    def _recursive_replace(item: Any) -> Any:\n",
    "        if isinstance(item, dict):\n",
    "            return {\n",
    "                (\"type\" if k == \"type_\" else k): _recursive_replace(v)\n",
    "                for k, v in item.items()\n",
    "            }\n",
    "        elif isinstance(item, list):\n",
    "            return [_recursive_replace(elem) for elem in item]\n",
    "        else:\n",
    "            return item\n",
    "\n",
    "    new_data = {}\n",
    "    for key, value in data.items():\n",
    "        if key == \"function_declarations\" and isinstance(value, list):\n",
    "            new_data[key] = [_recursive_replace(tool) for tool in value]\n",
    "        else:\n",
    "            new_data[key] = value\n",
    "\n",
    "    return new_data\n",
    "\n",
    "def tool_config_to_dict(tool_config: ToolConfig | None) -> dict[str, Any] | None:\n",
    "    \"\"\"Converts a ToolConfig object to a dictionary.\"\"\"\n",
    "\n",
    "    if tool_config is None:\n",
    "        return None\n",
    "\n",
    "    # pylint: disable=protected-access\n",
    "    config = tool_config._gapic_tool_config.function_calling_config\n",
    "    return {\n",
    "        \"function_calling_config\": {\n",
    "            \"mode\": config.mode.name,\n",
    "            \"allowed_function_names\": list(config.allowed_function_names),\n",
    "        }\n",
    "    }\n",
    "\n",
    "\n",
    "def validate_tools(spec: str) -> None:\n",
    "    \"\"\"Validates the tools specification.\"\"\"\n",
    "    # Define the JSON schema for validation\n",
    "    schema = {\n",
    "        \"type\": \"object\",\n",
    "        \"properties\": {\n",
    "            \"tools\": {\n",
    "                \"type\": \"array\",\n",
    "                \"minItems\": 1,  # Ensures that 'tools' is not an empty array\n",
    "                \"items\": {\n",
    "                    \"type\": \"object\",\n",
    "                    \"properties\": {\n",
    "                        \"function_declarations\": {\n",
    "                            \"type\": \"array\",\n",
    "                            # Ensures this is not an empty array\n",
    "                            \"minItems\": 1,\n",
    "                            \"items\": {\n",
    "                                \"type\": \"object\",\n",
    "                                \"properties\": {\n",
    "                                    \"name\": {\"type\": \"string\"},\n",
    "                                    \"description\": {\"type\": \"string\"},\n",
    "                                    \"parameters\": {\n",
    "                                        \"type\": \"object\",\n",
    "                                        \"properties\": {\n",
    "                                            \"type\": {\"type\": \"string\"},\n",
    "                                            \"properties\": {\"type\": \"object\"},\n",
    "                                            \"required\": {\n",
    "                                                \"type\": \"array\",\n",
    "                                                \"items\": {\"type\": \"string\"},\n",
    "                                            },\n",
    "                                        },\n",
    "                                        \"required\": [\"type\", \"properties\"],\n",
    "                                    },\n",
    "                                },\n",
    "                                \"required\": [\"name\", \"description\", \"parameters\"],\n",
    "                            },\n",
    "                        }\n",
    "                    },\n",
    "                    \"required\": [\"function_declarations\"],\n",
    "                },\n",
    "            }\n",
    "        },\n",
    "        \"required\": [\"tools\"],\n",
    "    }\n",
    "\n",
    "    json_spec = json.loads(spec)\n",
    "    try:\n",
    "        # Validate the JSON specification against the schema\n",
    "        validate(instance=json_spec, schema=schema)\n",
    "    except ValidationError as e:\n",
    "        raise ValueError(f\"Invalid Tools specification: {e}\") from e\n",
    "\n",
    "\n",
    "def validate_tool_config(tool_config: str) -> None:\n",
    "    \"\"\"Validates the format of the tool_config.\"\"\"\n",
    "\n",
    "    schema = {\n",
    "        \"type\": \"object\",\n",
    "        \"properties\": {\n",
    "            \"function_calling_config\": {\n",
    "                \"type\": \"object\",\n",
    "                \"properties\": {\n",
    "                    \"mode\": {\"type\": \"string\", \"enum\": [\"AUTO\", \"ANY\", \"NONE\"]},\n",
    "                    \"allowed_function_names\": {\n",
    "                        \"type\": \"array\",\n",
    "                        \"items\": {\"type\": \"string\"},\n",
    "                    },\n",
    "                },\n",
    "                \"required\": [\"mode\"],\n",
    "            }\n",
    "        },\n",
    "        \"required\": [\"function_calling_config\"],\n",
    "    }\n",
    "\n",
    "    try:\n",
    "        validate(instance=json.loads(tool_config), schema=schema)\n",
    "    except ValidationError as e:\n",
    "        raise ValueError(f\"Invalid tool_config: {tool_config}\") from e\n",
    "\n",
    "def format_demonstrations(demos: Any) -> List[str]:\n",
    "    \"\"\"Format demonstrations into readable strings.\"\"\"\n",
    "    if isinstance(demos, str):\n",
    "        try:\n",
    "            demos = json.loads(demos)\n",
    "        except (json.JSONDecodeError, ValueError):\n",
    "            return []\n",
    "\n",
    "    if not isinstance(demos, list):\n",
    "        return []\n",
    "\n",
    "    formatted = []\n",
    "    for demo in demos:\n",
    "        if isinstance(demo, dict):\n",
    "            demo_str = \"\\n\".join(f\"{k}: {v}\" for k, v in demo.items())\n",
    "            formatted.append(demo_str)\n",
    "        else:\n",
    "            formatted.append(str(demo))\n",
    "\n",
    "    return formatted\n",
    "\n",
    "\n",
    "def split_gcs_path(gcs_path: str) -> Tuple[str, str]:\n",
    "    \"\"\"Split GCS path into bucket name and prefix.\"\"\"\n",
    "    if not gcs_path.startswith(\"gs://\"):\n",
    "        raise ValueError(f\"Invalid GCS path. Must start with gs://: {gcs_path}\")\n",
    "\n",
    "    path = gcs_path[len(\"gs://\"):]\n",
    "    parts = path.split(\"/\", 1)\n",
    "    return parts[0], parts[1] if len(parts) > 1 else \"\"\n",
    "\n",
    "\n",
    "def list_gcs_objects(gcs_path: str) -> List[str]:\n",
    "    \"\"\"List all objects under given GCS path.\"\"\"\n",
    "    bucket_name, prefix = parse_gcs_path(gcs_path)\n",
    "\n",
    "    client = storage.Client()\n",
    "    bucket = client.bucket(bucket_name)\n",
    "    blobs = bucket.list_blobs(prefix=prefix)\n",
    "\n",
    "    return [blob.name for blob in blobs]\n",
    "\n",
    "\n",
    "def find_directories_with_files(\n",
    "    base_path: str, required_files: List[str]\n",
    ") -> List[str]:\n",
    "    \"\"\"Find directories containing all required files.\"\"\"\n",
    "    bucket_name, prefix = split_gcs_path(base_path)\n",
    "    all_paths = list_gcs_objects(base_path)\n",
    "\n",
    "    # Group files by directory\n",
    "    directories: Dict[str, set] = {}\n",
    "    for path in all_paths:\n",
    "        dir_path = \"/\".join(path.split(\"/\")[:-1])\n",
    "        filename = path.split(\"/\")[-1]\n",
    "\n",
    "        if dir_path not in directories:\n",
    "            directories[dir_path] = set()\n",
    "        directories[dir_path].add(filename)\n",
    "\n",
    "    # Find directories with all required files\n",
    "    matching_dirs = []\n",
    "    for dir_path, files in directories.items():\n",
    "        if all(req_file in files for req_file in required_files):\n",
    "            matching_dirs.append(f\"gs://{bucket_name}/{dir_path}\")\n",
    "\n",
    "    return matching_dirs\n",
    "\n",
    "def parse_gcs_path(gcs_path: str) -> Tuple[str, str]:\n",
    "    \"\"\"Parse GCS path into bucket name and prefix.\"\"\"\n",
    "    if not gcs_path.startswith(\"gs://\"):\n",
    "        raise ValueError(\"Invalid GCS path. Must start with gs://\")\n",
    "\n",
    "    path_without_prefix = gcs_path[5:]  # Remove 'gs://'\n",
    "    parts = path_without_prefix.split(\"/\", 1)\n",
    "    bucket_name = parts[0]\n",
    "    prefix = parts[1] if len(parts) > 1 else \"\"\n",
    "\n",
    "    return bucket_name, prefix\n",
    "\n",
    "def get_best_vapo_results(\n",
    "    base_path: str, metric_name: Optional[str] = None\n",
    ") -> Tuple[str, List[str]]:\n",
    "    \"\"\"Get the best system instruction and demonstrations across all VAPO runs.\"\"\"\n",
    "    # Find all valid runs\n",
    "    required_files = [\"eval_results.json\", \"templates.json\"]\n",
    "    runs = find_directories_with_files(base_path, required_files)\n",
    "\n",
    "    if not runs:\n",
    "        raise ValueError(f\"No valid runs found in {base_path}\")\n",
    "\n",
    "    best_score = float(\"-inf\")\n",
    "    best_instruction = \"\"\n",
    "    best_demonstrations: List[str] = []\n",
    "\n",
    "    for run_path in runs:\n",
    "        try:\n",
    "            # Check main templates.json first\n",
    "            templates_path = f\"{run_path}/templates.json\"\n",
    "            with epath.Path(templates_path).open(\"r\") as f:\n",
    "                templates_data = json.load(f)\n",
    "\n",
    "            if templates_data:\n",
    "                df = pd.json_normalize(templates_data)\n",
    "\n",
    "                # Find metric column\n",
    "                metric_columns = [\n",
    "                    col for col in df.columns\n",
    "                    if \"metric\" in col and \"mean\" in col\n",
    "                ]\n",
    "\n",
    "                if metric_columns:\n",
    "                    # Select appropriate metric\n",
    "                    if metric_name:\n",
    "                        metric_col = next(\n",
    "                            (col for col in metric_columns if metric_name in col),\n",
    "                            None\n",
    "                        )\n",
    "                    else:\n",
    "                        composite_cols = [\n",
    "                            col for col in metric_columns\n",
    "                            if \"composite_metric\" in col\n",
    "                        ]\n",
    "                        metric_col = (\n",
    "                            composite_cols[0] if composite_cols else metric_columns[0]\n",
    "                        )\n",
    "\n",
    "                    if metric_col and metric_col in df.columns:\n",
    "                        best_idx = df[metric_col].argmax()\n",
    "                        score = float(df.iloc[best_idx][metric_col])\n",
    "\n",
    "                        if score > best_score:\n",
    "                            best_score = score\n",
    "                            best_row = df.iloc[best_idx]\n",
    "\n",
    "                            # Extract instruction if present\n",
    "                            if \"prompt\" in best_row or \"instruction\" in best_row:\n",
    "                                instruction = best_row.get(\n",
    "                                    \"prompt\", best_row.get(\"instruction\", \"\")\n",
    "                                )\n",
    "                                if instruction:\n",
    "                                    instruction = instruction.replace(\n",
    "                                        \"store('answer', llm())\", \"{{llm()}}\"\n",
    "                                    )\n",
    "                                    best_instruction = instruction\n",
    "\n",
    "                            # Extract demonstrations if present\n",
    "                            if \"demonstrations\" in best_row or \"demo_set\" in best_row:\n",
    "                                demos = best_row.get(\n",
    "                                    \"demonstrations\", best_row.get(\"demo_set\", [])\n",
    "                                )\n",
    "                                best_demonstrations = format_demonstrations(demos)\n",
    "\n",
    "            # Check instruction-specific optimization\n",
    "            instruction_path = f\"{run_path}/instruction/templates.json\"\n",
    "            try:\n",
    "                with epath.Path(instruction_path).open(\"r\") as f:\n",
    "                    instruction_data = json.load(f)\n",
    "\n",
    "                if instruction_data:\n",
    "                    inst_df = pd.json_normalize(instruction_data)\n",
    "                    metric_columns = [\n",
    "                        col for col in inst_df.columns\n",
    "                        if \"metric\" in col and \"mean\" in col\n",
    "                    ]\n",
    "\n",
    "                    if metric_columns:\n",
    "                        if metric_name:\n",
    "                            metric_col = next(\n",
    "                                (col for col in metric_columns if metric_name in col),\n",
    "                                None,\n",
    "                            )\n",
    "                        else:\n",
    "                            composite_cols = [\n",
    "                                col for col in metric_columns\n",
    "                                if \"composite_metric\" in col\n",
    "                            ]\n",
    "                            metric_col = (\n",
    "                                composite_cols[0] if composite_cols else metric_columns[0]\n",
    "                            )\n",
    "\n",
    "                        if metric_col and metric_col in inst_df.columns:\n",
    "                            inst_best_idx = inst_df[metric_col].argmax()\n",
    "                            inst_score = float(inst_df.iloc[inst_best_idx][metric_col])\n",
    "\n",
    "                            if inst_score > best_score:\n",
    "                                best_score = inst_score\n",
    "                                best_row = inst_df.iloc[inst_best_idx]\n",
    "\n",
    "                                instruction = best_row.get(\n",
    "                                    \"prompt\", best_row.get(\"instruction\", \"\")\n",
    "                                )\n",
    "                                if instruction:\n",
    "                                    instruction = instruction.replace(\n",
    "                                        \"store('answer', llm())\", \"{{llm()}}\"\n",
    "                                    )\n",
    "                                    best_instruction = instruction\n",
    "                                # In instruction-only mode, there might not be demonstrations\n",
    "                                if \"demonstrations\" not in best_row and \"demo_set\" not in best_row:\n",
    "                                    best_demonstrations = []\n",
    "            except FileNotFoundError:\n",
    "                pass\n",
    "\n",
    "            # Check demonstration-specific optimization\n",
    "            demo_path = f\"{run_path}/demonstration/templates.json\"\n",
    "            try:\n",
    "                with epath.Path(demo_path).open(\"r\") as f:\n",
    "                    demo_data = json.load(f)\n",
    "\n",
    "                if demo_data:\n",
    "                    demo_df = pd.json_normalize(demo_data)\n",
    "                    metric_columns = [\n",
    "                        col for col in demo_df.columns\n",
    "                        if \"metric\" in col and \"mean\" in col\n",
    "                    ]\n",
    "\n",
    "                    if metric_columns:\n",
    "                        if metric_name:\n",
    "                            metric_col = next(\n",
    "                                (col for col in metric_columns if metric_name in col),\n",
    "                                None,\n",
    "                            )\n",
    "                        else:\n",
    "                            composite_cols = [\n",
    "                                col for col in metric_columns\n",
    "                                if \"composite_metric\" in col\n",
    "                            ]\n",
    "                            metric_col = (\n",
    "                                composite_cols[0] if composite_cols else metric_columns[0]\n",
    "                            )\n",
    "\n",
    "                        if metric_col and metric_col in demo_df.columns:\n",
    "                            demo_best_idx = demo_df[metric_col].argmax()\n",
    "                            demo_score = float(demo_df.iloc[demo_best_idx][metric_col])\n",
    "\n",
    "                            if demo_score > best_score:\n",
    "                                best_score = demo_score\n",
    "                                best_row = demo_df.iloc[demo_best_idx]\n",
    "\n",
    "                                demos = best_row.get(\n",
    "                                    \"demonstrations\", best_row.get(\"demo_set\", [])\n",
    "                                )\n",
    "                                best_demonstrations = format_demonstrations(demos)\n",
    "                                # In demo-only mode, there might not be an instruction\n",
    "                                if \"prompt\" not in best_row and \"instruction\" not in best_row:\n",
    "                                    best_instruction = \"\"\n",
    "                                else:\n",
    "                                    instruction = best_row.get(\n",
    "                                        \"prompt\", best_row.get(\"instruction\", \"\")\n",
    "                                    )\n",
    "                                    if instruction:\n",
    "                                        instruction = instruction.replace(\n",
    "                                            \"store('answer', llm())\", \"{{llm()}}\"\n",
    "                                        )\n",
    "                                        best_instruction = instruction\n",
    "            except (FileNotFoundError, json.JSONDecodeError):\n",
    "                pass\n",
    "\n",
    "        except Exception as e:\n",
    "            logging.warning(f\"Error processing run {run_path}: {e}\")\n",
    "            continue\n",
    "\n",
    "    if best_score == float(\"-inf\"):\n",
    "        raise ValueError(\"Could not find any valid results\")\n",
    "\n",
    "    return best_instruction, best_demonstrations"
   ]
  },
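  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see how `find_directories_with_files` selects valid runs, here is a minimal, self-contained sketch of its grouping step, applied to hypothetical blob names (no GCS access required):\n",
    "\n",
    "```python\n",
    "# Standalone sketch of the grouping logic in find_directories_with_files,\n",
    "# using hypothetical blob names instead of a live GCS listing.\n",
    "blob_names = [\n",
    "    'optimization_results/run_1/eval_results.json',\n",
    "    'optimization_results/run_1/templates.json',\n",
    "    'optimization_results/run_2/templates.json',\n",
    "]\n",
    "required = ['eval_results.json', 'templates.json']\n",
    "\n",
    "directories = {}\n",
    "for path in blob_names:\n",
    "    dir_path, _, filename = path.rpartition('/')\n",
    "    directories.setdefault(dir_path, set()).add(filename)\n",
    "\n",
    "matching = [d for d, files in directories.items() if all(f in files for f in required)]\n",
    "print(matching)  # only run_1 contains both required files\n",
    "```"
   ]
  },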
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "FUr06qOWxuy9"
   },
   "source": [
    "## Using the Data-Driven Optimizer for long prompt optimization\n",
    "\n",
    "The following sections will guide you through setting up your environment, preparing your data, and running an optimization job to find a better prompt using the data-driven optimizer"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "tfpGmIWrVEt1"
   },
   "source": [
    "### Preparing the Data and Running the Job"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "h1650lf3X8xW"
   },
   "source": [
    "#### The prompt template to optimize\n",
    "\n",
    "A prompt consists of two key parts:\n",
    "\n",
    "* **System Instruction Template** which is a fixed part of the prompt that control or alter the model's behavior across all queries for a given task.\n",
    "\n",
    "* **Prompt Template** which is a dynamic part of the prompt that changes based on the task. Prompt template includes context, task and more. To learn more, see [components of a prompt](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/prompts/prompt-design-strategies#components-of-a-prompt) in the official documentation.\n",
    "\n",
    "In this scenario, you use Vertex AI prompt optimizer to optimize a simple system instruction template. And you use some examples in the remaining prompt template for evaluating different instruction templates along the optimization process.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "Db8rHNC6DmtY"
   },
   "outputs": [],
   "source": [
    "system_instruction = \"\"\"\n",
    "Answer the question using correct tools.\n",
    "\"\"\"\n",
    "\n",
    "prompt_template = \"\"\"\n",
    "Some examples of correct tools associated to a question are:\n",
    "Question: {question}\n",
    "Target tools: {target}\n",
    "\"\"\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "0xXys6YmDS2v"
   },
   "source": [
    "#### The optimization dataset\n",
    "\n",
    "The optimizer's performance depends heavily on the quality of your sample data.\n",
    "\n",
    "For this example, we use a question-answering dataset where each row contains a `question`, and a ground-truth `target` representing a JSON string of expected tool calls. The representation is aligned with the JSON serialized string expected by Gen AI Evaluation service to evaluate [Tool use and function calling.\n",
    "](https://cloud.google.com/vertex-ai/generative-ai/docs/models/determine-eval#tool-use)\n"
   ]
  },
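  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before loading the data, it helps to see the expected shape of a row. The row below is hypothetical, with the `target` field holding a JSON-serialized set of expected tool calls in the spirit of the schema described in the linked documentation (the actual dataset may differ in detail):\n",
    "\n",
    "```python\n",
    "import json\n",
    "\n",
    "# Hypothetical row: 'target' is a JSON string of expected tool calls.\n",
    "example_row = {\n",
    "    'question': 'What is the current stock price of GOOG?',\n",
    "    'target': json.dumps(\n",
    "        {'content': '', 'tool_calls': [{'name': 'get_stock_price', 'arguments': {'ticker': 'GOOG'}}]}\n",
    "    ),\n",
    "}\n",
    "\n",
    "expected_calls = json.loads(example_row['target'])['tool_calls']\n",
    "print(expected_calls[0]['name'])  # get_stock_price\n",
    "```"
   ]
  },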
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "QWMSgAdWDWwW"
   },
   "outputs": [],
   "source": [
    "input_data_path = \"gs://github-repo/prompts/prompt_optimizer/qa_tool_calls_opt_dataset.jsonl\"\n",
    "prompt_optimization_df = pd.read_json(input_data_path, lines=True)\n",
    "prompt_optimization_df.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "uskluxJrA5Ss"
   },
   "source": [
    "#### The optimization configuration\n",
    "\n",
    "Now, we'll create a dictionary with our specific settings and use it to instantiate our `OptimizationConfig` class.\n",
    "\n",
    "The `OptimizationConfig` class, built using `pydantic`, acts as a structured and validated blueprint for our optimization task. It ensures all necessary parameters are defined before we submit the job.\n",
    "\n",
    "In this scenario, you set two additional parameters:\n",
    "\n",
    "* `tools` parameter to pass tool definitions\n",
    "* `tool_config` parameter to pass tool configuration\n",
    "\n",
    "For more advanced control, you can learn and explore more about all the parameters and how to best use them in the [detailed documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/prompts/prompt-optimizer).\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "JGbRSlHT0pCU"
   },
   "outputs": [],
   "source": [
    "class OptimizationConfig(BaseModel):\n",
    "    \"\"\"\n",
    "    A comprehensive prompt optimization configuration model.\n",
    "    \"\"\"\n",
    "\n",
    "    # Basic Configuration\n",
    "    system_instruction: str = Field(\n",
    "        ...,\n",
    "        description=\"System instructions for the target model. String. This field is required.\",\n",
    "    )\n",
    "    prompt_template: str = Field(\n",
    "        ..., description=\"Template for prompts. String. This field is required.\"\n",
    "    )\n",
    "    target_model: str = Field(\n",
    "        \"gemini-2.5-flash\",\n",
    "        description='Target model for optimization. Supported models: \"gemini-2.5-flash\", \"gemini-2.5-pro\"',\n",
    "    )\n",
    "    thinking_budget: int = Field(\n",
    "        -1,\n",
    "        description=\"Thinking budget for thinking models. -1 means auto/no thinking. Integer.\",\n",
    "    )\n",
    "    optimization_mode: str = Field(\n",
    "        \"instruction\",\n",
    "        description='Optimization mode. Supported modes: \"instruction\", \"demonstration\", \"instruction_and_demo\".',\n",
    "    )\n",
    "    project: str = Field(\n",
    "        ..., description=\"Google Cloud project ID. This field is required.\"\n",
    "    )\n",
    "\n",
    "    # Evaluation Settings\n",
    "    eval_metrics_types: List[str] = Field(\n",
    "        description='List of evaluation metrics. E.g., \"bleu\", \"rouge_l\", \"safety\".'\n",
    "    )\n",
    "    eval_metrics_weights: List[float] = Field(\n",
    "        description=\"Weights for evaluation metrics. Length must match eval_metrics_types and should sum to 1.\"\n",
    "    )\n",
    "    aggregation_type: str = Field(\n",
    "        \"weighted_sum\",\n",
    "        description='Aggregation type for metrics. Supported: \"weighted_sum\", \"weighted_average\".',\n",
    "    )\n",
    "    custom_metric_name: str = Field(\n",
    "        \"\",\n",
    "        description=\"Metric name, as defined by the key that corresponds in the dictionary returned from Cloud function. String.\",\n",
    "    )\n",
    "    custom_metric_cloud_function_name: str = Field(\n",
    "        \"\",\n",
    "        description=\"Cloud Run function name you previously deployed. String.\",\n",
    "    )\n",
    "\n",
    "    # Data and I/O Paths\n",
    "    input_data_path: str = Field(\n",
    "        ...,\n",
    "        description=\"Cloud Storage URI to input optimization data. This field is required.\",\n",
    "    )\n",
    "    output_path: str = Field(\n",
    "        ...,\n",
    "        description=\"Cloud Storage URI to save optimization results. This field is required.\",\n",
    "    )\n",
    "\n",
    "    # (Optional) Advanced Configuration\n",
    "    num_steps: int = Field(\n",
    "        10,\n",
    "        ge=10,\n",
    "        le=20,\n",
    "        description=\"Number of iterations in instruction optimization mode. Integer between 10 and 20.\",\n",
    "    )\n",
    "    num_demo_set_candidates: int = Field(\n",
    "        10,\n",
    "        ge=10,\n",
    "        le=30,\n",
    "        description=\"Number of demonstrations evaluated. Integer between 10 and 30.\",\n",
    "    )\n",
    "    demo_set_size: int = Field(\n",
    "        3,\n",
    "        ge=3,\n",
    "        le=6,\n",
    "        description=\"Number of demonstrations generated per prompt. Integer between 3 and 6.\",\n",
    "    )\n",
    "\n",
    "    # (Optional) Model Locations and QPS\n",
    "    target_model_location: str = Field(\n",
    "        \"us-central1\", description=\"Location of the target model. Default us-central1.\"\n",
    "    )\n",
    "    target_model_qps: int = Field(\n",
    "        1,\n",
    "        ge=1,\n",
    "        description=\"QPS for the target model. Integer >= 1, based on your quota.\",\n",
    "    )\n",
    "    optimizer_model_location: str = Field(\n",
    "        \"us-central1\",\n",
    "        description=\"Location of the optimizer model. Default us-central1.\",\n",
    "    )\n",
    "    optimizer_model_qps: int = Field(\n",
    "        1,\n",
    "        ge=1,\n",
    "        description=\"QPS for the optimization model. Integer >= 1, based on your quota.\",\n",
    "    )\n",
    "    source_model: str = Field(\n",
    "        \"\",\n",
    "        description=\"Google model previously used with these prompts. Not needed if providing a target column.\",\n",
    "    )\n",
    "    source_model_location: str = Field(\n",
    "        \"us-central1\", description=\"Location of the source model. Default us-central1.\"\n",
    "    )\n",
    "    source_model_qps: Optional[int] = Field(\n",
    "        None, ge=1, description=\"Optional QPS for the source model. Integer >= 1.\"\n",
    "    )\n",
    "    eval_qps: int = Field(\n",
    "        1,\n",
    "        ge=1,\n",
    "        description=\"QPS for the eval model. Integer >= 1, based on your quota.\",\n",
    "    )\n",
    "\n",
    "    # (Optional) Response, Language, and Data Handling\n",
    "    response_mime_type: str = Field(\n",
    "        \"text/plain\",\n",
    "        description=\"MIME response type from the target model. E.g., 'text/plain', 'application/json'.\",\n",
    "    )\n",
    "    response_schema: str = Field(\n",
    "        \"\", description=\"The Vertex AI Controlled Generation response schema.\"\n",
    "    )\n",
    "    language: str = Field(\n",
    "        \"English\",\n",
    "        description='Language of the system instructions. E.g., \"English\", \"Japanese\".',\n",
    "    )\n",
    "    placeholder_to_content: Dict[str, Any] = Field(\n",
    "        {},\n",
    "        description=\"Dictionary of placeholders to replace parameters in the system instruction.\",\n",
    "    )\n",
    "    data_limit: int = Field(\n",
    "        10,\n",
    "        ge=5,\n",
    "        le=100,\n",
    "        description=\"Amount of data used for validation. Integer between 5 and 100.\",\n",
    "    )\n",
    "    translation_source_field_name: str = Field(\n",
    "        \"\",\n",
    "        description=\"Field name for source text if using translation metrics (Comet, MetricX).\",\n",
    "    )\n",
    "    has_multimodal_inputs: bool = Field(\n",
    "        False, description=\"Whether the input data is multimodal.\"\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "82VNGt07_erb"
   },
   "source": [
    "##### Set tools and tools configuration\n",
    "\n",
    "To optimize prompts for using external tools with the Vertex AI SDK, define the tools' functionalities using the `FunctionDeclaration` class. This class uses an OpenAPI-compatible schema to structure the tool definitions.  Your system prompt should be designed to effectively leverage these defined functions.  See the [Introduction to function calling](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/function-calling) for more information.  \n",
    "\n",
    "Example function definitions for a financial assistant are provided below.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "_gEJ5Dd9rsOX"
   },
   "outputs": [],
   "source": [
    "get_company_information = FunctionDeclaration(\n",
    "    name=\"get_company_information\",\n",
    "    description=\"Retrieves financial performance to provide an overview for a company.\",\n",
    "    parameters={\n",
    "        \"type\": \"object\",\n",
    "        \"properties\": {\n",
    "            \"ticker\": {\n",
    "                \"type\": \"string\",\n",
    "                \"description\": \"Stock ticker for a given company\",\n",
    "            }\n",
    "        },\n",
    "        \"required\": [\"ticker\"],\n",
    "    },\n",
    ")\n",
    "\n",
    "get_stock_price = FunctionDeclaration(\n",
    "    name=\"get_stock_price\",\n",
    "    description=\"Only returns the current stock price (in dollars) for a company.\",\n",
    "    parameters={\n",
    "        \"type\": \"object\",\n",
    "        \"properties\": {\n",
    "            \"ticker\": {\n",
    "                \"type\": \"integer\",\n",
    "                \"description\": \"Stock ticker for a company\",\n",
    "            }\n",
    "        },\n",
    "        \"required\": [\"ticker\"],\n",
    "    },\n",
    ")\n",
    "\n",
    "get_company_news = FunctionDeclaration(\n",
    "    name=\"get_company_news\",\n",
    "    description=\"Get the latest news headlines for a given company.\",\n",
    "    parameters={\n",
    "        \"type\": \"object\",\n",
    "        \"properties\": {\n",
    "            \"ticker\": {\n",
    "                \"type\": \"string\",\n",
    "                \"description\": \"Stock ticker for a company.\",\n",
    "            }\n",
    "        },\n",
    "        \"required\": [\"ticker\"],\n",
    "    },\n",
    ")\n",
    "\n",
    "get_company_sentiment = FunctionDeclaration(\n",
    "    name=\"get_company_sentiment\",\n",
    "    description=\"Returns the overall market sentiment for a company.\",\n",
    "    parameters={\n",
    "        \"type\": \"object\",\n",
    "        \"properties\": {\n",
    "            \"ticker\": {\n",
    "                \"type\": \"string\",\n",
    "                \"description\": \"Stock ticker for a company\",\n",
    "            },\n",
    "        },\n",
    "        \"required\": [\"ticker\"],\n",
    "    },\n",
    ")"
   ]
  },
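  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that a `FunctionDeclaration` only describes an interface; the model never executes code itself, it emits a structured call request that your application fulfills. As a sketch (with a made-up price), a local stub matching `get_stock_price` might look like:\n",
    "\n",
    "```python\n",
    "# Hypothetical local implementation matching the get_stock_price declaration.\n",
    "# A real application would query a market-data API; the value here is made up.\n",
    "def get_stock_price(ticker: str) -> dict:\n",
    "    prices = {'GOOG': 175.0}\n",
    "    return {'ticker': ticker, 'price_usd': prices.get(ticker)}\n",
    "\n",
    "print(get_stock_price('GOOG'))  # {'ticker': 'GOOG', 'price_usd': 175.0}\n",
    "```"
   ]
  },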
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "52T9TTwu4LbW"
   },
   "source": [
    "After implementing your functions, wrap each one as a `Tool` object. This allows the Gemini model to discover and execute these functions.  `ToolConfig` provides additional parameters to control how the model interacts with the tools and chooses which function to call.  \n",
    "\n",
    "Further information can be found in the [Introduction to function calling](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/function-calling).\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "bq9M_-jt4KoB"
   },
   "outputs": [],
   "source": [
    "tools = Tool(\n",
    "    function_declarations=[\n",
    "        get_company_information,\n",
    "        get_stock_price,\n",
    "        get_company_news,\n",
    "        get_company_sentiment,\n",
    "    ]\n",
    ")\n",
    "\n",
    "tool_config = ToolConfig(\n",
    "    function_calling_config=ToolConfig.FunctionCallingConfig(\n",
    "        mode=ToolConfig.FunctionCallingConfig.Mode.ANY,\n",
    "        allowed_function_names=[\n",
    "            \"get_company_information\",\n",
    "            \"get_stock_price\",\n",
    "            \"get_company_news\",\n",
    "            \"get_company_sentiment\",\n",
    "        ],\n",
    "    )\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "4nMEMYU6sNUA"
   },
   "source": [
    "To use Vertex AI Prompt Optimizer for tool calling optimization, provide `FunctionDeclaration` and `ToolConfig` as JSON structures (see example below). Vertex AI Prompt Optimizer uses those structures along the optimization process.\n",
    "\n",
    "Tool Calls json:\n",
    "\n",
    "```json\n",
    "{\"tools\": [{\"function_declarations\": [{\"name\": \"function_1\", \"description\": \"My function 1\", \"parameters\": {\"type\": \"OBJECT\", \"properties\": {\"argument_1\": {\"type\": \"STRING\", \"description\": \"My argument 1\"}}, \"required\": [\"argument_1\"], \"property_ordering\": [\"argument_1\"]}}, ...]}]}\n",
    "```\n",
    "Function Calling Configuration json:\n",
    "\n",
    "```json\n",
    "{\"function_calling_config\": {\"mode\": \"your_mode\", \"allowed_function_names\": [\"tool_name_1\", ...]}}\n",
    "```\n",
    "\n",
    "Below you have some helper functions to get those structures and validate them.\n"
   ]
  },
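  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an illustration of what such validation can check, the sketch below (standard library only, not the notebook's actual `validate_tool_config` helper) round-trips a function-calling configuration and verifies its key fields:\n",
    "\n",
    "```python\n",
    "# Minimal sketch of validating a function-calling configuration JSON;\n",
    "# the notebook's validate_tool_config helper may apply stricter checks.\n",
    "import json\n",
    "\n",
    "config = {\n",
    "    'function_calling_config': {\n",
    "        'mode': 'ANY',\n",
    "        'allowed_function_names': ['get_stock_price'],\n",
    "    }\n",
    "}\n",
    "config_json = json.dumps(config)\n",
    "\n",
    "fcc = json.loads(config_json)['function_calling_config']\n",
    "assert fcc['mode'] in {'AUTO', 'ANY', 'NONE'}\n",
    "assert isinstance(fcc['allowed_function_names'], list)\n",
    "print('tool_config structure looks valid')\n",
    "```"
   ]
  },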
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "jxHLX921xdHJ"
   },
   "outputs": [],
   "source": [
    "vapo_tools = json.dumps({\"tools\": [replace_type_key(tools.to_dict())]})\n",
    "vapo_tool_config = json.dumps(tool_config_to_dict(tool_config))\n",
    "\n",
    "validate_tools(vapo_tools)\n",
    "validate_tool_config(vapo_tool_config)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "ceZNbD_YzLEY"
   },
   "source": [
    "##### Set the optimization configuration\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "40Pyzkot040M"
   },
   "outputs": [],
   "source": [
    "output_path = f\"{BUCKET_URI}/optimization_results/\"\n",
    "\n",
    "vapo_data_settings = {\n",
    "    \"system_instruction\": system_instruction,\n",
    "    \"prompt_template\": prompt_template,\n",
    "    \"target_model\": \"gemini-2.5-flash\",\n",
    "    \"thinking_budget\": -1,\n",
    "    \"optimization_mode\": \"instruction\",\n",
    "    \"tools\": vapo_tools,\n",
    "    \"tool_config\": vapo_tool_config,\n",
    "    \"eval_metrics_types\": [\"tool_name_match\", \"tool_parameter_key_match\", \"tool_parameter_kv_match\"],\n",
    "    \"eval_metrics_weights\": [0.4, 0.3, 0.3],\n",
    "    \"aggregation_type\": \"weighted_sum\",\n",
    "    \"input_data_path\": input_data_path,\n",
    "    \"output_path\": output_path,\n",
    "    \"project\": PROJECT_ID,\n",
    "}\n",
    "\n",
    "vapo_data_config = OptimizationConfig(**vapo_data_settings)\n",
    "vapo_data_config_json = vapo_data_config.model_dump()"
   ]
  },
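  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For intuition on how `weighted_sum` aggregation combines the three tool metrics, here is the arithmetic with made-up per-metric scores (the weights mirror `eval_metrics_weights` above):\n",
    "\n",
    "```python\n",
    "# Hypothetical per-metric scores; weights match eval_metrics_weights above.\n",
    "scores = {'tool_name_match': 0.9, 'tool_parameter_key_match': 0.8, 'tool_parameter_kv_match': 0.7}\n",
    "weights = {'tool_name_match': 0.4, 'tool_parameter_key_match': 0.3, 'tool_parameter_kv_match': 0.3}\n",
    "\n",
    "composite = sum(weights[m] * scores[m] for m in scores)\n",
    "print(round(composite, 2))  # 0.81\n",
    "```"
   ]
  },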
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "92qSHhIT838O"
   },
   "source": [
    "#### Upload configuration to Cloud Storage\n",
    "\n",
    "Write the Prompt Optimizer configuration to the file in your GCS bucket.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "6PG_a6ss4J1l"
   },
   "outputs": [],
   "source": [
    "config_path = f\"{BUCKET_URI}/config.json\"\n",
    "\n",
    "with epath.Path(config_path).open(\"w\") as config_file:\n",
    "    json.dump(vapo_data_config_json, config_file)\n",
    "config_file.close()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "FpRGZTk68-Nu"
   },
   "source": [
    "#### Run the prompt optimization job\n",
    "\n",
    "This is the final step. We pass the path to our configuration file and the service account to the Vertex AI client. The `optimize` method starts the custom job on the Vertex AI backend. We set `wait_for_completion` to `True` so the script will pause until the job is finished.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "uGZKNjsu6EEw"
   },
   "outputs": [],
   "source": [
    "vapo_data_run_config = {\n",
    "    \"config_path\": config_path,\n",
    "    \"wait_for_completion\": True,\n",
    "    \"service_account\": SERVICE_ACCOUNT,\n",
    "}\n",
    "\n",
    "result = client.prompt_optimizer.optimize(method=\"vapo\", config=vapo_data_run_config)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "wJL6tRAWKyXz"
   },
   "source": [
    "### Get and use the best prompt programmatically\n",
    "\n",
    "For use in an application, you can programmatically retrieve the top-performing instruction from the output files stored in GCS.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "8b_LRAhyxOvQ"
   },
   "outputs": [],
   "source": [
    "best_instruction, _ = get_best_vapo_results(output_path)\n",
    "print(\"The optimized instruction is:\\n\", best_instruction)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "2a4e033321ad"
   },
   "source": [
    "## Cleaning up"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "U_n-_B-Ekpk_"
   },
   "outputs": [],
   "source": [
    "delete_job = True\n",
    "delete_bucket = True\n",
    "\n",
    "if delete_job:\n",
    "    from google.cloud import aiplatform\n",
    "    aiplatform.init(project=PROJECT_ID, location=LOCATION)\n",
    "    custom_job_list = aiplatform.CustomJob.list()\n",
    "    latest_job = custom_job_list[0]\n",
    "    latest_job.delete()\n",
    "\n",
    "if delete_bucket:\n",
    "    ! gsutil -m rm -r $BUCKET_URI"
   ]
  }
 ],
 "metadata": {
  "colab": {
   "name": "get_started_with_vertex_ai_prompt_optimizer_tool_usage.ipynb",
   "toc_visible": true
  },
  "environment": {
   "kernel": "python3",
   "name": "workbench-notebooks.m131",
   "type": "gcloud",
   "uri": "us-docker.pkg.dev/deeplearning-platform-release/gcr.io/workbench-notebooks:m131"
  },
  "kernelspec": {
   "display_name": "Python 3",
   "name": "python3"
  },
  "language_info": {
   "name": ""
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
