{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "form",
        "id": "sqi5B7V_Rjim"
      },
      "outputs": [],
      "source": [
        "# @title Copyright 2025 Google LLC\n",
        "#\n",
        "# Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "#     https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "VyPmicX9RlZX"
      },
      "source": [
        "# Intro to thought signatures with REST API\n",
        "\n",
        "<table align=\"left\">\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/generative-ai/blob/main/gemini/thinking/intro_thought_signatures_rest.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://www.gstatic.com/pantheon/images/bigquery/welcome_page/colab-logo.svg\" alt=\"Google Colaboratory logo\"><br> Open in Colab\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/colab/import/https:%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fgenerative-ai%2Fmain%2Fgemini%2Fthinking%2Fintro_thought_signatures_rest.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://lh3.googleusercontent.com/JmcxdQi-qOpctIvWKgPtrzZdJJK-J3sWE1RsfjZNwshCFgE_9fULcNpuXYTilIR2hjwN\" alt=\"Google Cloud Colab Enterprise logo\"><br> Open in Colab Enterprise\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/generative-ai/main/gemini/thinking/intro_thought_signatures_rest.ipynb\">\n",
        "      <img src=\"https://www.gstatic.com/images/branding/gcpiconscolors/vertexai/v1/32px.svg\" alt=\"Vertex AI logo\"><br> Open in Vertex AI Workbench\n",
        "    </a>\n",
        "  </td>\n",
        "  <td style=\"text-align: center\">\n",
        "    <a href=\"https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/thinking/intro_thought_signatures_rest.ipynb\">\n",
        "      <img width=\"32px\" src=\"https://raw.githubusercontent.com/primer/octicons/refs/heads/main/icons/mark-github-24.svg\" alt=\"GitHub logo\"><br> View on GitHub\n",
        "    </a>\n",
        "  </td>\n",
        "</table>\n",
        "\n",
        "<p>\n",
        "<b>Share to:</b>\n",
        "\n",
        "<a href=\"https://www.linkedin.com/sharing/share-offsite/?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/thinking/intro_thought_signatures_rest.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/8/81/LinkedIn_icon.svg\" alt=\"LinkedIn logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://bsky.app/intent/compose?text=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/thinking/intro_thought_signatures_rest.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/7/7a/Bluesky_Logo.svg\" alt=\"Bluesky logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://twitter.com/intent/tweet?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/thinking/intro_thought_signatures_rest.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/5a/X_icon_2.svg\" alt=\"X logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://reddit.com/submit?url=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/thinking/intro_thought_signatures_rest.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://redditinc.com/hubfs/Reddit%20Inc/Brand/Reddit_Logo.png\" alt=\"Reddit logo\">\n",
        "</a>\n",
        "\n",
        "<a href=\"https://www.facebook.com/sharer/sharer.php?u=https%3A//github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/thinking/intro_thought_signatures_rest.ipynb\" target=\"_blank\">\n",
        "  <img width=\"20px\" src=\"https://upload.wikimedia.org/wikipedia/commons/5/51/Facebook_f_logo_%282019%29.svg\" alt=\"Facebook logo\">\n",
        "</a>\n",
        "</p>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "8MqT58L6Rm_q"
      },
      "source": [
        "| Authors |\n",
        "| --- |\n",
        "| [Eric Dong](https://github.com/gericdong) |"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "nVxnv1D5RoZw"
      },
      "source": [
        "## Overview\n",
        "\n",
        "A thought signature is an encrypted representation of the model's internal reasoning process for a given turn in a conversation. By passing this signature back to the model in subsequent requests, you provide it with the context of its previous thoughts, allowing it to build upon its reasoning and maintain a coherent line of inquiry.\n",
        "\n",
        "This tutorial explores how to use the thought signatures feature of the Gemini API with cURL and the REST API."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "gPiTOAHURvTM"
      },
      "source": [
        "## Getting Started"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5f7c203ffaa1"
      },
      "source": [
        "### Install required libraries"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "4e66b2f6d36f"
      },
      "outputs": [],
      "source": [
        "%%capture\n",
        "\n",
        "!sudo apt install -y -q jq"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dmWOrTJ3gx13"
      },
      "source": [
        "### Authenticate your notebook environment (Colab only)\n",
        "\n",
        "If you are running this notebook on Google Colab, run the following cell to authenticate your environment."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "NyKGtVQjgx13"
      },
      "outputs": [],
      "source": [
        "import sys\n",
        "\n",
        "if \"google.colab\" in sys.modules:\n",
        "    from google.colab import auth\n",
        "\n",
        "    auth.authenticate_user()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "O6ZGaZlxP9L0"
      },
      "source": [
        "### Set Google Cloud project\n",
        "\n",
        "To get started using Vertex AI, you must have an existing Google Cloud project and [enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).\n",
        "\n",
        "Learn more about [setting up a project and a development environment](https://cloud.google.com/vertex-ai/docs/start/cloud-environment)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "u8IivOG5SqY6"
      },
      "outputs": [],
      "source": [
        "import os\n",
        "\n",
        "# fmt: off\n",
        "PROJECT_ID = \"[your-project-id]\"  # @param {type: \"string\", placeholder: \"[your-project-id]\", isTemplate: true}\n",
        "# fmt: on\n",
        "if not PROJECT_ID or PROJECT_ID == \"[your-project-id]\":\n",
        "    PROJECT_ID = str(os.environ.get(\"GOOGLE_CLOUD_PROJECT\"))\n",
        "\n",
        "LOCATION = os.environ.get(\"GOOGLE_CLOUD_REGION\", \"global\")\n",
        "\n",
        "os.environ[\"GOOGLE_CLOUD_PROJECT\"] = PROJECT_ID\n",
        "os.environ[\"GOOGLE_CLOUD_REGION\"] = LOCATION"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "854fbf388e2b"
      },
      "source": [
        "## Use the Gemini 2.5 Flash model"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "7eeb063ac6d4"
      },
      "outputs": [],
      "source": [
        "MODEL_ID = \"gemini-2.5-flash\"\n",
        "\n",
        "api_host = \"aiplatform.googleapis.com\"\n",
        "if LOCATION != \"global\":\n",
        "    api_host = f\"{LOCATION}-aiplatform.googleapis.com\"\n",
        "\n",
        "os.environ[\"API_ENDPOINT\"] = (\n",
        "    f\"{api_host}/v1/projects/{PROJECT_ID}/locations/{LOCATION}/publishers/google/models/{MODEL_ID}\"\n",
        ")\n",
        "API_ENDPOINT = os.environ[\"API_ENDPOINT\"]"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "37CH91ddY9kG"
      },
      "source": [
        "## Use Thought Signatures\n",
        "\n",
        "When thinking is enabled, the API response includes a `thought_signature` field containing an encrypted representation of the model's reasoning. When you send a function's execution result back to the model, including the `thought_signature` lets the model restore its previous thinking context, which typically improves function calling performance.\n",
        "\n",
        "Optionally, to enable thought summaries, include the `\"thinking_config\": { \"include_thoughts\": true }` object in your request body."
      ]
    },
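    {
      "cell_type": "markdown",
      "metadata": {
        "id": "sig-fields-sketch"
      },
      "source": [
        "For orientation, this sketch shows where the relevant fields live: the request enables thinking via `generationConfig.thinking_config`, and the response attaches a `thoughtSignature` to the part that carries the function call. The fragments below are illustrative placeholders, not real API output:\n",
        "\n",
        "```json\n",
        "// Request fragment\n",
        "\"generationConfig\": {\n",
        "  \"thinking_config\": { \"include_thoughts\": true }\n",
        "}\n",
        "\n",
        "// Response fragment\n",
        "\"candidates\": [{\n",
        "  \"content\": {\n",
        "    \"role\": \"model\",\n",
        "    \"parts\": [{\n",
        "      \"functionCall\": { \"name\": \"get_weather\", \"args\": { \"location\": \"London\" } },\n",
        "      \"thoughtSignature\": \"<encrypted signature string>\"\n",
        "    }]\n",
        "  }\n",
        "}]\n",
        "```"
      ]
    },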
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "hlVCCsncIlYJ"
      },
      "source": [
        "## Example: Conditional Thermostat Control\n",
        "\n",
        "In this scenario, a user wants to set a thermostat based on the current weather. The request is: \"If it's too hot or too cold in London, set the thermostat to a comfortable temperature.\"\n",
        "\n",
        "This requires the model to:\n",
        "\n",
        "- Call a tool to get the weather in London.\n",
        "- Use the returned weather information to decide if another tool needs to be called.\n",
        "- Call the tool to set the thermostat if the condition is met."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vTWbrR1RJP_K"
      },
      "source": [
        "### **Step 1**: First Turn - Get the Weather\n",
        "\n",
        "We send the initial prompt to the model, along with the definitions of the tools it can use and the configuration to enable thinking. We expect it to call the `get_current_temperature` function and return a thought signature to maintain the context of the user's conditional request."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "p8-eTvw5oDE8"
      },
      "outputs": [],
      "source": [
        "%%bash\n",
        "\n",
        "curl -X POST \\\n",
        "-H \"Authorization: Bearer $(gcloud auth print-access-token)\" \\\n",
        "-H \"Content-Type: application/json\" \\\n",
        "\"https://${API_ENDPOINT}:generateContent\" \\\n",
        "-d @- \\\n",
        "> response1.json 2>/dev/null <<EOF\n",
        "{\n",
        "    \"contents\": {\n",
        "      \"role\": \"user\",\n",
        "      \"parts\": { \"text\": \"If it is too hot or too cold in London, set the temperature to a comfortable level. Make your own reasonable assumption for what comfortable means and do not ask for clarification.\" }\n",
        "    },\n",
        "    \"tools\": [\n",
        "      {\n",
        "        \"function_declarations\": [\n",
        "          {\n",
        "            \"name\": \"get_current_temperature\",\n",
        "            \"description\": \"Gets the current weather temperature for a given location.\",\n",
        "            \"parameters\": { \"type\": \"object\", \"properties\": {\"location\": {\"type\": \"string\"}}, \"required\": [\"location\"] }\n",
        "          },\n",
        "          {\n",
        "            \"name\": \"set_thermostat_temperature\",\n",
        "            \"description\": \"Sets the thermostat to a desired temperature.\",\n",
        "            \"parameters\": { \"type\": \"object\", \"properties\": {\"temperature\": {\"type\": \"integer\"}}, \"required\": [\"temperature\"] }\n",
        "          }\n",
        "        ]\n",
        "      }\n",
        "    ],\n",
        "    \"generationConfig\": {\n",
        "      \"thinking_config\": {\n",
        "        \"include_thoughts\": true\n",
        "      }\n",
        "    }\n",
        "}\n",
        "EOF"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "cGJElxiGuDbc"
      },
      "outputs": [],
      "source": [
        "%%bash\n",
        "\n",
        "echo \"--- Function Call ---\"\n",
        "jq -r '.candidates[0].content.parts[] | select(.functionCall) | .functionCall.name' response1.json\n",
        "\n",
        "echo \"--- Thought Signature ---\"\n",
        "jq -r '.candidates[0].content.parts[] | select(.functionCall) | .thoughtSignature' response1.json\n",
        "\n",
        "echo \"--- Thought Summary ---\"\n",
        "jq -r '.candidates[0].content.parts[] | select(.thought) | .text' response1.json"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DOWwBqHuLUQ-"
      },
      "source": [
        "### **Step 2**: Second Turn - Set the Thermostat\n",
        "\n",
        "Now, we send the model's previous response (containing the `functionCall` and the `thought_signature`) and the result from our first tool call back to the model. For this demonstration, we hardcode the function call from the previous response and read the `thought_signature` from the saved response file.\n",
        "\n",
        "⚠️ In this case, you modify the conversation history manually instead of sending back the complete previous response. If you want to benefit from thinking, you must correctly handle the `thought_signature` included in the model's turn.\n",
        "\n",
        "Follow these rules to ensure the model's context is preserved:\n",
        "\n",
        "- Always send the `thought_signature` back to the model inside its original Part.\n",
        "- Don't merge a Part containing a signature with one that does not. This breaks the positional context of the thought.\n",
        "- Don't combine two Parts that both contain signatures, as the signature strings cannot be merged."
      ]
    },
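    {
      "cell_type": "markdown",
      "metadata": {
        "id": "sig-replay-sketch"
      },
      "source": [
        "As a sketch of the first rule, a replayed model turn keeps the signature attached to the part it arrived on; moving it to a different part, or merging parts, breaks the thought context (the signature value below is a placeholder):\n",
        "\n",
        "```json\n",
        "// ✅ Signature stays in its original part\n",
        "{\n",
        "  \"role\": \"model\",\n",
        "  \"parts\": [\n",
        "    {\n",
        "      \"function_call\": { \"name\": \"get_current_temperature\", \"args\": { \"location\": \"London\" } },\n",
        "      \"thought_signature\": \"<signature from this same part of the previous response>\"\n",
        "    }\n",
        "  ]\n",
        "}\n",
        "```"
      ]
    },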
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "R4jrEbRNc-H1"
      },
      "outputs": [],
      "source": [
        "jq_output = !jq -r '.candidates[0].content.parts[] | select(.functionCall) | .thoughtSignature' response1.json\n",
        "\n",
        "os.environ[\"THOUGHT_SIGNATURE_1\"] = jq_output[0]"
      ]
    },
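    {
      "cell_type": "markdown",
      "metadata": {
        "id": "json-extract-note"
      },
      "source": [
        "If you prefer to stay in Python, the same extraction can be done with the standard `json` module instead of `jq`. This is a minimal sketch that assumes `response1.json` was written by the request above:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "json-extract-sketch"
      },
      "outputs": [],
      "source": [
        "import json\n",
        "\n",
        "# Load the saved response and find the part that carries the function call.\n",
        "with open(\"response1.json\") as f:\n",
        "    response = json.load(f)\n",
        "\n",
        "parts = response[\"candidates\"][0][\"content\"][\"parts\"]\n",
        "call_part = next(p for p in parts if \"functionCall\" in p)\n",
        "\n",
        "print(\"Function call:\", call_part[\"functionCall\"][\"name\"])\n",
        "print(\"Thought signature (truncated):\", call_part.get(\"thoughtSignature\", \"\")[:40])"
      ]
    },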
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "xP0jQlZokYfj"
      },
      "outputs": [],
      "source": [
        "%%bash\n",
        "\n",
        "curl -X POST \\\n",
        "-H \"Authorization: Bearer $(gcloud auth print-access-token)\" \\\n",
        "-H \"Content-Type: application/json\" \\\n",
        "\"https://${API_ENDPOINT}:generateContent\" \\\n",
        "-d @- \\\n",
        "> response2.json 2>/dev/null <<EOF\n",
        "{\n",
        "  \"contents\": [\n",
        "      {\n",
        "        \"role\": \"user\",\n",
        "        \"parts\": { \"text\": \"If it is too hot or too cold in London, set the temperature to a comfortable level. Make your own reasonable assumption for what comfortable means and do not ask for clarification.\" }\n",
        "      },\n",
        "      {\n",
        "        \"role\": \"model\",\n",
        "        \"parts\": [\n",
        "          {\n",
        "            \"function_call\": {\n",
        "              \"name\": \"get_current_temperature\",\n",
        "              \"args\": { \"location\": \"London\" }\n",
        "            },\n",
        "            \"thought_signature\": \"${THOUGHT_SIGNATURE_1}\"\n",
        "          }\n",
        "        ]\n",
        "      },\n",
        "      {\n",
        "        \"role\": \"tool\",\n",
        "        \"parts\": [ { \"function_response\": { \"name\": \"get_current_temperature\", \"response\": { \"temperature\": 30, \"unit\": \"celsius\" } } } ]\n",
        "      }\n",
        "    ],\n",
        "    \"tools\": [\n",
        "      {\n",
        "        \"function_declarations\": [\n",
        "          {\n",
        "            \"name\": \"get_current_temperature\",\n",
        "            \"description\": \"Gets the current weather temperature for a given location.\",\n",
        "            \"parameters\": { \"type\": \"object\", \"properties\": {\"location\": {\"type\": \"string\"}}, \"required\": [\"location\"] }\n",
        "          },\n",
        "          {\n",
        "            \"name\": \"set_thermostat_temperature\",\n",
        "            \"description\": \"Sets the thermostat to a desired temperature.\",\n",
        "            \"parameters\": { \"type\": \"object\", \"properties\": {\"temperature\": {\"type\": \"integer\"}}, \"required\": [\"temperature\"] }\n",
        "          }\n",
        "        ]\n",
        "      }\n",
        "    ],\n",
        "    \"generationConfig\": {\n",
        "      \"thinking_config\": {\n",
        "        \"include_thoughts\": true\n",
        "      }\n",
        "    }\n",
        "}\n",
        "EOF"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ZqkhB0X21C6G"
      },
      "source": [
        "Using this context, the model will decide if it needs to call the `set_thermostat_temperature` function."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "9JWuzrpIfaw1"
      },
      "outputs": [],
      "source": [
        "%%bash\n",
        "\n",
        "echo \"--- Function Call ---\"\n",
        "jq -r '.candidates[0].content.parts[] | select(.functionCall) | .functionCall.name' response2.json\n",
        "\n",
        "echo \"--- Thought Signature ---\"\n",
        "jq -r '.candidates[0].content.parts[] | select(.functionCall) | .thoughtSignature' response2.json\n",
        "\n",
        "echo \"--- Thought Summary ---\"\n",
        "jq -r '.candidates[0].content.parts[] | select(.thought) | .text' response2.json"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "MH0bJ2QBSCAg"
      },
      "source": [
        "### **Step 3**: Generate a user-friendly response\n",
        "\n",
        "Finally, we send the full conversation history, including the second function call and its result, back to the model to get a final, user-friendly text response."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "gXoystqdutIG"
      },
      "outputs": [],
      "source": [
        "jq_output = !jq -r '.candidates[0].content.parts[] | select(.functionCall) | .thoughtSignature' response2.json\n",
        "\n",
        "os.environ[\"THOUGHT_SIGNATURE_2\"] = jq_output[0]"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "ktEwDwf5R22J"
      },
      "outputs": [],
      "source": [
        "%%bash\n",
        "# NOTE: The history is copied from the previous steps.\n",
        "\n",
        "curl -X POST \\\n",
        "-H \"Authorization: Bearer $(gcloud auth print-access-token)\" \\\n",
        "-H \"Content-Type: application/json\" \\\n",
        "\"https://${API_ENDPOINT}:generateContent\" \\\n",
        "-d @- \\\n",
        "> response3.json 2>/dev/null <<EOF\n",
        "{\n",
        "    \"contents\": [\n",
        "      {\n",
        "        \"role\": \"user\",\n",
        "        \"parts\": { \"text\": \"If it is too hot or too cold in London, set the temperature to a comfortable level. Make your own reasonable assumption for what comfortable means and do not ask for clarification.\" }\n",
        "      },\n",
        "      {\n",
        "        \"role\": \"model\",\n",
        "        \"parts\": [\n",
        "          {\n",
        "            \"function_call\": {\n",
        "              \"name\": \"get_current_temperature\",\n",
        "              \"args\": { \"location\": \"London\" }\n",
        "            },\n",
        "            \"thought_signature\": \"${THOUGHT_SIGNATURE_1}\"\n",
        "          }\n",
        "        ]\n",
        "      },\n",
        "      {\n",
        "        \"role\": \"tool\",\n",
        "        \"parts\": [ { \"function_response\": { \"name\": \"get_current_temperature\", \"response\": { \"temperature\": 30, \"unit\": \"celsius\" } } } ]\n",
        "      },\n",
        "      {\n",
        "        \"role\": \"model\",\n",
        "        \"parts\": [\n",
        "          {\n",
        "            \"function_call\": {\n",
        "              \"name\": \"set_thermostat_temperature\",\n",
        "              \"args\": { \"temperature\": 21 }\n",
        "            },\n",
        "            \"thought_signature\": \"${THOUGHT_SIGNATURE_2}\"\n",
        "          }\n",
        "        ]\n",
        "      },\n",
        "      {\n",
        "        \"role\": \"tool\",\n",
        "        \"parts\": [ { \"function_response\": { \"name\": \"set_thermostat_temperature\", \"response\": { \"status\": \"success\" } } } ]\n",
        "      }\n",
        "    ],\n",
        "    \"tools\": [\n",
        "      {\n",
        "        \"function_declarations\": [\n",
        "          {\n",
        "            \"name\": \"get_current_temperature\",\n",
        "            \"description\": \"Gets the current weather temperature for a given location.\",\n",
        "            \"parameters\": { \"type\": \"object\", \"properties\": {\"location\": {\"type\": \"string\"}}, \"required\": [\"location\"] }\n",
        "          },\n",
        "          {\n",
        "            \"name\": \"set_thermostat_temperature\",\n",
        "            \"description\": \"Sets the thermostat to a desired temperature.\",\n",
        "            \"parameters\": { \"type\": \"object\", \"properties\": {\"temperature\": {\"type\": \"integer\"}}, \"required\": [\"temperature\"] }\n",
        "          }\n",
        "        ]\n",
        "      }\n",
        "    ]\n",
        "}\n",
        "EOF\n",
        "\n",
        "echo \"--- Final Text Response ---\"\n",
        "jq -r '.candidates[0].content.parts[0].text' response3.json"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "MvFXTZZ31yjm"
      },
      "source": [
        "### **Cleanup**"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "cleanup"
      },
      "outputs": [],
      "source": [
        "!rm response*.json"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "290584e06e15"
      },
      "source": [
        "## Next Steps\n",
        "\n",
        "Explore the thought signatures feature of the Gemini API with the Google Gen AI SDK: [intro_thought_signatures.ipynb](https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/thinking/intro_thought_signatures.ipynb)"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "name": "intro_thought_signatures_rest.ipynb",
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
