{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "4657bd35",
   "metadata": {},
   "source": [
    "# Installation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e846cbe0",
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install langchain-google-vertexai langchain-google-community[search] langgraph"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "IfIzVgBxClFL",
   "metadata": {
    "id": "IfIzVgBxClFL"
   },
   "source": [
    "# Prompt design"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "hleQYEaLb5rX",
   "metadata": {
    "id": "hleQYEaLb5rX"
   },
   "source": [
    "Let's start with a simple math problem:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "rpot8rsTIb7F",
   "metadata": {
    "id": "rpot8rsTIb7F"
   },
   "outputs": [],
   "source": [
    "math_problem1 = (\n",
    "    \"John had 1097 candies. They ate 14 yesterday, 18 today and shared 341 more with \"\n",
    "    \"their classmates. How many candies do they have left?\"\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0bmFLsWRguWq",
   "metadata": {
    "id": "0bmFLsWRguWq"
   },
   "source": [
    "Let's see how the LLM handles it:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "wK5MXi3qjsZw",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 993,
     "status": "ok",
     "timestamp": 1726490101154,
     "user": {
      "displayName": "",
      "userId": ""
     },
     "user_tz": -120
    },
    "id": "wK5MXi3qjsZw",
    "outputId": "2d9f84ef-93af-4549-e901-be05207bfa1a"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "704 \n",
      "\n"
     ]
    }
   ],
   "source": [
    "from langchain_google_vertexai import ChatVertexAI\n",
    "from langchain_core.messages import SystemMessage, HumanMessage\n",
    "\n",
    "llm = ChatVertexAI(\n",
    "   model_name=\"gemini-1.5-pro-001\",\n",
    "   temperature=0.)\n",
    "result = llm.invoke([\n",
    "    SystemMessage(content=\"Give a short answer (a single number only).\"),\n",
    "    HumanMessage(content=math_problem1)\n",
    "])\n",
    "print(result.content)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "vmcadJnAgw3i",
   "metadata": {
    "id": "vmcadJnAgw3i"
   },
   "source": [
    "It's not the right answer, since we expected *1097-14-18-341=724*. Let's try **sampling** (also called self-consistency). We'll retrieve an answer from the LLM multiple times, and then look at the distribution of answers:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "xL39YPL1mVdv",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 14746,
     "status": "ok",
     "timestamp": 1726489726113,
     "user": {
      "displayName": "",
      "userId": ""
     },
     "user_tz": -120
    },
    "id": "xL39YPL1mVdv",
    "outputId": "9ce619e5-2438-47bb-8cfd-0ae3abd77fec"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Counter({'704 \\n': 17, '724 \\n': 3})\n"
     ]
    }
   ],
   "source": [
    "from collections import Counter\n",
    "\n",
    "answers = Counter()\n",
    "llm_high_temperature = ChatVertexAI(\n",
    "   model_name=\"gemini-1.5-pro-001\",\n",
    "   temperature=0.7)\n",
    "\n",
    "for _ in range(20):\n",
    " answer = llm_high_temperature.invoke([\n",
    "    SystemMessage(content=\"Give a short answer (a single number only).\"),\n",
    "    HumanMessage(content=math_problem1)\n",
    " ]).content\n",
    " answers[answer] += 1\n",
    "\n",
    "print(answers)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c12e24e1-50cc-4c86-a57e-12aa6fdce63d",
   "metadata": {},
   "source": [
    "We saw that the model was able to figure out the right answer a few times, but that's still not enough, since the right answer doesn't have the highest frequency in the sample."
   ]
  },
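  {
   "cell_type": "markdown",
   "id": "a1c2e3f4",
   "metadata": {},
   "source": [
    "With self-consistency, the final answer is usually picked by a majority vote over the samples. A minimal sketch, assuming the `answers` counter from the cell above (note that in this run the vote would still pick the wrong answer):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b5d6f7a8",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Pick the most frequent sampled answer (majority vote).\n",
    "majority_answer, votes = answers.most_common(1)[0]\n",
    "print(majority_answer.strip(), votes)"
   ]
  },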
  {
   "cell_type": "markdown",
   "id": "i_0XbEoB3gNX",
   "metadata": {
    "id": "i_0XbEoB3gNX"
   },
   "source": [
    "## Controlled generation (naive)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "-KT-WII7qDxd",
   "metadata": {
    "id": "-KT-WII7qDxd"
   },
   "source": [
    "Let's remove our system message:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "u2IylDpZomei",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 4424,
     "status": "ok",
     "timestamp": 1726490118052,
     "user": {
      "displayName": "",
      "userId": ""
     },
     "user_tz": -120
    },
    "id": "u2IylDpZomei",
    "outputId": "ef8b1206-65f6-4d18-929e-8ddcc7869d96"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Here's how to solve the problem step-by-step:\n",
      "\n",
      "* **Step 1: Find the total eaten.** \n",
      "   John ate 14 + 18 = 32 candies.\n",
      "\n",
      "* **Step 2: Find the total gone.**\n",
      "   John ate 32 candies and gave away 341, for a total of 32 + 341 = 373 candies.\n",
      "\n",
      "* **Step 3: Subtract to find the remaining candies.**\n",
      "   John started with 1097 and lost 373, leaving 1097 - 373 = 724 candies.\n",
      "\n",
      "**Answer:** John has 724 candies left. \n",
      "\n"
     ]
    }
   ],
   "source": [
    "print(llm.invoke([\n",
    "    HumanMessage(content=math_problem1)\n",
    "]).content)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "FDd5B2HoqFte",
   "metadata": {
    "id": "FDd5B2HoqFte"
   },
   "source": [
    "Now we get some reasoning traces, and the answer is right. But how can we parse it reliably?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "aJEG9viHCXBu",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 11024,
     "status": "ok",
     "timestamp": 1726534413597,
     "user": {
      "displayName": "",
      "userId": ""
     },
     "user_tz": -120
    },
    "id": "aJEG9viHCXBu",
    "outputId": "164a8c8e-5794-4277-ed21-0168eff5c857"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Here's how to solve the problem:\n",
      "\n",
      "* **Total eaten:** 14 + 18 = 32 candies\n",
      "* **Total given away:** 341 candies\n",
      "* **Total candies gone:** 32 + 341 = 373 candies\n",
      "* **Candies left:** 1097 - 373 = 724 candies\n",
      "\n",
      "FINAL_ANSWER=724 \n",
      "\n"
     ]
    }
   ],
   "source": [
    "result = llm.invoke([\n",
    "    SystemMessage(content=\"Always give a final answer in a form FINAL_ANSWER=...\"),\n",
    "    HumanMessage(content=math_problem1)\n",
    " ]).content\n",
    "print(result)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "MYSUydP4oqb1",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 869,
     "status": "ok",
     "timestamp": 1726535096980,
     "user": {
      "displayName": "",
      "userId": ""
     },
     "user_tz": -120
    },
    "id": "MYSUydP4oqb1",
    "outputId": "3beaae6d-cf58-48d9-be89-ed1b40ec71e6"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'answer': '724'}\n"
     ]
    }
   ],
   "source": [
    "from langchain.output_parsers.regex import RegexParser\n",
    "\n",
    "parser = RegexParser(regex=r\"FINAL_ANSWER=(\\d+)\", output_keys=[\"answer\"])\n",
    "print(parser.invoke(result))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "AEZN2eipgwea",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 2765,
     "status": "ok",
     "timestamp": 1726536065317,
     "user": {
      "displayName": "",
      "userId": ""
     },
     "user_tz": -120
    },
    "id": "AEZN2eipgwea",
    "outputId": "de32d488-5b4f-4727-f897-638d495eef5c"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'answer': '724'}"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_core.prompts import ChatPromptTemplate, PromptTemplate, MessagesPlaceholder\n",
    "\n",
    "prompt = ChatPromptTemplate(\n",
    "    [(\"system\", \"Always give a final answer in a form FINAL_ANSWER=...\"),\n",
    "    (\"human\", \"{user_input}\")]\n",
    ")\n",
    "chain = prompt | llm | parser\n",
    "chain.invoke(math_problem1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "sYY84x0VjFhV",
   "metadata": {
    "id": "sYY84x0VjFhV"
   },
   "outputs": [],
   "source": [
    "assert prompt.invoke({\"user_input\": \"test\"}) == prompt.invoke(\"test\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cb6ecf85-209e-4502-a477-6eb72dddbe02",
   "metadata": {},
   "source": [
    "We will use a Gemma 2 27B model as an example of a small model. You need to go to the Vertex AI Model Garden, search for Gemma 2, and deploy the model first (it will use one NVIDIA A100 GPU). [This](https://cloud.google.com/vertex-ai/generative-ai/docs/model-garden/deploy-and-inference-tutorial) guide has more instructions. After the deployment finishes (it might take 15-20 minutes), copy the endpoint ID from the Vertex AI console and update the parameters below.\n",
    "\n",
    "If you don't want to do that, feel free to use Gemini 1.5 Flash as an example of a smaller model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "13b2c2f4-86e6-45bd-90d1-0ce0e4ff4230",
   "metadata": {},
   "outputs": [],
   "source": [
    "small_llm = ChatVertexAI(model_name=\"gemini-1.5-flash-001\", location=\"us-central1\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "17d59ac5-50ca-4f09-b593-058763238507",
   "metadata": {},
   "outputs": [],
   "source": [
    "GEMMA_ENDPOINT_ID = \"\"\n",
    "PROJECT_ID = \"\"\n",
    "LOCATION = \"us-central1\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "id": "Ranlb7s84eTh",
   "metadata": {
    "id": "Ranlb7s84eTh"
   },
   "outputs": [
   ],
   "source": [
    "from langchain_google_vertexai import VertexAIModelGarden\n",
    "small_llm = VertexAIModelGarden(\n",
    "    endpoint_id=GEMMA_ENDPOINT_ID,\n",
    "    project=PROJECT_ID,\n",
    "    location=LOCATION,\n",
    "    prompt_arg=\"inputs\",\n",
    "    allowed_model_args=[\"temperature\", \"max_tokens\"]\n",
    ").bind(max_tokens=512)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "id": "K4BCNLmF5PwB",
   "metadata": {
    "id": "K4BCNLmF5PwB"
   },
   "outputs": [],
   "source": [
    "prompt = ChatPromptTemplate(\n",
    "    [\n",
    "    (\"human\", \"Always give a final answer in a form FINAL_ANSWER=...\\n{user_input}\")]\n",
    ")\n",
    "parser = RegexParser(regex=r\"FINAL_ANSWER=\\s?(\\d+)\", output_keys=[\"answer\"])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "id": "OEuxpN_z4RS5",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 4973,
     "status": "ok",
     "timestamp": 1726542263339,
     "user": {
      "displayName": "",
      "userId": ""
     },
     "user_tz": -120
    },
    "id": "OEuxpN_z4RS5",
    "outputId": "62b0217a-d647-459e-d09c-e74779856cc0"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'answer': '724'}"
      ]
     },
     "execution_count": 35,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chain_small = prompt | small_llm | parser\n",
    "chain_small.invoke(math_problem1)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "KSOnjqdF3jp7",
   "metadata": {
    "id": "KSOnjqdF3jp7"
   },
   "source": [
    "## CoT"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6f23nMLy3qnn",
   "metadata": {
    "id": "6f23nMLy3qnn"
   },
   "source": [
    "Let's try it with CoT (chain-of-thought) prompting:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "id": "pvGBT6OiaMxf",
   "metadata": {
    "id": "pvGBT6OiaMxf"
   },
   "outputs": [],
   "source": [
    "cot_prompt = (\n",
    "    \"Always think step-by-step. Explain your reasoning.\"\n",
    "    \" Split the problem into a sequence of reasoning steps and try to solve it.\"\n",
    "    \" Always give a final answer in a form FINAL_ANSWER=...\"\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "id": "Bl4NkGL33wqc",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 3459,
     "status": "ok",
     "timestamp": 1726541176671,
     "user": {
      "displayName": "",
      "userId": ""
     },
     "user_tz": -120
    },
    "id": "Bl4NkGL33wqc",
    "outputId": "3bb1d7f4-875f-40da-ba50-49225f18828f"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'answer': '724'}"
      ]
     },
     "execution_count": 37,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "cot_prompt_template = ChatPromptTemplate(\n",
    "    [(\"system\", cot_prompt),\n",
    "     (\"human\", \"{user_input}\")]\n",
    ")\n",
    "chain_cot = cot_prompt_template | llm | parser\n",
    "chain_cot.invoke(math_problem1)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8013499c-f336-486d-ab4c-e6ed94507f82",
   "metadata": {},
   "source": [
    "Now we got it right!"
   ]
  },
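  {
   "cell_type": "markdown",
   "id": "c9e0a1b2",
   "metadata": {},
   "source": [
    "We can also combine CoT with the earlier sampling approach: sample the CoT chain several times and take a majority vote over the parsed answers. A sketch reusing `cot_prompt_template`, `llm_high_temperature` and `parser` from above (note the parser will raise if a sample lacks FINAL_ANSWER):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d3f4b5c6",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sample the CoT chain and count the parsed final answers.\n",
    "chain_cot_sampled = cot_prompt_template | llm_high_temperature | parser\n",
    "cot_answers = Counter()\n",
    "for _ in range(5):\n",
    "    cot_answers[chain_cot_sampled.invoke(math_problem1)[\"answer\"]] += 1\n",
    "print(cot_answers)"
   ]
  },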
  {
   "cell_type": "markdown",
   "id": "SXyK7qyZCozn",
   "metadata": {
    "id": "SXyK7qyZCozn"
   },
   "source": [
    "# Calculator"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "id": "yRqAKR3Y2zAB",
   "metadata": {
    "id": "yRqAKR3Y2zAB"
   },
   "outputs": [],
   "source": [
    "from langchain_google_vertexai import ChatVertexAI\n",
    "llm = ChatVertexAI(model_name=\"gemini-1.5-pro-001\", temperature=1.0)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1b1e985b-0202-4073-9929-cd47114fdd74",
   "metadata": {},
   "source": [
    "Let's define a more complex math problem:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "id": "7rptoTad7kW5",
   "metadata": {
    "id": "7rptoTad7kW5"
   },
   "outputs": [],
   "source": [
    "math_problem2 = \"How much is 23*2**2+156/4-18?\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "id": "YyCY0iDt7Zof",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 4702,
     "status": "ok",
     "timestamp": 1726542517684,
     "user": {
      "displayName": "",
      "userId": ""
     },
     "user_tz": -120
    },
    "id": "YyCY0iDt7Zof",
    "outputId": "a1b3aba8-f26a-4378-e954-e39d8ba578ac"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "First, we follow the order of operations (PEMDAS/BODMAS):\n",
      "\n",
      "1. **Exponents:** 2**2 = 4\n",
      "\n",
      "2. **Multiplication:** 23*4 = 92 and 156/4 = 39\n",
      "\n",
      "3. **Addition and Subtraction** (from left to right):\n",
      "   92 + 39 = 131\n",
      "   131 - 18 = 113\n",
      "\n",
      "\n",
      "\n",
      "Therefore, 23*2**2+156/4-18 = **113**.\n",
      "\n"
     ]
    }
   ],
   "source": [
    "print(small_llm.invoke(math_problem2))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "id": "3JOAg6ZQ77Wk",
   "metadata": {
    "id": "3JOAg6ZQ77Wk"
   },
   "outputs": [],
   "source": [
    "calculator_prompt = (\n",
    "   \"You have access to a calculator that can solve mathematical problems. \"\n",
    "   \"If you want to ask the calculator, start with CALCULATOR: and generate an expression \"\n",
    "   \"to be evaluated by the calculator (it should contain only numbers and mathematical operators). \"\n",
    "   \"If you ask CALCULATOR, don't do anything else. \"\n",
    "   \"If you think you have a final solution, start it with FINAL_ANSWER=.\\n\"\n",
    ")\n",
    "\n",
    "calculator_prompt_template = ChatPromptTemplate(\n",
    "    [(\"system\", calculator_prompt),\n",
    "     MessagesPlaceholder(variable_name=\"messages\")]\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "id": "dZYW8IXr9XjA",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 1434,
     "status": "ok",
     "timestamp": 1726542966449,
     "user": {
      "displayName": "",
      "userId": ""
     },
     "user_tz": -120
    },
    "id": "dZYW8IXr9XjA",
    "outputId": "cba080bd-1289-4bf3-b333-394015fdcfc5"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "CALCULATOR: 23*2**2+156/4-18 \n",
      "\n"
     ]
    }
   ],
   "source": [
    "step1 = (calculator_prompt_template | llm).invoke(\n",
    "    [HumanMessage(content=math_problem2)])\n",
    "print(step1.content)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "id": "UtLB0ikI_Nyy",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 571,
     "status": "ok",
     "timestamp": 1726543495745,
     "user": {
      "displayName": "",
      "userId": ""
     },
     "user_tz": -120
    },
    "id": "UtLB0ikI_Nyy",
    "outputId": "5c76f4ef-f0d1-4fec-f4fc-2ab6fa6292ad"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "113.0\n"
     ]
    }
   ],
   "source": [
    "step2 = eval(step1.content.replace(\"CALCULATOR\", \"\").strip(\" \\n:\"))\n",
    "print(step2)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ibSznQK64yU8",
   "metadata": {
    "id": "ibSznQK64yU8"
   },
   "source": [
    "Now we can pass the result back to the LLM and ask it to generate the final answer:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "id": "13axXrLY-kzm",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 1115,
     "status": "ok",
     "timestamp": 1726543303223,
     "user": {
      "displayName": "",
      "userId": ""
     },
     "user_tz": -120
    },
    "id": "13axXrLY-kzm",
    "outputId": "d4c6513b-3585-4497-ed9e-79a23c7010ae"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "FINAL_ANSWER=113 \n",
      "\n"
     ]
    }
   ],
   "source": [
    "print((calculator_prompt_template | llm).invoke(\n",
    "    [HumanMessage(content=math_problem2),\n",
    "     step1,\n",
    "     HumanMessage(content=str(step2))\n",
    "    ]).content)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "yVPJ4GrsDO8J",
   "metadata": {
    "id": "yVPJ4GrsDO8J"
   },
   "source": [
    "### Inherit from BaseTool"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "id": "6rHb-76oG0EY",
   "metadata": {
    "id": "6rHb-76oG0EY"
   },
   "outputs": [],
   "source": [
    "from typing import Optional, Type\n",
    "\n",
    "from pydantic import BaseModel, Field\n",
    "from langchain_core.tools import BaseTool\n",
    "from langchain.callbacks.manager import (\n",
    "    AsyncCallbackManagerForToolRun,\n",
    "    CallbackManagerForToolRun,\n",
    ")\n",
    "\n",
    "\n",
    "class CalculatorInput(BaseModel):\n",
    "    \"\"\"Input to the Calculator.\"\"\"\n",
    "\n",
    "    expression: str = Field(\n",
    "        description=\"evaluates mathematical expressions\"\n",
    "    )\n",
    "\n",
    "class CalculatorTool(BaseTool):\n",
    "  name: str = \"Calculator\"\n",
    "  args_schema: Optional[Type[BaseModel]] = CalculatorInput\n",
    "  description: str = (\n",
    "      \"Useful for when you need to evaluate a mathematical expression.\"\n",
    "  )\n",
    "\n",
    "\n",
    "  def _run(\n",
    "      self, expression: str, run_manager: Optional[CallbackManagerForToolRun] = None\n",
    "  ) -> str:\n",
    "      \"\"\"Run the Calculator tool.\"\"\"\n",
    "      return str(eval(expression))\n",
    "\n",
    "\n",
    "calculator_tool = CalculatorTool()\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "id": "dbiQSNF_C_C8",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 1450,
     "status": "ok",
     "timestamp": 1726544376791,
     "user": {
      "displayName": "",
      "userId": ""
     },
     "user_tz": -120
    },
    "id": "dbiQSNF_C_C8",
    "outputId": "e7b15a3f-5a1f-4d68-e6ce-ca8deafd250f"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[{'name': 'Calculator', 'args': {'expression': '23*2**2+156/4-18'}, 'id': '3fa3e5a6-a343-489f-b730-ac3768aa4ae6', 'type': 'tool_call'}]\n"
     ]
    }
   ],
   "source": [
    "step2a = llm.invoke(math_problem2, tools=[calculator_tool])\n",
    "print(step2a.tool_calls)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 47,
   "id": "yuD-XPq1DGNf",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 1193,
     "status": "ok",
     "timestamp": 1726544206039,
     "user": {
      "displayName": "",
      "userId": ""
     },
     "user_tz": -120
    },
    "id": "yuD-XPq1DGNf",
    "outputId": "0a53b354-bfa9-4177-eefa-716fc2bf3c26"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "FINAL_ANSWER=113 \n",
      "\n"
     ]
    }
   ],
   "source": [
    "from langchain_core.messages import ToolMessage\n",
    "\n",
    "print((calculator_prompt_template | llm).invoke(\n",
    "    [HumanMessage(content=math_problem2),\n",
    "     step2a,\n",
    "     ToolMessage(content=\"113\", tool_call_id=step2a.tool_calls[0][\"id\"])\n",
    "    ]).content)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "hjB1OWxoE8H3",
   "metadata": {
    "id": "hjB1OWxoE8H3"
   },
   "source": [
    "### Decorator"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "id": "I7fTeaVUEyhs",
   "metadata": {
    "id": "I7fTeaVUEyhs"
   },
   "outputs": [],
   "source": [
    "from langchain.tools import tool\n",
    "\n",
    "@tool\n",
    "def calculator(expression: str) -> str:\n",
    "    \"\"\"Evaluates mathematical expressions.\"\"\"\n",
    "    return eval(expression)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "id": "9PYXM6dXE-w-",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 1609,
     "status": "ok",
     "timestamp": 1726544627944,
     "user": {
      "displayName": "",
      "userId": ""
     },
     "user_tz": -120
    },
    "id": "9PYXM6dXE-w-",
    "outputId": "9df1c466-a7cf-48b8-b870-b9952dc4ac01"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[{'name': 'calculator', 'args': {'expression': '23*2**2+156/4-18'}, 'id': '5ab408a9-7f79-48d9-a90e-32bfe3ee2428', 'type': 'tool_call'}]\n"
     ]
    }
   ],
   "source": [
    "step2b = llm.invoke(math_problem2, tools=[calculator])\n",
    "print(step2b.tool_calls)"
   ]
  },
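  {
   "cell_type": "markdown",
   "id": "e7a8c9d0",
   "metadata": {},
   "source": [
    "As with the BaseTool version, we can run the decorated tool on the generated arguments and pass the result back as a ToolMessage. A sketch assuming `step2b` from the cell above contains a tool call:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f1b2d3e4",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Execute the tool with the model-generated arguments.\n",
    "tool_result = calculator.invoke(step2b.tool_calls[0][\"args\"])\n",
    "print((calculator_prompt_template | llm).invoke(\n",
    "    [HumanMessage(content=math_problem2),\n",
    "     step2b,\n",
    "     ToolMessage(content=str(tool_result), tool_call_id=step2b.tool_calls[0][\"id\"])\n",
    "    ]).content)"
   ]
  },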
  {
   "cell_type": "markdown",
   "id": "nc_R9zpLHLXQ",
   "metadata": {
    "id": "nc_R9zpLHLXQ"
   },
   "source": [
    "### OpenAPI spec"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 50,
   "id": "UDz43FE9FNgt",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 1367,
     "status": "ok",
     "timestamp": 1726544735099,
     "user": {
      "displayName": "",
      "userId": ""
     },
     "user_tz": -120
    },
    "id": "UDz43FE9FNgt",
    "outputId": "98c60187-135c-4f64-f085-25f9a7c9e32a"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'name': 'Calculator', 'args': {'expression': '23*2**2+156/4-18'}, 'id': '76b66c7e-3a64-41f0-a16d-ff5f890a61ff', 'type': 'tool_call'}\n"
     ]
    }
   ],
   "source": [
    "calculator_declaration = {\n",
    "    \"name\": \"Calculator\",\n",
    "    \"description\": \"Useful for when you need to evaluate a mathematical expression.\",\n",
    "    \"parameters\": {\n",
    "        \"properties\": {\n",
    "            \"expression\": {\"type\": \"string\", \"title\": \"expression\"}\n",
    "        },\n",
    "        \"title\": 'CalculatorInput',\n",
    "        \"required\": [\"expression\"],\n",
    "        \"description\": 'Input to the Calculator tool.',\n",
    "        \"type\": \"object\"\n",
    "    }\n",
    "}\n",
    "step2c = llm.invoke(\n",
    "    math_problem2,\n",
    "    tools=[{\"function_declarations\": [calculator_declaration]}])\n",
    "print(step2c.tool_calls[0])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "g_8xxHCMFAbi",
   "metadata": {
    "id": "g_8xxHCMFAbi"
   },
   "source": [
    "## Pydantic models"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 51,
   "id": "X6SNCIo610lJ",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 16509,
     "status": "ok",
     "timestamp": 1726624957375,
     "user": {
      "displayName": "",
      "userId": ""
     },
     "user_tz": -120
    },
    "id": "X6SNCIo610lJ",
    "outputId": "6d8c38e1-515e-46de-ad6d-fec99dbf97cf"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'name': 'Plan', 'args': {'steps': ['Set realistic goals: Define motivation, establish achievable goals, and create a timeline.', 'Choose your resources: Find a course or method that suits you and gather supplementary materials.', 'Establish a study routine: Schedule dedicated learning time, focus on all language skills, and review regularly.', 'Immerse yourself in the language: Surround yourself with German and connect with German speakers.', \"Track your progress and stay motivated: Use a language learning journal, don\\\\'t be afraid to make mistakes, and reward your efforts.\"]}, 'id': '907994c1-350c-49b0-8935-2bcff6d649da', 'type': 'tool_call'}\n"
     ]
    }
   ],
   "source": [
    "from typing import List\n",
    "from pydantic import BaseModel, Field\n",
    "\n",
    "class Plan(BaseModel):\n",
    "    \"\"\"Plan to execute a task.\"\"\"\n",
    "\n",
    "    steps: List[str] = Field(\n",
    "        description=\"a plan with steps\"\n",
    "    )\n",
    "\n",
    "output = llm.invoke(\"Prepare a plan how to solve the following task: Learn German as a foreign language. It should be an enumerated list of actions.\")\n",
    "output1 = llm.invoke(output.content, functions=[Plan])\n",
    "print(output1.tool_calls[0])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "puJFYGqw3LZf",
   "metadata": {
    "id": "puJFYGqw3LZf"
   },
   "source": [
    "Now we can instantiate the Pydantic model from the tool call:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 52,
   "id": "cafk9AnN3NWJ",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 2,
     "status": "ok",
     "timestamp": 1726624912864,
     "user": {
      "displayName": "",
      "userId": ""
     },
     "user_tz": -120
    },
    "id": "cafk9AnN3NWJ",
    "outputId": "a175ce81-2c48-4c17-9f28-cbd211041fd5"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<class '__main__.Plan'>\n",
      "['Set realistic goals: Define motivation, establish achievable goals, and create a timeline.', 'Choose your resources: Find a course or method that suits you and gather supplementary materials.', 'Establish a study routine: Schedule dedicated learning time, focus on all language skills, and review regularly.', 'Immerse yourself in the language: Surround yourself with German and connect with German speakers.', \"Track your progress and stay motivated: Use a language learning journal, don\\\\'t be afraid to make mistakes, and reward your efforts.\"]\n"
     ]
    }
   ],
   "source": [
    "plan = Plan(**output1.tool_calls[0]['args'])\n",
    "print(type(plan))\n",
    "print(plan.steps)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "hq-1rdiY3YYH",
   "metadata": {
    "id": "hq-1rdiY3YYH"
   },
   "source": [
    "# ReAct pattern"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 82,
   "id": "9b59fdbf-9e7b-4bd6-9b36-70937e31a31a",
   "metadata": {},
   "outputs": [],
   "source": [
    "react_prompt = ChatPromptTemplate(\n",
    "    [(\"system\", (\n",
    "        \"You are a helpful assistant. Try to use available tools \"\n",
    "        \"when appropriate to better answer the question.\"\n",
    "      )),\n",
    "      MessagesPlaceholder(variable_name=\"messages\"),\n",
    "    ]\n",
    ")\n",
    "llm_with_calculator = llm.bind_tools(tools=[calculator])\n",
    "chain = react_prompt | llm_with_calculator\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 84,
   "id": "3abe946a-6071-461a-9f00-a6997969c343",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The answer is 5611859298.\n"
     ]
    }
   ],
   "source": [
    "input_message = HumanMessage(content=\"how much is 45546*123213\")\n",
    "message1 = chain.invoke([input_message])\n",
    "\n",
    "if message1.tool_calls and message1.tool_calls[0][\"name\"] == \"calculator\":\n",
    "    calculator_result = calculator.invoke(message1.tool_calls[0][\"args\"])\n",
    "    tool_message = ToolMessage(content=calculator_result, tool_call_id=message1.tool_calls[0][\"id\"])\n",
    "    final_message = llm_with_calculator.invoke([input_message, message1, tool_message])\n",
    "else:\n",
    "    final_message = message1\n",
    "\n",
    "print(final_message.content)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "082d3c6f-8f27-4693-81ee-06f60a868f49",
   "metadata": {},
   "source": [
    "This example is very naive; what if our flow is more complicated? Instead of writing everything ourselves, we can use the ready-made ReAct agent available in LangGraph:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 87,
   "id": "zEpn0nIstQDU",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 1,
     "status": "ok",
     "timestamp": 1726672741891,
     "user": {
      "displayName": "",
      "userId": ""
     },
     "user_tz": -120
    },
    "id": "zEpn0nIstQDU",
    "outputId": "6297570f-ec54-4ffe-c4b4-03d74fcac155"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The answer is 5611859296.\n"
     ]
    }
   ],
   "source": [
    "from langgraph.prebuilt import create_react_agent\n",
    "\n",
    "react_agent = create_react_agent(llm, [calculator], messages_modifier=react_prompt)\n",
    "result = react_agent.invoke(\n",
    "    {\"messages\": [(\"user\", \"how much is 45546*123213-2\")]})\n",
    "print(result[\"messages\"][-1].content)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7b83e720-5c7e-4684-a455-d9a10c262e5f",
   "metadata": {
    "id": "6HBPAUfY3ZRG"
   },
   "source": [
    "Let's inspect the output:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 88,
   "id": "uteLNoqnuKA-",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 2,
     "status": "ok",
     "timestamp": 1726672865297,
     "user": {
      "displayName": "",
      "userId": ""
     },
     "user_tz": -120
    },
    "id": "uteLNoqnuKA-",
    "outputId": "b7183842-9f80-4697-9c6f-329e312c6f39"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<class 'langchain_core.messages.human.HumanMessage'>\n",
      "how much is 45546*123213-2\n",
      "<class 'langchain_core.messages.ai.AIMessage'>\n",
      "[{'name': 'calculator', 'args': {'expression': '45546*123213-2'}, 'id': '04a3c482-6e04-4ef7-b829-027f497e8c2e', 'type': 'tool_call'}]\n",
      "<class 'langchain_core.messages.tool.ToolMessage'>\n",
      "5611859296\n",
      "<class 'langchain_core.messages.ai.AIMessage'>\n",
      "[]\n"
     ]
    }
   ],
   "source": [
    "for message in result[\"messages\"]:\n",
    "  print(type(message))\n",
    "  if hasattr(message, \"tool_calls\"):\n",
    "    print(message.tool_calls)\n",
    "  else:\n",
    "    print(message.content)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "G-LPJSS5AR7u",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 654,
     "status": "ok",
     "timestamp": 1714799979653,
     "user": {
      "displayName": "",
      "userId": ""
     },
     "user_tz": -120
    },
    "id": "G-LPJSS5AR7u",
    "outputId": "87b3a823-d9e5-452b-b156-e0bcaf08ef04"
   },
   "outputs": [],
   "source": [
    "llm = ChatVertexAI(model_name=\"gemini-1.0-pro-001\")\n",
    "response = llm.invoke(\"What is the capital of Germany?\", functions=[search_tool])\n",
    "response.tool_calls[0]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "lMCrROnpZ5C0",
   "metadata": {
    "id": "lMCrROnpZ5C0"
   },
   "source": [
    "We can create a tool from a function using the LangChain `@tool` decorator:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "tZrOWsQPDP3Y",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 400,
     "status": "ok",
     "timestamp": 1717588776260,
     "user": {
      "displayName": "",
      "userId": ""
     },
     "user_tz": -120
    },
    "id": "tZrOWsQPDP3Y",
    "outputId": "4f285cf6-46dd-49ee-af3c-4f027e5e8ace"
   },
   "outputs": [],
   "source": [
    "from langchain.tools import tool\n",
    "\n",
    "@tool\n",
    "def search(query: str) -> str:\n",
    "    \"\"\"Run the query and return the search results.\"\"\"\n",
    "    return s.run(query)\n",
    "\n",
    "llm = ChatVertexAI(model_name=\"gemini-1.0-pro-001\")\n",
    "response = llm.invoke(\"What is the capital of Germany?\", functions=[search])\n",
    "response.tool_calls[0]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "gZ820lbnRONH",
   "metadata": {
    "id": "gZ820lbnRONH"
   },
   "outputs": [],
   "source": [
    "@tool\n",
    "def multiply(a: int, b: int) -> int:\n",
    "    \"\"\"Multiplies a and b.\n",
    "\n",
    "    Args:\n",
    "        a: first int\n",
    "        b: second int\n",
    "    \"\"\"\n",
    "    return a * b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ysN6g_eZRfv7",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 2,
     "status": "ok",
     "timestamp": 1717588869707,
     "user": {
      "displayName": "",
      "userId": ""
     },
     "user_tz": -120
    },
    "id": "ysN6g_eZRfv7",
    "outputId": "20ab7492-2474-4c6c-b138-86b2cb5d0095"
   },
   "outputs": [],
   "source": [
    "multiply.invoke(input={\"a\": 1, \"b\": 2})"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "aY97yw-ConiP",
   "metadata": {
    "id": "aY97yw-ConiP"
   },
   "source": [
    "# ToolConfig"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "xkSch8Q-dAgH",
   "metadata": {
    "id": "xkSch8Q-dAgH"
   },
   "source": [
    "Let's define two tools:\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 100,
   "id": "aid4P1ezoosA",
   "metadata": {
    "id": "aid4P1ezoosA"
   },
   "outputs": [],
   "source": [
    "search_declaration = {\n",
    "    \"name\": \"Search\",\n",
    "    \"description\": \"Useful when you need to answer questions about current and future events. You should ask targeted questions.\",\n",
    "    \"parameters\": {\n",
    "        \"properties\": {\n",
    "            \"query\": {\"type\": \"string\", \"title\": \"query\"}\n",
    "        },\n",
    "        \"title\": \"SearchInput\",\n",
    "        \"required\": [\"query\"],\n",
    "        \"description\": \"Input to the Google Search tool.\",\n",
    "        \"type\": \"object\"\n",
    "    }\n",
    "}\n",
    "\n",
    "maps_declaration = {\n",
    "    \"name\": \"MapSearch\",\n",
    "    \"description\": \"Useful to answer questions about maps and locations. You can ask targeted questions.\",\n",
    "    \"parameters\": {\n",
    "        \"properties\": {\n",
    "            \"query\": {\"type\": \"string\", \"title\": \"query\"},\n",
    "            \"country\": {\"type\": \"string\", \"title\": \"country\", \"description\": \"A country used to restrict the geographic area of the query.\"}\n",
    "        },\n",
    "        \"required\": [\"query\"],\n",
    "        \"description\": \"A query to the Google Maps tool.\",\n",
    "        \"type\": \"object\"\n",
    "    }\n",
    "}"
   ]
  },
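  {
   "cell_type": "markdown",
   "id": "check-args-sketch",
   "metadata": {},
   "source": [
    "These declarations are plain JSON-schema-style dictionaries, so it is easy to sanity-check a tool call's arguments against one before dispatching it. A small stdlib-only sketch (our own helper, not part of the Vertex AI API), shown with a trimmed copy of the declaration above:\n",
    "\n",
    "```python\n",
    "# Sketch: validate tool-call args against a function declaration's schema.\n",
    "def check_args(declaration, args):\n",
    "    \"\"\"Return True if args cover all required keys and contain no unknown ones.\"\"\"\n",
    "    schema = declaration[\"parameters\"]\n",
    "    missing = [k for k in schema.get(\"required\", []) if k not in args]\n",
    "    unknown = [k for k in args if k not in schema[\"properties\"]]\n",
    "    return not missing and not unknown\n",
    "\n",
    "maps_declaration = {\n",
    "    \"name\": \"MapSearch\",\n",
    "    \"parameters\": {\n",
    "        \"properties\": {\"query\": {\"type\": \"string\"}, \"country\": {\"type\": \"string\"}},\n",
    "        \"required\": [\"query\"],\n",
    "        \"type\": \"object\",\n",
    "    },\n",
    "}\n",
    "\n",
    "print(check_args(maps_declaration, {\"query\": \"museums\", \"country\": \"DE\"}))  # True\n",
    "print(check_args(maps_declaration, {\"country\": \"DE\"}))  # False: 'query' missing\n",
    "```"
   ]
  },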
  {
   "cell_type": "markdown",
   "id": "G-hmYg7JU8ZN",
   "metadata": {
    "id": "G-hmYg7JU8ZN"
   },
   "source": [
    "If we invoke our model, it calls the *Search* tool:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 101,
   "id": "CoWYWi1tdHUL",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 1287,
     "status": "ok",
     "timestamp": 1726819386146,
     "user": {
      "displayName": "",
      "userId": ""
     },
     "user_tz": -120
    },
    "id": "CoWYWi1tdHUL",
    "outputId": "6ae38e38-e3e6-4c02-a351-b525813cadc4"
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "I0000 00:00:1729781728.051681 8206579 fork_posix.cc:77] Other threads are currently calling into gRPC, skipping fork() handlers\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'name': 'Search', 'args': {'query': 'What is the capital of Germany?'}, 'id': '6e2c5eca-0f2e-44b7-9405-997e8199f81e', 'type': 'tool_call'}\n"
     ]
    }
   ],
   "source": [
    "llm = ChatVertexAI(model_name=\"gemini-1.5-pro-001\")\n",
    "response = llm.invoke(\n",
    "    \"What is the capital of Germany?\",\n",
    "    tools=[{\"function_declarations\": [search_declaration, maps_declaration]}])\n",
    "\n",
    "print(response.tool_calls[0])\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "nUIjiZ-cVAtX",
   "metadata": {
    "id": "nUIjiZ-cVAtX"
   },
   "source": [
    "But what if we want to make sure it **ALWAYS** decides to call the *Search* tool? We can use the *tool_config* parameter:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 102,
   "id": "IhzmD1w8dMFk",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 1962,
     "status": "ok",
     "timestamp": 1726819413486,
     "user": {
      "displayName": "",
      "userId": ""
     },
     "user_tz": -120
    },
    "id": "IhzmD1w8dMFk",
    "outputId": "5ab47c7f-446f-45f2-a233-6712a7d037c3"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'name': 'Search', 'args': {'query': 'What is the capital of Germany?'}, 'id': '28cfa14a-f80e-4d29-9ac4-6c32bd4c9b72', 'type': 'tool_call'}\n"
     ]
    }
   ],
   "source": [
    "response1 = llm.invoke(\n",
    "    \"What is the capital of Germany?\", tools=[{\"function_declarations\": [search_declaration, maps_declaration]}],\n",
    "    tool_config={\"function_calling_config\": {\"mode\": \"ANY\", \"allowed_function_names\": [\"Search\"]}})\n",
    "print(response1.tool_calls[0])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "Ogtd1DgCfaAT",
   "metadata": {
    "id": "Ogtd1DgCfaAT"
   },
   "source": [
    "It's equivalent to:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 103,
   "id": "Anor01LcfH1T",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 1879,
     "status": "ok",
     "timestamp": 1726819957531,
     "user": {
      "displayName": "",
      "userId": ""
     },
     "user_tz": -120
    },
    "id": "Anor01LcfH1T",
    "outputId": "f70db589-f41f-4bdc-d6a3-e697e028a9c7"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'name': 'Search', 'args': {'query': 'What is the capital of Germany?'}, 'id': '2e5b4d87-3ec5-438a-9726-060acf215023', 'type': 'tool_call'}\n"
     ]
    }
   ],
   "source": [
    "response1 = llm.invoke(\n",
    "    \"What is the capital of Germany?\", tools=[{\"function_declarations\": [search_declaration, maps_declaration]}],\n",
    "    tool_choice={\"mode\": \"ANY\", \"allowed_function_names\": [\"Search\"]})\n",
    "print(response1.tool_calls[0])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "lviCWFyOVIME",
   "metadata": {
    "id": "lviCWFyOVIME"
   },
   "source": [
    "We can also tell the model to never call any tools:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 104,
   "id": "qk_p0rTKdVNf",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 778,
     "status": "ok",
     "timestamp": 1726819428138,
     "user": {
      "displayName": "",
      "userId": ""
     },
     "user_tz": -120
    },
    "id": "qk_p0rTKdVNf",
    "outputId": "dad40a08-94f7-47e9-8306-4e2fc080a72c"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[]\n",
      "The capital of Germany is **Berlin**. \n",
      "\n"
     ]
    }
   ],
   "source": [
    "response2 = llm.invoke(\"What is the capital of Germany?\", tools=[{\"function_declarations\": [search_declaration, maps_declaration]}],\n",
    "                       tool_config={\"function_calling_config\": {\"mode\": \"NONE\"}})\n",
    "print(response2.tool_calls)\n",
    "print(response2.content)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dE1IIJvMeeSa",
   "metadata": {
    "id": "dE1IIJvMeeSa"
   },
   "source": [
    "It's equivalent to:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 105,
   "id": "j6GibrgIed8D",
   "metadata": {
    "id": "j6GibrgIed8D"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[]\n",
      "The capital of Germany is **Berlin**. \n",
      "\n"
     ]
    }
   ],
   "source": [
    "response2 = llm.invoke(\"What is the capital of Germany?\", tools=[{\"function_declarations\": [search_declaration, maps_declaration]}],\n",
    "                       tool_choice=\"none\")\n",
    "print(response2.tool_calls)\n",
    "print(response2.content)"
   ]
  },
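  {
   "cell_type": "markdown",
   "id": "tool-choice-sketch",
   "metadata": {},
   "source": [
    "The two spellings are interchangeable because the `tool_choice` shorthand is translated into a `tool_config` under the hood. A rough sketch of that mapping in plain Python (an illustration of the idea, not the library's actual code):\n",
    "\n",
    "```python\n",
    "# Sketch: translate a tool_choice shorthand into a tool_config dict.\n",
    "def to_tool_config(tool_choice):\n",
    "    if tool_choice == \"none\":\n",
    "        return {\"function_calling_config\": {\"mode\": \"NONE\"}}\n",
    "    if tool_choice == \"any\":\n",
    "        return {\"function_calling_config\": {\"mode\": \"ANY\"}}\n",
    "    # A bare function name forces a call to exactly that function.\n",
    "    return {\"function_calling_config\": {\n",
    "        \"mode\": \"ANY\", \"allowed_function_names\": [tool_choice]}}\n",
    "\n",
    "print(to_tool_config(\"none\"))    # {'function_calling_config': {'mode': 'NONE'}}\n",
    "print(to_tool_config(\"Search\"))  # {'function_calling_config': {'mode': 'ANY', 'allowed_function_names': ['Search']}}\n",
    "```"
   ]
  },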
  {
   "cell_type": "markdown",
   "id": "pS4JOjteVL8F",
   "metadata": {
    "id": "pS4JOjteVL8F"
   },
   "source": [
    "Or we can ask the model to call the *MapSearch* tool only:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 106,
   "id": "Me2Jncz-dXN2",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 2512,
     "status": "ok",
     "timestamp": 1726819469883,
     "user": {
      "displayName": "",
      "userId": ""
     },
     "user_tz": -120
    },
    "id": "Me2Jncz-dXN2",
    "outputId": "5b773c7f-3110-4eca-ab34-0b18b68336e9"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[{'name': 'MapSearch', 'args': {'query': 'What is the capital of Germany?'}, 'id': '00da797b-a802-44fb-9912-f58f5c5940ff', 'type': 'tool_call'}]\n",
      "\n"
     ]
    }
   ],
   "source": [
    "response2 = llm.invoke(\"What is the capital of Germany?\", tools=[{\"function_declarations\": [search_declaration, maps_declaration]}],\n",
    "                       tool_config={\"function_calling_config\": {\"mode\": \"ANY\", \"allowed_function_names\": [\"MapSearch\"]}})\n",
    "print(response2.tool_calls)\n",
    "print(response2.content)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "UbcNdLBlGxBg",
   "metadata": {
    "id": "UbcNdLBlGxBg"
   },
   "source": [
    "# Tools provided by Google"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "_gsOcps2G0de",
   "metadata": {
    "id": "_gsOcps2G0de"
   },
   "source": [
    "## Google Search"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "KvSLneVOG1gA",
   "metadata": {
    "id": "KvSLneVOG1gA"
   },
   "source": [
    "You need to follow the instructions and create your Programmable Search Engine as described [here](https://). Then, set the variables below:\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "QZ42Gc_nRWbZ",
   "metadata": {
    "id": "QZ42Gc_nRWbZ"
   },
   "outputs": [],
   "source": [
    "google_search_api_key = \"PUT YOUR API KEY HERE\"\n",
    "google_cse_id = \"PUT YOUR CSE ID HERE\"\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 90,
   "id": "h_jKvoNZG-Qn",
   "metadata": {
    "id": "h_jKvoNZG-Qn"
   },
   "outputs": [],
   "source": [
    "from langchain_google_community import GoogleSearchAPIWrapper\n",
    "search = GoogleSearchAPIWrapper(\n",
    "    k=10, google_api_key=google_search_api_key, google_cse_id=google_cse_id\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 91,
   "id": "VAfjE0k9TA43",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 527,
     "status": "ok",
     "timestamp": 1726766920510,
     "user": {
      "displayName": "",
      "userId": ""
     },
     "user_tz": -120
    },
    "id": "VAfjE0k9TA43",
    "outputId": "3c4ebc57-156b-4094-afc7-ac412cad46da"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Build context-aware, reasoning applications with LangChain's flexible framework that leverages your company's data and APIs. LangChain provides AI developers with tools to connect language models with external data sources. It is open-source and supported by an active community. LangChain is a framework for developing applications powered by large language models (LLMs). LangChain is a framework for developing applications powered by large language models (LLMs). For these applications, LangChain simplifies the entire ... LangChain is a framework that simplifies the process of creating generative AI application interfaces. Developers working on these types of interfaces use ... Nov 9, 2023 ... LangChain is a sophisticated framework comprising several key components that work in synergy to enhance natural language processing tasks. LangChain is an open source orchestration framework for the development of applications using large language models (LLMs), like chatbots and virtual ... Jun 30, 2023 ... Langchain is one of the fastest growing open source projects in history, in large part due to the explosion of interest in LLM's. May 5, 2023 ... Langchain is a tool that makes Gpt4 and other language models more useful. (Gpt4 is the engine that runs chatgpt) Jun 30, 2023 ... Not so hot take ❄️: LangChain solves some very real gaps in the large language model tooling space. I finally took a little time to explore ...\n"
     ]
    }
   ],
   "source": [
    "result = search.run(\"What is LangChain\")\n",
    "print(result)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 92,
   "id": "UK5TEkLaT1-d",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 639,
     "status": "ok",
     "timestamp": 1726766641494,
     "user": {
      "displayName": "",
      "userId": ""
     },
     "user_tz": -120
    },
    "id": "UK5TEkLaT1-d",
    "outputId": "420cad36-2950-4b9f-8d31-ef10ab84c16f"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "3\n",
      "{'title': 'Munich, Bavaria, Germany Weather Forecast | AccuWeather', 'link': 'https://www.accuweather.com/en/de/munich/80331/weather-forecast/178086', 'snippet': 'Hourly Weather · 1 PM 55°. rain drop 2% · 2 PM 56°. rain drop 0% · 3 PM 58°. rain drop 0% · 4 PM 57°. rain drop 0% · 5 PM 56°.'}\n"
     ]
    }
   ],
   "source": [
    "result = search.results(\"What is the weather in Munich tomorrow?\", num_results=3)\n",
    "print(len(result))\n",
    "print(result[0])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 93,
   "id": "IMxqIJTScOqp",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 514,
     "status": "ok",
     "timestamp": 1726768907450,
     "user": {
      "displayName": "",
      "userId": ""
     },
     "user_tz": -120
    },
    "id": "IMxqIJTScOqp",
    "outputId": "6df9ce74-8988-452e-ef18-fbc332de7e4d"
   },
   "outputs": [
   ],
   "source": [
    "from langchain_google_community import GoogleSearchRun\n",
    "\n",
    "search_tool = GoogleSearchRun(api_wrapper=search)\n",
    "agent_executor = create_react_agent(llm, [calculator, search_tool], state_modifier=prompt)\n",
    "result = agent_executor.invoke(\n",
    "    {\"messages\": [(\"user\", \"how much is distance from Earth to Moon multiplied by 2?\")]})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 94,
   "id": "15dsU0LyczAg",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 513,
     "status": "ok",
     "timestamp": 1726768996926,
     "user": {
      "displayName": "",
      "userId": ""
     },
     "user_tz": -120
    },
    "id": "15dsU0LyczAg",
    "outputId": "4384de60-0753-45f3-af4d-ae3a42908c4d"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<class 'langchain_core.messages.human.HumanMessage'>\n",
      "how much is distance from Earth to Moon multiplied by 2?\n",
      "<class 'langchain_core.messages.ai.AIMessage'>\n",
      "[{'name': 'google_search', 'args': {'query': 'distance from Earth to Moon'}, 'id': '72a558cb-d9e9-4cf7-a402-d15148a668bd', 'type': 'tool_call'}]\n",
      "<class 'langchain_core.messages.tool.ToolMessage'>\n",
      "The average distance between the Earth and the Moon is 384 400 km (238 855 miles). How far is that in light-seconds? Light travels at 300,000 kilometres per ... Well, the Moon is not always the same distance away from Earth. The orbit is not a perfect circle. When the Moon is the farthest away, it's 252,088 miles away. Mar 26, 2023 ... This image showing real size between Moon and Earth with real distance + Jupiter (Just for the sake of comparison) The average distance to the Moon is 382,500 km. The distance varies because the Moon travels around Earth in an elliptical orbit. At perigee, the point at which ... A lunar distance, 384,399 km (238,854 mi), is the Moon's average distance to Earth. The actual distance varies over the course of its orbit. The image compares ... Nov 18, 2022 ... The average distance between the blue planet and its only natural satellite is about 238,855 miles (384,400 kilometers), according to NASA. The resulting debris from both Earth and the impactor accumulated to form our natural satellite 239,000 miles (384,000 kilometers) away. The newly formed Moon ... In the present work, we present a physical model that reconciles these two constraints and yields a unique solution for the tidal history. Jan 11, 2024 ... The orbit changes over the course of the year so the distance from the Moon to Earth roughly ranges from 357,000 km to 407,000 km, giving ... The Earth's moon is moving away from Earth by a few centimeters a year. Will it break free from Earth's gravitational influence before our Sun turns into a red ...\n",
      "<class 'langchain_core.messages.ai.AIMessage'>\n",
      "[{'name': 'calculator', 'args': {'expression': '384400*2'}, 'id': '9400d682-06e8-4432-a5b7-947819fefb10', 'type': 'tool_call'}]\n",
      "<class 'langchain_core.messages.tool.ToolMessage'>\n",
      "768800\n",
      "<class 'langchain_core.messages.ai.AIMessage'>\n",
      "FINAL_ANSWER=768800 kilometers. \n",
      "\n"
     ]
    }
   ],
   "source": [
    "for message in result[\"messages\"]:\n",
    "  print(type(message))\n",
    "  if hasattr(message, \"tool_calls\") and message.tool_calls:\n",
    "    print(message.tool_calls)\n",
    "  else:\n",
    "    print(message.content)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "mcK1bpxsKaeM",
   "metadata": {
    "id": "mcK1bpxsKaeM"
   },
   "source": [
    "## Grounding with Google Search"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "t_osHE2LM1pT",
   "metadata": {
    "id": "t_osHE2LM1pT"
   },
   "source": [
    "If you're using Gemini, you can use its built-in capability to ground responses in Google Search results:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 95,
   "id": "UC-6gcQ8KdQq",
   "metadata": {
    "id": "UC-6gcQ8KdQq"
   },
   "outputs": [],
   "source": [
    "from vertexai.generative_models import grounding\n",
    "from vertexai.generative_models import Tool as VertexTool\n",
    "\n",
    "tool = VertexTool.from_google_search_retrieval(grounding.GoogleSearchRetrieval())\n",
    "\n",
    "response = llm.invoke(\"How far is moon from the Earth?\", tools=[tool])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 96,
   "id": "uPfM0rxcM8Wx",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 522,
     "status": "ok",
     "timestamp": 1726781578618,
     "user": {
      "displayName": "",
      "userId": ""
     },
     "user_tz": -120
    },
    "id": "uPfM0rxcM8Wx",
    "outputId": "61115619-f72f-4b49-9cf4-9fe48c548f5b"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The average distance between the Moon and Earth is around 384,400 kilometers (238,855 miles).\n",
      "\n",
      "It's interesting to note that the Moon doesn't orbit Earth in a perfect circle, but rather in an elliptical path. This means the distance between Earth and the Moon is constantly changing.\n",
      "\n",
      "- At its farthest point, called apogee, the Moon is about 405,696 kilometers (252,088 miles) from Earth.\n",
      "- At its closest point, called perigee, the Moon is about 363,104 kilometers (225,623 miles) from Earth. \n",
      "\n",
      "To put this distance into perspective, if you were to line up 30 Earths, that would roughly equal the distance between Earth and the Moon. \n",
      "\n"
     ]
    }
   ],
   "source": [
    "print(response.content)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bLVr2_ZbMzbo",
   "metadata": {
    "id": "bLVr2_ZbMzbo"
   },
   "source": [
    "Let's explore grounding metadata:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 98,
   "id": "AIsUO8uvKsEX",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 511,
     "status": "ok",
     "timestamp": 1726781583165,
     "user": {
      "displayName": "",
      "userId": ""
     },
     "user_tz": -120
    },
    "id": "AIsUO8uvKsEX",
    "outputId": "4aa3d27e-6b5f-4443-9910-e5ae109a7deb"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['how far is the moon from the earth']"
      ]
     },
     "execution_count": 98,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "response.response_metadata[\"grounding_metadata\"][\"web_search_queries\"]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 99,
   "id": "gd57QVDKLSiW",
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "executionInfo": {
     "elapsed": 2,
     "status": "ok",
     "timestamp": 1726781584916,
     "user": {
      "displayName": "",
      "userId": ""
     },
     "user_tz": -120
    },
    "id": "gd57QVDKLSiW",
    "outputId": "7cfb2a80-d9a1-42a8-b388-f98258492887"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'segment': {'end_index': 93,\n",
       "   'text': 'The average distance between the Moon and Earth is around 384,400 kilometers (238,855 miles).',\n",
       "   'part_index': 0,\n",
       "   'start_index': 0},\n",
       "  'grounding_chunk_indices': [0, 1],\n",
       "  'confidence_scores': [0.95101345, 0.95101345]},\n",
       " {'segment': {'start_index': 285,\n",
       "   'end_index': 389,\n",
       "   'text': '- At its farthest point, called apogee, the Moon is about 405,696 kilometers (252,088 miles) from Earth.',\n",
       "   'part_index': 0},\n",
       "  'grounding_chunk_indices': [0],\n",
       "  'confidence_scores': [0.99039346]},\n",
       " {'segment': {'start_index': 390,\n",
       "   'end_index': 494,\n",
       "   'text': '- At its closest point, called perigee, the Moon is about 363,104 kilometers (225,623 miles) from Earth.',\n",
       "   'part_index': 0},\n",
       "  'grounding_chunk_indices': [0],\n",
       "  'confidence_scores': [0.96773213]}]"
      ]
     },
     "execution_count": 99,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "response.response_metadata[\"grounding_metadata\"][\"grounding_supports\"]"
   ]
  }
 ],
 "metadata": {
  "colab": {
   "collapsed_sections": [
    "IfIzVgBxClFL",
    "26HHx3EMojdA"
   ],
   "name": "Chapter 9. Tools",
   "provenance": [],
   "toc_visible": true
  },
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
