{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "579c24a2",
   "metadata": {},
   "source": [
    "# How to migrate from legacy LangChain agents to LangGraph\n",
    "\n",
    "Here we focus on how to move from legacy LangChain agents to LangGraph agents.\n",
    "LangChain agents (the\n",
    "[`AgentExecutor`](https://api.js.langchain.com/classes/langchain_agents.AgentExecutor.html)\n",
    "in particular) have multiple configuration parameters. In this notebook we will\n",
    "show how those parameters map to the LangGraph\n",
    "[react agent executor](https://langchain-ai.github.io/langgraphjs/reference/functions/prebuilt.createReactAgent.html).\n",
    "\n",
    "For more information on how to build agentic workflows in LangGraph, check out\n",
    "the [docs here](https://langchain-ai.github.io/langgraphjs/how-tos/).\n",
    "\n",
    "#### Prerequisites\n",
    "\n",
     "This how-to guide uses Anthropic's `\"claude-3-haiku-20240307\"` as the LLM. If you are running this guide as a notebook, set your Anthropic API key below before running it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "24ef582f",
   "metadata": {
    "lines_to_next_cell": 2
   },
   "outputs": [],
   "source": [
    "// process.env.ANTHROPIC_API_KEY = \"sk-...\";\n",
    "\n",
    "// Optional, add tracing in LangSmith\n",
    "// process.env.LANGCHAIN_API_KEY = \"ls...\";\n",
    "// process.env.LANGCHAIN_CALLBACKS_BACKGROUND = \"true\";\n",
    "// process.env.LANGCHAIN_TRACING_V2 = \"true\";\n",
    "// process.env.LANGCHAIN_PROJECT = \"How to migrate: LangGraphJS\";"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c1ff5c79",
   "metadata": {},
   "source": [
    "## Basic Usage\n",
    "\n",
    "For basic creation and usage of a tool-calling ReAct-style agent, the\n",
    "functionality is the same. First, let's define a model and tool(s), then we'll\n",
    "use those to create an agent.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "1222c5e2",
   "metadata": {
    "lines_to_next_cell": 2
   },
   "outputs": [],
   "source": [
    "import { DynamicStructuredTool } from \"@langchain/core/tools\";\n",
    "import { z } from \"zod\";\n",
    "import { ChatAnthropic } from \"@langchain/anthropic\";\n",
    "\n",
    "const llm = new ChatAnthropic({\n",
    "  model: \"claude-3-haiku-20240307\",\n",
    "  temperature: 0,\n",
    "});\n",
    "\n",
    "const magicTool = new DynamicStructuredTool({\n",
    "  name: \"magic_function\",\n",
    "  description: \"Applies a magic function to an input.\",\n",
    "  schema: z.object({\n",
    "    input: z.number(),\n",
    "  }),\n",
    "  func: async ({ input }: { input: number }) => {\n",
    "    return `${input + 2}`;\n",
    "  },\n",
    "});\n",
    "\n",
    "const tools = [magicTool];\n",
    "\n",
    "const query = \"what is the value of magic_function(3)?\";"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "768d9e8c",
   "metadata": {},
   "source": [
    "For the LangChain\n",
    "[`AgentExecutor`](https://api.js.langchain.com/classes/langchain_agents.AgentExecutor.html),\n",
    "we define a prompt with a placeholder for the agent's scratchpad. The agent can\n",
    "be invoked as follows:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "e52bf891",
   "metadata": {
    "lines_to_next_cell": 2
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{\n",
       "  input: \u001b[32m\"what is the value of magic_function(3)?\"\u001b[39m,\n",
       "  output: \u001b[32m\"The value of magic_function(3) is 5.\"\u001b[39m\n",
       "}"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import {\n",
    "  ChatPromptTemplate,\n",
    "} from \"@langchain/core/prompts\";\n",
    "import { createToolCallingAgent } from \"langchain/agents\";\n",
    "import { AgentExecutor } from \"langchain/agents\";\n",
    "\n",
    "const prompt = ChatPromptTemplate.fromMessages([\n",
    "  [\"system\", \"You are a helpful assistant\"],\n",
    "  [\"placeholder\", \"{chat_history}\"],\n",
    "  [\"human\", \"{input}\"],\n",
    "  [\"placeholder\", \"{agent_scratchpad}\"],\n",
    "]);\n",
    "\n",
    "const agent = createToolCallingAgent({ llm, tools, prompt });\n",
    "const agentExecutor = new AgentExecutor({ agent, tools });\n",
    "\n",
    "await agentExecutor.invoke({ input: query });"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ba3e5db9",
   "metadata": {},
   "source": [
    "LangGraph's off-the-shelf\n",
    "[react agent executor](https://langchain-ai.github.io/langgraphjs/reference/functions/prebuilt.createReactAgent.html)\n",
     "manages a state that is defined by a list of messages. Like the `AgentExecutor`, it will continue to\n",
     "process the message list until there are no tool calls in the agent's output. To kick it\n",
    "off, we input a list of messages. The output will contain the entire state of\n",
    "the graph - in this case, the conversation history and messages representing intermediate tool calls:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "dcda7082",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\n",
      "  messages: [\n",
      "    HumanMessage {\n",
      "      lc_serializable: true,\n",
      "      lc_kwargs: {\n",
      "        content: \"what is the value of magic_function(3)?\",\n",
      "        additional_kwargs: {},\n",
      "        response_metadata: {}\n",
      "      },\n",
      "      lc_namespace: [ \"langchain_core\", \"messages\" ],\n",
      "      content: \"what is the value of magic_function(3)?\",\n",
      "      name: undefined,\n",
      "      additional_kwargs: {},\n",
      "      response_metadata: {}\n",
      "    },\n",
      "    AIMessage {\n",
      "      lc_serializable: true,\n",
      "      lc_kwargs: {\n",
      "        content: [ [Object] ],\n",
      "        additional_kwargs: {\n",
      "          id: \"msg_015jSku8UgrtRQ2kNQuTsvi1\",\n",
      "          type: \"message\",\n",
      "          role: \"assistant\",\n",
      "          model: \"claude-3-haiku-20240307\",\n",
      "          stop_reason: \"tool_use\",\n",
      "          stop_sequence: null,\n",
      "          usage: [Object]\n",
      "        },\n",
      "        tool_calls: [ [Object] ],\n",
      "        invalid_tool_calls: [],\n",
      "        response_metadata: {}\n",
      "      },\n",
      "      lc_namespace: [ \"langchain_core\", \"messages\" ],\n",
      "      content: [\n",
      "        {\n",
      "          type: \"tool_use\",\n",
      "          id: \"toolu_01WCezi2ywMPnRm1xbrXYPoB\",\n",
      "          name: \"magic_function\",\n",
      "          input: [Object]\n",
      "        }\n",
      "      ],\n",
      "      name: undefined,\n",
      "      additional_kwargs: {\n",
      "        id: \"msg_015jSku8UgrtRQ2kNQuTsvi1\",\n",
      "        type: \"message\",\n",
      "        role: \"assistant\",\n",
      "        model: \"claude-3-haiku-20240307\",\n",
      "        stop_reason: \"tool_use\",\n",
      "        stop_sequence: null,\n",
      "        usage: { input_tokens: 365, output_tokens: 53 }\n",
      "      },\n",
      "      response_metadata: {\n",
      "        id: \"msg_015jSku8UgrtRQ2kNQuTsvi1\",\n",
      "        model: \"claude-3-haiku-20240307\",\n",
      "        stop_reason: \"tool_use\",\n",
      "        stop_sequence: null,\n",
      "        usage: { input_tokens: 365, output_tokens: 53 }\n",
      "      },\n",
      "      tool_calls: [\n",
      "        {\n",
      "          name: \"magic_function\",\n",
      "          args: [Object],\n",
      "          id: \"toolu_01WCezi2ywMPnRm1xbrXYPoB\"\n",
      "        }\n",
      "      ],\n",
      "      invalid_tool_calls: []\n",
      "    },\n",
      "    ToolMessage {\n",
      "      lc_serializable: true,\n",
      "      lc_kwargs: {\n",
      "        name: \"magic_function\",\n",
      "        content: \"5\",\n",
      "        tool_call_id: \"toolu_01WCezi2ywMPnRm1xbrXYPoB\",\n",
      "        additional_kwargs: {},\n",
      "        response_metadata: {}\n",
      "      },\n",
      "      lc_namespace: [ \"langchain_core\", \"messages\" ],\n",
      "      content: \"5\",\n",
      "      name: \"magic_function\",\n",
      "      additional_kwargs: {},\n",
      "      response_metadata: {},\n",
      "      tool_call_id: \"toolu_01WCezi2ywMPnRm1xbrXYPoB\"\n",
      "    },\n",
      "    AIMessage {\n",
      "      lc_serializable: true,\n",
      "      lc_kwargs: {\n",
      "        content: \"The value of magic_function(3) is 5.\",\n",
      "        tool_calls: [],\n",
      "        invalid_tool_calls: [],\n",
      "        additional_kwargs: {\n",
      "          id: \"msg_01FbyPvpxtczu2Cmd4vKcPQm\",\n",
      "          type: \"message\",\n",
      "          role: \"assistant\",\n",
      "          model: \"claude-3-haiku-20240307\",\n",
      "          stop_reason: \"end_turn\",\n",
      "          stop_sequence: null,\n",
      "          usage: [Object]\n",
      "        },\n",
      "        response_metadata: {}\n",
      "      },\n",
      "      lc_namespace: [ \"langchain_core\", \"messages\" ],\n",
      "      content: \"The value of magic_function(3) is 5.\",\n",
      "      name: undefined,\n",
      "      additional_kwargs: {\n",
      "        id: \"msg_01FbyPvpxtczu2Cmd4vKcPQm\",\n",
      "        type: \"message\",\n",
      "        role: \"assistant\",\n",
      "        model: \"claude-3-haiku-20240307\",\n",
      "        stop_reason: \"end_turn\",\n",
      "        stop_sequence: null,\n",
      "        usage: { input_tokens: 431, output_tokens: 17 }\n",
      "      },\n",
      "      response_metadata: {\n",
      "        id: \"msg_01FbyPvpxtczu2Cmd4vKcPQm\",\n",
      "        model: \"claude-3-haiku-20240307\",\n",
      "        stop_reason: \"end_turn\",\n",
      "        stop_sequence: null,\n",
      "        usage: { input_tokens: 431, output_tokens: 17 }\n",
      "      },\n",
      "      tool_calls: [],\n",
      "      invalid_tool_calls: []\n",
      "    }\n",
      "  ]\n",
      "}\n"
     ]
    }
   ],
   "source": [
    "import { createReactAgent } from \"@langchain/langgraph/prebuilt\";\n",
    "import { HumanMessage } from \"@langchain/core/messages\";\n",
    "\n",
    "const app = createReactAgent({ llm, tools });\n",
    "\n",
    "let agentOutput = await app.invoke({\n",
    "  messages: [\n",
    "    new HumanMessage(query)\n",
    "  ],\n",
    "});\n",
    "\n",
    "console.log(agentOutput);"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "b0a390a2",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{\n",
       "  messages: [\n",
       "    HumanMessage {\n",
       "      lc_serializable: \u001b[33mtrue\u001b[39m,\n",
       "      lc_kwargs: {\n",
       "        content: \u001b[32m\"what is the value of magic_function(3)?\"\u001b[39m,\n",
       "        additional_kwargs: {},\n",
       "        response_metadata: {}\n",
       "      },\n",
       "      lc_namespace: [ \u001b[32m\"langchain_core\"\u001b[39m, \u001b[32m\"messages\"\u001b[39m ],\n",
       "      content: \u001b[32m\"what is the value of magic_function(3)?\"\u001b[39m,\n",
       "      name: \u001b[90mundefined\u001b[39m,\n",
       "      additional_kwargs: {},\n",
       "      response_metadata: {}\n",
       "    },\n",
       "    AIMessage {\n",
       "      lc_serializable: \u001b[33mtrue\u001b[39m,\n",
       "      lc_kwargs: {\n",
       "        content: [ \u001b[36m[Object]\u001b[39m ],\n",
       "        additional_kwargs: {\n",
       "          id: \u001b[32m\"msg_015jSku8UgrtRQ2kNQuTsvi1\"\u001b[39m,\n",
       "          type: \u001b[32m\"message\"\u001b[39m,\n",
       "          role: \u001b[32m\"assistant\"\u001b[39m,\n",
       "          model: \u001b[32m\"claude-3-haiku-20240307\"\u001b[39m,\n",
       "          stop_reason: \u001b[32m\"tool_use\"\u001b[39m,\n",
       "          stop_sequence: \u001b[1mnull\u001b[22m,\n",
       "          usage: \u001b[36m[Object]\u001b[39m\n",
       "        },\n",
       "        tool_calls: [ \u001b[36m[Object]\u001b[39m ],\n",
       "        invalid_tool_calls: [],\n",
       "        response_metadata: {}\n",
       "      },\n",
       "      lc_namespace: [ \u001b[32m\"langchain_core\"\u001b[39m, \u001b[32m\"messages\"\u001b[39m ],\n",
       "      content: [\n",
       "        {\n",
       "          type: \u001b[32m\"tool_use\"\u001b[39m,\n",
       "          id: \u001b[32m\"toolu_01WCezi2ywMPnRm1xbrXYPoB\"\u001b[39m,\n",
       "          name: \u001b[32m\"magic_function\"\u001b[39m,\n",
       "          input: \u001b[36m[Object]\u001b[39m\n",
       "        }\n",
       "      ],\n",
       "      name: \u001b[90mundefined\u001b[39m,\n",
       "      additional_kwargs: {\n",
       "        id: \u001b[32m\"msg_015jSku8UgrtRQ2kNQuTsvi1\"\u001b[39m,\n",
       "        type: \u001b[32m\"message\"\u001b[39m,\n",
       "        role: \u001b[32m\"assistant\"\u001b[39m,\n",
       "        model: \u001b[32m\"claude-3-haiku-20240307\"\u001b[39m,\n",
       "        stop_reason: \u001b[32m\"tool_use\"\u001b[39m,\n",
       "        stop_sequence: \u001b[1mnull\u001b[22m,\n",
       "        usage: { input_tokens: \u001b[33m365\u001b[39m, output_tokens: \u001b[33m53\u001b[39m }\n",
       "      },\n",
       "      response_metadata: {\n",
       "        id: \u001b[32m\"msg_015jSku8UgrtRQ2kNQuTsvi1\"\u001b[39m,\n",
       "        model: \u001b[32m\"claude-3-haiku-20240307\"\u001b[39m,\n",
       "        stop_reason: \u001b[32m\"tool_use\"\u001b[39m,\n",
       "        stop_sequence: \u001b[1mnull\u001b[22m,\n",
       "        usage: { input_tokens: \u001b[33m365\u001b[39m, output_tokens: \u001b[33m53\u001b[39m }\n",
       "      },\n",
       "      tool_calls: [\n",
       "        {\n",
       "          name: \u001b[32m\"magic_function\"\u001b[39m,\n",
       "          args: \u001b[36m[Object]\u001b[39m,\n",
       "          id: \u001b[32m\"toolu_01WCezi2ywMPnRm1xbrXYPoB\"\u001b[39m\n",
       "        }\n",
       "      ],\n",
       "      invalid_tool_calls: []\n",
       "    },\n",
       "    ToolMessage {\n",
       "      lc_serializable: \u001b[33mtrue\u001b[39m,\n",
       "      lc_kwargs: {\n",
       "        name: \u001b[32m\"magic_function\"\u001b[39m,\n",
       "        content: \u001b[32m\"5\"\u001b[39m,\n",
       "        tool_call_id: \u001b[32m\"toolu_01WCezi2ywMPnRm1xbrXYPoB\"\u001b[39m,\n",
       "        additional_kwargs: {},\n",
       "        response_metadata: {}\n",
       "      },\n",
       "      lc_namespace: [ \u001b[32m\"langchain_core\"\u001b[39m, \u001b[32m\"messages\"\u001b[39m ],\n",
       "      content: \u001b[32m\"5\"\u001b[39m,\n",
       "      name: \u001b[32m\"magic_function\"\u001b[39m,\n",
       "      additional_kwargs: {},\n",
       "      response_metadata: {},\n",
       "      tool_call_id: \u001b[32m\"toolu_01WCezi2ywMPnRm1xbrXYPoB\"\u001b[39m\n",
       "    },\n",
       "    AIMessage {\n",
       "      lc_serializable: \u001b[33mtrue\u001b[39m,\n",
       "      lc_kwargs: {\n",
       "        content: \u001b[32m\"The value of magic_function(3) is 5.\"\u001b[39m,\n",
       "        tool_calls: [],\n",
       "        invalid_tool_calls: [],\n",
       "        additional_kwargs: {\n",
       "          id: \u001b[32m\"msg_01FbyPvpxtczu2Cmd4vKcPQm\"\u001b[39m,\n",
       "          type: \u001b[32m\"message\"\u001b[39m,\n",
       "          role: \u001b[32m\"assistant\"\u001b[39m,\n",
       "          model: \u001b[32m\"claude-3-haiku-20240307\"\u001b[39m,\n",
       "          stop_reason: \u001b[32m\"end_turn\"\u001b[39m,\n",
       "          stop_sequence: \u001b[1mnull\u001b[22m,\n",
       "          usage: \u001b[36m[Object]\u001b[39m\n",
       "        },\n",
       "        response_metadata: {}\n",
       "      },\n",
       "      lc_namespace: [ \u001b[32m\"langchain_core\"\u001b[39m, \u001b[32m\"messages\"\u001b[39m ],\n",
       "      content: \u001b[32m\"The value of magic_function(3) is 5.\"\u001b[39m,\n",
       "      name: \u001b[90mundefined\u001b[39m,\n",
       "      additional_kwargs: {\n",
       "        id: \u001b[32m\"msg_01FbyPvpxtczu2Cmd4vKcPQm\"\u001b[39m,\n",
       "        type: \u001b[32m\"message\"\u001b[39m,\n",
       "        role: \u001b[32m\"assistant\"\u001b[39m,\n",
       "        model: \u001b[32m\"claude-3-haiku-20240307\"\u001b[39m,\n",
       "        stop_reason: \u001b[32m\"end_turn\"\u001b[39m,\n",
       "        stop_sequence: \u001b[1mnull\u001b[22m,\n",
       "        usage: { input_tokens: \u001b[33m431\u001b[39m, output_tokens: \u001b[33m17\u001b[39m }\n",
       "      },\n",
       "      response_metadata: {\n",
       "        id: \u001b[32m\"msg_01FbyPvpxtczu2Cmd4vKcPQm\"\u001b[39m,\n",
       "        model: \u001b[32m\"claude-3-haiku-20240307\"\u001b[39m,\n",
       "        stop_reason: \u001b[32m\"end_turn\"\u001b[39m,\n",
       "        stop_sequence: \u001b[1mnull\u001b[22m,\n",
       "        usage: { input_tokens: \u001b[33m431\u001b[39m, output_tokens: \u001b[33m17\u001b[39m }\n",
       "      },\n",
       "      tool_calls: [],\n",
       "      invalid_tool_calls: []\n",
       "    },\n",
       "    HumanMessage {\n",
       "      lc_serializable: \u001b[33mtrue\u001b[39m,\n",
       "      lc_kwargs: {\n",
       "        content: \u001b[32m\"Pardon?\"\u001b[39m,\n",
       "        additional_kwargs: {},\n",
       "        response_metadata: {}\n",
       "      },\n",
       "      lc_namespace: [ \u001b[32m\"langchain_core\"\u001b[39m, \u001b[32m\"messages\"\u001b[39m ],\n",
       "      content: \u001b[32m\"Pardon?\"\u001b[39m,\n",
       "      name: \u001b[90mundefined\u001b[39m,\n",
       "      additional_kwargs: {},\n",
       "      response_metadata: {}\n",
       "    },\n",
       "    AIMessage {\n",
       "      lc_serializable: \u001b[33mtrue\u001b[39m,\n",
       "      lc_kwargs: {\n",
       "        content: \u001b[32m\"I apologize for the confusion. Let me explain the steps I took to arrive at the result:\\n\"\u001b[39m +\n",
       "          \u001b[32m\"\\n\"\u001b[39m +\n",
       "          \u001b[32m\"1. You aske\"\u001b[39m... 52 more characters,\n",
       "        tool_calls: [],\n",
       "        invalid_tool_calls: [],\n",
       "        additional_kwargs: {\n",
       "          id: \u001b[32m\"msg_012yLSnnf1c64NWKS9K58hcN\"\u001b[39m,\n",
       "          type: \u001b[32m\"message\"\u001b[39m,\n",
       "          role: \u001b[32m\"assistant\"\u001b[39m,\n",
       "          model: \u001b[32m\"claude-3-haiku-20240307\"\u001b[39m,\n",
       "          stop_reason: \u001b[32m\"end_turn\"\u001b[39m,\n",
       "          stop_sequence: \u001b[1mnull\u001b[22m,\n",
       "          usage: \u001b[36m[Object]\u001b[39m\n",
       "        },\n",
       "        response_metadata: {}\n",
       "      },\n",
       "      lc_namespace: [ \u001b[32m\"langchain_core\"\u001b[39m, \u001b[32m\"messages\"\u001b[39m ],\n",
       "      content: \u001b[32m\"I apologize for the confusion. Let me explain the steps I took to arrive at the result:\\n\"\u001b[39m +\n",
       "        \u001b[32m\"\\n\"\u001b[39m +\n",
       "        \u001b[32m\"1. You aske\"\u001b[39m... 52 more characters,\n",
       "      name: \u001b[90mundefined\u001b[39m,\n",
       "      additional_kwargs: {\n",
       "        id: \u001b[32m\"msg_012yLSnnf1c64NWKS9K58hcN\"\u001b[39m,\n",
       "        type: \u001b[32m\"message\"\u001b[39m,\n",
       "        role: \u001b[32m\"assistant\"\u001b[39m,\n",
       "        model: \u001b[32m\"claude-3-haiku-20240307\"\u001b[39m,\n",
       "        stop_reason: \u001b[32m\"end_turn\"\u001b[39m,\n",
       "        stop_sequence: \u001b[1mnull\u001b[22m,\n",
       "        usage: { input_tokens: \u001b[33m455\u001b[39m, output_tokens: \u001b[33m137\u001b[39m }\n",
       "      },\n",
       "      response_metadata: {\n",
       "        id: \u001b[32m\"msg_012yLSnnf1c64NWKS9K58hcN\"\u001b[39m,\n",
       "        model: \u001b[32m\"claude-3-haiku-20240307\"\u001b[39m,\n",
       "        stop_reason: \u001b[32m\"end_turn\"\u001b[39m,\n",
       "        stop_sequence: \u001b[1mnull\u001b[22m,\n",
       "        usage: { input_tokens: \u001b[33m455\u001b[39m, output_tokens: \u001b[33m137\u001b[39m }\n",
       "      },\n",
       "      tool_calls: [],\n",
       "      invalid_tool_calls: []\n",
       "    }\n",
       "  ]\n",
       "}"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "const messageHistory = agentOutput.messages;\n",
    "const newQuery = \"Pardon?\";\n",
    "\n",
    "agentOutput = await app.invoke({\n",
    "  messages: [\n",
    "    ...messageHistory,\n",
    "    new HumanMessage(newQuery)\n",
    "  ],\n",
    "});\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "41a12f7a",
   "metadata": {},
   "source": [
    "## Prompt Templates\n",
    "\n",
     "With legacy LangChain agents, you have to pass in a prompt template. You can use\n",
     "this to control the agent's behavior.\n",
    "\n",
    "With LangGraph\n",
    "[react agent executor](https://langchain-ai.github.io/langgraphjs/reference/functions/prebuilt.createReactAgent.html),\n",
    "by default there is no prompt. You can achieve similar control over the agent in\n",
    "a few ways:\n",
    "\n",
     "1. Pass in a system message as input.\n",
     "2. Initialize the agent with a system message.\n",
     "3. Initialize the agent with a function to transform messages before passing them\n",
     "   to the model.\n",
    "\n",
    "Let's take a look at all of these below. We will pass in custom instructions to\n",
    "get the agent to respond in Spanish.\n",
    "\n",
    "First up, using LangChain's `AgentExecutor`:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "4c5266cc",
   "metadata": {
    "lines_to_next_cell": 2
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{\n",
       "  input: \u001b[32m\"what is the value of magic_function(3)?\"\u001b[39m,\n",
       "  output: \u001b[32m\"El valor de magic_function(3) es 5.\"\u001b[39m\n",
       "}"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "const spanishPrompt = ChatPromptTemplate.fromMessages([\n",
    "  [\"system\", \"You are a helpful assistant. Respond only in Spanish.\"],\n",
    "  [\"placeholder\", \"{chat_history}\"],\n",
    "  [\"human\", \"{input}\"],\n",
    "  [\"placeholder\", \"{agent_scratchpad}\"],\n",
    "]);\n",
    "\n",
    "const spanishAgent = createToolCallingAgent({\n",
    "  llm,\n",
    "  tools,\n",
    "  prompt: spanishPrompt,\n",
    "});\n",
    "const spanishAgentExecutor = new AgentExecutor({\n",
    "  agent: spanishAgent,\n",
    "  tools,\n",
    "});\n",
    "\n",
    "await spanishAgentExecutor.invoke({ input: query });\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c54b374d",
   "metadata": {},
   "source": [
     "Now, let's pass a custom system message to the\n",
    "[react agent executor](https://langchain-ai.github.io/langgraphjs/reference/functions/prebuilt.createReactAgent.html).\n",
    "This can either be a string or a LangChain `SystemMessage`.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "38a751ba",
   "metadata": {
    "lines_to_next_cell": 2
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage {\n",
       "  lc_serializable: \u001b[33mtrue\u001b[39m,\n",
       "  lc_kwargs: {\n",
       "    content: \u001b[32m\"El valor de magic_function(3) es 5.\"\u001b[39m,\n",
       "    tool_calls: [],\n",
       "    invalid_tool_calls: [],\n",
       "    additional_kwargs: {\n",
       "      id: \u001b[32m\"msg_01P5VUYbBZoeMaReqBgqFJZa\"\u001b[39m,\n",
       "      type: \u001b[32m\"message\"\u001b[39m,\n",
       "      role: \u001b[32m\"assistant\"\u001b[39m,\n",
       "      model: \u001b[32m\"claude-3-haiku-20240307\"\u001b[39m,\n",
       "      stop_reason: \u001b[32m\"end_turn\"\u001b[39m,\n",
       "      stop_sequence: \u001b[1mnull\u001b[22m,\n",
       "      usage: { input_tokens: \u001b[33m444\u001b[39m, output_tokens: \u001b[33m17\u001b[39m }\n",
       "    },\n",
       "    response_metadata: {}\n",
       "  },\n",
       "  lc_namespace: [ \u001b[32m\"langchain_core\"\u001b[39m, \u001b[32m\"messages\"\u001b[39m ],\n",
       "  content: \u001b[32m\"El valor de magic_function(3) es 5.\"\u001b[39m,\n",
       "  name: \u001b[90mundefined\u001b[39m,\n",
       "  additional_kwargs: {\n",
       "    id: \u001b[32m\"msg_01P5VUYbBZoeMaReqBgqFJZa\"\u001b[39m,\n",
       "    type: \u001b[32m\"message\"\u001b[39m,\n",
       "    role: \u001b[32m\"assistant\"\u001b[39m,\n",
       "    model: \u001b[32m\"claude-3-haiku-20240307\"\u001b[39m,\n",
       "    stop_reason: \u001b[32m\"end_turn\"\u001b[39m,\n",
       "    stop_sequence: \u001b[1mnull\u001b[22m,\n",
       "    usage: { input_tokens: \u001b[33m444\u001b[39m, output_tokens: \u001b[33m17\u001b[39m }\n",
       "  },\n",
       "  response_metadata: {\n",
       "    id: \u001b[32m\"msg_01P5VUYbBZoeMaReqBgqFJZa\"\u001b[39m,\n",
       "    model: \u001b[32m\"claude-3-haiku-20240307\"\u001b[39m,\n",
       "    stop_reason: \u001b[32m\"end_turn\"\u001b[39m,\n",
       "    stop_sequence: \u001b[1mnull\u001b[22m,\n",
       "    usage: { input_tokens: \u001b[33m444\u001b[39m, output_tokens: \u001b[33m17\u001b[39m }\n",
       "  },\n",
       "  tool_calls: [],\n",
       "  invalid_tool_calls: []\n",
       "}"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import { SystemMessage } from \"@langchain/core/messages\";\n",
    "\n",
    "const systemMessage = \"You are a helpful assistant. Respond only in Spanish.\";\n",
    "\n",
    "// This could also be a SystemMessage object\n",
    "// const systemMessage = new SystemMessage(\"You are a helpful assistant. Respond only in Spanish.\");\n",
    "\n",
    "const appWithSystemMessage = createReactAgent({\n",
    "  llm,\n",
    "  tools,\n",
    "  messageModifier: systemMessage,\n",
    "});\n",
    "\n",
    "agentOutput = await appWithSystemMessage.invoke({\n",
    "  messages: [\n",
    "    new HumanMessage(query)\n",
    "  ],\n",
    "});\n",
    "agentOutput.messages[agentOutput.messages.length - 1];"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7622d8f7",
   "metadata": {},
   "source": [
     "We can also pass in an arbitrary function. This function should take in a list\n",
     "of messages and output a list of messages. We can do any arbitrary\n",
     "formatting of messages here. In this case, let's just add a `SystemMessage` to\n",
     "the start of the list of messages.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "c7120cdd",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\n",
      "  input: \"what is the value of magic_function(3)?\",\n",
      "  output: \"5. ¡Pandemonium!\"\n",
      "}\n"
     ]
    }
   ],
   "source": [
    "import { BaseMessage, SystemMessage } from \"@langchain/core/messages\";\n",
    "\n",
    "const modifyMessages = (messages: BaseMessage[]) => {\n",
    "  return [\n",
    "    new SystemMessage(\"You are a helpful assistant. Respond only in Spanish.\"),\n",
    "    ...messages,\n",
    "    new HumanMessage(\"Also say 'Pandemonium!' after the answer.\"),\n",
    "  ];\n",
    "};\n",
    "\n",
    "const appWithMessagesModifier = createReactAgent({\n",
    "  llm,\n",
    "  tools,\n",
    "  messageModifier: modifyMessages,\n",
    "});\n",
    "\n",
    "agentOutput = await appWithMessagesModifier.invoke({\n",
    "  messages: [new HumanMessage(query)],\n",
    "});\n",
    "\n",
    "console.log({\n",
    "  input: query,\n",
    "  output: agentOutput.messages[agentOutput.messages.length - 1].content,\n",
    "});"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "44337a14",
   "metadata": {},
   "source": [
    "## Memory\n",
    "\n",
    "With LangChain's\n",
     "[`AgentExecutor`](https://api.js.langchain.com/classes/langchain_agents.AgentExecutor.html), you could add chat memory classes so that the agent could engage in a multi-turn conversation.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "4d67ba36",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The magic_function takes an input number and applies some magic to it, returning the output. For an input of 3, the output is 5.\n",
      "---\n",
      "Okay, I remember your name is Polly.\n",
      "---\n",
      "So the output of the magic_function with an input of 3 is 5.\n"
     ]
    }
   ],
   "source": [
    "import { ChatMessageHistory } from \"@langchain/community/stores/message/in_memory\";\n",
    "import { RunnableWithMessageHistory } from \"@langchain/core/runnables\";\n",
    "\n",
    "const memory = new ChatMessageHistory();\n",
    "const agentExecutorWithMemory = new RunnableWithMessageHistory({\n",
    "  runnable: agentExecutor,\n",
    "  getMessageHistory: () => memory,\n",
    "  inputMessagesKey: \"input\",\n",
    "  historyMessagesKey: \"chat_history\",\n",
    "});\n",
    "\n",
    "const config = { configurable: { sessionId: \"test-session\" } };\n",
    "\n",
    "agentOutput = await agentExecutorWithMemory.invoke(\n",
    "  { input: \"Hi, I'm polly! What's the output of magic_function of 3?\" },\n",
    "  config,\n",
    ");\n",
    "\n",
    "console.log(agentOutput.output);\n",
    "\n",
    "agentOutput = await agentExecutorWithMemory.invoke(\n",
    "  { input: \"Remember my name?\" },\n",
    "  config,\n",
    ");\n",
    "\n",
    "console.log(\"---\");\n",
    "console.log(agentOutput.output);\n",
    "console.log(\"---\");\n",
    "\n",
    "agentOutput = await agentExecutorWithMemory.invoke(\n",
    "  { input: \"what was that output again?\" },\n",
    "  config,\n",
    ");\n",
    "\n",
    "console.log(agentOutput.output);"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a7fe4e21",
   "metadata": {},
   "source": [
    "#### In LangGraph\n",
    "\n",
     "The equivalent of this type of memory in LangGraph is [persistence](https://langchain-ai.github.io/langgraphjs/how-tos/persistence/) via [checkpointing](https://langchain-ai.github.io/langgraphjs/reference/interfaces/index.Checkpoint.html).\n",
    "\n",
     "Add a `checkpointer` to the agent and you get chat memory for free. You'll also need to pass a `thread_id` within the `configurable` field in the `config` parameter. Notice that we only pass one message into each request, but the model still has context from previous runs:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "bbc64438",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The magic_function takes an input number and applies some magic to it, returning the output. For an input of 3, the magic_function returns 5.\n",
      "---\n",
      "Ah yes, I remember your name is Polly! It's nice to meet you Polly.\n",
      "---\n",
      "So the magic_function returned an output of 5 for an input of 3.\n"
     ]
    }
   ],
   "source": [
    "import { MemorySaver } from \"@langchain/langgraph\";\n",
    "\n",
    "const memory = new MemorySaver();\n",
    "const appWithMemory = createReactAgent({\n",
    "  llm,\n",
    "  tools,\n",
    "  checkpointSaver: memory\n",
    "});\n",
    "\n",
    "const config = {\n",
    "  configurable: {\n",
    "    thread_id: \"test-thread\",\n",
    "  },\n",
    "};\n",
    "\n",
    "agentOutput = await appWithMemory.invoke(\n",
    "  {\n",
    "    messages: [\n",
    "      new HumanMessage(\n",
    "        \"Hi, I'm polly! What's the output of magic_function of 3?\",\n",
    "      ),\n",
    "    ],\n",
    "  },\n",
    "  config,\n",
    ");\n",
    "\n",
    "console.log(agentOutput.messages[agentOutput.messages.length - 1].content);\n",
    "console.log(\"---\");\n",
    "\n",
    "agentOutput = await appWithMemory.invoke(\n",
    "  {\n",
    "    messages: [\n",
    "      new HumanMessage(\"Remember my name?\")\n",
    "    ]\n",
    "  },\n",
    "  config,\n",
    ");\n",
    "\n",
    "console.log(agentOutput.messages[agentOutput.messages.length - 1].content);\n",
    "console.log(\"---\");\n",
    "\n",
    "agentOutput = await appWithMemory.invoke(\n",
    "  {\n",
    "    messages: [\n",
    "      new HumanMessage(\"what was that output again?\")\n",
    "    ]\n",
    "  },\n",
    "  config,\n",
    ");\n",
    "\n",
    "console.log(agentOutput.messages[agentOutput.messages.length - 1].content);"
   ]
  },
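  {
   "cell_type": "markdown",
   "id": "aaaa0001",
   "metadata": {},
   "source": [
    "Memory is scoped to the `thread_id`. As a quick sketch (the thread id below is hypothetical), invoking the same agent with a different `thread_id` starts from a clean slate, so it will no longer remember the name \"polly\":\n",
    "\n",
    "```typescript\n",
    "// Hypothetical example: a new thread id means a fresh conversation history.\n",
    "const otherConfig = {\n",
    "  configurable: {\n",
    "    thread_id: \"some-other-thread\",\n",
    "  },\n",
    "};\n",
    "\n",
    "agentOutput = await appWithMemory.invoke(\n",
    "  { messages: [new HumanMessage(\"Remember my name?\")] },\n",
    "  otherConfig,\n",
    ");\n",
    "```"
   ]
  },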
  {
   "cell_type": "markdown",
   "id": "2997b4da",
   "metadata": {},
   "source": [
    "## Iterating through steps\n",
    "\n",
    "With LangChain's\n",
    "[`AgentExecutor`](https://api.js.langchain.com/classes/langchain_agents.AgentExecutor.html),\n",
    "you could iterate over the steps using the\n",
    "[`stream`](https://api.js.langchain.com/classes/langchain_core_runnables.Runnable.html#stream) method:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "5c928049",
   "metadata": {
    "lines_to_next_cell": 2
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\n",
      "  intermediateSteps: [\n",
      "    {\n",
      "      action: {\n",
      "        tool: \"magic_function\",\n",
      "        toolInput: { input: 3 },\n",
      "        toolCallId: \"toolu_01KCJJ8kyiY5LV4RHbVPzK8v\",\n",
      "        log: 'Invoking \"magic_function\" with {\"input\":3}\\n' +\n",
      "          '[{\"type\":\"tool_use\",\"id\":\"toolu_01KCJJ8kyiY5LV4RHbVPzK8v\"'... 46 more characters,\n",
      "        messageLog: [ [AIMessageChunk] ]\n",
      "      },\n",
      "      observation: \"5\"\n",
      "    }\n",
      "  ]\n",
      "}\n",
      "{ output: \"The value of magic_function(3) is 5.\" }\n"
     ]
    }
   ],
   "source": [
    "const langChainStream = await agentExecutor.stream({ input: query });\n",
    "\n",
    "for await (const step of langChainStream) {\n",
    "  console.log(step);\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cd371818",
   "metadata": {},
   "source": [
    "#### In LangGraph\n",
    "\n",
    "In LangGraph, things are handled natively using the stream method.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "2be89a30",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\n",
      "  agent: {\n",
      "    messages: [\n",
      "      AIMessage {\n",
      "        lc_serializable: true,\n",
      "        lc_kwargs: {\n",
      "          content: [Array],\n",
      "          additional_kwargs: [Object],\n",
      "          tool_calls: [Array],\n",
      "          invalid_tool_calls: [],\n",
      "          response_metadata: {}\n",
      "        },\n",
      "        lc_namespace: [ \"langchain_core\", \"messages\" ],\n",
      "        content: [ [Object] ],\n",
      "        name: undefined,\n",
      "        additional_kwargs: {\n",
      "          id: \"msg_01WWYeJvJroT82QhJQZKdwSt\",\n",
      "          type: \"message\",\n",
      "          role: \"assistant\",\n",
      "          model: \"claude-3-haiku-20240307\",\n",
      "          stop_reason: \"tool_use\",\n",
      "          stop_sequence: null,\n",
      "          usage: [Object]\n",
      "        },\n",
      "        response_metadata: {\n",
      "          id: \"msg_01WWYeJvJroT82QhJQZKdwSt\",\n",
      "          model: \"claude-3-haiku-20240307\",\n",
      "          stop_reason: \"tool_use\",\n",
      "          stop_sequence: null,\n",
      "          usage: [Object]\n",
      "        },\n",
      "        tool_calls: [ [Object] ],\n",
      "        invalid_tool_calls: []\n",
      "      }\n",
      "    ]\n",
      "  }\n",
      "}\n",
      "{\n",
      "  tools: {\n",
      "    messages: [\n",
      "      ToolMessage {\n",
      "        lc_serializable: true,\n",
      "        lc_kwargs: {\n",
      "          name: \"magic_function\",\n",
      "          content: \"5\",\n",
      "          tool_call_id: \"toolu_01X9pwxuroTWNVqiwQTL1U8C\",\n",
      "          additional_kwargs: {},\n",
      "          response_metadata: {}\n",
      "        },\n",
      "        lc_namespace: [ \"langchain_core\", \"messages\" ],\n",
      "        content: \"5\",\n",
      "        name: \"magic_function\",\n",
      "        additional_kwargs: {},\n",
      "        response_metadata: {},\n",
      "        tool_call_id: \"toolu_01X9pwxuroTWNVqiwQTL1U8C\"\n",
      "      }\n",
      "    ]\n",
      "  }\n",
      "}\n",
      "{\n",
      "  agent: {\n",
      "    messages: [\n",
      "      AIMessage {\n",
      "        lc_serializable: true,\n",
      "        lc_kwargs: {\n",
      "          content: \"The value of magic_function(3) is 5.\",\n",
      "          tool_calls: [],\n",
      "          invalid_tool_calls: [],\n",
      "          additional_kwargs: [Object],\n",
      "          response_metadata: {}\n",
      "        },\n",
      "        lc_namespace: [ \"langchain_core\", \"messages\" ],\n",
      "        content: \"The value of magic_function(3) is 5.\",\n",
      "        name: undefined,\n",
      "        additional_kwargs: {\n",
      "          id: \"msg_012kQPkxt2CrsFw4CsdfNTWr\",\n",
      "          type: \"message\",\n",
      "          role: \"assistant\",\n",
      "          model: \"claude-3-haiku-20240307\",\n",
      "          stop_reason: \"end_turn\",\n",
      "          stop_sequence: null,\n",
      "          usage: [Object]\n",
      "        },\n",
      "        response_metadata: {\n",
      "          id: \"msg_012kQPkxt2CrsFw4CsdfNTWr\",\n",
      "          model: \"claude-3-haiku-20240307\",\n",
      "          stop_reason: \"end_turn\",\n",
      "          stop_sequence: null,\n",
      "          usage: [Object]\n",
      "        },\n",
      "        tool_calls: [],\n",
      "        invalid_tool_calls: []\n",
      "      }\n",
      "    ]\n",
      "  }\n",
      "}\n"
     ]
    }
   ],
   "source": [
    "const langGraphStream = await app.stream(\n",
    "  { messages: [new HumanMessage(query)] },\n",
    "  { streamMode: \"updates\" },\n",
    ");\n",
    "\n",
    "for await (const step of langGraphStream) {\n",
    "  console.log(step);\n",
    "}"
   ]
  },
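  {
   "cell_type": "markdown",
   "id": "aaaa0002",
   "metadata": {},
   "source": [
    "If you would rather receive the full accumulated state after each step instead of just the per-node updates, LangGraph also supports `streamMode: \"values\"`. A minimal sketch:\n",
    "\n",
    "```typescript\n",
    "const valuesStream = await app.stream(\n",
    "  { messages: [new HumanMessage(query)] },\n",
    "  { streamMode: \"values\" },\n",
    ");\n",
    "\n",
    "for await (const state of valuesStream) {\n",
    "  // Each chunk contains the full list of messages accumulated so far.\n",
    "  console.log(state.messages.length);\n",
    "}\n",
    "```"
   ]
  },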
  {
   "cell_type": "markdown",
   "id": "ce023792",
   "metadata": {},
   "source": [
    "## `returnIntermediateSteps`\n",
    "\n",
    "Setting this parameter on AgentExecutor allows users to access\n",
    "intermediate_steps, which pairs agent actions (e.g., tool invocations) with\n",
    "their outcomes."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "77ce2771",
   "metadata": {
    "lines_to_next_cell": 2
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[\n",
      "  {\n",
      "    action: {\n",
      "      tool: \"magic_function\",\n",
      "      toolInput: { input: 3 },\n",
      "      toolCallId: \"toolu_0126dJXbjwLC5daAScz8bw1k\",\n",
      "      log: 'Invoking \"magic_function\" with {\"input\":3}\\n' +\n",
      "        '[{\"type\":\"tool_use\",\"id\":\"toolu_0126dJXbjwLC5daAScz8bw1k\"'... 46 more characters,\n",
      "      messageLog: [\n",
      "        AIMessageChunk {\n",
      "          lc_serializable: true,\n",
      "          lc_kwargs: [Object],\n",
      "          lc_namespace: [Array],\n",
      "          content: [Array],\n",
      "          name: undefined,\n",
      "          additional_kwargs: [Object],\n",
      "          response_metadata: {},\n",
      "          tool_calls: [Array],\n",
      "          invalid_tool_calls: [],\n",
      "          tool_call_chunks: [Array]\n",
      "        }\n",
      "      ]\n",
      "    },\n",
      "    observation: \"5\"\n",
      "  }\n",
      "]\n"
     ]
    }
   ],
   "source": [
    "const agentExecutorWithIntermediateSteps = new AgentExecutor({\n",
    "  agent,\n",
    "  tools,\n",
    "  returnIntermediateSteps: true,\n",
    "});\n",
    "\n",
    "const result = await agentExecutorWithIntermediateSteps.invoke({\n",
    "  input: query,\n",
    "});\n",
    "\n",
    "console.log(result.intermediateSteps);\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "050845ae",
   "metadata": {},
   "source": [
    "By default the\n",
    "[react agent executor](https://langchain-ai.github.io/langgraphjs/reference/functions/prebuilt.createReactAgent.html)\n",
    "in LangGraph appends all messages to the central state. Therefore, it is easy to\n",
    "see any intermediate steps by just looking at the full state.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "2f9cdfa8",
   "metadata": {
    "lines_to_next_cell": 2
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[\n",
      "  HumanMessage {\n",
      "    lc_serializable: true,\n",
      "    lc_kwargs: {\n",
      "      content: \"what is the value of magic_function(3)?\",\n",
      "      additional_kwargs: {},\n",
      "      response_metadata: {}\n",
      "    },\n",
      "    lc_namespace: [ \"langchain_core\", \"messages\" ],\n",
      "    content: \"what is the value of magic_function(3)?\",\n",
      "    name: undefined,\n",
      "    additional_kwargs: {},\n",
      "    response_metadata: {}\n",
      "  },\n",
      "  AIMessage {\n",
      "    lc_serializable: true,\n",
      "    lc_kwargs: {\n",
      "      content: [\n",
      "        {\n",
      "          type: \"tool_use\",\n",
      "          id: \"toolu_01L2N6TKrZxyUWRCQZ5qLYVj\",\n",
      "          name: \"magic_function\",\n",
      "          input: [Object]\n",
      "        }\n",
      "      ],\n",
      "      additional_kwargs: {\n",
      "        id: \"msg_01BhXyjA2PTwGC5J3JNnfAXY\",\n",
      "        type: \"message\",\n",
      "        role: \"assistant\",\n",
      "        model: \"claude-3-haiku-20240307\",\n",
      "        stop_reason: \"tool_use\",\n",
      "        stop_sequence: null,\n",
      "        usage: { input_tokens: 365, output_tokens: 53 }\n",
      "      },\n",
      "      tool_calls: [\n",
      "        {\n",
      "          name: \"magic_function\",\n",
      "          args: [Object],\n",
      "          id: \"toolu_01L2N6TKrZxyUWRCQZ5qLYVj\"\n",
      "        }\n",
      "      ],\n",
      "      invalid_tool_calls: [],\n",
      "      response_metadata: {}\n",
      "    },\n",
      "    lc_namespace: [ \"langchain_core\", \"messages\" ],\n",
      "    content: [\n",
      "      {\n",
      "        type: \"tool_use\",\n",
      "        id: \"toolu_01L2N6TKrZxyUWRCQZ5qLYVj\",\n",
      "        name: \"magic_function\",\n",
      "        input: { input: 3 }\n",
      "      }\n",
      "    ],\n",
      "    name: undefined,\n",
      "    additional_kwargs: {\n",
      "      id: \"msg_01BhXyjA2PTwGC5J3JNnfAXY\",\n",
      "      type: \"message\",\n",
      "      role: \"assistant\",\n",
      "      model: \"claude-3-haiku-20240307\",\n",
      "      stop_reason: \"tool_use\",\n",
      "      stop_sequence: null,\n",
      "      usage: { input_tokens: 365, output_tokens: 53 }\n",
      "    },\n",
      "    response_metadata: {\n",
      "      id: \"msg_01BhXyjA2PTwGC5J3JNnfAXY\",\n",
      "      model: \"claude-3-haiku-20240307\",\n",
      "      stop_reason: \"tool_use\",\n",
      "      stop_sequence: null,\n",
      "      usage: { input_tokens: 365, output_tokens: 53 }\n",
      "    },\n",
      "    tool_calls: [\n",
      "      {\n",
      "        name: \"magic_function\",\n",
      "        args: { input: 3 },\n",
      "        id: \"toolu_01L2N6TKrZxyUWRCQZ5qLYVj\"\n",
      "      }\n",
      "    ],\n",
      "    invalid_tool_calls: []\n",
      "  },\n",
      "  ToolMessage {\n",
      "    lc_serializable: true,\n",
      "    lc_kwargs: {\n",
      "      name: \"magic_function\",\n",
      "      content: \"5\",\n",
      "      tool_call_id: \"toolu_01L2N6TKrZxyUWRCQZ5qLYVj\",\n",
      "      additional_kwargs: {},\n",
      "      response_metadata: {}\n",
      "    },\n",
      "    lc_namespace: [ \"langchain_core\", \"messages\" ],\n",
      "    content: \"5\",\n",
      "    name: \"magic_function\",\n",
      "    additional_kwargs: {},\n",
      "    response_metadata: {},\n",
      "    tool_call_id: \"toolu_01L2N6TKrZxyUWRCQZ5qLYVj\"\n",
      "  },\n",
      "  AIMessage {\n",
      "    lc_serializable: true,\n",
      "    lc_kwargs: {\n",
      "      content: \"The value of magic_function(3) is 5.\",\n",
      "      tool_calls: [],\n",
      "      invalid_tool_calls: [],\n",
      "      additional_kwargs: {\n",
      "        id: \"msg_01ABtcXJ4CwMHphYYmffQZoF\",\n",
      "        type: \"message\",\n",
      "        role: \"assistant\",\n",
      "        model: \"claude-3-haiku-20240307\",\n",
      "        stop_reason: \"end_turn\",\n",
      "        stop_sequence: null,\n",
      "        usage: { input_tokens: 431, output_tokens: 17 }\n",
      "      },\n",
      "      response_metadata: {}\n",
      "    },\n",
      "    lc_namespace: [ \"langchain_core\", \"messages\" ],\n",
      "    content: \"The value of magic_function(3) is 5.\",\n",
      "    name: undefined,\n",
      "    additional_kwargs: {\n",
      "      id: \"msg_01ABtcXJ4CwMHphYYmffQZoF\",\n",
      "      type: \"message\",\n",
      "      role: \"assistant\",\n",
      "      model: \"claude-3-haiku-20240307\",\n",
      "      stop_reason: \"end_turn\",\n",
      "      stop_sequence: null,\n",
      "      usage: { input_tokens: 431, output_tokens: 17 }\n",
      "    },\n",
      "    response_metadata: {\n",
      "      id: \"msg_01ABtcXJ4CwMHphYYmffQZoF\",\n",
      "      model: \"claude-3-haiku-20240307\",\n",
      "      stop_reason: \"end_turn\",\n",
      "      stop_sequence: null,\n",
      "      usage: { input_tokens: 431, output_tokens: 17 }\n",
      "    },\n",
      "    tool_calls: [],\n",
      "    invalid_tool_calls: []\n",
      "  }\n",
      "]\n"
     ]
    }
   ],
   "source": [
    "agentOutput = await app.invoke({\n",
    "  messages: [\n",
    "    new HumanMessage(query)\n",
    "  ]\n",
    "});\n",
    "\n",
    "console.log(agentOutput.messages);"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f6e671e6",
   "metadata": {},
   "source": [
    "## `maxIterations`\n",
    "\n",
    "`AgentExecutor` implements a `maxIterations` parameter, whereas this is\n",
    "controlled via `recursionLimit` in LangGraph.\n",
    "\n",
    "Note that in the LangChain `AgentExecutor`, an \"iteration\" includes a full turn of tool\n",
    "invocation and execution. In LangGraph, each step contributes to the recursion\n",
    "limit, so we will need to multiply by two (and add one) to get equivalent\n",
    "results.\n",
    "\n",
    "If the recursion limit is reached, LangGraph raises a specific exception type,\n",
    "that we can catch and manage similarly to AgentExecutor.\n"
   ]
  },
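  {
   "cell_type": "markdown",
   "id": "aaaa0003",
   "metadata": {},
   "source": [
    "As a rough sketch of that arithmetic (an illustrative helper, not part of either library):\n",
    "\n",
    "```typescript\n",
    "// One AgentExecutor iteration = one model step + one tool step in LangGraph,\n",
    "// plus one final model step to produce the answer.\n",
    "const toRecursionLimit = (maxIterations: number) => 2 * maxIterations + 1;\n",
    "\n",
    "console.log(toRecursionLimit(2)); // 5\n",
    "```"
   ]
  },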
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "1cca9d11",
   "metadata": {
    "lines_to_next_cell": 2
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[32m[chain/start]\u001b[39m [\u001b[90m\u001b[1m1:chain:AgentExecutor\u001b[22m\u001b[39m] Entering Chain run with input: {\n",
      "  \"input\": \"what is the value of magic_function(3)?\"\n",
      "}\n",
      "\u001b[32m[chain/start]\u001b[39m [\u001b[90m1:chain:AgentExecutor > \u001b[1m2:chain:ToolCallingAgent\u001b[22m\u001b[39m] Entering Chain run with input: {\n",
      "  \"input\": \"what is the value of magic_function(3)?\",\n",
      "  \"steps\": []\n",
      "}\n",
      "\u001b[32m[chain/start]\u001b[39m [\u001b[90m1:chain:AgentExecutor > 2:chain:ToolCallingAgent > \u001b[1m3:chain:RunnableAssign\u001b[22m\u001b[39m] Entering Chain run with input: {\n",
      "  \"input\": \"\"\n",
      "}\n",
      "\u001b[32m[chain/start]\u001b[39m [\u001b[90m1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 3:chain:RunnableAssign > \u001b[1m4:chain:RunnableMap\u001b[22m\u001b[39m] Entering Chain run with input: {\n",
      "  \"input\": \"\"\n",
      "}\n",
      "\u001b[32m[chain/start]\u001b[39m [\u001b[90m1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 3:chain:RunnableAssign > 4:chain:RunnableMap > \u001b[1m5:chain:RunnableLambda\u001b[22m\u001b[39m] Entering Chain run with input: {\n",
      "  \"input\": \"\"\n",
      "}\n",
      "\u001b[36m[chain/end]\u001b[39m [\u001b[90m1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 3:chain:RunnableAssign > 4:chain:RunnableMap > \u001b[1m5:chain:RunnableLambda\u001b[22m\u001b[39m] [0ms] Exiting Chain run with output: {\n",
      "  \"output\": []\n",
      "}\n",
      "\u001b[36m[chain/end]\u001b[39m [\u001b[90m1:chain:AgentExecutor > 2:chain:ToolCallingAgent > 3:chain:RunnableAssign > \u001b[1m4:chain:RunnableMap\u001b[22m\u001b[39m] [1ms] Exiting Chain run with output: {\n",
      "  \"agent_scratchpad\": []\n",
      "}\n",
      "\u001b[36m[chain/end]\u001b[39m [\u001b[90m1:chain:AgentExecutor > 2:chain:ToolCallingAgent > \u001b[1m3:chain:RunnableAssign\u001b[22m\u001b[39m] [1ms] Exiting Chain run with output: {\n",
      "  \"input\": \"what is the value of magic_function(3)?\",\n",
      "  \"steps\": [],\n",
      "  \"agent_scratchpad\": []\n",
      "}\n",
      "\u001b[32m[chain/start]\u001b[39m [\u001b[90m1:chain:AgentExecutor > 2:chain:ToolCallingAgent > \u001b[1m6:prompt:ChatPromptTemplate\u001b[22m\u001b[39m] Entering Chain run with input: {\n",
      "  \"input\": \"what is the value of magic_function(3)?\",\n",
      "  \"steps\": [],\n",
      "  \"agent_scratchpad\": []\n",
      "}\n",
      "\u001b[36m[chain/end]\u001b[39m [\u001b[90m1:chain:AgentExecutor > 2:chain:ToolCallingAgent > \u001b[1m6:prompt:ChatPromptTemplate\u001b[22m\u001b[39m] [0ms] Exiting Chain run with output: {\n",
      "  \"lc\": 1,\n",
      "  \"type\": \"constructor\",\n",
      "  \"id\": [\n",
      "    \"langchain_core\",\n",
      "    \"prompt_values\",\n",
      "    \"ChatPromptValue\"\n",
      "  ],\n",
      "  \"kwargs\": {\n",
      "    \"messages\": [\n",
      "      {\n",
      "        \"lc\": 1,\n",
      "        \"type\": \"constructor\",\n",
      "        \"id\": [\n",
      "          \"langchain_core\",\n",
      "          \"messages\",\n",
      "          \"SystemMessage\"\n",
      "        ],\n",
      "        \"kwargs\": {\n",
      "          \"content\": \"You are a helpful assistant. Respond only in Spanish.\",\n",
      "          \"additional_kwargs\": {},\n",
      "          \"response_metadata\": {}\n",
      "        }\n",
      "      },\n",
      "      {\n",
      "        \"lc\": 1,\n",
      "        \"type\": \"constructor\",\n",
      "        \"id\": [\n",
      "          \"langchain_core\",\n",
      "          \"messages\",\n",
      "          \"HumanMessage\"\n",
      "        ],\n",
      "        \"kwargs\": {\n",
      "          \"content\": \"what is the value of magic_function(3)?\",\n",
      "          \"additional_kwargs\": {},\n",
      "          \"response_metadata\": {}\n",
      "        }\n",
      "      }\n",
      "    ]\n",
      "  }\n",
      "}\n",
      "\u001b[32m[llm/start]\u001b[39m [\u001b[90m1:chain:AgentExecutor > 2:chain:ToolCallingAgent > \u001b[1m7:llm:ChatAnthropic\u001b[22m\u001b[39m] Entering LLM run with input: {\n",
      "  \"messages\": [\n",
      "    [\n",
      "      {\n",
      "        \"lc\": 1,\n",
      "        \"type\": \"constructor\",\n",
      "        \"id\": [\n",
      "          \"langchain_core\",\n",
      "          \"messages\",\n",
      "          \"SystemMessage\"\n",
      "        ],\n",
      "        \"kwargs\": {\n",
      "          \"content\": \"You are a helpful assistant. Respond only in Spanish.\",\n",
      "          \"additional_kwargs\": {},\n",
      "          \"response_metadata\": {}\n",
      "        }\n",
      "      },\n",
      "      {\n",
      "        \"lc\": 1,\n",
      "        \"type\": \"constructor\",\n",
      "        \"id\": [\n",
      "          \"langchain_core\",\n",
      "          \"messages\",\n",
      "          \"HumanMessage\"\n",
      "        ],\n",
      "        \"kwargs\": {\n",
      "          \"content\": \"what is the value of magic_function(3)?\",\n",
      "          \"additional_kwargs\": {},\n",
      "          \"response_metadata\": {}\n",
      "        }\n",
      "      }\n",
      "    ]\n",
      "  ]\n",
      "}\n",
      "\u001b[36m[llm/end]\u001b[39m [\u001b[90m1:chain:AgentExecutor > 2:chain:ToolCallingAgent > \u001b[1m7:llm:ChatAnthropic\u001b[22m\u001b[39m] [1.56s] Exiting LLM run with output: {\n",
      "  \"generations\": [\n",
      "    [\n",
      "      {\n",
      "        \"text\": \"Lo siento, pero la función \\\"magic_function\\\" espera un parámetro de tipo \\\"string\\\", no un número entero. Por favor, proporciona una entrada de tipo cadena de texto para que pueda aplicar la función mágica.\",\n",
      "        \"message\": {\n",
      "          \"lc\": 1,\n",
      "          \"type\": \"constructor\",\n",
      "          \"id\": [\n",
      "            \"langchain_core\",\n",
      "            \"messages\",\n",
      "            \"AIMessageChunk\"\n",
      "          ],\n",
      "          \"kwargs\": {\n",
      "            \"content\": \"Lo siento, pero la función \\\"magic_function\\\" espera un parámetro de tipo \\\"string\\\", no un número entero. Por favor, proporciona una entrada de tipo cadena de texto para que pueda aplicar la función mágica.\",\n",
      "            \"additional_kwargs\": {\n",
      "              \"id\": \"msg_011b4GnLtiCRnCzZiqUBAZeH\",\n",
      "              \"type\": \"message\",\n",
      "              \"role\": \"assistant\",\n",
      "              \"model\": \"claude-3-haiku-20240307\",\n",
      "              \"stop_reason\": \"end_turn\",\n",
      "              \"stop_sequence\": null,\n",
      "              \"usage\": {\n",
      "                \"input_tokens\": 378,\n",
      "                \"output_tokens\": 59\n",
      "              }\n",
      "            },\n",
      "            \"tool_call_chunks\": [],\n",
      "            \"tool_calls\": [],\n",
      "            \"invalid_tool_calls\": [],\n",
      "            \"response_metadata\": {}\n",
      "          }\n",
      "        }\n",
      "      }\n",
      "    ]\n",
      "  ]\n",
      "}\n",
      "\u001b[32m[chain/start]\u001b[39m [\u001b[90m1:chain:AgentExecutor > 2:chain:ToolCallingAgent > \u001b[1m8:parser:ToolCallingAgentOutputParser\u001b[22m\u001b[39m] Entering Chain run with input: {\n",
      "  \"lc\": 1,\n",
      "  \"type\": \"constructor\",\n",
      "  \"id\": [\n",
      "    \"langchain_core\",\n",
      "    \"messages\",\n",
      "    \"AIMessageChunk\"\n",
      "  ],\n",
      "  \"kwargs\": {\n",
      "    \"content\": \"Lo siento, pero la función \\\"magic_function\\\" espera un parámetro de tipo \\\"string\\\", no un número entero. Por favor, proporciona una entrada de tipo cadena de texto para que pueda aplicar la función mágica.\",\n",
      "    \"additional_kwargs\": {\n",
      "      \"id\": \"msg_011b4GnLtiCRnCzZiqUBAZeH\",\n",
      "      \"type\": \"message\",\n",
      "      \"role\": \"assistant\",\n",
      "      \"model\": \"claude-3-haiku-20240307\",\n",
      "      \"stop_reason\": \"end_turn\",\n",
      "      \"stop_sequence\": null,\n",
      "      \"usage\": {\n",
      "        \"input_tokens\": 378,\n",
      "        \"output_tokens\": 59\n",
      "      }\n",
      "    },\n",
      "    \"tool_call_chunks\": [],\n",
      "    \"tool_calls\": [],\n",
      "    \"invalid_tool_calls\": [],\n",
      "    \"response_metadata\": {}\n",
      "  }\n",
      "}\n",
      "\u001b[36m[chain/end]\u001b[39m [\u001b[90m1:chain:AgentExecutor > 2:chain:ToolCallingAgent > \u001b[1m8:parser:ToolCallingAgentOutputParser\u001b[22m\u001b[39m] [0ms] Exiting Chain run with output: {\n",
      "  \"returnValues\": {\n",
      "    \"output\": \"Lo siento, pero la función \\\"magic_function\\\" espera un parámetro de tipo \\\"string\\\", no un número entero. Por favor, proporciona una entrada de tipo cadena de texto para que pueda aplicar la función mágica.\"\n",
      "  },\n",
      "  \"log\": \"Lo siento, pero la función \\\"magic_function\\\" espera un parámetro de tipo \\\"string\\\", no un número entero. Por favor, proporciona una entrada de tipo cadena de texto para que pueda aplicar la función mágica.\"\n",
      "}\n",
      "\u001b[36m[chain/end]\u001b[39m [\u001b[90m1:chain:AgentExecutor > \u001b[1m2:chain:ToolCallingAgent\u001b[22m\u001b[39m] [1.56s] Exiting Chain run with output: {\n",
      "  \"returnValues\": {\n",
      "    \"output\": \"Lo siento, pero la función \\\"magic_function\\\" espera un parámetro de tipo \\\"string\\\", no un número entero. Por favor, proporciona una entrada de tipo cadena de texto para que pueda aplicar la función mágica.\"\n",
      "  },\n",
      "  \"log\": \"Lo siento, pero la función \\\"magic_function\\\" espera un parámetro de tipo \\\"string\\\", no un número entero. Por favor, proporciona una entrada de tipo cadena de texto para que pueda aplicar la función mágica.\"\n",
      "}\n",
      "\u001b[36m[chain/end]\u001b[39m [\u001b[90m\u001b[1m1:chain:AgentExecutor\u001b[22m\u001b[39m] [1.56s] Exiting Chain run with output: {\n",
      "  \"input\": \"what is the value of magic_function(3)?\",\n",
      "  \"output\": \"Lo siento, pero la función \\\"magic_function\\\" espera un parámetro de tipo \\\"string\\\", no un número entero. Por favor, proporciona una entrada de tipo cadena de texto para que pueda aplicar la función mágica.\"\n",
      "}\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{\n",
       "  input: \u001b[32m\"what is the value of magic_function(3)?\"\u001b[39m,\n",
       "  output: \u001b[32m'Lo siento, pero la función \"magic_function\" espera un parámetro de tipo \"string\", no un número enter'\u001b[39m... 103 more characters\n",
       "}"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "const badMagicTool = new DynamicStructuredTool({\n",
    "  name: \"magic_function\",\n",
    "  description: \"Applies a magic function to an input.\",\n",
    "  schema: z.object({\n",
    "    input: z.string(),\n",
    "  }),\n",
    "  func: async ({ input }) => {\n",
    "    return \"Sorry, there was an error. Please try again.\";\n",
    "  },\n",
    "});\n",
    "\n",
    "const badTools = [badMagicTool];\n",
    "\n",
    "const spanishAgentExecutorWithMaxIterations = new AgentExecutor({\n",
    "  agent: createToolCallingAgent({\n",
    "    llm,\n",
    "    tools: badTools,\n",
    "    prompt: spanishPrompt,\n",
    "  }),\n",
    "  tools: badTools,\n",
    "  verbose: true,\n",
    "  maxIterations: 2,\n",
    "});\n",
    "\n",
    "await spanishAgentExecutorWithMaxIterations.invoke({ input: query });\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "2f5e7d58",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Recursion limit reached.\n"
     ]
    }
   ],
   "source": [
    "import { GraphRecursionError } from \"@langchain/langgraph\";\n",
    "\n",
    "const RECURSION_LIMIT = 2 * 2 + 1;\n",
    "\n",
    "const appWithBadTools = createReactAgent({ llm, tools: badTools });\n",
    "\n",
    "try {\n",
    "  await appWithBadTools.invoke({\n",
    "    messages: [\n",
    "      new HumanMessage(query)\n",
    "    ]\n",
    "  }, {\n",
    "    recursionLimit: RECURSION_LIMIT,\n",
    "  });\n",
    "} catch (e) {\n",
    "  if (e instanceof GraphRecursionError) {\n",
    "    console.log(\"Recursion limit reached.\");\n",
    "  } else {\n",
    "    throw e;\n",
    "  }\n",
    "}"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Deno",
   "language": "typescript",
   "name": "deno"
  },
  "language_info": {
   "file_extension": ".ts",
   "mimetype": "text/x.typescript",
   "name": "typescript",
   "nb_converter": "script",
   "pygments_lexer": "typescript",
   "version": "5.3.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
