{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# INVALID_TOOL_RESULTS\n",
        "\n",
        "You are passing too many, too few, or mismatched [`ToolMessages`](https://api.js.langchain.com/classes/_langchain_core.messages_tool.ToolMessage.html) to a model.\n",
        "\n",
        "When [using a model to call tools](/docs/concepts/tool_calling), the [`AIMessage`](https://api.js.langchain.com/classes/_langchain_core.messages.AIMessage.html)\n",
        "the model responds with will contain a `tool_calls` array. To continue the flow, the next messages you pass back to the model must\n",
        "contain exactly one `ToolMessage` for each item in that array, holding the result of the corresponding tool call. Each `ToolMessage` must have a `tool_call_id` field\n",
        "that matches one of the `tool_calls` on the `AIMessage`.\n",
        "\n",
        "For example, given the following response from a model:"
      ]
    },
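    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "This pairing invariant can be sketched as a small standalone check (the helper below is illustrative only, not part of LangChain):\n",
        "\n",
        "```typescript\n",
        "// Illustrative sketch: verify one tool response per tool call, matched by id.\n",
        "type ToolCallLike = { id: string };\n",
        "type ToolMessageLike = { tool_call_id: string };\n",
        "\n",
        "function validateToolResults(\n",
        "  toolCalls: ToolCallLike[],\n",
        "  toolMessages: ToolMessageLike[]\n",
        "): boolean {\n",
        "  // Same count, no duplicate responses, and every call id answered.\n",
        "  if (toolCalls.length !== toolMessages.length) return false;\n",
        "  const responseIds = new Set(toolMessages.map((m) => m.tool_call_id));\n",
        "  if (responseIds.size !== toolMessages.length) return false;\n",
        "  return toolCalls.every((c) => responseIds.has(c.id));\n",
        "}\n",
        "```"
      ]
    },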
    {
      "cell_type": "code",
      "execution_count": 1,
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "AIMessage {\n",
            "  \"id\": \"chatcmpl-AIgT1xUd6lkWAutThiiBsqjq7Ykj1\",\n",
            "  \"content\": \"\",\n",
            "  \"additional_kwargs\": {\n",
            "    \"tool_calls\": [\n",
            "      {\n",
            "        \"id\": \"call_BknYpnY7xiARM17TPYqL7luj\",\n",
            "        \"type\": \"function\",\n",
            "        \"function\": \"[Object]\"\n",
            "      },\n",
            "      {\n",
            "        \"id\": \"call_EHf8MIcTdsLCZcFVlcH4hxJw\",\n",
            "        \"type\": \"function\",\n",
            "        \"function\": \"[Object]\"\n",
            "      }\n",
            "    ]\n",
            "  },\n",
            "  \"response_metadata\": {\n",
            "    \"tokenUsage\": {\n",
            "      \"promptTokens\": 42,\n",
            "      \"completionTokens\": 37,\n",
            "      \"totalTokens\": 79\n",
            "    },\n",
            "    \"finish_reason\": \"tool_calls\",\n",
            "    \"usage\": {\n",
            "      \"prompt_tokens\": 42,\n",
            "      \"completion_tokens\": 37,\n",
            "      \"total_tokens\": 79,\n",
            "      \"prompt_tokens_details\": {\n",
            "        \"cached_tokens\": 0\n",
            "      },\n",
            "      \"completion_tokens_details\": {\n",
            "        \"reasoning_tokens\": 0\n",
            "      }\n",
            "    },\n",
            "    \"system_fingerprint\": \"fp_e2bde53e6e\"\n",
            "  },\n",
            "  \"tool_calls\": [\n",
            "    {\n",
            "      \"name\": \"foo\",\n",
            "      \"args\": {},\n",
            "      \"type\": \"tool_call\",\n",
            "      \"id\": \"call_BknYpnY7xiARM17TPYqL7luj\"\n",
            "    },\n",
            "    {\n",
            "      \"name\": \"foo\",\n",
            "      \"args\": {},\n",
            "      \"type\": \"tool_call\",\n",
            "      \"id\": \"call_EHf8MIcTdsLCZcFVlcH4hxJw\"\n",
            "    }\n",
            "  ],\n",
            "  \"invalid_tool_calls\": [],\n",
            "  \"usage_metadata\": {\n",
            "    \"output_tokens\": 37,\n",
            "    \"input_tokens\": 42,\n",
            "    \"total_tokens\": 79,\n",
            "    \"input_token_details\": {\n",
            "      \"cache_read\": 0\n",
            "    },\n",
            "    \"output_token_details\": {\n",
            "      \"reasoning\": 0\n",
            "    }\n",
            "  }\n",
            "}\n"
          ]
        }
      ],
      "source": [
        "import { z } from \"zod\";\n",
        "import { tool } from \"@langchain/core/tools\";\n",
        "import { ChatOpenAI } from \"@langchain/openai\";\n",
        "import { BaseMessageLike } from \"@langchain/core/messages\";\n",
        "\n",
        "const model = new ChatOpenAI({\n",
        "  model: \"gpt-4o-mini\",\n",
        "});\n",
        "\n",
        "const dummyTool = tool(\n",
        "  async () => {\n",
        "    return \"action complete!\";\n",
        "  },\n",
        "  {\n",
        "    name: \"foo\",\n",
        "    schema: z.object({}),\n",
        "  }\n",
        ");\n",
        "\n",
        "const modelWithTools = model.bindTools([dummyTool]);\n",
        "\n",
        "const chatHistory: BaseMessageLike[] = [\n",
        "  {\n",
        "    role: \"user\",\n",
        "    content: `Call tool \"foo\" twice with no arguments`,\n",
        "  },\n",
        "];\n",
        "\n",
        "const responseMessage = await modelWithTools.invoke(chatHistory);\n",
        "\n",
        "console.log(responseMessage);"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Calling the model again with only one tool response will result in an error, because the model made two tool calls:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "metadata": {},
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "BadRequestError: 400 An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'. The following tool_call_ids did not have response messages: call_EHf8MIcTdsLCZcFVlcH4hxJw\n",
            "    at APIError.generate (/Users/jacoblee/langchain/langchainjs/libs/langchain-openai/node_modules/openai/error.js:45:20)\n",
            "    at OpenAI.makeStatusError (/Users/jacoblee/langchain/langchainjs/libs/langchain-openai/node_modules/openai/core.js:291:33)\n",
            "    at OpenAI.makeRequest (/Users/jacoblee/langchain/langchainjs/libs/langchain-openai/node_modules/openai/core.js:335:30)\n",
            "    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n",
            "    at async /Users/jacoblee/langchain/langchainjs/libs/langchain-openai/dist/chat_models.cjs:1441:29\n",
            "    at async RetryOperation._fn (/Users/jacoblee/langchain/langchainjs/node_modules/p-retry/index.js:50:12) {\n",
            "  status: 400,\n",
            "  headers: {\n",
            "    'access-control-expose-headers': 'X-Request-ID',\n",
            "    'alt-svc': 'h3=\":443\"; ma=86400',\n",
            "    'cf-cache-status': 'DYNAMIC',\n",
            "    'cf-ray': '8d31d4d95e2a0c96-EWR',\n",
            "    connection: 'keep-alive',\n",
            "    'content-length': '315',\n",
            "    'content-type': 'application/json',\n",
            "    date: 'Tue, 15 Oct 2024 18:21:53 GMT',\n",
            "    'openai-organization': 'langchain',\n",
            "    'openai-processing-ms': '16',\n",
            "    'openai-version': '2020-10-01',\n",
            "    server: 'cloudflare',\n",
            "    'set-cookie': '__cf_bm=e5.GX1bHiMVgr76YSvAKuECCGG7X_RXF0jDGSMXFGfU-1729016513-1.0.1.1-ZBYeVqX.M6jSNJB.wS696fEhX7V.es._M0WcWtQ9Qx8doEA5qMVKNE5iX6i7UKyPCg2GvDfM.MoDwRCXKMSkEA; path=/; expires=Tue, 15-Oct-24 18:51:53 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None, _cfuvid=J8gS08GodUA9hRTYuElen0YOCzMO3d4LW0ZT0k_kyj4-1729016513560-0.0.1.1-604800000; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None',\n",
            "    'strict-transport-security': 'max-age=31536000; includeSubDomains; preload',\n",
            "    'x-content-type-options': 'nosniff',\n",
            "    'x-ratelimit-limit-requests': '30000',\n",
            "    'x-ratelimit-limit-tokens': '150000000',\n",
            "    'x-ratelimit-remaining-requests': '29999',\n",
            "    'x-ratelimit-remaining-tokens': '149999967',\n",
            "    'x-ratelimit-reset-requests': '2ms',\n",
            "    'x-ratelimit-reset-tokens': '0s',\n",
            "    'x-request-id': 'req_f810058e7f047fafcb713575c4419161'\n",
            "  },\n",
            "  request_id: 'req_f810058e7f047fafcb713575c4419161',\n",
            "  error: {\n",
            "    message: \"An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'. The following tool_call_ids did not have response messages: call_EHf8MIcTdsLCZcFVlcH4hxJw\",\n",
            "    type: 'invalid_request_error',\n",
            "    param: 'messages',\n",
            "    code: null\n",
            "  },\n",
            "  code: null,\n",
            "  param: 'messages',\n",
            "  type: 'invalid_request_error',\n",
            "  attemptNumber: 1,\n",
            "  retriesLeft: 6\n",
            "}\n"
          ]
        }
      ],
      "source": [
        "const toolResponse1 = await dummyTool.invoke(responseMessage.tool_calls![0]);\n",
        "\n",
        "chatHistory.push(responseMessage);\n",
        "chatHistory.push(toolResponse1);\n",
        "\n",
        "await modelWithTools.invoke(chatHistory);"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "If we add a second response, the call will succeed as expected because we now have one tool response per tool call:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 3,
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "AIMessage {\n",
            "  \"id\": \"chatcmpl-AIgTPDBm1epnnLHx0tPFTgpsf8Ay6\",\n",
            "  \"content\": \"The tool \\\"foo\\\" was called twice, and both times returned the result: \\\"action complete!\\\".\",\n",
            "  \"additional_kwargs\": {},\n",
            "  \"response_metadata\": {\n",
            "    \"tokenUsage\": {\n",
            "      \"promptTokens\": 98,\n",
            "      \"completionTokens\": 21,\n",
            "      \"totalTokens\": 119\n",
            "    },\n",
            "    \"finish_reason\": \"stop\",\n",
            "    \"usage\": {\n",
            "      \"prompt_tokens\": 98,\n",
            "      \"completion_tokens\": 21,\n",
            "      \"total_tokens\": 119,\n",
            "      \"prompt_tokens_details\": {\n",
            "        \"cached_tokens\": 0\n",
            "      },\n",
            "      \"completion_tokens_details\": {\n",
            "        \"reasoning_tokens\": 0\n",
            "      }\n",
            "    },\n",
            "    \"system_fingerprint\": \"fp_e2bde53e6e\"\n",
            "  },\n",
            "  \"tool_calls\": [],\n",
            "  \"invalid_tool_calls\": [],\n",
            "  \"usage_metadata\": {\n",
            "    \"output_tokens\": 21,\n",
            "    \"input_tokens\": 98,\n",
            "    \"total_tokens\": 119,\n",
            "    \"input_token_details\": {\n",
            "      \"cache_read\": 0\n",
            "    },\n",
            "    \"output_token_details\": {\n",
            "      \"reasoning\": 0\n",
            "    }\n",
            "  }\n",
            "}\n"
          ]
        }
      ],
      "source": [
        "const toolResponse2 = await dummyTool.invoke(responseMessage.tool_calls![1]);\n",
        "\n",
        "chatHistory.push(toolResponse2);\n",
        "\n",
        "await modelWithTools.invoke(chatHistory);"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "But if we add a duplicate, extra tool response, the call will fail again:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 4,
      "metadata": {},
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "BadRequestError: 400 Invalid parameter: messages with role 'tool' must be a response to a preceeding message with 'tool_calls'.\n",
            "    at APIError.generate (/Users/jacoblee/langchain/langchainjs/libs/langchain-openai/node_modules/openai/error.js:45:20)\n",
            "    at OpenAI.makeStatusError (/Users/jacoblee/langchain/langchainjs/libs/langchain-openai/node_modules/openai/core.js:291:33)\n",
            "    at OpenAI.makeRequest (/Users/jacoblee/langchain/langchainjs/libs/langchain-openai/node_modules/openai/core.js:335:30)\n",
            "    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n",
            "    at async /Users/jacoblee/langchain/langchainjs/libs/langchain-openai/dist/chat_models.cjs:1441:29\n",
            "    at async RetryOperation._fn (/Users/jacoblee/langchain/langchainjs/node_modules/p-retry/index.js:50:12) {\n",
            "  status: 400,\n",
            "  headers: {\n",
            "    'access-control-expose-headers': 'X-Request-ID',\n",
            "    'alt-svc': 'h3=\":443\"; ma=86400',\n",
            "    'cf-cache-status': 'DYNAMIC',\n",
            "    'cf-ray': '8d31d57dff5e0f3b-EWR',\n",
            "    connection: 'keep-alive',\n",
            "    'content-length': '233',\n",
            "    'content-type': 'application/json',\n",
            "    date: 'Tue, 15 Oct 2024 18:22:19 GMT',\n",
            "    'openai-organization': 'langchain',\n",
            "    'openai-processing-ms': '36',\n",
            "    'openai-version': '2020-10-01',\n",
            "    server: 'cloudflare',\n",
            "    'set-cookie': '__cf_bm=QUsNlSGxVeIbscI0rm2YR3U9aUFLNxxqh1i_3aYBGN4-1729016539-1.0.1.1-sKRUvxHkQXvlb5LaqASkGtIwPMWUF5x9kF0ut8NLP6e0FVKEhdIEkEe6lYA1toW45JGTwp98xahaX7wt9CO4AA; path=/; expires=Tue, 15-Oct-24 18:52:19 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None, _cfuvid=J6fN8u8HUieCeyLDI59mi_0r_W0DgiO207wEtvrmT9Y-1729016539919-0.0.1.1-604800000; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None',\n",
            "    'strict-transport-security': 'max-age=31536000; includeSubDomains; preload',\n",
            "    'x-content-type-options': 'nosniff',\n",
            "    'x-ratelimit-limit-requests': '30000',\n",
            "    'x-ratelimit-limit-tokens': '150000000',\n",
            "    'x-ratelimit-remaining-requests': '29999',\n",
            "    'x-ratelimit-remaining-tokens': '149999956',\n",
            "    'x-ratelimit-reset-requests': '2ms',\n",
            "    'x-ratelimit-reset-tokens': '0s',\n",
            "    'x-request-id': 'req_aebfebbb9af2feaf2e9683948e431676'\n",
            "  },\n",
            "  request_id: 'req_aebfebbb9af2feaf2e9683948e431676',\n",
            "  error: {\n",
            "    message: \"Invalid parameter: messages with role 'tool' must be a response to a preceeding message with 'tool_calls'.\",\n",
            "    type: 'invalid_request_error',\n",
            "    param: 'messages.[4].role',\n",
            "    code: null\n",
            "  },\n",
            "  code: null,\n",
            "  param: 'messages.[4].role',\n",
            "  type: 'invalid_request_error',\n",
            "  attemptNumber: 1,\n",
            "  retriesLeft: 6\n",
            "}\n"
          ]
        }
      ],
      "source": [
        "const duplicateToolResponse2 = await dummyTool.invoke(responseMessage.tool_calls![1]);\n",
        "\n",
        "chatHistory.push(duplicateToolResponse2);\n",
        "\n",
        "await modelWithTools.invoke(chatHistory);"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "You should additionally not pass `ToolMessages` back to a model if they are not preceded by an `AIMessage` with tool calls. For example, this will fail:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 5,
      "metadata": {},
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "BadRequestError: 400 Invalid parameter: messages with role 'tool' must be a response to a preceeding message with 'tool_calls'.\n",
            "    at APIError.generate (/Users/jacoblee/langchain/langchainjs/libs/langchain-openai/node_modules/openai/error.js:45:20)\n",
            "    at OpenAI.makeStatusError (/Users/jacoblee/langchain/langchainjs/libs/langchain-openai/node_modules/openai/core.js:291:33)\n",
            "    at OpenAI.makeRequest (/Users/jacoblee/langchain/langchainjs/libs/langchain-openai/node_modules/openai/core.js:335:30)\n",
            "    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n",
            "    at async /Users/jacoblee/langchain/langchainjs/libs/langchain-openai/dist/chat_models.cjs:1441:29\n",
            "    at async RetryOperation._fn (/Users/jacoblee/langchain/langchainjs/node_modules/p-retry/index.js:50:12) {\n",
            "  status: 400,\n",
            "  headers: {\n",
            "    'access-control-expose-headers': 'X-Request-ID',\n",
            "    'alt-svc': 'h3=\":443\"; ma=86400',\n",
            "    'cf-cache-status': 'DYNAMIC',\n",
            "    'cf-ray': '8d31d5da7fba19aa-EWR',\n",
            "    connection: 'keep-alive',\n",
            "    'content-length': '233',\n",
            "    'content-type': 'application/json',\n",
            "    date: 'Tue, 15 Oct 2024 18:22:34 GMT',\n",
            "    'openai-organization': 'langchain',\n",
            "    'openai-processing-ms': '25',\n",
            "    'openai-version': '2020-10-01',\n",
            "    server: 'cloudflare',\n",
            "    'set-cookie': '__cf_bm=qK6.PWACr7IYuMafLpxumD4CrFnwHQiJn4TiGkrNTBk-1729016554-1.0.1.1-ECIk0cvh1wOfsK41a1Ce7npngsUDRRG93_yinP4.kVIWu1eX0CFG19iZ8yfGXedyPo6Wh1CKTGLk_3Qwrg.blA; path=/; expires=Tue, 15-Oct-24 18:52:34 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None, _cfuvid=IVTqysqHo4VUVJ.tVTcGg0rnXGWTbSSzX5mcUVrw8BU-1729016554732-0.0.1.1-604800000; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None',\n",
            "    'strict-transport-security': 'max-age=31536000; includeSubDomains; preload',\n",
            "    'x-content-type-options': 'nosniff',\n",
            "    'x-ratelimit-limit-requests': '30000',\n",
            "    'x-ratelimit-limit-tokens': '150000000',\n",
            "    'x-ratelimit-remaining-requests': '29999',\n",
            "    'x-ratelimit-remaining-tokens': '149999978',\n",
            "    'x-ratelimit-reset-requests': '2ms',\n",
            "    'x-ratelimit-reset-tokens': '0s',\n",
            "    'x-request-id': 'req_59339f8163ef5bd3f0308a212611dfea'\n",
            "  },\n",
            "  request_id: 'req_59339f8163ef5bd3f0308a212611dfea',\n",
            "  error: {\n",
            "    message: \"Invalid parameter: messages with role 'tool' must be a response to a preceeding message with 'tool_calls'.\",\n",
            "    type: 'invalid_request_error',\n",
            "    param: 'messages.[0].role',\n",
            "    code: null\n",
            "  },\n",
            "  code: null,\n",
            "  param: 'messages.[0].role',\n",
            "  type: 'invalid_request_error',\n",
            "  attemptNumber: 1,\n",
            "  retriesLeft: 6\n",
            "}\n"
          ]
        }
      ],
      "source": [
        "await modelWithTools.invoke([{\n",
        "  role: \"tool\",\n",
        "  content: \"action completed!\",\n",
        "  tool_call_id: \"dummy\",\n",
        "}])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "See [this guide](/docs/how_to/tool_results_pass_to_model/) for more details on tool calling.\n",
        "\n",
        "## Troubleshooting\n",
        "\n",
        "The following may help resolve this error:\n",
        "\n",
        "- If you are using a custom executor rather than a prebuilt one like LangGraph's [`ToolNode`](https://langchain-ai.github.io/langgraphjs/reference/classes/langgraph_prebuilt.ToolNode.html)\n",
        "  or the legacy LangChain [AgentExecutor](/docs/how_to/agent_executor), verify that you are invoking each requested tool and returning exactly one result per tool call.\n",
        "- If you are using [few-shot tool call examples](/docs/how_to/tools_few_shot) with messages that you manually create, and you want to simulate a failure,\n",
        "  you still need to pass back a `ToolMessage` whose content indicates that failure.\n"
      ]
    }
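,
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "If you are writing your own executor, the key step can be sketched as follows (the `toolsByName` lookup and message shapes here are illustrative, not a specific LangChain API):\n",
        "\n",
        "```typescript\n",
        "// Illustrative sketch: always produce exactly one tool message per tool call,\n",
        "// even when a tool throws, so the ids line up with the AIMessage's tool_calls.\n",
        "type ToolCallLike = { id: string; name: string; args: Record<string, unknown> };\n",
        "type ToolFn = (args: Record<string, unknown>) => Promise<string>;\n",
        "\n",
        "async function executeToolCalls(\n",
        "  toolCalls: ToolCallLike[],\n",
        "  toolsByName: Record<string, ToolFn>\n",
        ") {\n",
        "  return Promise.all(\n",
        "    toolCalls.map(async (call) => {\n",
        "      let content: string;\n",
        "      try {\n",
        "        content = await toolsByName[call.name](call.args);\n",
        "      } catch (e) {\n",
        "        // Report the failure in the tool message rather than dropping it.\n",
        "        content = `Tool call failed: ${e}`;\n",
        "      }\n",
        "      return { role: 'tool' as const, content, tool_call_id: call.id };\n",
        "    })\n",
        "  );\n",
        "}\n",
        "```"
      ]
    }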
  ],
  "metadata": {
    "kernelspec": {
      "display_name": "TypeScript",
      "language": "typescript",
      "name": "tslab"
    },
    "language_info": {
      "codemirror_mode": {
        "mode": "typescript",
        "name": "javascript",
        "typescript": true
      },
      "file_extension": ".ts",
      "mimetype": "text/typescript",
      "name": "typescript",
      "version": "3.7.2"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 2
}
