{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "942222c8-b56a-4ccf-a6ec-3b8f051d8a53",
   "metadata": {},
   "source": [
    "# 构建一个聊天机器人（Chatbot）\n",
    "\n",
    "> **注意**\n",
    "> 本教程之前使用的是 `RunnableWithMessageHistory` 抽象接口。你可以在 v0.2 版本的文档中查看该版本的内容。\n",
    "\n",
    "从 **LangChain v0.3** 发布起，我们建议 LangChain 用户使用 **LangGraph 的持久化功能**，将记忆机制集成到新的 LangChain 应用程序中。\n",
    "\n",
    "如果你的代码已经依赖于 `RunnableWithMessageHistory` 或 `BaseChatMessageHistory`，则**无需进行任何更改**。我们近期不计划弃用这些功能，因为它们在简单的聊天应用中表现良好，任何使用 `RunnableWithMessageHistory` 的代码都将按预期继续运行。\n",
    "\n",
    "\n",
    "### 概述（Overview）\n",
    "\n",
    "我们将通过一个示例来讲解如何设计和实现一个由大语言模型（LLM）驱动的聊天机器人。这个聊天机器人将能够与用户进行对话，并记住与聊天模型之前的交互记录。\n",
    "\n",
    "请注意，我们构建的这个聊天机器人将**仅使用语言模型来进行对话**。还有一些其他相关概念你可能也感兴趣：\n",
    "\n",
    "- **对话式 RAG（Conversational RAG）**：让你的聊天机器人基于外部数据源提供回答  \n",
    "- **智能代理（Agents）**：构建一个可以执行操作的聊天机器人  \n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "9c4ee376-3d57-43a0-b00f-1d7ea694a50b",
   "metadata": {},
   "outputs": [],
   "source": [
    "import getpass\n",
    "import os\n",
    "\n",
    "try:\n",
    "    # load environment variables from .env file (requires `python-dotenv`)\n",
    "    from dotenv import load_dotenv\n",
    "\n",
    "    _ = load_dotenv()\n",
    "except ImportError:\n",
    "    pass\n",
    "\n",
    "if not os.environ.get(\"DASHSCOPE_API_KEY\"):\n",
    "  os.environ[\"DASHSCOPE_API_KEY\"] = getpass.getpass(\"Enter API key for OpenAI: \")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "247e7dd9-c111-4071-8965-670914b96072",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_community.chat_models.tongyi import ChatTongyi\n",
    "\n",
    "model = ChatTongyi(\n",
    "    streaming=True,\n",
    "    name=\"qwen-max\"\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a3b529d3-98ec-422f-8b74-82b0539b6e56",
   "metadata": {},
   "source": [
    "我们首先直接使用模型。ChatModels 是 LangChain 中 “Runnables” 的实例，这意味着它们提供了一个标准的接口用于与之交互。要简单地调用模型，我们可以将一组消息列表传递给 `.invoke` 方法。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "327a4dcf-708e-4189-8f63-a0baed87e10a",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content='你好张三，很高兴认识你！有什么我可以帮助你的？', additional_kwargs={}, response_metadata={'model_name': 'qwen-turbo', 'finish_reason': 'stop', 'request_id': 'a76d6693-5d4f-9e40-8954-fa578691f1e3', 'token_usage': {'input_tokens': 14, 'output_tokens': 13, 'total_tokens': 27, 'prompt_tokens_details': {'cached_tokens': 0}}}, id='run--37ca05cd-f8e0-48f3-b3b8-c6bdf03663ee-0')"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_core.messages import HumanMessage\n",
    "\n",
    "model.invoke([HumanMessage(content=\"你好 我是张三\")])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4b0fc28b-be7d-4c77-8b7e-b6516d1a3913",
   "metadata": {},
   "source": [
    "该模型本身没有任何状态概念。例如，如果你问一个后续问题："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "89c1a596-9008-4e87-8255-104865ffe974",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content='你还没有告诉我你的名字呢。你可以告诉我你的名字，我会记住的！', additional_kwargs={}, response_metadata={'model_name': 'qwen-turbo', 'finish_reason': 'stop', 'request_id': 'b27eacb7-8894-9b15-b9e7-66e27b739107', 'token_usage': {'input_tokens': 12, 'output_tokens': 16, 'total_tokens': 28, 'prompt_tokens_details': {'cached_tokens': 0}}}, id='run--16d13fcc-f4f2-4dad-b4cc-260f6ddf9e83-0')"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "model.invoke([HumanMessage(content=\"我叫什么?\")])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4fb13f9f-d27a-4e3f-8302-b054edf22797",
   "metadata": {},
   "source": [
    "为了解决这个问题，我们需要将整个对话历史记录传入模型。让我们看看这样做会发生什么："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "3b7ddc00-9f55-418c-b3ed-c5d1798b380b",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content='你说你叫王五呀！没错，刚才你已经介绍过了。😄 那么，王五同学，今天想聊些什么呢？', additional_kwargs={}, response_metadata={'model_name': 'qwen-turbo', 'finish_reason': 'stop', 'request_id': 'abf59717-775a-9c1f-8d75-339300b9cf1c', 'token_usage': {'input_tokens': 41, 'output_tokens': 31, 'total_tokens': 72, 'prompt_tokens_details': {'cached_tokens': 0}}}, id='run--a43ebff2-0ccc-442d-8ea8-b016e0f1e528-0')"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_core.messages import AIMessage\n",
    "\n",
    "model.invoke(\n",
    "    [\n",
    "        HumanMessage(content=\"你好 我是王五\"),\n",
    "        AIMessage(content=\"你好，王五！今天有什么可以帮到你吗？\"),\n",
    "        HumanMessage(content=\"我叫什么?\"),\n",
    "    ]\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5df3b824-74f1-4ba0-8667-df6234ae108b",
   "metadata": {},
   "source": [
    "现在我们可以看到，我们得到了不错的回复！\n",
    "\n",
    "这就是聊天机器人对话式交互能力的基本理念。那么，我们如何才能最好地实现这一点呢？"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f9411143-f24c-4754-b32a-cb83dd44f168",
   "metadata": {},
   "source": [
    "### 消息持久化（Message persistence）\n",
    "\n",
    "LangGraph 实现了一个内置的持久化层，这使它非常适合用于支持多轮对话的聊天应用。\n",
    "\n",
    "将我们的聊天模型封装在一个最小的 LangGraph 应用中，可以让我们自动持久化消息历史记录，从而简化多轮对话应用的开发。\n",
    "\n",
    "LangGraph 自带了一个简单的内存型检查点存储器（in-memory checkpointer），我们在下面使用的就是它。你可以在其文档中查看更多详细信息，包括如何使用不同的持久化后端（例如 SQLite 或 Postgres）。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "b61f983d-c7ff-45df-aec8-9a99795db781",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langgraph.checkpoint.memory import MemorySaver\n",
    "from langgraph.graph import START, MessagesState, StateGraph\n",
    "\n",
    "# Define a new graph\n",
    "workflow = StateGraph(state_schema=MessagesState)\n",
    "\n",
    "\n",
    "# Define the function that calls the model\n",
    "def call_model(state: MessagesState):\n",
    "    response = model.invoke(state[\"messages\"])\n",
    "    return {\"messages\": response}\n",
    "\n",
    "\n",
    "# Define the (single) node in the graph\n",
    "workflow.add_edge(START, \"model\")\n",
    "workflow.add_node(\"model\", call_model)\n",
    "\n",
    "# Add memory\n",
    "memory = MemorySaver()\n",
    "app = workflow.compile(checkpointer=memory)"
   ]
  },
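  {
   "cell_type": "markdown",
   "id": "3c9a1f2e-7b4d-4e8a-9c5f-2d6b8a0e4f11",
   "metadata": {},
   "source": [
    "As a sketch of swapping in a different persistence backend, a SQLite-backed checkpointer might look like the following (this assumes the separate `langgraph-checkpoint-sqlite` package; check the LangGraph persistence docs for the exact API in your version):\n",
    "\n",
    "```python\n",
    "from langgraph.checkpoint.sqlite import SqliteSaver\n",
    "\n",
    "# Checkpoints are written to a local SQLite file instead of process memory,\n",
    "# so conversation threads survive application restarts.\n",
    "with SqliteSaver.from_conn_string(\"checkpoints.db\") as memory:\n",
    "    app = workflow.compile(checkpointer=memory)\n",
    "```"
   ]
  },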
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "8bcac532-2091-49de-b5ad-e26c76342c46",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+-----------+  \n",
      "| __start__ |  \n",
      "+-----------+  \n",
      "      *        \n",
      "      *        \n",
      "      *        \n",
      "  +-------+    \n",
      "  | model |    \n",
      "  +-------+    \n",
      "      *        \n",
      "      *        \n",
      "      *        \n",
      " +---------+   \n",
      " | __end__ |   \n",
      " +---------+   \n"
     ]
    }
   ],
   "source": [
    "print(app.get_graph().draw_ascii())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "666bd376-fb63-4ce8-b8c0-8de347bcfafc",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAGoAAADqCAIAAADF80cYAAAAAXNSR0IArs4c6QAAFX1JREFUeJztnXl8FEW+wKun574zyYQck8lJkIQE4gSCYJQjkrBE2CDLraKyLODiQx/LuvrEg+fxWXEF3V1MvNbVqKw8EQkB1JVdAgIJkHAFEpKQ+5xJJnPPdPf0+2PYmMU5U9Nkwtb3L+iq6v7lO9Xd1VXdVRhN0wAxUlijHcDYBumDAumDAumDAumDAumDgg1ZvrvZZjZQNjNls1AUMTbaQDgH4wtxvggXy/Bx8XyYXWEja/ddv2RuumRuvGCSyNlSBYcvwvkiFoc7Nuoy4XDazE6rmTLoCPMgmTxZnDRJlJAuGsGuAtbX22b/xxe9hN05IVuaMkUsV3JGcNTQQd9HXKs21p0x8gSsWb+IVKp4ARUPQB9F0Me+7Gu5askpUEzMkY4o2tDl8klD5WFdUob43iVK/0v5q89qog6UdI6L59/7QAB7H1tQBH1sX5+2w174yxiBGPeniF/6dF2Or9/pmDIrLGu2PBhxhjRnvxu4cHxw0foYRRTXZ2bf+syD5Oc72nKLIlLvlAQvyJCm7ozxhzLt0qfUIqmPOujjXkk6nF8Xd2bmyv5z3AEAJmRL0u+SHSjpoEgfdcuHvtOH++VKztR5iqCGNwaYlq8Qy9mVR/q9Z/Omb1BLXK0y5q2KCnZsY4N5q6OuVBqMA6SXPN70Hf9KO3WegsPFGIhtDMDls+6cHVbxVZ+XPB71DWoJbZc9Y6aMmdjGBpm58p4Wu5cK6FHftWpTxkwZNjYew5iChYOMmbJr1UaPGTwlNJw3xk8cyWMgDLNmzeru7g601Oeff/7SSy8xExGInyhsqDF5SnWvz6QnrUYqPNp3uzGItLe3m0weA/VCbW0tA+HcQKniGfpJT+ev+w6rrmZboA/P/kPTdGlpaXl5eUtLS3Jy8vTp09evX3/27NkNGzYAAAoLC2fNmrVjx46Ghoa9e/dWVVV1d3cnJyc/8MADixYtAgDU19evXLly165dL774YmRkpEAgqK6uBgB8/fXXn376aWpqatADjlTxetvskjA3rtzrs5spgQS2K9ATpaWlH3300Zo1a5KTkzs7O//0pz/JZLJVq1a9+eabTz75ZFlZWVRUFADgjTfe6Onp+d3vfodhWGNj4/bt29VqdVZWFpfLBQC89957jzzyyOTJk9PS0h566KGUlJRt27YxFLBAgtstlNskD/qsTqF/z8wjoKamZtKkSatWrXL9Nzs72+Fw/DTba6+9ZrFYoqOjXXn27dt34sSJrKwsV+qMGTNWrFjBUIQ3IRDjdqvTbZJ7fU4njXOYau5lZGTs3r17+/btGo0mNzdXrVZ7iMFZWlr6ww8/tLa2urakpaUNpU6cOJGh8H4Kh8vy9PTmXp9AhGu73NSIoLB69WqJRHL06NFt27ax2ez58+c/8cQTYWFhw/NQFLVp0yaapjdt2jRt2jSRSLR69WpXEoZhAAA+H6qTPSAsRjIyzv3h3OsTStiWegtD0eA4vnjx4sWLFzc2NlZWVhYXF9tstldffXV4ntra2qtXrxYXF2s0GteWoZvyrX+rxGKghBL3lzIPtU+CW43uL5bwlJWVpaenJyYmJicnJycn63S67777bqhauTAajQAApfJG12xdXV17e/vQhe8mhhdkArORFErdi3Lf7lPG8rQddifFyO9cVla2devWiooKg8FQUVFx7NixzMxMAIBKpQIAfPPNN5cvX05KSsIwrLS01GQyNTU17dq1Kycnp6ury+0OY2NjL126dObMmYGBgaBHSxK0vpfw2ASmPbB/d0fjBZOnVBi6urqeeuopjUaj0Wjy8/NLSkqsVqsr6dlnn83JyVm/fj1N04cPH16yZIlGo1m8eHFtbe23336r0WhWrFhx/fp1jUZTVVU1tMOqqqqioqJp06ZVVlYGPdqGGuOBkg5PqR57my+dGOxs
ss17cFzQf8+xxZG/dselCtOmux8a8/jMm6qRtNVbvPd23fYYB8j2a9bxnnvavY11nD+m72yyzV/jvru0o6NjqOl7EywWy+l0385cunTpxo0b/Yh8JGzevLmmpsZtklwu1+v1bpNefvnlmTNnuk0q/6BLNV6Ymeux186bPicFPnmleeYiZXKmm64Xp9NpNpvdFrTZbJ7aZRwOh7kmm8VioSj3DQaCIDgc9yP6AoGAzXZzY60/azxZrnvo2QRvvXbeL5y9bbaSZxr7ux1BvySHONpOe8kzjb1tNu/ZfHSHKlW8eaujDr7f6bC5PxlvSxw258H3OuevifbZ7eTXMHndWWPNP/SFa2NEMqb6EUIHk548+H5X1my5P2Oz/r6k0dFoPbqnd97qqEg1U/2AoUBvq/3Ix915K8dFJ/p1gQ7gFSFDP3mgpCMxXTwtX8G+7YbfCAd9+pCurc6yYG2MVOFvX2dgL6hRBF172lB31jhphiw5U8zh3Q4SCbuz4bzp8klDWo7UU/PYEyN8PbLpkvn6RbNJT4RH88RyNl+E80X4WBkRJhy0zUzZzJRJT2q77JIwTlKGKPHWvB55E13Xbf3djkEtoe9z2CxBvjvrdDoAQHh4eHB3yxex5BFcmZITHsWNShiNl3NvDcXFxRiGrVu3brQD8ch/9jA4NEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFKH4WcyCBQsoiqJp2mq1AgBEIhFFURwO5+DBg6Md2s0wNU0aDNHR0dXV1UOT27g+sc/Ozh7tuNwQiifv8uXL5fJ/m548PDx8aA6rkCIU9eXl5aWkpAzfkpCQcO+9945eRB4JRX2u+UpkshvTf8jl8pUrV452RO4JUX1z585NSEhw/Ts+Pn7OnDmjHZF7QlQfAGDZsmUikUgkEi1btmy0Y/EI1J3XYXNqO+wMtXzSk3InJszEcTw9KbejwcrEITAMRMTyuPyR16ERtvva6iwnDujsVkokZQMwNr7BdwdtNpB8IT5zYYRqvGAE5UdS+04f6r9WbZy7KlYsD8VmY6AYB4i/f9J5xzTp1HlhfmT/NwKuty1XLJdPDRY8Fnd7uAMASMI4BWvjLh7Xt9YFfIkIWN/x/X3TF0TyIK4XIQhfwJq+IPKE18UR3BKYBZKgDf2kKvVWz2V/C1BNEOl1BBngSn2B6dP3OmQRXIZnWh0dMAzIIjj6PiKgUoHpczoB63Z05wIDGO1ksvYhbgLpgwLpgwLpgwLpgwLpgwLpgwLpgwLpgwLpgwLpg2Ls6evp6Z49N/vkyQrv2bY9/5vfPr2J6WDGnr6QAumDgvEO9+df2MrlcrM103f84X85HE7axIwXnv/9nr/99ZPSD8LCFPMLFv5y7a9dOVtbm9/c+Wr9tSscDjc+PvGxRzZmZt5Y2+m7vx/+8MPdZot5xl33FBUtG768U/mh/QfKvmxubkxKGj93TsHiols6qsl47eNwOOcvnKu7dmXvF0f++NaHNefPPrH5MT5fUF5WsXXLtk8/+8vFizUAAJ1O+/iv18TFxb//7p63dr4nkUi3v/yM3W4HADQ1Nbzy6nMLFhR9/Nd9c+bkv/3H14d2/u235a/v2J6WlvFZ6YFH1qz/9LMPi0veYvovGg7j+jAMczqdG9c/KZPKkpJS4uMTuRzuqpWPCASC6dPv5vP59deuAgD+9sUnAqFw8389HRUVrVYn/GbLNr1+oPzQfgDAvq/2REfHrlyxRiKWZGtyfjZ/0dDODxz8MmtK9qbHt8jlYdmanDUP/2rv/306aBhk+o8agnF9NE3HxKiGlmMRCkXxCUlDqSKR2Gw2AQCamxtTUyeyWDfikUllKpX6ytVLAIDOzvaEYUUmpN5YapGiqCtXLk2detdQ
0pQp2SRJXqm9yPQfNQTj1z6apoekuMDcDavr+rXx6sThWwQCoc1qBQAYjQa5/McRWC6P59qtw+EgSbLk3bdL3n17eMEBvY/l7INIqIzVCgRCm902fIvValEowgEAYrFkeJLLKYZhAoFAKBTm59+fe/fs4QVVse7XDGWCUNE3ITXt+6NHSJJ0neaDg/r29tafL1oKAIgcF3XmzCmapl1329OVJ4ZWqkxMTDGbTVlTbrx4arfb+/p6lMrIWxZ2qLT7Fi1cotcPvLnz1f5+XVNTwyuvbROJxPnzCgEAs+7J0+m07xTvAgCcPVdZVvblUMNl7aOPHz9+9MiRMoqiamrOvvDSb7ds3UgQgQ02whAq+uLi4l984ff19Vce+EX+f/9mA47jb+18z7Uk2fTpd6/75aaKiu9nz83esWP7b7e+4LpvAACmTNG88+ePq8+fWbxk3tPPPOGw21/e/qanBcWYILA3rHrb7N9/3rtgXRyTIY0aZcVteSsjA1qUPVRq3xgF6YMC6YMC6YMC6YMC6YMC6YMC6YMC6YMC6YMC6YMC6YMC6YMiMH2s29w2jQX44UBgPmRKrl7rCDCmMcOglpArA+srDEwfh4sJxLi20x5gYGMAbYddJGOzOUzWPgDA1PsUx/Z22YO9kvHoYrdQx/Z2Tc1XBFpwJN/znjyou/SDYXqhMiFNHGjZEOT6ZVNleV/GTFnO/FuiDwDQXm89vr9PryXCY3hux22DgpOmAQAsxr6howGt67TLldy7F43wc2ioWYQY/RgfAHDgwAEAwP3338/Q/uE/xoca5+XyWTHJI/nR/AQTDmAYFpvC4CEguc0bckyD9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EERimuTFxYWdnZ2Ds13+K8JUGNCcG3yUKx9hYWFOI7jOM76F2w2e+HChaMdlxtCUd/SpUtVKtXwLWq1evny5aMXkUdCUZ9CoSgoKBg6czEMy8vLG1prO6QIRX0AgCVLlsTF3ZijUqVSrVixYrQjck+I6gsPD8/Ly8MwDMOwgoICuVw+2hG5J0T1udYmV6vVsbGxobw2eRAaLuZBsuG8aVBHWo2UzUzZ7UFrCfX19gEMKJXKYO2Qx8P4IlwowaXh7JTJYpEMdtrqkeujCPrcUX19tdGgI+TRIjaPg3NxNgfH2aFboynSSRIURVCkhdD3mKXh3IlTxZNz5XiAE2gMMUJ99edMFfv6OCJuWLRUEikc2bFHHUOvRd9lIMyO3CJl6p0jmdYiYH12q7Ps3e5BPRWVohCG8UdwyFDD3G/taRiQKfCF66I5vMCqYWD6DP3kvj92iJSSiIRQbIXB0Hddbx0w/3xDjFQRwAUxAH09rbbyD3qUqeHisNCdmwEGk87W26C9f22U/xOH+3uZtxiogx/0xKRH3q7uAADicH5MemTZ+91mA+VnEb/0kQS9788dkcnhPDEXLsJQhy/mKpPD97/TSZF+nZR+6TtV3i9UiMURt229G444XMCXCU8f9mvJGd/6zINUc60lLO52u1d4QaGWN16wmAdJnzl96/vnl32y2BB95GQOWYysYr/OZzYf+mxmZ3uDVaIM0YbxgL57y3M5tVePB33P0khRS63ZZvZxD/Ghr+G8UaoUBTWwMQIGpONETZdM3nP50HetxiyKCNGqxzRihbChxuI9j48Wdl+bLXlG0Do8bmLQ0Pf1oZ0tbRcJwn7H+Lvum702IlwFAKg4uedoxce/WvP2R58/3dvXHB01fvbdD945Od9V6tyFI0e+K7bZzWl35N6d8wvgmkiOAQRyXnOl1nseb7WPJGiSpBnqQaEo8p0PH29pu7j05/+zZdNnAoHkrZJHB/TdAAA2m2u1Gb4qf2NZ0f+8/tKp9Am5e/a9
ZDT1AwC6eho+2/t8TvaipzfvzcqY91X5H5iIzQWbixOE0+l1llFvaga1hEDM1KpTTc3VfdqWFQ+8kJoyTSJW3F+wmccVVJzc4xrcIAh7wdz18XEZGIZppsynKLKjsw4AcPzUF4qw2Dn3PCwQSFJTpk27k6mZEV3whexBrbdly7zpM+lJNg9nICoAAGhuvcDl8JMT73T9F8fxBPXk5tbzQ0vYqVXpriQ+XwwAsNlNAABdf/u4yB/XYlTFTgSAsbk/AeAI2Ca9t9aft2sfm4sxN4Zus5sdhG3LcznDN4bJowEAgKaHr8DrwuXUajWKRT+uV8lh84aSmICiaNxr/fGmTyjGKbvvlvfIkIjD+TzRmpWvD9/I8h4sAHy+2EH8uF6lg7D+VHQQIe2UUOq1hnlJE0jYDpu/fQ+BEh2VYrObw+RR4YpY1xZtf7tUHOG9VJg8qr7h9ND7G1frf2C09hFWUijx9ot6u/bxhSw2l0XYGKmAE1JyUlNyvtj/in6wx2QeqDi5Z+fuh8+eP+S9VGb6XINRW3bkbQDAtcaqU2e+Aow1XBwWksPHvc+r66Pdp75DaOyzKOKkwY4NAADWPrjzZNWXH+95tqXtYqQyIUez6K6pRd6LpE2Y+bN5j5+q2vfPE6Vh8ujli7ft/mCD08nIKWLUWhIn+Xji8tHb3HjedPLwoCozKtixjQHaz3fPKJQneTXoo0msShUO9lodFqZuICGLw0oa+qxxqT4eWH2cvDwBa4JG2t00oJrk/tGNosjnX8t3m0SSDjbOddsqi41O3fDobu+HDojnXs6jgfvTyOmkWCw3l3+1Kn3dw2952mFvQ/+EqVIO18dV1fdQkdVEfbS9OSE7hu+hp75/oNPtdpvN5Grx/hQc58ikwXyU9hQDAMBB2LkcN0M/bDZXKnF/o7cZHS3nutY8n8AT+Dg7/Rppq/7HwLmjhsSpMSw8dN8gCBZO0nm9qnPqfbLMXN+dxH7pmHKPXBnDab/UF4Jv8gYXmqbbLvRExHAyZvo1OOGXPoyF/ezRaA5Oddf5NYAydum62s/l0gsei/Zz0SJ/T0Y2ByvaGANIe2tNj9O/QbyxhZOkW2t6MKejaGOs/0vuBPaSBkXSh/7S3dPqUGdFcfiwb3eFDoSNbDnXHZPEy39wHM4O4BlmJG9Ynflm4Mz3AxFqmUItY+HMdRfdCiiK7m/R61oN2feFZeeF+VHi3xjhC2oDPUT1P/XXL5mFcqFAzhOHC9hcpnoGmYC0UaYBq2XQbh2wJGWIsmbJA11izAXU26UkQTdfttTXmNuumGiA8cUcrpDD5oXoSU3TgHKQDgthMzswGqjTxOOzRCmZUOOIQfuqyKQn9X3EoJbwZ3B+dMCASMqWRXDkSo5YHpzfOBQ/yhpD3P5PEYyC9EGB9EGB9EGB9EGB9EHx/6Xr7EcJxlTxAAAAAElFTkSuQmCC",
      "text/plain": [
       "<IPython.core.display.Image object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "from IPython.display import Image, display\n",
    "\n",
    "try:\n",
    "    display(Image(app.get_graph().draw_mermaid_png()))\n",
    "except Exception:\n",
    "    # This requires some extra dependencies and is optional\n",
    "    pass"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7deccf06-6d81-4ee1-8d7f-211a4c1b29ac",
   "metadata": {},
   "source": [
    "我们现在需要创建一个配置（config），并在每次调用可运行对象（runnable）时传入它。这个配置包含了一些**不直接属于输入内容**，但仍然有用的信息。\n",
    "\n",
    "在本例中，我们希望包含一个 `thread_id`（对话线程 ID）。配置应该如下所示："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "10c2e31a-8ebd-48bd-a183-f7ca81bf6bfa",
   "metadata": {},
   "outputs": [],
   "source": [
    "config = {\"configurable\": {\"thread_id\": \"abc123\"}}"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c9ccb440-5956-4397-b0b2-6938484ebffe",
   "metadata": {},
   "source": [
    "这使我们能够使用一个应用程序支持多个对话线程，当你的应用程序有**多个用户**时，这是一个常见的需求。\n",
    "\n",
    "然后，我们可以调用该应用程序："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "8c82d514-c318-4715-b1e0-9f60c88acc6b",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
      "\n",
      "你好小明！我是通义千问，很高兴认识你！有什么我可以帮助你的？\n"
     ]
    }
   ],
   "source": [
    "query = \"嗨，我是小明.\"\n",
    "\n",
    "input_messages = [HumanMessage(query)]\n",
    "output = app.invoke({\"messages\": input_messages}, config)\n",
    "output[\"messages\"][-1].pretty_print()  # output contains all messages in state"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "7e2b13c2-ce07-4704-888f-449f37fa8198",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
      "\n",
      "哈哈，让我想想...你说你是小明对吧？我还记得呢！很高兴再次见到你，小明！有什么我可以帮忙的吗？\n"
     ]
    }
   ],
   "source": [
    "query = \"我叫什么，你还记得吗?\"\n",
    "\n",
    "input_messages = [HumanMessage(query)]\n",
    "output = app.invoke({\"messages\": input_messages}, config)\n",
    "output[\"messages\"][-1].pretty_print()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a43ff870-b85f-486b-8d97-e4f9d194dc1c",
   "metadata": {},
   "source": [
    "太好了！我们的聊天机器人现在能够记住关于我们的一些信息了。如果我们更改配置，使其引用一个不同的 `thread_id`，我们会看到它会重新开始一段新的对话。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "a0c932a2-f57b-4a06-9fcc-3b396e486c8e",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
      "\n",
      "抱歉，作为一个AI助手，我无法记住用户的个人信息，包括名字。每次对话都是独立的，没有记忆功能。你可以随时告诉我你的名字，我会尊重并使用它来进行交流。很高兴再次见到你!不知道今天你想聊聊什么呢?\n"
     ]
    }
   ],
   "source": [
    "config = {\"configurable\": {\"thread_id\": \"abc234\"}}\n",
    "\n",
    "input_messages = [HumanMessage(query)]\n",
    "output = app.invoke({\"messages\": input_messages}, config)\n",
    "output[\"messages\"][-1].pretty_print()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "b5ad5224-0d64-470d-b921-5d881864fa5c",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
      "\n",
      "当然记得！你刚才说过你叫小明。很高兴再次确认，小明！如果你有任何问题或需要帮助，随时告诉我哦！ 😊\n"
     ]
    }
   ],
   "source": [
    "config = {\"configurable\": {\"thread_id\": \"abc123\"}}\n",
    "\n",
    "input_messages = [HumanMessage(query)]\n",
    "output = app.invoke({\"messages\": input_messages}, config)\n",
    "output[\"messages\"][-1].pretty_print()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "491131a4-4d7d-4ee9-88c1-ea3c590f2419",
   "metadata": {},
   "source": [
    "这就是我们如何支持一个聊天机器人与多个用户进行对话的方式！\n",
    "\n",
    "💡 提示  \n",
    "如需支持异步（async），请将 `call_model` 节点改为一个异步函数（`async def`），并在调用应用程序时使用 `.ainvoke` 方法：  \n",
    "\n",
    "目前，我们所做的只是在模型周围添加了一个简单的持久化层。我们可以通过加入**提示模板（prompt template）**，开始让聊天机器人变得更复杂和个性化。"
   ]
  },
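  {
   "cell_type": "markdown",
   "id": "0e7c1f5a-2b3d-4c8e-9a1f-6d4b8e2c0a37",
   "metadata": {},
   "source": [
    "The async variant of the graph can be sketched as follows (a minimal sketch, assuming the same `model`, imports, and `MemorySaver` as before; `ChatTongyi` exposes `.ainvoke` through the standard Runnable interface):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5b8d2e1c-4f6a-4b9d-8c3e-7a1f0d2b6e49",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Async variant: the node becomes `async def` and awaits the model\n",
    "async_workflow = StateGraph(state_schema=MessagesState)\n",
    "\n",
    "\n",
    "async def call_model_async(state: MessagesState):\n",
    "    response = await model.ainvoke(state[\"messages\"])\n",
    "    return {\"messages\": response}\n",
    "\n",
    "\n",
    "async_workflow.add_edge(START, \"model\")\n",
    "async_workflow.add_node(\"model\", call_model_async)\n",
    "async_app = async_workflow.compile(checkpointer=MemorySaver())\n",
    "\n",
    "# In a notebook, top-level await works:\n",
    "# output = await async_app.ainvoke({\"messages\": input_messages}, config)"
   ]
  },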
  {
   "cell_type": "markdown",
   "id": "8f4d94b3-7697-4946-87a1-8fe7556f7f9c",
   "metadata": {},
   "source": [
    "### 提示模板（Prompt Templates）\n",
    "\n",
    "提示模板（Prompt Templates）有助于将用户的原始信息转换为大语言模型（LLM）可以处理的格式。在当前的例子中，用户的原始输入只是一个消息（message），我们直接将其传递给了 LLM。现在，我们可以让这个过程稍微复杂一些。\n",
    "\n",
    "接下来，我们会添加一个带有自定义指令的**系统消息（system message）**，并且仍然以消息作为输入。之后，我们还会添加除消息以外的更多输入内容。\n",
    "\n",
    "为了添加系统消息，我们将创建一个 `ChatPromptTemplate`，并使用 `MessagesPlaceholder` 来传入所有的消息内容。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "740e1e45-fa62-442c-9f01-4b2a35732086",
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
    "\n",
    "prompt_template = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        (\n",
    "            \"system\",\n",
    "            \"你说话像香港人一样。请尽力回答所有问题。\",\n",
    "        ),\n",
    "        MessagesPlaceholder(variable_name=\"messages\"),\n",
    "    ]\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "b48d0dbc-40b8-40cf-b929-2ba3c316423c",
   "metadata": {},
   "outputs": [],
   "source": [
    "workflow = StateGraph(state_schema=MessagesState)\n",
    "\n",
    "\n",
    "def call_model(state: MessagesState):\n",
    "    prompt = prompt_template.invoke(state)\n",
    "    response = model.invoke(prompt)\n",
    "    return {\"messages\": response}\n",
    "\n",
    "\n",
    "workflow.add_edge(START, \"model\")\n",
    "workflow.add_node(\"model\", call_model)\n",
    "\n",
    "memory = MemorySaver()\n",
    "app = workflow.compile(checkpointer=memory)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "3207d51d-4d63-4314-af82-6f2b1bb0b3ca",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAGoAAADqCAIAAADF80cYAAAAAXNSR0IArs4c6QAAFX1JREFUeJztnXl8FEW+wKun574zyYQck8lJkIQE4gSCYJQjkrBE2CDLraKyLODiQx/LuvrEg+fxWXEF3V1MvNbVqKw8EQkB1JVdAgIJkHAFEpKQ+5xJJnPPdPf0+2PYmMU5U9Nkwtb3L+iq6v7lO9Xd1VXdVRhN0wAxUlijHcDYBumDAumDAumDAumDAumDgg1ZvrvZZjZQNjNls1AUMTbaQDgH4wtxvggXy/Bx8XyYXWEja/ddv2RuumRuvGCSyNlSBYcvwvkiFoc7Nuoy4XDazE6rmTLoCPMgmTxZnDRJlJAuGsGuAtbX22b/xxe9hN05IVuaMkUsV3JGcNTQQd9HXKs21p0x8gSsWb+IVKp4ARUPQB9F0Me+7Gu5askpUEzMkY4o2tDl8klD5WFdUob43iVK/0v5q89qog6UdI6L59/7QAB7H1tQBH1sX5+2w174yxiBGPeniF/6dF2Or9/pmDIrLGu2PBhxhjRnvxu4cHxw0foYRRTXZ2bf+syD5Oc72nKLIlLvlAQvyJCm7ozxhzLt0qfUIqmPOujjXkk6nF8Xd2bmyv5z3AEAJmRL0u+SHSjpoEgfdcuHvtOH++VKztR5iqCGNwaYlq8Qy9mVR/q9Z/Omb1BLXK0y5q2KCnZsY4N5q6OuVBqMA6SXPN70Hf9KO3WegsPFGIhtDMDls+6cHVbxVZ+XPB71DWoJbZc9Y6aMmdjGBpm58p4Wu5cK6FHftWpTxkwZNjYew5iChYOMmbJr1UaPGTwlNJw3xk8cyWMgDLNmzeru7g601Oeff/7SSy8xExGInyhsqDF5SnWvz6QnrUYqPNp3uzGItLe3m0weA/VCbW0tA+HcQKniGfpJT+ev+w6rrmZboA/P/kPTdGlpaXl5eUtLS3Jy8vTp09evX3/27NkNGzYAAAoLC2fNmrVjx46Ghoa9e/dWVVV1d3cnJyc/8MADixYtAgDU19evXLly165dL774YmRkpEAgqK6uBgB8/fXXn376aWpqatADjlTxetvskjA3rtzrs5spgQS2K9ATpaWlH3300Zo1a5KTkzs7O//0pz/JZLJVq1a9+eabTz75ZFlZWVRUFADgjTfe6Onp+d3vfodhWGNj4/bt29VqdVZWFpfLBQC89957jzzyyOTJk9PS0h566KGUlJRt27YxFLBAgtstlNskD/qsTqF/z8wjoKamZtKkSatWrXL9Nzs72+Fw/DTba6+9ZrFYoqOjXXn27dt34sSJrKwsV+qMGTNWrFjBUIQ3IRDjdqvTbZJ7fU4njXOYau5lZGTs3r17+/btGo0mNzdXrVZ7iMFZWlr6ww8/tLa2urakpaUNpU6cOJGh8H4Kh8vy9PTmXp9AhGu73NSIoLB69WqJRHL06NFt27ax2ez58+c/8cQTYWFhw/NQFLVp0yaapjdt2jRt2jSRSLR69WpXEoZhAAA+H6qTPSAsRjIyzv3h3OsTStiWegtD0eA4vnjx4sWLFzc2NlZWVhYXF9tstldffXV4ntra2qtXrxYXF2s0GteWoZvyrX+rxGKghBL3lzIPtU+CW43uL5bwlJWVpaenJyYmJicnJycn63S67777bqhauTAajQAApfJG12xdXV17e/vQhe8mhhdkArORFErdi3Lf7lPG8rQddifFyO9cVla2devWiooKg8FQUVFx7NixzMxMAIBKpQIAfPPNN5cvX05KSsIwrLS01GQyNTU17dq1Kycnp6ury+0OY2NjL126dObMmYGBgaBHSxK0vpfw2ASmPbB/d0fjBZOnVBi6urqeeuopjUaj0Wjy8/NLSkqsVqsr6dlnn83JyVm/fj1N04cPH16yZIlGo1m8eHFtbe23336r0WhWrFhx/fp1jUZTVVU1tMOqqqqioqJp06ZVVlYGPdqGGuOBkg5PqR57my+dGOxs
ss17cFzQf8+xxZG/dselCtOmux8a8/jMm6qRtNVbvPd23fYYB8j2a9bxnnvavY11nD+m72yyzV/jvru0o6NjqOl7EywWy+l0385cunTpxo0b/Yh8JGzevLmmpsZtklwu1+v1bpNefvnlmTNnuk0q/6BLNV6Ymeux186bPicFPnmleeYiZXKmm64Xp9NpNpvdFrTZbJ7aZRwOh7kmm8VioSj3DQaCIDgc9yP6AoGAzXZzY60/azxZrnvo2QRvvXbeL5y9bbaSZxr7ux1BvySHONpOe8kzjb1tNu/ZfHSHKlW8eaujDr7f6bC5PxlvSxw258H3OuevifbZ7eTXMHndWWPNP/SFa2NEMqb6EUIHk548+H5X1my5P2Oz/r6k0dFoPbqnd97qqEg1U/2AoUBvq/3Ix915K8dFJ/p1gQ7gFSFDP3mgpCMxXTwtX8G+7YbfCAd9+pCurc6yYG2MVOFvX2dgL6hRBF172lB31jhphiw5U8zh3Q4SCbuz4bzp8klDWo7UU/PYEyN8PbLpkvn6RbNJT4RH88RyNl+E80X4WBkRJhy0zUzZzJRJT2q77JIwTlKGKPHWvB55E13Xbf3djkEtoe9z2CxBvjvrdDoAQHh4eHB3yxex5BFcmZITHsWNShiNl3NvDcXFxRiGrVu3brQD8ch/9jA4NEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFEgfFKH4WcyCBQsoiqJp2mq1AgBEIhFFURwO5+DBg6Md2s0wNU0aDNHR0dXV1UOT27g+sc/Ozh7tuNwQiifv8uXL5fJ/m548PDx8aA6rkCIU9eXl5aWkpAzfkpCQcO+9945eRB4JRX2u+UpkshvTf8jl8pUrV452RO4JUX1z585NSEhw/Ts+Pn7OnDmjHZF7QlQfAGDZsmUikUgkEi1btmy0Y/EI1J3XYXNqO+wMtXzSk3InJszEcTw9KbejwcrEITAMRMTyuPyR16ERtvva6iwnDujsVkokZQMwNr7BdwdtNpB8IT5zYYRqvGAE5UdS+04f6r9WbZy7KlYsD8VmY6AYB4i/f9J5xzTp1HlhfmT/NwKuty1XLJdPDRY8Fnd7uAMASMI4BWvjLh7Xt9YFfIkIWN/x/X3TF0TyIK4XIQhfwJq+IPKE18UR3BKYBZKgDf2kKvVWz2V/C1BNEOl1BBngSn2B6dP3OmQRXIZnWh0dMAzIIjj6PiKgUoHpczoB63Z05wIDGO1ksvYhbgLpgwLpgwLpgwLpgwLpgwLpgwLpgwLpgwLpgwLpg2Ls6evp6Z49N/vkyQrv2bY9/5vfPr2J6WDGnr6QAumDgvEO9+df2MrlcrM103f84X85HE7axIwXnv/9nr/99ZPSD8LCFPMLFv5y7a9dOVtbm9/c+Wr9tSscDjc+PvGxRzZmZt5Y2+m7vx/+8MPdZot5xl33FBUtG768U/mh/QfKvmxubkxKGj93TsHiols6qsl47eNwOOcvnKu7dmXvF0f++NaHNefPPrH5MT5fUF5WsXXLtk8/+8vFizUAAJ1O+/iv18TFxb//7p63dr4nkUi3v/yM3W4HADQ1Nbzy6nMLFhR9/Nd9c+bkv/3H14d2/u235a/v2J6WlvFZ6YFH1qz/9LMPi0veYvovGg7j+jAMczqdG9c/KZPKkpJS4uMTuRzuqpWPCASC6dPv5vP59deuAgD+9sUnAqFw8389HRUVrVYn/GbLNr1+oPzQfgDAvq/2REfHrlyxRiKWZGtyfjZ/0dDODxz8MmtK9qbHt8jlYdmanDUP/2rv/306aBhk+o8agnF9NE3HxKiGlmMRCkXxCUlDqSKR2Gw2AQCamxtTUyeyWDfikUllKpX6ytVLAIDOzvaEYUUmpN5YapGiqCtXLk2detdQ
0pQp2SRJXqm9yPQfNQTj1z6apoekuMDcDavr+rXx6sThWwQCoc1qBQAYjQa5/McRWC6P59qtw+EgSbLk3bdL3n17eMEBvY/l7INIqIzVCgRCm902fIvValEowgEAYrFkeJLLKYZhAoFAKBTm59+fe/fs4QVVse7XDGWCUNE3ITXt+6NHSJJ0neaDg/r29tafL1oKAIgcF3XmzCmapl1329OVJ4ZWqkxMTDGbTVlTbrx4arfb+/p6lMrIWxZ2qLT7Fi1cotcPvLnz1f5+XVNTwyuvbROJxPnzCgEAs+7J0+m07xTvAgCcPVdZVvblUMNl7aOPHz9+9MiRMoqiamrOvvDSb7ds3UgQgQ02whAq+uLi4l984ff19Vce+EX+f/9mA47jb+18z7Uk2fTpd6/75aaKiu9nz83esWP7b7e+4LpvAACmTNG88+ePq8+fWbxk3tPPPOGw21/e/qanBcWYILA3rHrb7N9/3rtgXRyTIY0aZcVteSsjA1qUPVRq3xgF6YMC6YMC6YMC6YMC6YMC6YMC6YMC6YMC6YMC6YMC6YMC6YMiMH2s29w2jQX44UBgPmRKrl7rCDCmMcOglpArA+srDEwfh4sJxLi20x5gYGMAbYddJGOzOUzWPgDA1PsUx/Z22YO9kvHoYrdQx/Z2Tc1XBFpwJN/znjyou/SDYXqhMiFNHGjZEOT6ZVNleV/GTFnO/FuiDwDQXm89vr9PryXCY3hux22DgpOmAQAsxr6howGt67TLldy7F43wc2ioWYQY/RgfAHDgwAEAwP3338/Q/uE/xoca5+XyWTHJI/nR/AQTDmAYFpvC4CEguc0bckyD9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EGB9EERimuTFxYWdnZ2Ds13+K8JUGNCcG3yUKx9hYWFOI7jOM76F2w2e+HChaMdlxtCUd/SpUtVKtXwLWq1evny5aMXkUdCUZ9CoSgoKBg6czEMy8vLG1prO6QIRX0AgCVLlsTF3ZijUqVSrVixYrQjck+I6gsPD8/Ly8MwDMOwgoICuVw+2hG5J0T1udYmV6vVsbGxobw2eRAaLuZBsuG8aVBHWo2UzUzZ7UFrCfX19gEMKJXKYO2Qx8P4IlwowaXh7JTJYpEMdtrqkeujCPrcUX19tdGgI+TRIjaPg3NxNgfH2aFboynSSRIURVCkhdD3mKXh3IlTxZNz5XiAE2gMMUJ99edMFfv6OCJuWLRUEikc2bFHHUOvRd9lIMyO3CJl6p0jmdYiYH12q7Ps3e5BPRWVohCG8UdwyFDD3G/taRiQKfCF66I5vMCqYWD6DP3kvj92iJSSiIRQbIXB0Hddbx0w/3xDjFQRwAUxAH09rbbyD3qUqeHisNCdmwEGk87W26C9f22U/xOH+3uZtxiogx/0xKRH3q7uAADicH5MemTZ+91mA+VnEb/0kQS9788dkcnhPDEXLsJQhy/mKpPD97/TSZF+nZR+6TtV3i9UiMURt229G444XMCXCU8f9mvJGd/6zINUc60lLO52u1d4QaGWN16wmAdJnzl96/vnl32y2BB95GQOWYysYr/OZzYf+mxmZ3uDVaIM0YbxgL57y3M5tVePB33P0khRS63ZZvZxD/Ghr+G8UaoUBTWwMQIGpONETZdM3nP50HetxiyKCNGqxzRihbChxuI9j48Wdl+bLXlG0Do8bmLQ0Pf1oZ0tbRcJwn7H+Lvum702IlwFAKg4uedoxce/WvP2R58/3dvXHB01fvbdD945Od9V6tyFI0e+K7bZzWl35N6d8wvgmkiOAQRyXnOl1nseb7WPJGiSpBnqQaEo8p0PH29pu7j05/+zZdNnAoHkrZJHB/TdAAA2m2u1Gb4qf2NZ0f+8/tKp9Am5e/a9
ZDT1AwC6eho+2/t8TvaipzfvzcqY91X5H5iIzQWbixOE0+l1llFvaga1hEDM1KpTTc3VfdqWFQ+8kJoyTSJW3F+wmccVVJzc4xrcIAh7wdz18XEZGIZppsynKLKjsw4AcPzUF4qw2Dn3PCwQSFJTpk27k6mZEV3whexBrbdly7zpM+lJNg9nICoAAGhuvcDl8JMT73T9F8fxBPXk5tbzQ0vYqVXpriQ+XwwAsNlNAABdf/u4yB/XYlTFTgSAsbk/AeAI2Ca9t9aft2sfm4sxN4Zus5sdhG3LcznDN4bJowEAgKaHr8DrwuXUajWKRT+uV8lh84aSmICiaNxr/fGmTyjGKbvvlvfIkIjD+TzRmpWvD9/I8h4sAHy+2EH8uF6lg7D+VHQQIe2UUOq1hnlJE0jYDpu/fQ+BEh2VYrObw+RR4YpY1xZtf7tUHOG9VJg8qr7h9ND7G1frf2C09hFWUijx9ot6u/bxhSw2l0XYGKmAE1JyUlNyvtj/in6wx2QeqDi5Z+fuh8+eP+S9VGb6XINRW3bkbQDAtcaqU2e+Aow1XBwWksPHvc+r66Pdp75DaOyzKOKkwY4NAADWPrjzZNWXH+95tqXtYqQyIUez6K6pRd6LpE2Y+bN5j5+q2vfPE6Vh8ujli7ft/mCD08nIKWLUWhIn+Xji8tHb3HjedPLwoCozKtixjQHaz3fPKJQneTXoo0msShUO9lodFqZuICGLw0oa+qxxqT4eWH2cvDwBa4JG2t00oJrk/tGNosjnX8t3m0SSDjbOddsqi41O3fDobu+HDojnXs6jgfvTyOmkWCw3l3+1Kn3dw2952mFvQ/+EqVIO18dV1fdQkdVEfbS9OSE7hu+hp75/oNPtdpvN5Grx/hQc58ikwXyU9hQDAMBB2LkcN0M/bDZXKnF/o7cZHS3nutY8n8AT+Dg7/Rppq/7HwLmjhsSpMSw8dN8gCBZO0nm9qnPqfbLMXN+dxH7pmHKPXBnDab/UF4Jv8gYXmqbbLvRExHAyZvo1OOGXPoyF/ezRaA5Oddf5NYAydum62s/l0gsei/Zz0SJ/T0Y2ByvaGANIe2tNj9O/QbyxhZOkW2t6MKejaGOs/0vuBPaSBkXSh/7S3dPqUGdFcfiwb3eFDoSNbDnXHZPEy39wHM4O4BlmJG9Ynflm4Mz3AxFqmUItY+HMdRfdCiiK7m/R61oN2feFZeeF+VHi3xjhC2oDPUT1P/XXL5mFcqFAzhOHC9hcpnoGmYC0UaYBq2XQbh2wJGWIsmbJA11izAXU26UkQTdfttTXmNuumGiA8cUcrpDD5oXoSU3TgHKQDgthMzswGqjTxOOzRCmZUOOIQfuqyKQn9X3EoJbwZ3B+dMCASMqWRXDkSo5YHpzfOBQ/yhpD3P5PEYyC9EGB9EGB9EGB9EGB9EHx/6Xr7EcJxlTxAAAAAElFTkSuQmCC",
      "text/plain": [
       "<IPython.core.display.Image object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "from IPython.display import Image, display\n",
    "\n",
    "try:\n",
    "    display(Image(app.get_graph().draw_mermaid_png()))\n",
    "except Exception:\n",
    "    # This requires some extra dependencies and is optional\n",
    "    pass"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "b1f42fc7-dd87-414b-be5f-6b19fa5dfcda",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
      "\n",
      "哈囉Jim！我係阿Tom，好高兴认识你呀！你喺边度住呀？\n"
     ]
    }
   ],
   "source": [
    "config = {\"configurable\": {\"thread_id\": \"abc345\"}}\n",
    "query = \"Hi! I'm Jim.\"\n",
    "\n",
    "input_messages = [HumanMessage(query)]\n",
    "output = app.invoke({\"messages\": input_messages}, config)\n",
    "output[\"messages\"][-1].pretty_print()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "696f54d3-b340-4a8e-bb40-e9a02b917b79",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
      "\n",
      "啊对唔住，我一时冇搞清楚。根据你嘅介绍，你应该就系Jim啦！刚刚真系对唔住呀，搞错咗名。希望你唔介意！想同你做好朋友呀！\n"
     ]
    }
   ],
   "source": [
    "query = \"我叫什么?\"\n",
    "\n",
    "input_messages = [HumanMessage(query)]\n",
    "output = app.invoke({\"messages\": input_messages}, config)\n",
    "output[\"messages\"][-1].pretty_print()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1c7799cf-1fe7-4915-86c8-3dc01dfc97b2",
   "metadata": {},
   "source": [
    "\n",
    "太棒了！现在让我们的提示变得更复杂一点。假设提示模板现在看起来像这样："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "3f238163-0865-4e9d-8929-c78bb880e016",
   "metadata": {},
   "outputs": [],
   "source": [
    "prompt_template = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        (\n",
    "            \"system\",\n",
    "            \"你是一个乐于助人的助手。请尽你所能用 {language} 回答所有问题。\",\n",
    "        ),\n",
    "        MessagesPlaceholder(variable_name=\"messages\"),\n",
    "    ]\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "131a8251-ff2a-4fdd-9fb8-1611e8dc17e3",
   "metadata": {},
   "source": [
    "请注意，我们已经在提示词（prompt）中添加了一个新的输入参数 `language`（语言）。现在，我们的应用程序有两个参数：输入消息（messages）和语言（language）。我们应该更新应用程序的状态（state）以反映这一变化："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "fce0948a-fa03-4dce-b640-94ee52f627b6",
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import Sequence\n",
    "\n",
    "from langchain_core.messages import BaseMessage\n",
    "from langgraph.graph.message import add_messages\n",
    "from typing_extensions import Annotated, TypedDict\n",
    "\n",
    "\n",
    "class State(TypedDict):\n",
    "    messages: Annotated[Sequence[BaseMessage], add_messages]\n",
    "    language: str\n",
    "\n",
    "\n",
    "workflow = StateGraph(state_schema=State)\n",
    "\n",
    "\n",
    "def call_model(state: State):\n",
    "    prompt = prompt_template.invoke(state)\n",
    "    response = model.invoke(prompt)\n",
    "    return {\"messages\": [response]}\n",
    "\n",
    "\n",
    "workflow.add_edge(START, \"model\")\n",
    "workflow.add_node(\"model\", call_model)\n",
    "\n",
    "memory = MemorySaver()\n",
    "app = workflow.compile(checkpointer=memory)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "4cb3899c-dc51-4740-ad8c-1e9a61b615a2",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
      "\n",
      "你叫张三啊，这不显而易见呢吗？跟俺们东北的大豆包一样明白！\n"
     ]
    }
   ],
   "source": [
    "from langchain_core.messages import SystemMessage\n",
    "\n",
    "config = {\"configurable\": {\"thread_id\": \"a659\"}}\n",
    "query = \"我叫什么?\"\n",
    "language = \"网红东北话\"\n",
    "\n",
    "input_messages = [\n",
    "    SystemMessage(content=\"you're a good assistant\"),\n",
    "    HumanMessage(content=\"hi! 我的名字是张三\"),\n",
    "    AIMessage(content=\"hi!\"),\n",
    "    HumanMessage(query)\n",
    "]\n",
    "output = app.invoke(\n",
    "    {\"messages\": input_messages, \"language\": language},\n",
    "    config,\n",
    ")\n",
    "output[\"messages\"][-1].pretty_print()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cf5a8bb9-dcee-4e19-9a06-ea15787684eb",
   "metadata": {},
   "source": [
    "请注意，整个状态是持久的，因此如果不需要更改，我们可以省略语言等参数："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "b65d60a8-9fab-467d-902e-e26ece622e66",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
      "\n",
      "啥你name呢？刚才不都整明白了嘛，你叫张三，张三这个名儿多好记啊，跟俺们东北的刨乎（唠嗑）似的，一说就懂！\n"
     ]
    }
   ],
   "source": [
    "query = \"What is my name?\"\n",
    "\n",
    "input_messages = [HumanMessage(query)]\n",
    "output = app.invoke(\n",
    "    {\"messages\": input_messages},\n",
    "    config,\n",
    ")\n",
    "output[\"messages\"][-1].pretty_print()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ecb1cc72-2e52-4bff-bfca-6250c0488ae4",
   "metadata": {},
   "source": [
    "### 管理对话历史（Managing Conversation History）\n",
    "\n",
    "在构建聊天机器人时，一个非常重要的概念是**如何管理对话历史记录**。\n",
    "\n",
    "如果不对消息进行管理，消息列表会不断增长，最终可能会超出大语言模型（LLM）的上下文窗口限制。因此，非常重要的一点是：**在传入模型之前，对消息的大小进行限制**。\n",
    "\n",
    "需要注意的是，这个限制操作应该发生在：\n",
    "\n",
    "- **提示模板（prompt template）处理之前**\n",
    "- 但要在 **从消息历史中加载完消息之后**\n",
    "\n",
    "我们可以通过在提示词处理之前添加一个简单的步骤来实现这一点，该步骤适当地修改 `messages` 字段。然后，我们将这个新的链（chain）包装在 `Message History` 类中。\n",
    "\n",
    "LangChain 提供了一些用于管理消息列表的内置辅助工具。在本例中，我们将使用 `trim_messages` 工具来减少发送给模型的消息数量。\n",
    "\n",
    "这个裁剪器（trimmer）允许我们指定要保留多少个 token，还可以设置其他参数，例如：\n",
    "\n",
    "- 是否始终保留系统消息（system message）\n",
    "- 是否允许截断部分消息（partial messages）"
   ]
  },
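  {
   "cell_type": "markdown",
   "id": "f3b9a6d2-8c41-4e5a-9b07-2d6c0e8a1f55",
   "metadata": {},
   "source": [
    "在看 `trim_messages` 之前，可以先用纯 Python 勾勒它在 `strategy=\"last\"`、`include_system=True` 时背后的思路（仅为示意，并非 LangChain 的真实实现；真实实现还会处理 `start_on`、`allow_partial` 等参数，并使用真正的 token 计数器）：保留系统消息，从最旧的消息开始丢弃，直到总量落入预算。\n",
    "\n",
    "```python\n",
    "# 极简示意：这里假设消息是 (role, content) 二元组，并用字符数粗略充当 token 数\n",
    "def trim_last(messages, max_tokens, count=len):\n",
    "    system = [m for m in messages if m[0] == 'system']\n",
    "    rest = [m for m in messages if m[0] != 'system']\n",
    "    while rest and sum(count(c) for _, c in system + rest) > max_tokens:\n",
    "        rest.pop(0)  # 丢弃最旧的一条非系统消息\n",
    "    return system + rest\n",
    "\n",
    "msgs = [('system', 'you are helpful'), ('human', 'hi'), ('ai', 'hello'),\n",
    "        ('human', 'tell me a joke'), ('ai', 'why did ...')]\n",
    "trimmed = trim_last(msgs, max_tokens=30)\n",
    "# 系统消息被保留；更旧的对话被逐条丢弃，直到总长度不超过预算\n",
    "```"
   ]
  },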
  {
   "cell_type": "markdown",
   "id": "a3fd88d8-7d86-45c2-9771-e1cfd0d66302",
   "metadata": {},
   "source": [
    "| 方法                | 准确性       | 速度    | 依赖项          |\n",
    "|---------------------|-------------|---------|----------------|\n",
    "| **`cl100k_base`**   | 近似（±10%）| ⚡ 极快 | `tiktoken`     |\n",
    "| **Qwen 官方分词器** | ✅ 精确      | 中等    | `transformers` |\n",
    "\n",
    "**优先选择 `cl100k_base`** 作为 tiktoken 下的最佳近似。若需精确值（如计费场景），必须使用 Qwen 官方分词器。"
   ]
  },
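  {
   "cell_type": "markdown",
   "id": "0b7d4c9e-2a31-4f8d-b6a5-7c1e9d3f2a88",
   "metadata": {},
   "source": [
    "精确计数的一种示意写法如下（假设环境中已安装 `transformers` 且可访问 Hugging Face；模型名 'Qwen/Qwen2-7B' 仅作示例）。把分词器作为可选参数传入：传入时走精确路径，不传时退回到一个粗糙的按字符估算：\n",
    "\n",
    "```python\n",
    "def count_tokens(text, tokenizer=None):\n",
    "    # 传入分词器时：精确计数\n",
    "    if tokenizer is not None:\n",
    "        return len(tokenizer.encode(text))\n",
    "    # 否则粗略估算：约 1 个 CJK 字符 ≈ 1 token，其余约每 4 个字符 ≈ 1 token\n",
    "    cjk = sum(1 for ch in text if '\\u4e00' <= ch <= '\\u9fff')\n",
    "    return cjk + (len(text) - cjk + 3) // 4\n",
    "\n",
    "# 精确用法（示意，需要安装 transformers）：\n",
    "# from transformers import AutoTokenizer\n",
    "# tok = AutoTokenizer.from_pretrained('Qwen/Qwen2-7B')  # 模型名仅作示例\n",
    "# count_tokens('你好，世界', tokenizer=tok)\n",
    "```"
   ]
  },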
  {
   "cell_type": "code",
   "execution_count": 23,
   "id": "b12a82e7-1e86-4df5-8faf-929311034f74",
   "metadata": {},
   "outputs": [],
   "source": [
    "# %pip install  transformers"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "49d6cbf3-4c68-49d0-8b22-bdcc19a9f2ac",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 替代方案：使用 tiktoken 近似计数（精度稍低）\n",
    "# 如果无法安装 transformers 使用 tiktoken 只是近似方案，实际 token 计数可能与 Qwen 有差异（约 ±10%）。推荐优先使用官方的 Qwen tokenizer 获得精确计数。\n",
    "import tiktoken\n",
    "from typing import List\n",
    "\n",
    "# def tiktoken_token_counter(messages: List[BaseMessage]) -> int:\n",
    "#     \"\"\"使用 tiktoken 的近似计数\"\"\"\n",
    "#     enc = tiktoken.encoding_for_model(\"gpt-4\")  # 使用 GPT-4 的编码近似\n",
    "#     text = \" \".join([msg.content for msg in messages])\n",
    "#     return len(enc.encode(text))\n",
    "\n",
    "def tiktoken_token_counter(messages: List[BaseMessage]) -> int:\n",
    "    enc = tiktoken.get_encoding(\"cl100k_base\")  # 显式指定编码\n",
    "    text = \" \".join([msg.content for msg in messages])\n",
    "    return len(enc.encode(text))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 75,
   "id": "00fa5fab-da47-47bc-90ea-f6afbc4bbdc7",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[SystemMessage(content=\"you're a good assistant\", additional_kwargs={}, response_metadata={}),\n",
       " HumanMessage(content='hi! 我是张三', additional_kwargs={}, response_metadata={}),\n",
       " AIMessage(content='你好!', additional_kwargs={}, response_metadata={}),\n",
       " HumanMessage(content='我喜欢香草冰淇凌', additional_kwargs={}, response_metadata={}),\n",
       " AIMessage(content='好的', additional_kwargs={}, response_metadata={}),\n",
       " HumanMessage(content=' 2 + 2是多少？', additional_kwargs={}, response_metadata={}),\n",
       " AIMessage(content='4', additional_kwargs={}, response_metadata={}),\n",
       " HumanMessage(content='谢谢', additional_kwargs={}, response_metadata={}),\n",
       " AIMessage(content='不客气!', additional_kwargs={}, response_metadata={}),\n",
       " HumanMessage(content='我昨天去游泳了?', additional_kwargs={}, response_metadata={}),\n",
       " AIMessage(content='真不错!', additional_kwargs={}, response_metadata={})]"
      ]
     },
     "execution_count": 75,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_core.messages import SystemMessage, trim_messages\n",
    "\n",
    "trimmer = trim_messages(\n",
    "    max_tokens=138,\n",
    "    strategy=\"last\",\n",
    "    token_counter=tiktoken_token_counter,\n",
    "    include_system=True,\n",
    "    allow_partial=False,\n",
    "    start_on=\"human\",\n",
    ")\n",
    "\n",
    "\n",
    "messages = [\n",
    "    SystemMessage(content=\"you're a good assistant\"),\n",
    "    HumanMessage(content=\"hi! 我是张三\"),\n",
    "    AIMessage(content=\"你好!\"),\n",
    "    HumanMessage(content=\"我喜欢香草冰淇凌\"),\n",
    "    AIMessage(content=\"好的\"),\n",
    "    HumanMessage(content=\" 2 + 2是多少？\"),\n",
    "    AIMessage(content=\"4\"),\n",
    "    HumanMessage(content=\"谢谢\"),\n",
    "    AIMessage(content=\"不客气!\"),\n",
    "    HumanMessage(content=\"我昨天去游泳了?\"),\n",
    "    AIMessage(content=\"真不错!\"),\n",
    "]\n",
    "\n",
    "trimmer.invoke(messages)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 76,
   "id": "1261996e-4032-41c6-9331-81e735196d5c",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "75\n",
      "87\n"
     ]
    }
   ],
   "source": [
    "print(tiktoken_token_counter(messages))\n",
    "print(tiktoken_token_counter(messages + [HumanMessage(query)]))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "613dc205-3f42-4360-a417-e372c510fa90",
   "metadata": {},
   "source": [
    "要将其应用于我们的链中，我们只需在将消息输入传递给提示词模板之前运行裁剪器即可。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 79,
   "id": "0482b7a3-1293-4384-8972-e8aa90b076ef",
   "metadata": {},
   "outputs": [],
   "source": [
    "workflow = StateGraph(state_schema=State)\n",
    "\n",
    "\n",
    "def call_model(state: State):\n",
    "    trimmed_messages = trimmer.invoke(state[\"messages\"])\n",
    "    print(trimmed_messages)\n",
    "    prompt = prompt_template.invoke(\n",
    "        {\"messages\": trimmed_messages, \"language\": state[\"language\"]}\n",
    "    )\n",
    "    response = model.invoke(prompt)\n",
    "    return {\"messages\": [response]}\n",
    "\n",
    "\n",
    "workflow.add_edge(START, \"model\")\n",
    "workflow.add_node(\"model\", call_model)\n",
    "\n",
    "memory = MemorySaver()\n",
    "app = workflow.compile(checkpointer=memory)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ffc23646-ce39-4b52-9d93-72e70e0288e2",
   "metadata": {},
   "source": [
    "### 第一次调用"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 80,
   "id": "c2b44989-bfee-4232-9452-1a35f0891af8",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[SystemMessage(content=\"you're a good assistant\", additional_kwargs={}, response_metadata={}, id='8f111324-0235-491d-a4e0-d8571abca61a'), HumanMessage(content='hi! 我是张三', additional_kwargs={}, response_metadata={}, id='8857dbba-c497-493d-a22d-b5eaf271aa9a'), AIMessage(content='你好!', additional_kwargs={}, response_metadata={}, id='7f3468f1-442e-47d2-9949-1bc44dc1f8ea'), HumanMessage(content='我喜欢香草冰淇凌', additional_kwargs={}, response_metadata={}, id='16f2591c-e7e6-4add-94c4-0cf01257a345'), AIMessage(content='好的', additional_kwargs={}, response_metadata={}, id='f287ed96-002a-410b-9c9e-1b344681a86b'), HumanMessage(content=' 2 + 2是多少？', additional_kwargs={}, response_metadata={}, id='1a6d761a-6279-4c50-849a-ea6440429efe'), AIMessage(content='4', additional_kwargs={}, response_metadata={}, id='124fa4a6-ef7e-4228-9285-6096e45589a8'), HumanMessage(content='谢谢', additional_kwargs={}, response_metadata={}, id='27d6e8aa-3fc6-4ab8-9ea8-9fd5f10b0947'), AIMessage(content='不客气!', additional_kwargs={}, response_metadata={}, id='2daa98fd-5f48-4449-8163-6efe80a5fab2'), HumanMessage(content='我昨天去游泳了?', additional_kwargs={}, response_metadata={}, id='bf5f7c76-84a2-4686-a8ac-a3d7a0b4b644'), AIMessage(content='真不错!', additional_kwargs={}, response_metadata={}, id='1be8438b-eb30-4828-9339-998d147432e2'), HumanMessage(content='你还记得我吗，我叫什么?', additional_kwargs={}, response_metadata={}, id='cf41e429-746b-4e42-8826-8f0585783644')]\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "AIMessage(content='你叫张三。', additional_kwargs={}, response_metadata={'model_name': 'qwen-turbo', 'finish_reason': 'stop', 'request_id': 'c3651ce6-2a55-9bbe-afb1-9e278e1975f8', 'token_usage': {'input_tokens': 136, 'output_tokens': 5, 'total_tokens': 141, 'prompt_tokens_details': {'cached_tokens': 0}}}, id='run--6ab43797-416b-4fe9-83bc-80c58d44c2e6-0')"
      ]
     },
     "execution_count": 80,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "config = {\"configurable\": {\"thread_id\": \"ab113\"}}\n",
    "query = \"你还记得我吗，我叫什么?\"\n",
    "language = \"粤语\"\n",
    "\n",
    "input_messages = messages + [HumanMessage(query)]\n",
    "output = app.invoke(\n",
    "    {\"messages\": input_messages, \"language\": language},\n",
    "    config,\n",
    ")\n",
    "# output[\"messages\"][-1].pretty_print()\n",
    "output[\"messages\"][-1]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3962a31d-4f8e-4fc7-9e69-fe066b0c0c84",
   "metadata": {},
   "source": [
    "### 第二次调用"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 81,
   "id": "1405f05b-3011-4e52-8689-c38fb7df9016",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[SystemMessage(content=\"you're a good assistant\", additional_kwargs={}, response_metadata={}, id='8f111324-0235-491d-a4e0-d8571abca61a'), HumanMessage(content='hi! 我是张三', additional_kwargs={}, response_metadata={}, id='8857dbba-c497-493d-a22d-b5eaf271aa9a'), AIMessage(content='你好!', additional_kwargs={}, response_metadata={}, id='7f3468f1-442e-47d2-9949-1bc44dc1f8ea'), HumanMessage(content='我喜欢香草冰淇凌', additional_kwargs={}, response_metadata={}, id='16f2591c-e7e6-4add-94c4-0cf01257a345'), AIMessage(content='好的', additional_kwargs={}, response_metadata={}, id='f287ed96-002a-410b-9c9e-1b344681a86b'), HumanMessage(content=' 2 + 2是多少？', additional_kwargs={}, response_metadata={}, id='1a6d761a-6279-4c50-849a-ea6440429efe'), AIMessage(content='4', additional_kwargs={}, response_metadata={}, id='124fa4a6-ef7e-4228-9285-6096e45589a8'), HumanMessage(content='谢谢', additional_kwargs={}, response_metadata={}, id='27d6e8aa-3fc6-4ab8-9ea8-9fd5f10b0947'), AIMessage(content='不客气!', additional_kwargs={}, response_metadata={}, id='2daa98fd-5f48-4449-8163-6efe80a5fab2'), HumanMessage(content='我昨天去游泳了?', additional_kwargs={}, response_metadata={}, id='bf5f7c76-84a2-4686-a8ac-a3d7a0b4b644'), AIMessage(content='真不错!', additional_kwargs={}, response_metadata={}, id='1be8438b-eb30-4828-9339-998d147432e2'), HumanMessage(content='你还记得我吗，我叫什么?', additional_kwargs={}, response_metadata={}, id='cf41e429-746b-4e42-8826-8f0585783644'), AIMessage(content='你叫张三。', additional_kwargs={}, response_metadata={'model_name': 'qwen-turbo', 'finish_reason': 'stop', 'request_id': 'c3651ce6-2a55-9bbe-afb1-9e278e1975f8', 'token_usage': {'input_tokens': 136, 'output_tokens': 5, 'total_tokens': 141, 'prompt_tokens_details': {'cached_tokens': 0}}}, id='run--6ab43797-416b-4fe9-83bc-80c58d44c2e6-0'), HumanMessage(content='你还记得我吗，我叫什么?', additional_kwargs={}, response_metadata={}, id='9d3ea89d-b32a-41ac-9370-32a7cf82cd42')]\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "AIMessage(content='我记得，你叫张三。', additional_kwargs={}, response_metadata={'model_name': 'qwen-turbo', 'finish_reason': 'stop', 'request_id': 'acf96138-a95f-936f-8dde-23b9a2f39cc8', 'token_usage': {'input_tokens': 160, 'output_tokens': 7, 'total_tokens': 167, 'prompt_tokens_details': {'cached_tokens': 0}}}, id='run--50c094ce-1674-4d4b-b198-6038fcef8536-0')"
      ]
     },
     "execution_count": 81,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "input_messages = input_messages + [HumanMessage(query)]\n",
    "output = app.invoke(\n",
    "    {\"messages\": input_messages, \"language\": language},\n",
    "    config,\n",
    ")\n",
    "# output[\"messages\"][-1].pretty_print()\n",
    "output[\"messages\"][-1]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "805a145a-b0d5-4bba-becf-54f19a95fd1a",
   "metadata": {},
   "source": [
    "### 第三次调用"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 82,
   "id": "9ed3fb0e-b379-4005-94d3-f11b7f870d1f",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[SystemMessage(content=\"you're a good assistant\", additional_kwargs={}, response_metadata={}, id='8f111324-0235-491d-a4e0-d8571abca61a'), HumanMessage(content='我喜欢香草冰淇凌', additional_kwargs={}, response_metadata={}, id='16f2591c-e7e6-4add-94c4-0cf01257a345'), AIMessage(content='好的', additional_kwargs={}, response_metadata={}, id='f287ed96-002a-410b-9c9e-1b344681a86b'), HumanMessage(content=' 2 + 2是多少？', additional_kwargs={}, response_metadata={}, id='1a6d761a-6279-4c50-849a-ea6440429efe'), AIMessage(content='4', additional_kwargs={}, response_metadata={}, id='124fa4a6-ef7e-4228-9285-6096e45589a8'), HumanMessage(content='谢谢', additional_kwargs={}, response_metadata={}, id='27d6e8aa-3fc6-4ab8-9ea8-9fd5f10b0947'), AIMessage(content='不客气!', additional_kwargs={}, response_metadata={}, id='2daa98fd-5f48-4449-8163-6efe80a5fab2'), HumanMessage(content='我昨天去游泳了?', additional_kwargs={}, response_metadata={}, id='bf5f7c76-84a2-4686-a8ac-a3d7a0b4b644'), AIMessage(content='真不错!', additional_kwargs={}, response_metadata={}, id='1be8438b-eb30-4828-9339-998d147432e2'), HumanMessage(content='你还记得我吗，我叫什么?', additional_kwargs={}, response_metadata={}, id='cf41e429-746b-4e42-8826-8f0585783644'), AIMessage(content='你叫张三。', additional_kwargs={}, response_metadata={'model_name': 'qwen-turbo', 'finish_reason': 'stop', 'request_id': 'c3651ce6-2a55-9bbe-afb1-9e278e1975f8', 'token_usage': {'input_tokens': 136, 'output_tokens': 5, 'total_tokens': 141, 'prompt_tokens_details': {'cached_tokens': 0}}}, id='run--6ab43797-416b-4fe9-83bc-80c58d44c2e6-0'), HumanMessage(content='你还记得我吗，我叫什么?', additional_kwargs={}, response_metadata={}, id='9d3ea89d-b32a-41ac-9370-32a7cf82cd42'), AIMessage(content='我记得，你叫张三。', additional_kwargs={}, response_metadata={'model_name': 'qwen-turbo', 'finish_reason': 'stop', 'request_id': 'acf96138-a95f-936f-8dde-23b9a2f39cc8', 'token_usage': {'input_tokens': 160, 'output_tokens': 7, 'total_tokens': 167, 'prompt_tokens_details': {'cached_tokens': 0}}}, 
id='run--50c094ce-1674-4d4b-b198-6038fcef8536-0'), HumanMessage(content='你还记得我吗，我叫什么?', additional_kwargs={}, response_metadata={}, id='72592fa0-1cfa-480f-8f06-2e4050ee1800')]\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "AIMessage(content='抱歉，我实际上并不具备记忆功能，也无法记住用户的个人信息。我是阿里巴巴集团旗下的通义实验室自主研发的超大规模语言模型，每次对话都是独立的。不过你可以告诉我你的名字，我会尽力帮助你！', additional_kwargs={}, response_metadata={'model_name': 'qwen-turbo', 'finish_reason': 'stop', 'request_id': '81780c1f-e88c-9aca-8d07-3a186150b319', 'token_usage': {'input_tokens': 167, 'output_tokens': 45, 'total_tokens': 212, 'prompt_tokens_details': {'cached_tokens': 0}}}, id='run--7260d14d-63ff-45c1-9575-baa5fa06199d-0')"
      ]
     },
     "execution_count": 82,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "input_messages = input_messages + [HumanMessage(query)]\n",
    "output = app.invoke(\n",
    "    {\"messages\": input_messages, \"language\": language},\n",
    "    config,\n",
    ")\n",
    "# output[\"messages\"][-1].pretty_print()\n",
    "output[\"messages\"][-1]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 83,
   "id": "09b30557-c236-4d00-860b-5a15ad173cf0",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[SystemMessage(content=\"you're a good assistant\", additional_kwargs={}, response_metadata={}, id='8f111324-0235-491d-a4e0-d8571abca61a'), HumanMessage(content='你还记得我吗，我叫什么?', additional_kwargs={}, response_metadata={}, id='72592fa0-1cfa-480f-8f06-2e4050ee1800'), AIMessage(content='抱歉，我实际上并不具备记忆功能，也无法记住用户的个人信息。我是阿里巴巴集团旗下的通义实验室自主研发的超大规模语言模型，每次对话都是独立的。不过你可以告诉我你的名字，我会尽力帮助你！', additional_kwargs={}, response_metadata={'model_name': 'qwen-turbo', 'finish_reason': 'stop', 'request_id': '81780c1f-e88c-9aca-8d07-3a186150b319', 'token_usage': {'input_tokens': 167, 'output_tokens': 45, 'total_tokens': 212, 'prompt_tokens_details': {'cached_tokens': 0}}}, id='run--7260d14d-63ff-45c1-9575-baa5fa06199d-0'), HumanMessage(content='我昨天干啥去了?', additional_kwargs={}, response_metadata={}, id='5f45b027-8301-4413-b0da-e9b6f6b7a87a')]\n",
      "==================================\u001b[1m Ai Message \u001b[0m==================================\n",
      "\n",
      "抱歉，我无法知道你昨天去了哪里或做了什么事情，因为我没有访问你的个人历史数据。如果你忘记了某些事情，可以尝试回想一下昨天的日程安排或者查看手机上的记录，比如短信、通话记录、社交媒体动态等。如果需要提醒或帮助规划未来的事情，我很乐意协助！\n"
     ]
    }
   ],
   "source": [
    "query = \"我昨天干啥去了?\"\n",
    "input_messages = input_messages + [HumanMessage(query)]\n",
    "output = app.invoke(\n",
    "    {\"messages\": input_messages, \"language\": language},\n",
    "    config,\n",
    ")\n",
    "output[\"messages\"][-1].pretty_print()\n",
    "# output[\"messages\"]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 84,
   "id": "551c7ca4-3eec-482d-9a3e-3f4d84886236",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[SystemMessage(content=\"you're a good assistant\", additional_kwargs={}, response_metadata={}, id='8f111324-0235-491d-a4e0-d8571abca61a'),\n",
       " HumanMessage(content='hi! 我是张三', additional_kwargs={}, response_metadata={}, id='8857dbba-c497-493d-a22d-b5eaf271aa9a'),\n",
       " AIMessage(content='你好!', additional_kwargs={}, response_metadata={}, id='7f3468f1-442e-47d2-9949-1bc44dc1f8ea'),\n",
       " HumanMessage(content='我喜欢香草冰淇凌', additional_kwargs={}, response_metadata={}, id='16f2591c-e7e6-4add-94c4-0cf01257a345'),\n",
       " AIMessage(content='好的', additional_kwargs={}, response_metadata={}, id='f287ed96-002a-410b-9c9e-1b344681a86b'),\n",
       " HumanMessage(content=' 2 + 2是多少？', additional_kwargs={}, response_metadata={}, id='1a6d761a-6279-4c50-849a-ea6440429efe'),\n",
       " AIMessage(content='4', additional_kwargs={}, response_metadata={}, id='124fa4a6-ef7e-4228-9285-6096e45589a8'),\n",
       " HumanMessage(content='谢谢', additional_kwargs={}, response_metadata={}, id='27d6e8aa-3fc6-4ab8-9ea8-9fd5f10b0947'),\n",
       " AIMessage(content='不客气!', additional_kwargs={}, response_metadata={}, id='2daa98fd-5f48-4449-8163-6efe80a5fab2'),\n",
       " HumanMessage(content='我昨天去游泳了?', additional_kwargs={}, response_metadata={}, id='bf5f7c76-84a2-4686-a8ac-a3d7a0b4b644'),\n",
       " AIMessage(content='真不错!', additional_kwargs={}, response_metadata={}, id='1be8438b-eb30-4828-9339-998d147432e2'),\n",
       " HumanMessage(content='你还记得我吗，我叫什么?', additional_kwargs={}, response_metadata={}, id='cf41e429-746b-4e42-8826-8f0585783644'),\n",
       " AIMessage(content='你叫张三。', additional_kwargs={}, response_metadata={'model_name': 'qwen-turbo', 'finish_reason': 'stop', 'request_id': 'c3651ce6-2a55-9bbe-afb1-9e278e1975f8', 'token_usage': {'input_tokens': 136, 'output_tokens': 5, 'total_tokens': 141, 'prompt_tokens_details': {'cached_tokens': 0}}}, id='run--6ab43797-416b-4fe9-83bc-80c58d44c2e6-0'),\n",
       " HumanMessage(content='你还记得我吗，我叫什么?', additional_kwargs={}, response_metadata={}, id='9d3ea89d-b32a-41ac-9370-32a7cf82cd42'),\n",
       " AIMessage(content='我记得，你叫张三。', additional_kwargs={}, response_metadata={'model_name': 'qwen-turbo', 'finish_reason': 'stop', 'request_id': 'acf96138-a95f-936f-8dde-23b9a2f39cc8', 'token_usage': {'input_tokens': 160, 'output_tokens': 7, 'total_tokens': 167, 'prompt_tokens_details': {'cached_tokens': 0}}}, id='run--50c094ce-1674-4d4b-b198-6038fcef8536-0'),\n",
       " HumanMessage(content='你还记得我吗，我叫什么?', additional_kwargs={}, response_metadata={}, id='72592fa0-1cfa-480f-8f06-2e4050ee1800'),\n",
       " AIMessage(content='抱歉，我实际上并不具备记忆功能，也无法记住用户的个人信息。我是阿里巴巴集团旗下的通义实验室自主研发的超大规模语言模型，每次对话都是独立的。不过你可以告诉我你的名字，我会尽力帮助你！', additional_kwargs={}, response_metadata={'model_name': 'qwen-turbo', 'finish_reason': 'stop', 'request_id': '81780c1f-e88c-9aca-8d07-3a186150b319', 'token_usage': {'input_tokens': 167, 'output_tokens': 45, 'total_tokens': 212, 'prompt_tokens_details': {'cached_tokens': 0}}}, id='run--7260d14d-63ff-45c1-9575-baa5fa06199d-0'),\n",
       " HumanMessage(content='我昨天干啥去了?', additional_kwargs={}, response_metadata={}, id='5f45b027-8301-4413-b0da-e9b6f6b7a87a'),\n",
       " AIMessage(content='抱歉，我无法知道你昨天去了哪里或做了什么事情，因为我没有访问你的个人历史数据。如果你忘记了某些事情，可以尝试回想一下昨天的日程安排或者查看手机上的记录，比如短信、通话记录、社交媒体动态等。如果需要提醒或帮助规划未来的事情，我很乐意协助！', additional_kwargs={}, response_metadata={'model_name': 'qwen-turbo', 'finish_reason': 'stop', 'request_id': 'c2ae9990-941f-9d6d-ab8d-45bb9737f2d0', 'token_usage': {'input_tokens': 111, 'output_tokens': 63, 'total_tokens': 174, 'prompt_tokens_details': {'cached_tokens': 0}}}, id='run--c2a2fff5-9d1f-4ad7-962f-a2709ba22173-0')]"
      ]
     },
     "execution_count": 84,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "output[\"messages\"]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fde85fc3-39c0-42df-89aa-926c9a31032b",
   "metadata": {},
   "source": [
    "## 流式传输\n",
    "\n",
    "我们现在已经有了一个可以正常运行的聊天机器人。然而，对于聊天机器人应用来说，一个非常重要的用户体验（UX）考虑因素是**流式传输（streaming）**。  \n",
    "由于大语言模型（LLMs）有时需要一定时间才能生成响应，为了提升用户体验，大多数应用程序会采用一种方法：**逐个 token 地返回生成内容**。这样可以让用户看到逐步生成的结果。\n",
    "\n",
    "实际上，实现这一点非常简单！\n",
    "\n",
    "默认情况下，在我们的 LangGraph 应用中，`.stream` 会流式传输整个应用的步骤 —— 在这个例子中，就是模型响应这一个步骤。  \n",
    "通过设置 `stream_mode=\"messages\"`，我们可以改为**逐条消息地流式传输输出 tokens**："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 86,
   "id": "8a18821d-69f5-4767-94b0-e511a70f9f41",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[HumanMessage(content='嗨，我是托德，请告诉我一个笑话.', additional_kwargs={}, response_metadata={}, id='aa9bc15b-a1e1-41b7-810d-bbb4a1d1ae5a')]\n",
      "哎|哟|我的|老铁托德|！给你整一个|笑话：有一天啊|，一只北极熊|走进酒吧，坐下|之后跟服务员说|“给我来杯|可乐”。服务员|就问它：“|你咋在这呢|？这不是北极熊|待的地儿啊|！”北极熊指|指自己说：“|我这不是脱光|了嘛，你看|我像不像人|？”哈哈，这|笑话咋样，|笑不笑？||"
     ]
    }
   ],
   "source": [
    "config = {\"configurable\": {\"thread_id\": \"abc789\"}}\n",
    "query = \"嗨，我是托德，请告诉我一个笑话.\"\n",
    "language = \"东北话\"\n",
    "\n",
    "input_messages = [HumanMessage(query)]\n",
    "for chunk, metadata in app.stream(\n",
    "    {\"messages\": input_messages, \"language\": language},\n",
    "    config,\n",
    "    stream_mode=\"messages\",\n",
    "):\n",
    "    if isinstance(chunk, AIMessage):  # Filter to just model responses\n",
    "        print(chunk.content, end=\"|\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e5a93f17-b427-4096-910f-50b9b807218a",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
