{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "454012e1",
   "metadata": {},
   "source": [
    "## 1. What Is a \"Chain\" in LangChain?\n",
    "\n",
    "In simple terms, a chain connects multiple processing steps so that data flows from the first step to the last, like an assembly line in a factory.\n",
    "\n",
    "An everyday analogy: imagine making a cup of coffee:\n",
    "\n",
    "1. Grind the beans → 2. Brew → 3. Add sugar → 4. Pour into a cup\n",
    "\n",
    "In LangChain, a chain is the same kind of assembly line:\n",
    "\n",
    "1. Prepare the prompt → 2. Call the LLM → 3. Parse the result → 4. Return the data\n",
    "\n",
    "Chains also support advanced features such as **async invocation**, **streaming**, **batching**, and **error handling with retries**."
   ]
  },
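  {
   "cell_type": "markdown",
   "id": "a1f0c001",
   "metadata": {},
   "source": [
    "The assembly line above can be sketched with plain Python functions, no LangChain needed (all names here are illustrative):\n",
    "\n",
    "```python\n",
    "def prepare_prompt(data):\n",
    "    # Step 1: format the prompt\n",
    "    return 'Analyze: ' + data['text']\n",
    "\n",
    "def fake_llm(prompt):\n",
    "    # Step 2: stand-in for a real LLM call\n",
    "    return 'Response to [' + prompt + ']'\n",
    "\n",
    "def parse_result(response):\n",
    "    # Step 3: wrap the raw output\n",
    "    return {'result': response}\n",
    "\n",
    "def run_pipeline(data):\n",
    "    # Data flows through the steps in order, like the coffee line above\n",
    "    return parse_result(fake_llm(prepare_prompt(data)))\n",
    "\n",
    "print(run_pipeline({'text': 'project status'}))\n",
    "```\n"
   ]
  },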
  {
   "cell_type": "markdown",
   "id": "7520c37d",
   "metadata": {},
   "source": [
    "## 2. LCEL Syntax\n",
    "\n",
    "### 2.1 Python Magic Method Basics\n",
    "\n",
    "In Python, using the | operator actually calls a magic method on the object:\n",
    "\n",
    "```python\n",
    "# When you write a | b, Python actually executes:\n",
    "result = a.__or__(b)\n",
    "\n",
    "# If a has no __or__ method, Python falls back to:\n",
    "result = b.__ror__(a)  # ror = reflected (right-hand) or\n",
    "```\n",
    "\n",
    "**How LangChain implements this**\n",
    "\n",
    "All LangChain Runnable objects (including prompts, models, and parsers) implement these magic methods, conceptually like this:\n",
    "\n",
    "```python\n",
    "class BaseRunnable:\n",
    "    def __or__(self, other):\n",
    "        \"\"\"Implements self | other\"\"\"\n",
    "        return RunnableSequence(first=self, last=other)\n",
    "    \n",
    "    def __ror__(self, other):\n",
    "        \"\"\"Implements other | self (when other has no __or__)\"\"\"\n",
    "        return RunnableSequence(first=other, last=self)\n",
    "```\n",
    "\n",
    "**The effect in practice**\n",
    "\n",
    "```python\n",
    "# These three forms are equivalent:\n",
    "chain1 = prompt | model | parser\n",
    "chain2 = prompt.__or__(model).__or__(parser)\n",
    "chain3 = RunnableSequence(first=prompt, last=RunnableSequence(first=model, last=parser))\n",
    "```\n"
   ]
  },
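  {
   "cell_type": "markdown",
   "id": "a1f0c002",
   "metadata": {},
   "source": [
    "The equivalence above can be checked with a tiny standalone class (illustrative, not LangChain code):\n",
    "\n",
    "```python\n",
    "class Step:\n",
    "    def __init__(self, fn, name):\n",
    "        self.fn = fn\n",
    "        self.name = name\n",
    "\n",
    "    def __or__(self, other):\n",
    "        # Compose two steps into one, left to right\n",
    "        return Step(lambda x: other.fn(self.fn(x)), self.name + '|' + other.name)\n",
    "\n",
    "    def invoke(self, x):\n",
    "        return self.fn(x)\n",
    "\n",
    "prompt = Step(lambda d: 'Analyze: ' + d, 'prompt')\n",
    "model = Step(lambda s: s.upper(), 'model')\n",
    "parser = Step(lambda s: {'out': s}, 'parser')\n",
    "\n",
    "chain1 = prompt | model | parser\n",
    "chain2 = prompt.__or__(model).__or__(parser)\n",
    "\n",
    "print(chain1.invoke('status'))  # {'out': 'ANALYZE: STATUS'}\n",
    "print(chain1.name)              # prompt|model|parser\n",
    "```\n"
   ]
  },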
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3b9f5737",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Pseudocode implementation of prompt.__or__(model), fleshed out with\n",
    "# minimal stubs so the cell actually runs.\n",
    "\n",
    "class BaseRunnable:\n",
    "    \"\"\"Minimal stand-in for LangChain's Runnable base class.\"\"\"\n",
    "    pass\n",
    "\n",
    "class SomeLLM(BaseRunnable):\n",
    "    \"\"\"Stub model: echoes its input instead of calling a real LLM.\"\"\"\n",
    "    def invoke(self, prompt_text):\n",
    "        return f\"LLM response to: {prompt_text}\"\n",
    "\n",
    "class PromptTemplate(BaseRunnable):\n",
    "    def __init__(self, template, input_variables):\n",
    "        self.template = template\n",
    "        self.input_variables = input_variables\n",
    "    \n",
    "    def __or__(self, other):\n",
    "        \"\"\"\n",
    "        The core logic behind prompt | model\n",
    "        \"\"\"\n",
    "        # Create a new sequence chain\n",
    "        return RunnableSequence(\n",
    "            first=self,      # the current prompt\n",
    "            last=other       # the model being piped in\n",
    "        )\n",
    "    \n",
    "    def invoke(self, inputs):\n",
    "        \"\"\"Format the prompt\"\"\"\n",
    "        return self.template.format(**inputs)\n",
    "\n",
    "class RunnableSequence(BaseRunnable):\n",
    "    def __init__(self, first, last):\n",
    "        self.first = first    # first component (e.g. prompt)\n",
    "        self.last = last      # second component (e.g. model)\n",
    "    \n",
    "    def invoke(self, inputs):\n",
    "        \"\"\"\n",
    "        The core logic for executing a sequence chain\n",
    "        \"\"\"\n",
    "        # Step 1: run the first component\n",
    "        intermediate_result = self.first.invoke(inputs)\n",
    "        \n",
    "        # Step 2: feed its output into the second component\n",
    "        final_result = self.last.invoke(intermediate_result)\n",
    "        \n",
    "        return final_result\n",
    "    \n",
    "    def __or__(self, other):\n",
    "        \"\"\"\n",
    "        Supports further chaining: (prompt | model) | parser\n",
    "        \"\"\"\n",
    "        return RunnableSequence(\n",
    "            first=self,      # the current sequence chain\n",
    "            last=other       # the new component\n",
    "        )\n",
    "\n",
    "# Internal flow of the usage example\n",
    "prompt = PromptTemplate(\"Analyze: {text}\", [\"text\"])\n",
    "model = SomeLLM()\n",
    "\n",
    "# Executing prompt | model:\n",
    "chain = prompt.__or__(model)  # returns RunnableSequence(first=prompt, last=model)\n",
    "\n",
    "# Executing chain.invoke({\"text\": \"project status\"}):\n",
    "# 1. Calls RunnableSequence.invoke({\"text\": \"project status\"})\n",
    "# 2. intermediate_result = prompt.invoke({\"text\": \"project status\"})  # -> \"Analyze: project status\"\n",
    "# 3. final_result = model.invoke(\"Analyze: project status\")  # -> the LLM's response\n",
    "# 4. Returns final_result\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ef94b30a",
   "metadata": {},
   "outputs": [],
   "source": [
    "# The full three-step chain: prompt | model | parser\n",
    "# (reuses PromptTemplate, SomeLLM and RunnableSequence from the cell above)\n",
    "\n",
    "class SomeParser(BaseRunnable):\n",
    "    \"\"\"Stub parser: wraps the raw LLM output in a dict.\"\"\"\n",
    "    def invoke(self, text):\n",
    "        return {\"analysis\": text}\n",
    "\n",
    "def create_three_step_chain():\n",
    "    prompt = PromptTemplate(\"Analyze: {text}\", [\"text\"])\n",
    "    model = SomeLLM()\n",
    "    parser = SomeParser()\n",
    "    \n",
    "    # Step 1: prompt | model\n",
    "    step1_chain = prompt.__or__(model)\n",
    "    # Equivalent to RunnableSequence(first=prompt, last=model)\n",
    "    \n",
    "    # Step 2: (prompt | model) | parser\n",
    "    final_chain = step1_chain.__or__(parser)\n",
    "    # Equivalent to RunnableSequence(first=step1_chain, last=parser)\n",
    "    \n",
    "    return final_chain\n",
    "\n",
    "# Internal flow at execution time\n",
    "def execute_chain(inputs):\n",
    "    # inputs = {\"text\": \"project status\"}\n",
    "    \n",
    "    # Outer RunnableSequence.invoke():\n",
    "    # first = RunnableSequence(prompt, model)\n",
    "    # last = parser\n",
    "    \n",
    "    # Running first.invoke(inputs) triggers the inner\n",
    "    # RunnableSequence.invoke() with first = prompt, last = model\n",
    "    \n",
    "    # prompt.invoke({\"text\": \"project status\"}) produces:\n",
    "    prompt_result = \"Analyze: project status\"\n",
    "    \n",
    "    # model.invoke(\"Analyze: project status\") produces:\n",
    "    model_result = \"This is an analysis of the project status...\"\n",
    "    \n",
    "    # The outer sequence then runs last.invoke(model_result),\n",
    "    # i.e. parser.invoke(\"This is an analysis of the project status...\"):\n",
    "    final_result = {\"analysis\": \"The project is on track\"}\n",
    "    \n",
    "    return final_result\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e02ba0f7",
   "metadata": {},
   "source": [
    "### 2.2 Why Is `__ror__` Needed?\n",
    "\n",
    "It lets objects with no `__or__` of their own, such as plain Python functions, still participate in a chain:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4c1ffa03",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Suppose you have a custom function you want to add to a chain\n",
    "def custom_preprocessor(text):\n",
    "    return text.upper()\n",
    "\n",
    "# custom_preprocessor is a plain function with no __or__ method,\n",
    "# but prompt has __ror__, so this still works:\n",
    "chain = custom_preprocessor | prompt | model\n",
    "\n",
    "# Python will call:\n",
    "# prompt.__ror__(custom_preprocessor)\n"
   ]
  },
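  {
   "cell_type": "markdown",
   "id": "a1f0c003",
   "metadata": {},
   "source": [
    "A self-contained sketch of this fallback (illustrative, not LangChain code):\n",
    "\n",
    "```python\n",
    "class Pipe:\n",
    "    def __init__(self, fn):\n",
    "        self.fn = fn\n",
    "\n",
    "    def __or__(self, other):\n",
    "        return Pipe(lambda x: other.fn(self.fn(x)))\n",
    "\n",
    "    def __ror__(self, other):\n",
    "        # Reached when the left operand (a plain function) has no __or__\n",
    "        return Pipe(lambda x: self.fn(other(x)))\n",
    "\n",
    "    def invoke(self, x):\n",
    "        return self.fn(x)\n",
    "\n",
    "def custom_preprocessor(text):  # an ordinary function\n",
    "    return text.upper()\n",
    "\n",
    "shout = Pipe(lambda s: s + '!')\n",
    "\n",
    "# Functions do not implement __or__, so Python calls shout.__ror__(...)\n",
    "chain = custom_preprocessor | shout\n",
    "print(chain.invoke('hello'))  # HELLO!\n",
    "```\n"
   ]
  },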
  {
   "cell_type": "markdown",
   "id": "bf6a32f2",
   "metadata": {},
   "source": [
    "## 3. The Runnable Interface\n",
    "\n",
    "A Runnable is a unit of work that can be invoked, executed asynchronously, batched, streamed, and composed in parallel. Its contract is defined by its input/output schema attributes and the invoke family of methods.\n",
    "\n",
    "1. Core execution methods:\n",
    "\n",
    "```python\n",
    "# Synchronous execution\n",
    "result = runnable.invoke(input)\n",
    "\n",
    "# Asynchronous execution\n",
    "result = await runnable.ainvoke(input)\n",
    "\n",
    "# Batch execution\n",
    "results = runnable.batch([input1, input2, input3])\n",
    "results = await runnable.abatch([input1, input2, input3])\n",
    "\n",
    "# Streaming execution\n",
    "for chunk in runnable.stream(input):\n",
    "    print(chunk)\n",
    "\n",
    "async for chunk in runnable.astream(input):\n",
    "    print(chunk)\n",
    "```\n",
    "\n",
    "Input and output types\n",
    "\n",
    "```python\n",
    "# Runnable is generic over its input and output types\n",
    "class MyRunnable(Runnable[Dict, str]):  # input Dict, output str\n",
    "    def invoke(self, input: Dict) -> str:\n",
    "        return f\"Processed: {input}\"\n",
    "```\n",
    "\n",
    "2. Chain composition\n",
    "\n",
    "The pipe operator |\n",
    "\n",
    "```python\n",
    "# Sequential composition\n",
    "chain = prompt | model | parser\n",
    "\n",
    "# Equivalent to\n",
    "chain = RunnableSequence(first=prompt, last=RunnableSequence(first=model, last=parser))\n",
    "```"
   ]
  },
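  {
   "cell_type": "markdown",
   "id": "a1f0c004",
   "metadata": {},
   "source": [
    "The method family above can be mimicked with a minimal self-contained class (illustrative; real LangChain Runnables have many more features):\n",
    "\n",
    "```python\n",
    "import asyncio\n",
    "\n",
    "class MiniRunnable:\n",
    "    def __init__(self, fn):\n",
    "        self.fn = fn\n",
    "\n",
    "    def invoke(self, input):\n",
    "        return self.fn(input)\n",
    "\n",
    "    async def ainvoke(self, input):\n",
    "        # Naive async wrapper around the sync path\n",
    "        return self.invoke(input)\n",
    "\n",
    "    def batch(self, inputs):\n",
    "        return [self.invoke(i) for i in inputs]\n",
    "\n",
    "    def stream(self, input):\n",
    "        # Yield the result in chunks (here: one character at a time)\n",
    "        for ch in str(self.invoke(input)):\n",
    "            yield ch\n",
    "\n",
    "upper = MiniRunnable(lambda s: s.upper())\n",
    "\n",
    "print(upper.invoke('hi'))                    # HI\n",
    "print(upper.batch(['a', 'b']))               # ['A', 'B']\n",
    "print(''.join(upper.stream('ok')))           # OK\n",
    "print(asyncio.run(upper.ainvoke('async')))   # ASYNC\n",
    "```\n"
   ]
  },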
  {
   "cell_type": "markdown",
   "id": "355108b2",
   "metadata": {},
   "source": [
    "### 3.1 The Role of the schema Attributes\n",
    "\n",
    "#### Every Runnable has these two attributes\n",
    "\n",
    "```python\n",
    "runnable.input_schema   # describes the expected input format\n",
    "runnable.output_schema  # describes the output data format\n",
    "```\n",
    "\n",
    "#### Main uses\n",
    "- Type checking: validate that input data matches the expected format\n",
    "- Documentation: auto-generate API docs\n",
    "- IDE support: code completion and type hints\n",
    "- Debugging: quickly pinpoint data-format problems\n",
    "\n",
    "### 3.2 The Schema Data Structure\n",
    "A schema is typically expressed as JSON Schema or as a Pydantic model:\n",
    "\n",
    "```python\n",
    "# JSON Schema example\n",
    "{\n",
    "    \"type\": \"object\",\n",
    "    \"properties\": {\n",
    "        \"text\": {\"type\": \"string\"},\n",
    "        \"temperature\": {\"type\": \"number\", \"minimum\": 0, \"maximum\": 2}\n",
    "    },\n",
    "    \"required\": [\"text\"]\n",
    "}\n",
    "\n",
    "# Pydantic model equivalent\n",
    "class InputModel(BaseModel):\n",
    "    text: str\n",
    "    temperature: float = 0.7\n",
    "```\n",
    "\n",
    "### 3.3 A Concrete Example: PromptTemplate's Schema\n"
   ]
  },
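  {
   "cell_type": "markdown",
   "id": "a1f0c005",
   "metadata": {},
   "source": [
    "A small sketch of how such a schema enables input validation (hand-rolled for illustration; real Runnables delegate this to Pydantic / JSON Schema):\n",
    "\n",
    "```python\n",
    "schema = {\n",
    "    'type': 'object',\n",
    "    'properties': {\n",
    "        'text': {'type': 'string'},\n",
    "        'temperature': {'type': 'number', 'minimum': 0, 'maximum': 2},\n",
    "    },\n",
    "    'required': ['text'],\n",
    "}\n",
    "\n",
    "TYPES = {'string': str, 'number': (int, float)}\n",
    "\n",
    "def validate(data, schema):\n",
    "    \"\"\"Check required keys and basic types/ranges against the schema.\"\"\"\n",
    "    for key in schema['required']:\n",
    "        if key not in data:\n",
    "            return False, f'missing required field: {key}'\n",
    "    for key, rule in schema['properties'].items():\n",
    "        if key not in data:\n",
    "            continue\n",
    "        if not isinstance(data[key], TYPES[rule['type']]):\n",
    "            return False, f'{key}: expected {rule[\"type\"]}'\n",
    "        if 'minimum' in rule and data[key] < rule['minimum']:\n",
    "            return False, f'{key}: below minimum'\n",
    "        if 'maximum' in rule and data[key] > rule['maximum']:\n",
    "            return False, f'{key}: above maximum'\n",
    "    return True, 'ok'\n",
    "\n",
    "print(validate({'text': 'hi', 'temperature': 0.7}, schema))  # (True, 'ok')\n",
    "print(validate({'temperature': 1}, schema))                  # (False, 'missing required field: text')\n",
    "```\n"
   ]
  },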
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6c76bb00",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "输入 Schema:\n",
      "{'properties': {'age': {'title': 'Age', 'type': 'string'}, 'name': {'title': 'Name', 'type': 'string'}}, 'required': ['age', 'name'], 'title': 'PromptInput', 'type': 'object'}\n",
      "输出 Schema:\n",
      "{'$defs': {'AIMessage': {'additionalProperties': True, 'description': 'Message from an AI.\\n\\nAIMessage is returned from a chat model as a response to a prompt.\\n\\nThis message represents the output of the model and consists of both\\nthe raw output as returned by the model together standardized fields\\n(e.g., tool calls, usage metadata) added by the LangChain framework.', 'properties': {'content': {'anyOf': [{'type': 'string'}, {'items': {'anyOf': [{'type': 'string'}, {'additionalProperties': True, 'type': 'object'}]}, 'type': 'array'}], 'title': 'Content'}, 'additional_kwargs': {'additionalProperties': True, 'title': 'Additional Kwargs', 'type': 'object'}, 'response_metadata': {'additionalProperties': True, 'title': 'Response Metadata', 'type': 'object'}, 'type': {'const': 'ai', 'default': 'ai', 'title': 'Type', 'type': 'string'}, 'name': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'default': None, 'title': 'Name'}, 'id': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'default': None, 'title': 'Id'}, 'example': {'default': False, 'title': 'Example', 'type': 'boolean'}, 'tool_calls': {'default': [], 'items': {'$ref': '#/$defs/ToolCall'}, 'title': 'Tool Calls', 'type': 'array'}, 'invalid_tool_calls': {'default': [], 'items': {'$ref': '#/$defs/InvalidToolCall'}, 'title': 'Invalid Tool Calls', 'type': 'array'}, 'usage_metadata': {'anyOf': [{'$ref': '#/$defs/UsageMetadata'}, {'type': 'null'}], 'default': None}}, 'required': ['content'], 'title': 'AIMessage', 'type': 'object'}, 'AIMessageChunk': {'additionalProperties': True, 'description': 'Message chunk from an AI.', 'properties': {'content': {'anyOf': [{'type': 'string'}, {'items': {'anyOf': [{'type': 'string'}, {'additionalProperties': True, 'type': 'object'}]}, 'type': 'array'}], 'title': 'Content'}, 'additional_kwargs': {'additionalProperties': True, 'title': 'Additional Kwargs', 'type': 'object'}, 'response_metadata': {'additionalProperties': True, 'title': 'Response Metadata', 'type': 
'object'}, 'type': {'const': 'AIMessageChunk', 'default': 'AIMessageChunk', 'title': 'Type', 'type': 'string'}, 'name': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'default': None, 'title': 'Name'}, 'id': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'default': None, 'title': 'Id'}, 'example': {'default': False, 'title': 'Example', 'type': 'boolean'}, 'tool_calls': {'default': [], 'items': {'$ref': '#/$defs/ToolCall'}, 'title': 'Tool Calls', 'type': 'array'}, 'invalid_tool_calls': {'default': [], 'items': {'$ref': '#/$defs/InvalidToolCall'}, 'title': 'Invalid Tool Calls', 'type': 'array'}, 'usage_metadata': {'anyOf': [{'$ref': '#/$defs/UsageMetadata'}, {'type': 'null'}], 'default': None}, 'tool_call_chunks': {'default': [], 'items': {'$ref': '#/$defs/ToolCallChunk'}, 'title': 'Tool Call Chunks', 'type': 'array'}}, 'required': ['content'], 'title': 'AIMessageChunk', 'type': 'object'}, 'ChatMessage': {'additionalProperties': True, 'description': 'Message that can be assigned an arbitrary speaker (i.e. 
role).', 'properties': {'content': {'anyOf': [{'type': 'string'}, {'items': {'anyOf': [{'type': 'string'}, {'additionalProperties': True, 'type': 'object'}]}, 'type': 'array'}], 'title': 'Content'}, 'additional_kwargs': {'additionalProperties': True, 'title': 'Additional Kwargs', 'type': 'object'}, 'response_metadata': {'additionalProperties': True, 'title': 'Response Metadata', 'type': 'object'}, 'type': {'const': 'chat', 'default': 'chat', 'title': 'Type', 'type': 'string'}, 'name': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'default': None, 'title': 'Name'}, 'id': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'default': None, 'title': 'Id'}, 'role': {'title': 'Role', 'type': 'string'}}, 'required': ['content', 'role'], 'title': 'ChatMessage', 'type': 'object'}, 'ChatMessageChunk': {'additionalProperties': True, 'description': 'Chat Message chunk.', 'properties': {'content': {'anyOf': [{'type': 'string'}, {'items': {'anyOf': [{'type': 'string'}, {'additionalProperties': True, 'type': 'object'}]}, 'type': 'array'}], 'title': 'Content'}, 'additional_kwargs': {'additionalProperties': True, 'title': 'Additional Kwargs', 'type': 'object'}, 'response_metadata': {'additionalProperties': True, 'title': 'Response Metadata', 'type': 'object'}, 'type': {'const': 'ChatMessageChunk', 'default': 'ChatMessageChunk', 'title': 'Type', 'type': 'string'}, 'name': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'default': None, 'title': 'Name'}, 'id': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'default': None, 'title': 'Id'}, 'role': {'title': 'Role', 'type': 'string'}}, 'required': ['content', 'role'], 'title': 'ChatMessageChunk', 'type': 'object'}, 'ChatPromptValueConcrete': {'description': 'Chat prompt value which explicitly lists out the message types it accepts.\\n\\nFor use in external schemas.', 'properties': {'messages': {'items': {'oneOf': [{'$ref': '#/$defs/AIMessage'}, {'$ref': '#/$defs/HumanMessage'}, {'$ref': '#/$defs/ChatMessage'}, {'$ref': 
'#/$defs/SystemMessage'}, {'$ref': '#/$defs/FunctionMessage'}, {'$ref': '#/$defs/ToolMessage'}, {'$ref': '#/$defs/AIMessageChunk'}, {'$ref': '#/$defs/HumanMessageChunk'}, {'$ref': '#/$defs/ChatMessageChunk'}, {'$ref': '#/$defs/SystemMessageChunk'}, {'$ref': '#/$defs/FunctionMessageChunk'}, {'$ref': '#/$defs/ToolMessageChunk'}]}, 'title': 'Messages', 'type': 'array'}, 'type': {'const': 'ChatPromptValueConcrete', 'default': 'ChatPromptValueConcrete', 'title': 'Type', 'type': 'string'}}, 'required': ['messages'], 'title': 'ChatPromptValueConcrete', 'type': 'object'}, 'FunctionMessage': {'additionalProperties': True, 'description': 'Message for passing the result of executing a tool back to a model.\\n\\nFunctionMessage are an older version of the ToolMessage schema, and\\ndo not contain the tool_call_id field.\\n\\nThe tool_call_id field is used to associate the tool call request with the\\ntool call response. This is useful in situations where a chat model is able\\nto request multiple tool calls in parallel.', 'properties': {'content': {'anyOf': [{'type': 'string'}, {'items': {'anyOf': [{'type': 'string'}, {'additionalProperties': True, 'type': 'object'}]}, 'type': 'array'}], 'title': 'Content'}, 'additional_kwargs': {'additionalProperties': True, 'title': 'Additional Kwargs', 'type': 'object'}, 'response_metadata': {'additionalProperties': True, 'title': 'Response Metadata', 'type': 'object'}, 'type': {'const': 'function', 'default': 'function', 'title': 'Type', 'type': 'string'}, 'name': {'title': 'Name', 'type': 'string'}, 'id': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'default': None, 'title': 'Id'}}, 'required': ['content', 'name'], 'title': 'FunctionMessage', 'type': 'object'}, 'FunctionMessageChunk': {'additionalProperties': True, 'description': 'Function Message chunk.', 'properties': {'content': {'anyOf': [{'type': 'string'}, {'items': {'anyOf': [{'type': 'string'}, {'additionalProperties': True, 'type': 'object'}]}, 'type': 'array'}], 'title': 
'Content'}, 'additional_kwargs': {'additionalProperties': True, 'title': 'Additional Kwargs', 'type': 'object'}, 'response_metadata': {'additionalProperties': True, 'title': 'Response Metadata', 'type': 'object'}, 'type': {'const': 'FunctionMessageChunk', 'default': 'FunctionMessageChunk', 'title': 'Type', 'type': 'string'}, 'name': {'title': 'Name', 'type': 'string'}, 'id': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'default': None, 'title': 'Id'}}, 'required': ['content', 'name'], 'title': 'FunctionMessageChunk', 'type': 'object'}, 'HumanMessage': {'additionalProperties': True, 'description': 'Message from a human.\\n\\nHumanMessages are messages that are passed in from a human to the model.\\n\\nExample:\\n\\n    .. code-block:: python\\n\\n        from langchain_core.messages import HumanMessage, SystemMessage\\n\\n        messages = [\\n            SystemMessage(\\n                content=\"You are a helpful assistant! Your name is Bob.\"\\n            ),\\n            HumanMessage(\\n                content=\"What is your name?\"\\n            )\\n        ]\\n\\n        # Instantiate a chat model and invoke it with the messages\\n        model = ...\\n        print(model.invoke(messages))', 'properties': {'content': {'anyOf': [{'type': 'string'}, {'items': {'anyOf': [{'type': 'string'}, {'additionalProperties': True, 'type': 'object'}]}, 'type': 'array'}], 'title': 'Content'}, 'additional_kwargs': {'additionalProperties': True, 'title': 'Additional Kwargs', 'type': 'object'}, 'response_metadata': {'additionalProperties': True, 'title': 'Response Metadata', 'type': 'object'}, 'type': {'const': 'human', 'default': 'human', 'title': 'Type', 'type': 'string'}, 'name': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'default': None, 'title': 'Name'}, 'id': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'default': None, 'title': 'Id'}, 'example': {'default': False, 'title': 'Example', 'type': 'boolean'}}, 'required': ['content'], 'title': 
'HumanMessage', 'type': 'object'}, 'HumanMessageChunk': {'additionalProperties': True, 'description': 'Human Message chunk.', 'properties': {'content': {'anyOf': [{'type': 'string'}, {'items': {'anyOf': [{'type': 'string'}, {'additionalProperties': True, 'type': 'object'}]}, 'type': 'array'}], 'title': 'Content'}, 'additional_kwargs': {'additionalProperties': True, 'title': 'Additional Kwargs', 'type': 'object'}, 'response_metadata': {'additionalProperties': True, 'title': 'Response Metadata', 'type': 'object'}, 'type': {'const': 'HumanMessageChunk', 'default': 'HumanMessageChunk', 'title': 'Type', 'type': 'string'}, 'name': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'default': None, 'title': 'Name'}, 'id': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'default': None, 'title': 'Id'}, 'example': {'default': False, 'title': 'Example', 'type': 'boolean'}}, 'required': ['content'], 'title': 'HumanMessageChunk', 'type': 'object'}, 'InputTokenDetails': {'description': 'Breakdown of input token counts.\\n\\nDoes *not* need to sum to full input token count. Does *not* need to have all keys.\\n\\nExample:\\n\\n    .. code-block:: python\\n\\n        {\\n            \"audio\": 10,\\n            \"cache_creation\": 200,\\n            \"cache_read\": 100,\\n        }\\n\\n.. 
versionadded:: 0.3.9\\n\\nMay also hold extra provider-specific keys.', 'properties': {'audio': {'title': 'Audio', 'type': 'integer'}, 'cache_creation': {'title': 'Cache Creation', 'type': 'integer'}, 'cache_read': {'title': 'Cache Read', 'type': 'integer'}}, 'title': 'InputTokenDetails', 'type': 'object'}, 'InvalidToolCall': {'description': 'Allowance for errors made by LLM.\\n\\nHere we add an `error` key to surface errors made during generation\\n(e.g., invalid JSON arguments.)', 'properties': {'name': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'title': 'Name'}, 'args': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'title': 'Args'}, 'id': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'title': 'Id'}, 'error': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'title': 'Error'}, 'type': {'const': 'invalid_tool_call', 'title': 'Type', 'type': 'string'}}, 'required': ['name', 'args', 'id', 'error'], 'title': 'InvalidToolCall', 'type': 'object'}, 'OutputTokenDetails': {'description': 'Breakdown of output token counts.\\n\\nDoes *not* need to sum to full output token count. Does *not* need to have all keys.\\n\\nExample:\\n\\n    .. code-block:: python\\n\\n        {\\n            \"audio\": 10,\\n            \"reasoning\": 200,\\n        }\\n\\n.. versionadded:: 0.3.9', 'properties': {'audio': {'title': 'Audio', 'type': 'integer'}, 'reasoning': {'title': 'Reasoning', 'type': 'integer'}}, 'title': 'OutputTokenDetails', 'type': 'object'}, 'StringPromptValue': {'description': 'String prompt value.', 'properties': {'text': {'title': 'Text', 'type': 'string'}, 'type': {'const': 'StringPromptValue', 'default': 'StringPromptValue', 'title': 'Type', 'type': 'string'}}, 'required': ['text'], 'title': 'StringPromptValue', 'type': 'object'}, 'SystemMessage': {'additionalProperties': True, 'description': 'Message for priming AI behavior.\\n\\nThe system message is usually passed in as the first of a sequence\\nof input messages.\\n\\nExample:\\n\\n    .. 
code-block:: python\\n\\n        from langchain_core.messages import HumanMessage, SystemMessage\\n\\n        messages = [\\n            SystemMessage(\\n                content=\"You are a helpful assistant! Your name is Bob.\"\\n            ),\\n            HumanMessage(\\n                content=\"What is your name?\"\\n            )\\n        ]\\n\\n        # Define a chat model and invoke it with the messages\\n        print(model.invoke(messages))', 'properties': {'content': {'anyOf': [{'type': 'string'}, {'items': {'anyOf': [{'type': 'string'}, {'additionalProperties': True, 'type': 'object'}]}, 'type': 'array'}], 'title': 'Content'}, 'additional_kwargs': {'additionalProperties': True, 'title': 'Additional Kwargs', 'type': 'object'}, 'response_metadata': {'additionalProperties': True, 'title': 'Response Metadata', 'type': 'object'}, 'type': {'const': 'system', 'default': 'system', 'title': 'Type', 'type': 'string'}, 'name': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'default': None, 'title': 'Name'}, 'id': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'default': None, 'title': 'Id'}}, 'required': ['content'], 'title': 'SystemMessage', 'type': 'object'}, 'SystemMessageChunk': {'additionalProperties': True, 'description': 'System Message chunk.', 'properties': {'content': {'anyOf': [{'type': 'string'}, {'items': {'anyOf': [{'type': 'string'}, {'additionalProperties': True, 'type': 'object'}]}, 'type': 'array'}], 'title': 'Content'}, 'additional_kwargs': {'additionalProperties': True, 'title': 'Additional Kwargs', 'type': 'object'}, 'response_metadata': {'additionalProperties': True, 'title': 'Response Metadata', 'type': 'object'}, 'type': {'const': 'SystemMessageChunk', 'default': 'SystemMessageChunk', 'title': 'Type', 'type': 'string'}, 'name': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'default': None, 'title': 'Name'}, 'id': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'default': None, 'title': 'Id'}}, 'required': ['content'], 
'title': 'SystemMessageChunk', 'type': 'object'}, 'ToolCall': {'description': 'Represents a request to call a tool.\\n\\nExample:\\n\\n    .. code-block:: python\\n\\n        {\\n            \"name\": \"foo\",\\n            \"args\": {\"a\": 1},\\n            \"id\": \"123\"\\n        }\\n\\n    This represents a request to call the tool named \"foo\" with arguments {\"a\": 1}\\n    and an identifier of \"123\".', 'properties': {'name': {'title': 'Name', 'type': 'string'}, 'args': {'additionalProperties': True, 'title': 'Args', 'type': 'object'}, 'id': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'title': 'Id'}, 'type': {'const': 'tool_call', 'title': 'Type', 'type': 'string'}}, 'required': ['name', 'args', 'id'], 'title': 'ToolCall', 'type': 'object'}, 'ToolCallChunk': {'description': 'A chunk of a tool call (e.g., as part of a stream).\\n\\nWhen merging ToolCallChunks (e.g., via AIMessageChunk.__add__),\\nall string attributes are concatenated. Chunks are only merged if their\\nvalues of `index` are equal and not None.\\n\\nExample:\\n\\n.. 
code-block:: python\\n\\n    left_chunks = [ToolCallChunk(name=\"foo\", args=\\'{\"a\":\\', index=0)]\\n    right_chunks = [ToolCallChunk(name=None, args=\\'1}\\', index=0)]\\n\\n    (\\n        AIMessageChunk(content=\"\", tool_call_chunks=left_chunks)\\n        + AIMessageChunk(content=\"\", tool_call_chunks=right_chunks)\\n    ).tool_call_chunks == [ToolCallChunk(name=\\'foo\\', args=\\'{\"a\":1}\\', index=0)]', 'properties': {'name': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'title': 'Name'}, 'args': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'title': 'Args'}, 'id': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'title': 'Id'}, 'index': {'anyOf': [{'type': 'integer'}, {'type': 'null'}], 'title': 'Index'}, 'type': {'const': 'tool_call_chunk', 'title': 'Type', 'type': 'string'}}, 'required': ['name', 'args', 'id', 'index'], 'title': 'ToolCallChunk', 'type': 'object'}, 'ToolMessage': {'additionalProperties': True, 'description': 'Message for passing the result of executing a tool back to a model.\\n\\nToolMessages contain the result of a tool invocation. Typically, the result\\nis encoded inside the `content` field.\\n\\nExample: A ToolMessage representing a result of 42 from a tool call with id\\n\\n    .. code-block:: python\\n\\n        from langchain_core.messages import ToolMessage\\n\\n        ToolMessage(content=\\'42\\', tool_call_id=\\'call_Jja7J89XsjrOLA5r!MEOW!SL\\')\\n\\n\\nExample: A ToolMessage where only part of the tool output is sent to the model\\n    and the full output is passed in to artifact.\\n\\n    .. versionadded:: 0.2.17\\n\\n    .. 
code-block:: python\\n\\n        from langchain_core.messages import ToolMessage\\n\\n        tool_output = {\\n            \"stdout\": \"From the graph we can see that the correlation between x and y is ...\",\\n            \"stderr\": None,\\n            \"artifacts\": {\"type\": \"image\", \"base64_data\": \"/9j/4gIcSU...\"},\\n        }\\n\\n        ToolMessage(\\n            content=tool_output[\"stdout\"],\\n            artifact=tool_output,\\n            tool_call_id=\\'call_Jja7J89XsjrOLA5r!MEOW!SL\\',\\n        )\\n\\nThe tool_call_id field is used to associate the tool call request with the\\ntool call response. This is useful in situations where a chat model is able\\nto request multiple tool calls in parallel.', 'properties': {'content': {'anyOf': [{'type': 'string'}, {'items': {'anyOf': [{'type': 'string'}, {'additionalProperties': True, 'type': 'object'}]}, 'type': 'array'}], 'title': 'Content'}, 'additional_kwargs': {'additionalProperties': True, 'title': 'Additional Kwargs', 'type': 'object'}, 'response_metadata': {'additionalProperties': True, 'title': 'Response Metadata', 'type': 'object'}, 'type': {'const': 'tool', 'default': 'tool', 'title': 'Type', 'type': 'string'}, 'name': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'default': None, 'title': 'Name'}, 'id': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'default': None, 'title': 'Id'}, 'tool_call_id': {'title': 'Tool Call Id', 'type': 'string'}, 'artifact': {'default': None, 'title': 'Artifact'}, 'status': {'default': 'success', 'enum': ['success', 'error'], 'title': 'Status', 'type': 'string'}}, 'required': ['content', 'tool_call_id'], 'title': 'ToolMessage', 'type': 'object'}, 'ToolMessageChunk': {'additionalProperties': True, 'description': 'Tool Message chunk.', 'properties': {'content': {'anyOf': [{'type': 'string'}, {'items': {'anyOf': [{'type': 'string'}, {'additionalProperties': True, 'type': 'object'}]}, 'type': 'array'}], 'title': 'Content'}, 'additional_kwargs': 
{'additionalProperties': True, 'title': 'Additional Kwargs', 'type': 'object'}, 'response_metadata': {'additionalProperties': True, 'title': 'Response Metadata', 'type': 'object'}, 'type': {'const': 'ToolMessageChunk', 'default': 'ToolMessageChunk', 'title': 'Type', 'type': 'string'}, 'name': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'default': None, 'title': 'Name'}, 'id': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'default': None, 'title': 'Id'}, 'tool_call_id': {'title': 'Tool Call Id', 'type': 'string'}, 'artifact': {'default': None, 'title': 'Artifact'}, 'status': {'default': 'success', 'enum': ['success', 'error'], 'title': 'Status', 'type': 'string'}}, 'required': ['content', 'tool_call_id'], 'title': 'ToolMessageChunk', 'type': 'object'}, 'UsageMetadata': {'description': 'Usage metadata for a message, such as token counts.\\n\\nThis is a standard representation of token usage that is consistent across models.\\n\\nExample:\\n\\n    .. code-block:: python\\n\\n        {\\n            \"input_tokens\": 350,\\n            \"output_tokens\": 240,\\n            \"total_tokens\": 590,\\n            \"input_token_details\": {\\n                \"audio\": 10,\\n                \"cache_creation\": 200,\\n                \"cache_read\": 100,\\n            },\\n            \"output_token_details\": {\\n                \"audio\": 10,\\n                \"reasoning\": 200,\\n            }\\n        }\\n\\n.. 
versionchanged:: 0.3.9\\n\\n    Added ``input_token_details`` and ``output_token_details``.', 'properties': {'input_tokens': {'title': 'Input Tokens', 'type': 'integer'}, 'output_tokens': {'title': 'Output Tokens', 'type': 'integer'}, 'total_tokens': {'title': 'Total Tokens', 'type': 'integer'}, 'input_token_details': {'$ref': '#/$defs/InputTokenDetails'}, 'output_token_details': {'$ref': '#/$defs/OutputTokenDetails'}}, 'required': ['input_tokens', 'output_tokens', 'total_tokens'], 'title': 'UsageMetadata', 'type': 'object'}}, 'anyOf': [{'$ref': '#/$defs/StringPromptValue'}, {'$ref': '#/$defs/ChatPromptValueConcrete'}], 'title': 'PromptTemplateOutput'}\n"
     ]
    }
   ],
   "source": [
    "from langchain_core.prompts import PromptTemplate\n",
    "\n",
    "prompt = PromptTemplate(\n",
    "    input_variables=[\"name\", \"age\"],\n",
    "    template=\"Hello, I am {name}, and I am {age} years old\"\n",
    ")\n",
    "\n",
    "# Inspect the input schema (model_json_schema() replaces the\n",
    "# deprecated .schema() in Pydantic v2)\n",
    "print(\"Input Schema:\")\n",
    "print(prompt.input_schema.model_json_schema())\n",
    "\n",
    "# Inspect the output schema\n",
    "print(\"Output Schema:\")\n",
    "print(prompt.output_schema.model_json_schema())\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "77d87c20",
   "metadata": {},
   "source": [
    "**Exercises**\n",
    "1. Inspect the schema of an LLM\n",
    "2. Inspect the schema of a sequential chain\n",
    "3. Inspect the schema of a parallel composition (parallelism is covered later)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "73f2ef98",
   "metadata": {},
   "source": [
    "### Common Application Scenarios\n",
    "\n",
    "```python\n",
    "import json\n",
    "\n",
    "# API documentation generation\n",
    "def generate_api_docs(runnable):\n",
    "    \"\"\"Generate API docs from the schemas\"\"\"\n",
    "    input_schema = runnable.input_schema.model_json_schema()\n",
    "    output_schema = runnable.output_schema.model_json_schema()\n",
    "    \n",
    "    docs = f\"\"\"\n",
    "    API documentation:\n",
    "    \n",
    "    Input format:\n",
    "    {json.dumps(input_schema, indent=2, ensure_ascii=False)}\n",
    "    \n",
    "    Output format:\n",
    "    {json.dumps(output_schema, indent=2, ensure_ascii=False)}\n",
    "    \"\"\"\n",
    "    return docs\n",
    "\n",
    "# Input validation\n",
    "def validate_input(runnable, input_data):\n",
    "    \"\"\"Validate input data against the input schema\"\"\"\n",
    "    try:\n",
    "        # input_schema is a Pydantic model class, so instantiating it validates\n",
    "        validated = runnable.input_schema(**input_data)\n",
    "        return True, validated\n",
    "    except Exception as e:\n",
    "        return False, str(e)\n",
    "\n",
    "# Usage example\n",
    "is_valid, result = validate_input(analyzer, {\n",
    "    \"name\": \"Zhang San\",\n",
    "    \"age\": 30,\n",
    "    \"email\": \"zhangsan@example.com\"\n",
    "})\n",
    "\n",
    "# Type-safe invocation\n",
    "def safe_invoke(runnable, input_data):\n",
    "    \"\"\"Invoke with input validation\"\"\"\n",
    "    # Validate the input\n",
    "    is_valid, validated_input = validate_input(runnable, input_data)\n",
    "    if not is_valid:\n",
    "        raise ValueError(f\"Input validation failed: {validated_input}\")\n",
    "    \n",
    "    # Execute the call\n",
    "    result = runnable.invoke(validated_input)\n",
    "    \n",
    "    # The output format could be validated further here\n",
    "    return result\n",
    "\n",
    "# Debugging: quickly inspect schemas\n",
    "def inspect_runnable(runnable):\n",
    "    print(f\"Component type: {type(runnable).__name__}\")\n",
    "    print(f\"Input type: {runnable.input_schema}\")\n",
    "    print(f\"Output type: {runnable.output_schema}\")\n",
    "    \n",
    "    # For a chain, recurse into each component\n",
    "    if hasattr(runnable, 'steps'):\n",
    "        for i, step in enumerate(runnable.steps):\n",
    "            print(f\"Step {i+1}: {type(step).__name__}\")\n",
    "\n",
    "# Test-data generation\n",
    "def generate_test_data(schema):\n",
    "    \"\"\"Generate test data from a schema\"\"\"\n",
    "    # Logic to synthesize test data from the schema would go here\n",
    "    pass\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6f5401a6",
   "metadata": {},
   "source": [
    "### ainvoke 异步方法\n",
    "\n",
    "#### 1. ainvoke() 方法的基本原理\n",
    "为什么需要 ainvoke()？\n",
    "\n",
    "```python\n",
    "# 同步调用 - 会阻塞当前线程\n",
    "result = runnable.invoke(input)  # 等待几秒钟才返回\n",
    "\n",
    "# 异步调用 - 不阻塞，可以并发执行其他任务\n",
    "result = await runnable.ainvoke(input)  # 非阻塞执行\n",
    "```\n",
    "\n",
    "ainvoke() 的声明结构\n",
    "```python\n",
    "class BaseRunnable:\n",
    "    async def ainvoke(self, input, config=None):\n",
    "        \"\"\"异步版本的 invoke 方法\"\"\"\n",
    "        # 如果子类没有实现异步版本，就用线程池执行同步版本\n",
    "        loop = asyncio.get_running_loop()\n",
    "        return await loop.run_in_executor(\n",
    "            None,  # 使用默认线程池\n",
    "            self.invoke,  # 要执行的同步函数\n",
    "            input,  # 传递给 invoke 的参数\n",
    "            config\n",
    "        )\n",
    "```\n",
    "\n",
    "#### 2. 执行流程详解\n",
    "\n",
    "步骤1：获取事件循环  \n",
    "\n",
    "`loop = asyncio.get_running_loop()`\n",
    "\n",
    "作用：\n",
    "\n",
    "- 获取当前正在运行的异步事件循环\n",
    "- 事件循环是异步程序的核心，负责调度和执行异步任务\n",
    "- 如果没有运行中的事件循环，会抛出 RuntimeError\n",
    "\n",
    "步骤2：使用线程池执行器\n",
    "\n",
    "`await loop.run_in_executor(None, self.invoke, input, config)`\n",
    "\n",
    "参数解释：\n",
    "\n",
    "- None：使用默认的 ThreadPoolExecutor\n",
    "- self.invoke：要在线程池中执行的同步函数\n",
    "- input, config：传递给函数的参数\n",
    "\n",
    "步骤3：线程池执行\n",
    "\n",
    "```python\n",
    "# 内部执行过程（简化版）\n",
    "def run_in_executor(executor, func, *args):\n",
    "    # 1. 将同步函数提交到线程池\n",
    "    future = executor.submit(func, *args)\n",
    "    \n",
    "    # 2. 将线程池的 Future 转换为 asyncio.Future\n",
    "    asyncio_future = asyncio.wrap_future(future)\n",
    "    \n",
    "    # 3. 返回可等待的 Future 对象\n",
    "    return asyncio_future\n",
    "```\n",
    "\n",
    "#### 3. 完整的执行示例"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8b3a1599",
   "metadata": {},
   "outputs": [],
   "source": [
    "import asyncio\n",
    "from concurrent.futures import ThreadPoolExecutor\n",
    "\n",
    "class SimpleRunnable:\n",
    "    def invoke(self, input):\n",
    "        \"\"\"同步方法 - 模拟耗时操作\"\"\"\n",
    "        import time\n",
    "        print(f\"开始处理: {input}\")\n",
    "        time.sleep(2)  # 模拟耗时操作\n",
    "        print(f\"处理完成: {input}\")\n",
    "        return f\"结果: {input}\"\n",
    "    \n",
    "    async def ainvoke(self, input):\n",
    "        \"\"\"异步方法 - 使用线程池执行同步方法\"\"\"\n",
    "        print(f\"异步调用开始: {input}\")\n",
    "        \n",
    "        # 步骤1: 获取事件循环\n",
    "        loop = asyncio.get_running_loop()\n",
    "        print(f\"获取到事件循环: {loop}\")\n",
    "        \n",
    "        # 步骤2: 在线程池中执行同步方法\n",
    "        print(f\"提交到线程池执行...\")\n",
    "        result = await loop.run_in_executor(\n",
    "            None,           # 默认线程池\n",
    "            self.invoke,    # 同步方法\n",
    "            input          # 参数\n",
    "        )\n",
    "        \n",
    "        print(f\"异步调用完成: {input}\")\n",
    "        return result\n",
    "\n",
    "# 使用示例\n",
    "async def demo():\n",
    "    runnable = SimpleRunnable()\n",
    "    \n",
    "    # 并发执行多个异步调用\n",
    "    tasks = [\n",
    "        runnable.ainvoke(\"任务1\"),\n",
    "        runnable.ainvoke(\"任务2\"), \n",
    "        runnable.ainvoke(\"任务3\")\n",
    "    ]\n",
    "    \n",
    "    # 等待所有任务完成\n",
    "    results = await asyncio.gather(*tasks)\n",
    "    print(f\"所有结果: {results}\")\n",
    "\n",
    "# 运行演示：在普通脚本中用 asyncio.run(demo())；\n",
    "# Jupyter 已有运行中的事件循环，再调用 asyncio.run() 会报错，直接 await 即可\n",
    "await demo()\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "395ab8f2",
   "metadata": {},
   "source": [
    "#### 4. 为什么使用 run_in_executor？\n",
    "\n",
    "问题：同步代码在异步环境中的困境\n",
    "\n",
    "```python\n",
    "# 错误的做法 - 会阻塞整个事件循环\n",
    "async def bad_example():\n",
    "    result1 = some_sync_function()  # 阻塞2秒\n",
    "    result2 = another_sync_function()  # 又阻塞2秒\n",
    "    # 总共需要4秒，无法并发\n",
    "```\n",
    "解决方案：线程池执行\n",
    "```python\n",
    "# 正确的做法 - 使用线程池\n",
    "async def good_example():\n",
    "    loop = asyncio.get_running_loop()\n",
    "    \n",
    "    # 并发执行，总共只需要2秒\n",
    "    task1 = loop.run_in_executor(None, some_sync_function)\n",
    "    task2 = loop.run_in_executor(None, another_sync_function)\n",
    "    \n",
    "    result1, result2 = await asyncio.gather(task1, task2)\n",
    "```\n",
    "#### 5. 事件循环的工作原理\n",
    "```python\n",
    "# 事件循环的简化模型\n",
    "class EventLoop:\n",
    "    def __init__(self):\n",
    "        self.tasks = []\n",
    "        self.thread_pool = ThreadPoolExecutor()\n",
    "    \n",
    "    def run_in_executor(self, executor, func, *args):\n",
    "        \"\"\"在线程池中执行同步函数\"\"\"\n",
    "        # 1. 提交到线程池\n",
    "        future = self.thread_pool.submit(func, *args)\n",
    "        \n",
    "        # 2. 创建异步 Future\n",
    "        async_future = asyncio.Future()\n",
    "        \n",
    "        # 3. 当线程池任务完成时，设置异步 Future 的结果\n",
    "        def on_done(thread_future):\n",
    "            try:\n",
    "                result = thread_future.result()\n",
    "                async_future.set_result(result)\n",
    "            except Exception as e:\n",
    "                async_future.set_exception(e)\n",
    "        \n",
    "        future.add_done_callback(on_done)\n",
    "        return async_future\n",
    "```\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "10e4d526",
   "metadata": {},
   "source": [
    "#### 6. 何时使用异步\n",
    "\n",
    "```python\n",
    "# LLM 调用通常是网络请求，天然适合异步\n",
    "class AsyncLLM:\n",
    "    async def ainvoke(self, prompt):\n",
    "        # 直接使用异步 HTTP 客户端\n",
    "        async with aiohttp.ClientSession() as session:\n",
    "            response = await session.post(api_url, json={\"prompt\": prompt})\n",
    "            return await response.json()\n",
    "\n",
    "# 但某些组件可能只有同步实现\n",
    "class SyncProcessor:\n",
    "    def invoke(self, input):\n",
    "        # 复杂的同步处理逻辑\n",
    "        return process_data(input)\n",
    "    \n",
    "    async def ainvoke(self, input):\n",
    "        # 使用线程池包装同步方法\n",
    "        loop = asyncio.get_running_loop()\n",
    "        return await loop.run_in_executor(None, self.invoke, input)\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f3746049",
   "metadata": {},
   "source": [
    "### stream 方法\n",
    "1. stream 方法返回迭代器：使用 yield 逐步返回结果\n",
    "2. 实时显示：用 print(chunk, end=\"\", flush=True) 实现逐字显示效果\n",
    "3. 与 invoke 对比：invoke 一次性返回，stream 分步返回\n",
    "\n",
    "\n",
    "```python\n",
    "\"\"\"\n",
    "Stream 方法\n",
    "\"\"\"\n",
    "from langchain_core.runnables import Runnable\n",
    "from typing import Iterator\n",
    "import time\n",
    "\n",
    "class SimpleStreamRunnable(Runnable[str, str]):\n",
    "    \"\"\"简单的流式输出演示\"\"\"\n",
    "    \n",
    "    def invoke(self, input: str, config=None) -> str:\n",
    "        \"\"\"普通调用 - 一次性返回完整结果\"\"\"\n",
    "        return f\"完整处理结果: {input}\"\n",
    "    \n",
    "    def stream(self, input: str, config=None) -> Iterator[str]:\n",
    "        \"\"\"流式调用 - 逐步返回部分结果\"\"\"\n",
    "        words = f\"逐步处理结果: {input}\".split()\n",
    "        \n",
    "        for word in words:\n",
    "            time.sleep(0.5)  # 模拟处理延迟\n",
    "            yield word + \" \"  # 逐个返回单词\n",
    "\n",
    "# 使用演示\n",
    "def demo_stream():\n",
    "    runnable = SimpleStreamRunnable()\n",
    "    \n",
    "    print(\"=== 普通调用 ===\")\n",
    "    result = runnable.invoke(\"测试文本\")\n",
    "    print(result)\n",
    "    \n",
    "    print(\"\\n=== 流式调用 ===\")\n",
    "    for chunk in runnable.stream(\"测试文本\"):\n",
    "        print(chunk, end=\"\", flush=True)  # 实时显示，不换行\n",
    "    \n",
    "    print(\"\\n\\n演示完成\")\n",
    "\n",
    "# 运行演示\n",
    "demo_stream()\n",
    "```\n",
    "\n",
    "**astream 是 stream 的异步版本；其默认实现调用 ainvoke，并把完整结果作为单个块一次性产出**"
   ]
  },
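  {
   "cell_type": "markdown",
   "id": "a1f3c2e4",
   "metadata": {},
   "source": [
    "下面是一个最小的 astream 形态草图：用异步生成器逐块产出结果。这只是示意实现，并非 LangChain 官方代码，`SimpleAStreamRunnable`、`demo_astream` 均为演示用的假设名字：\n",
    "\n",
    "```python\n",
    "import asyncio\n",
    "from typing import AsyncIterator\n",
    "\n",
    "class SimpleAStreamRunnable:\n",
    "    \"\"\"astream 形态草图（非官方实现）：异步生成器逐块产出\"\"\"\n",
    "\n",
    "    async def astream(self, input: str) -> AsyncIterator[str]:\n",
    "        for word in f\"逐步处理结果: {input}\".split():\n",
    "            await asyncio.sleep(0)  # 让出事件循环，模拟异步处理\n",
    "            yield word + \" \"\n",
    "\n",
    "async def demo_astream() -> str:\n",
    "    chunks = []\n",
    "    async for chunk in SimpleAStreamRunnable().astream(\"测试文本\"):\n",
    "        chunks.append(chunk)\n",
    "    return \"\".join(chunks)\n",
    "\n",
    "# 脚本中用 asyncio.run；Notebook 中直接 await demo_astream()\n",
    "print(asyncio.run(demo_astream()))\n",
    "```"
   ]
  },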
  {
   "cell_type": "markdown",
   "id": "e07c1f87",
   "metadata": {},
   "source": [
    "### batch方法\n",
    "\n",
    "batch方法用于处理多个输入（批处理）\n",
    "\n",
    "工程价值：\n",
    "- 性能优化：默认实现用线程池并发调用 invoke，通常明显快于逐个串行处理\n",
    "- 资源管理：可通过 config={\"max_concurrency\": n} 限制并发，分批处理避免资源耗尽\n",
    "- 错误处理：传入 return_exceptions=True 时，单条失败不会中断整批\n",
    "- 监控报告：批量结果便于统一做处理统计和分析\n",
    "\n",
    "适用场景：\n",
    "- 大量文档批量分析\n",
    "- 客户反馈批量处理\n",
    "- 数据清洗和标注\n",
    "- 内容审核和分类\n",
    "- 报告自动生成"
   ]
  },
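  {
   "cell_type": "markdown",
   "id": "b2e4d6a8",
   "metadata": {},
   "source": [
    "batch 的默认行为大致相当于\"用线程池并发地对每个输入调用 invoke\"。下面用纯标准库勾勒这一行为（`invoke`、`batch` 是示意函数，不是 LangChain 源码）：\n",
    "\n",
    "```python\n",
    "from concurrent.futures import ThreadPoolExecutor\n",
    "\n",
    "def invoke(x: str) -> str:\n",
    "    \"\"\"模拟单个输入的同步处理\"\"\"\n",
    "    return f\"分析结果: {x}\"\n",
    "\n",
    "def batch(inputs, max_concurrency: int = 4):\n",
    "    \"\"\"Runnable.batch 默认行为草图：线程池并发调用 invoke，结果保持输入顺序\"\"\"\n",
    "    with ThreadPoolExecutor(max_workers=max_concurrency) as pool:\n",
    "        return list(pool.map(invoke, inputs))\n",
    "\n",
    "print(batch([\"报告A\", \"合同B\", \"邮件C\"]))\n",
    "# ['分析结果: 报告A', '分析结果: 合同B', '分析结果: 邮件C']\n",
    "```"
   ]
  },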
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "5a37d657",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "=== 使用自定义 Runnable ===\n",
      "输入: ['报告A', '合同B', '邮件C']\n",
      "批量处理结果:\n",
      "  分析结果: 报告A -> 类型:文档, 重要性:中等\n",
      "  分析结果: 合同B -> 类型:文档, 重要性:中等\n",
      "  分析结果: 邮件C -> 类型:文档, 重要性:中等\n",
      "\n",
      "=== 使用 PromptTemplate ===\n",
      "输入: ['项目进展', '用户反馈', '市场分析']\n",
      "提示词批量生成结果:\n",
      "  text='请分析: 项目进展'\n",
      "  text='请分析: 用户反馈'\n",
      "  text='请分析: 市场分析'\n",
      "\n",
      "=== 链式 batch ===\n",
      "链式批量处理结果:\n",
      "  分析结果: text='请分析: 项目进展' -> 类型:文档, 重要性:中等\n",
      "  分析结果: text='请分析: 用户反馈' -> 类型:文档, 重要性:中等\n",
      "  分析结果: text='请分析: 市场分析' -> 类型:文档, 重要性:中等\n"
     ]
    }
   ],
   "source": [
    "\"\"\"\n",
    "LangChain 原生 batch 方法演示\n",
    "\"\"\"\n",
    "from langchain_core.runnables import Runnable\n",
    "from langchain_core.prompts import PromptTemplate\n",
    "from langchain_community.llms import Tongyi\n",
    "from typing import Optional, Dict, Any\n",
    "\n",
    "class SimpleAnalyzer(Runnable[str, str]):\n",
    "    \"\"\"简单分析器\"\"\"\n",
    "    \n",
    "    def invoke(self, input: str, config: Optional[Dict[str, Any]] = None) -> str:\n",
    "        \"\"\"单个分析\"\"\"\n",
    "        return f\"分析结果: {input} -> 类型:文档, 重要性:中等\"\n",
    "\n",
    "def demo_langchain_batch():\n",
    "    \"\"\"LangChain batch 方法演示\"\"\"\n",
    "    \n",
    "    # 1. 使用自定义 Runnable\n",
    "    analyzer = SimpleAnalyzer()\n",
    "    \n",
    "    inputs = [\"报告A\", \"合同B\", \"邮件C\"]\n",
    "    \n",
    "    print(\"=== 使用自定义 Runnable ===\")\n",
    "    print(f\"输入: {inputs}\")\n",
    "    \n",
    "    # LangChain 的 batch 方法\n",
    "    results = analyzer.batch(inputs)\n",
    "    \n",
    "    print(\"批量处理结果:\")\n",
    "    for result in results:\n",
    "        print(f\"  {result}\")\n",
    "    \n",
    "    # 2. 使用 PromptTemplate\n",
    "    print(\"\\n=== 使用 PromptTemplate ===\")\n",
    "    \n",
    "    prompt = PromptTemplate(\n",
    "        input_variables=[\"text\"],\n",
    "        template=\"请分析: {text}\"\n",
    "    )\n",
    "    \n",
    "    texts = [\"项目进展\", \"用户反馈\", \"市场分析\"]\n",
    "    inputs_dict = [{\"text\": text} for text in texts]\n",
    "    \n",
    "    print(f\"输入: {texts}\")\n",
    "    \n",
    "    # PromptTemplate 的 batch 方法\n",
    "    prompt_results = prompt.batch(inputs_dict)\n",
    "    \n",
    "    print(\"提示词批量生成结果:\")\n",
    "    for result in prompt_results:\n",
    "        print(f\"  {result}\")\n",
    "    \n",
    "    # 3. 链式 batch\n",
    "    print(\"\\n=== 链式 batch ===\")\n",
    "    \n",
    "    # 创建简单链\n",
    "    chain = prompt | analyzer\n",
    "    \n",
    "    # 链的 batch 方法\n",
    "    chain_results = chain.batch(inputs_dict)\n",
    "    \n",
    "    print(\"链式批量处理结果:\")\n",
    "    for result in chain_results:\n",
    "        print(f\"  {result}\")\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "    demo_langchain_batch()\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c837d8ae",
   "metadata": {},
   "source": [
    "### 并行组合\n",
    "\n",
    "```python\n",
    "from langchain_core.runnables import RunnableParallel\n",
    "\n",
    "# 并行执行多个分支\n",
    "parallel = RunnableParallel({\n",
    "    \"summary\": summary_chain,\n",
    "    \"keywords\": keyword_chain,\n",
    "    \"sentiment\": sentiment_chain\n",
    "})\n",
    "\n",
    "# 输入会同时发送给所有分支\n",
    "result = parallel.invoke({\"text\": \"分析这段文本\"})\n",
    "# 输出: {\"summary\": \"...\", \"keywords\": [...], \"sentiment\": \"positive\"}\n",
    "```\n"
   ]
  },
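  {
   "cell_type": "markdown",
   "id": "c3d5e7f9",
   "metadata": {},
   "source": [
    "RunnableParallel 的行为可以用纯标准库勾勒：同一份输入并发送入各分支，结果按键聚合成字典（`runnable_parallel` 是示意函数，不是 LangChain 源码）：\n",
    "\n",
    "```python\n",
    "from concurrent.futures import ThreadPoolExecutor\n",
    "\n",
    "def runnable_parallel(steps: dict):\n",
    "    \"\"\"RunnableParallel 行为草图：各分支并发执行，按键收集结果\"\"\"\n",
    "    def run(x):\n",
    "        with ThreadPoolExecutor(max_workers=len(steps)) as pool:\n",
    "            futures = {key: pool.submit(fn, x) for key, fn in steps.items()}\n",
    "            return {key: fut.result() for key, fut in futures.items()}\n",
    "    return run\n",
    "\n",
    "parallel = runnable_parallel({\n",
    "    \"upper\": lambda x: x[\"text\"].upper(),\n",
    "    \"length\": lambda x: len(x[\"text\"]),\n",
    "})\n",
    "print(parallel({\"text\": \"hello\"}))  # -> {'upper': 'HELLO', 'length': 5}\n",
    "```"
   ]
  },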
  {
   "cell_type": "markdown",
   "id": "4bcea9b6",
   "metadata": {},
   "source": [
    "\n",
    "### 条件分支\n",
    "\n",
    "RunnableBranch 是一种路由机制：运行时按顺序评估各个条件，执行第一个条件为真的分支；所有条件都不满足时走默认分支，从而根据输入动态选择执行路径。\n",
    "\n",
    "```python\n",
    "from langchain_core.runnables import RunnableBranch\n",
    "\n",
    "# 根据条件选择不同处理路径\n",
    "branch = RunnableBranch(\n",
    "    (lambda x: x[\"type\"] == \"question\", qa_chain),\n",
    "    (lambda x: x[\"type\"] == \"summary\", summary_chain),\n",
    "    default_chain\n",
    ")\n",
    "```\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "3e563e86",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "=== RunnableBranch 条件分支演示 ===\n",
      "\n",
      "--- 测试用例 1 ---\n",
      "类型: question\n",
      "内容: 什么是人工智能？\n",
      "处理结果: 人工智能（Artificial Intelligence，简称 AI）是指由人创造的能够感知环境、学习知识、逻辑推理并执行任务的智能体。它是一门研究如何让计算机模拟人类智能行为的科学技术，旨在使机器能够完成一些通常需要人类智能才能胜任的复杂任务。\n",
      "\n",
      "人工智能的核心目标是让机器具备以下一些能力：\n",
      "\n",
      "- **学习能力**：从数据中自动学习规律，并用于预测或决策（如机器学习、深度学习）。\n",
      "- **推理能力**：根据已有知识进行逻辑推导（如专家系统）。\n",
      "- **感知能力**：识别和理解图像、声音、语言等信息（如计算机视觉、语音识别）。\n",
      "- **语言能力**：理解和生成人类语言（如自然语言处理）。\n",
      "- **规划与决策能力**：在复杂环境中做出合理决策（如自动驾驶、游戏AI）。\n",
      "\n",
      "### 人工智能的主要分支包括：\n",
      "\n",
      "1. **机器学习（Machine Learning）**：通过数据训练模型，使计算机自动改进性能而无需显式编程。\n",
      "2. **深度学习（Deep Learning）**：基于神经网络的机器学习方法，擅长处理图像、语音、文本等非结构化数据。\n",
      "3. **自然语言处理（Natural Language Processing, NLP）**：让机器理解和生成人类语言。\n",
      "4. **计算机视觉（Computer Vision）**：使机器能够“看懂”图像或视频。\n",
      "5. **机器人学（Robotics）**：结合感知、决策与执行，使机器人能完成复杂任务。\n",
      "6. **专家系统（Expert Systems）**：模拟人类专家知识解决特定问题。\n",
      "\n",
      "### 人工智能的应用领域：\n",
      "\n",
      "- 自动驾驶汽车\n",
      "- 智能语音助手（如 Siri、Alexa）\n",
      "- 推荐系统（如 Netflix、淘宝推荐）\n",
      "- 医疗诊断辅助\n",
      "- 金融风控与量化交易\n",
      "- 工业自动化\n",
      "\n",
      "### 人工智能的分类：\n",
      "\n",
      "- **弱人工智能（Narrow AI）**：专注于特定任务的人工智能，如语音识别、图像识别。\n",
      "- **强人工智能（General AI）**：理论上具备与人类相当的通用认知能力，目前尚未实现。\n",
      "\n",
      "总结来说，人工智能是一门多学科交叉技术，融合了计算机科学、数学、心理学、语言学等多个领域的知识，旨在让机器具备类似人类的智能行为。\n",
      "\n",
      "--- 测试用例 2 ---\n",
      "类型: summary\n",
      "内容: 人工智能是计算机科学的一个分支，致力于创建能够执行通常需要人类智能的任务的系统。它包括机器学习、深度学习、自然语言处理等多个子领域。\n",
      "处理结果: 人工智能是计算机科学的一个分支，旨在开发能够执行通常需人类智能完成任务的系统，涵盖机器学习、深度学习、自然语言处理等多个子领域。\n",
      "\n",
      "--- 测试用例 3 ---\n",
      "类型: other\n",
      "内容: 今天天气很好\n",
      "处理结果: 这句话“今天天气很好”是一个简单的陈述句，表达的是对当天天气状况的正面评价。我们可以从多个角度来分析这句话：\n",
      "\n",
      "---\n",
      "\n",
      "### 一、语言结构分析\n",
      "\n",
      "- **“今天”**：时间副词，表示说话的当天。\n",
      "- **“天气”**：主语，指大气状况，如温度、湿度、风力、降水等。\n",
      "- **“很好”**：谓语部分，是“很 + 形容词（好）”的结构，表示程度较高地肯定天气状况。\n",
      "\n",
      "整体结构为：**时间状语 + 主语 + 谓语（形容词）**\n",
      "\n",
      "---\n",
      "\n",
      "### 二、语义分析\n",
      "\n",
      "- 表达者对天气的主观感受是正面的。\n",
      "- “很好”是一个比较宽泛的评价，可能意味着阳光明媚、气温适宜、空气清新等。\n",
      "- 这句话没有提供具体的天气数据（如温度、风速等），因此属于**定性描述**而非定量说明。\n",
      "\n",
      "---\n",
      "\n",
      "### 三、语用功能分析\n",
      "\n",
      "- **寒暄功能**：在日常交流中，这句话常用于打招呼或开启对话，例如：“今天天气很好，适合出去走走。”\n",
      "- **情绪表达**：传达出说话者因天气好而心情愉悦。\n",
      "- **建议或引导**：可能隐含着进行户外活动的建议，比如散步、郊游、运动等。\n",
      "\n",
      "---\n",
      "\n",
      "### 四、文化与情境分析\n",
      "\n",
      "- 在不同地区，“很好”的标准可能不同。例如：\n",
      "  - 北方人可能认为晴朗、气温适中为“很好”；\n",
      "  - 南方人可能更关注湿度是否舒适；\n",
      "  - 海边居民可能更在意风力大小。\n",
      "- 在某些情境中（如农业、旅游、交通等），天气状况对人们的生活影响较大，这句话也可能隐含着对工作或活动的积极预期。\n",
      "\n",
      "---\n",
      "\n",
      "### 五、情感与心理层面\n",
      "\n",
      "- 天气晴朗通常与积极情绪相关联，科学研究表明阳光可以促进人体分泌血清素，有助于提升情绪。\n",
      "- 因此，这句话也可能反映出说话者当天心情愉快、精神状态良好。\n",
      "\n",
      "---\n",
      "\n",
      "### 总结\n",
      "\n",
      "“今天天气很好”虽然是一句简单的话，但蕴含了时间、主语、评价、情绪和潜在意图等多重信息。它既可以作为日常交流的寒暄用语，也可以作为表达情绪、引导行为的表达方式。\n",
      "\n",
      "如果你希望更具体地了解今天的天气，可以补充查询实时天气数据，例如温度、风速、空气质量等。需要我帮你查吗？😊\n",
      "批量结果 1: 机器学习（Machine Learning）是人工智能（AI）的一个分支，其核心目标是让计算机通过数据自动学习规律，并利用这些规律对未知数据进行预测或决策，而无需依赖明确的程序指令。\n",
      "\n",
      "### 简单来说：\n",
      "**机器学习是一种让计算机从经验（数据）中学习模型（规则），并用该模型进行预测或决策的方法。**\n",
      "\n",
      "---\n",
      "\n",
      "## 一、机器学习的基本概念\n",
      "\n",
      "- **数据（Data）**：机器学习的基础，包括输入数据和对应的输出（在有监督学习中）。\n",
      "- **模型（Model）**：从数据中学习到的数学表达式或规则，用于对新数据进行预测。\n",
      "- **训练（Training）**：使用数据来调整模型参数的过程。\n",
      "- **预测（Prediction）**：使用训练好的模型对未知数据进行输出估计。\n",
      "\n",
      "---\n",
      "\n",
      "## 二、机器学习的主要类型\n",
      "\n",
      "1. **监督学习（Supervised Learning）**\n",
      "   - 输入数据有明确的标签（输出结果）。\n",
      "   - 目标是学习输入与输出之间的映射关系。\n",
      "   - 常见任务：分类（如图像识别）、回归（如房价预测）\n",
      "   - 常用算法：线性回归、逻辑回归、支持向量机（SVM）、决策树、随机森林、神经网络等。\n",
      "\n",
      "2. **无监督学习（Unsupervised Learning）**\n",
      "   - 输入数据没有标签。\n",
      "   - 目标是发现数据中的结构或模式。\n",
      "   - 常见任务：聚类（如客户分群）、降维（如PCA）\n",
      "   - 常用算法：K均值聚类、主成分分析（PCA）\n",
      "\n",
      "3. **半监督学习（Semi-supervised Learning）**\n",
      "   - 结合少量有标签数据和大量无标签数据进行学习。\n",
      "   - 适用于标注数据获取困难的场景。\n",
      "\n",
      "4. **强化学习（Reinforcement Learning）**\n",
      "   - 模型通过与环境的交互来学习策略，以最大化某种奖励。\n",
      "   - 常用于游戏、机器人控制、自动驾驶等。\n",
      "   - 常用算法：Q-learning、Deep Q-Network（DQN）\n",
      "\n",
      "---\n",
      "\n",
      "## 三、机器学习的应用领域\n",
      "\n",
      "- 图像识别（如人脸识别）\n",
      "- 自然语言处理（如机器翻译、语音识别）\n",
      "- 推荐系统（如电商平台推荐）\n",
      "- 金融风控（如信用评分）\n",
      "- 医疗诊断（如辅助癌症筛查）\n",
      "- 自动驾驶\n",
      "- 游戏AI（如AlphaGo）\n",
      "\n",
      "---\n",
      "\n",
      "## 四、机器学习的基本流程\n",
      "\n",
      "1. **数据收集与预处理**\n",
      "2. **选择模型**\n",
      "3. **训练模型**\n",
      "4. **评估模型**\n",
      "5. **调优与部署**\n",
      "\n",
      "---\n",
      "\n",
      "## 总结一句话：\n",
      "\n",
      "> **机器学习是让计算机通过学习数据中的规律，自动构建模型，从而实现预测或决策的一种方法。**\n",
      "\n",
      "如果你对某个方面（如某类算法、某个应用）感兴趣，我可以进一步详细解释。\n",
      "批量结果 2: 当然，以下是对“机器学习是人工智能的一个重要分支……”这一内容的简要总结：\n",
      "\n",
      "**总结：**  \n",
      "机器学习是人工智能的核心组成部分，其主要目标是让计算机通过分析数据自动学习规律，并利用这些规律对未知数据进行预测或决策，而无需依赖明确的程序指令。它广泛应用于图像识别、自然语言处理、推荐系统等领域，常见的学习方法包括监督学习、无监督学习和强化学习等。\n",
      "\n",
      "如果你有更长的原文内容，我可以为你提供更精确的总结。\n",
      "批量结果 3: 您提供的分析请求“随机内容”过于宽泛，无法进行有效分析。请提供更具体的内容或明确分析目的，例如：\n",
      "\n",
      "1. **文本分析**：需要提供具体文本内容\n",
      "2. **数据规律**：需要提供数据样本和分析目标\n",
      "3. **随机性检测**：需要说明检测维度（如统计分布、模式识别等）\n",
      "4. **其他需求**：请补充具体场景（如密码学、模拟实验等）\n",
      "\n",
      "建议补充以下信息：\n",
      "- 具体内容样本（文字/数字/文件）\n",
      "- 分析目的（科研/商业/学习等）\n",
      "- 特别关注的维度（如安全性/均匀性/可信度等）\n",
      "\n",
      "这将帮助我提供更精准的分析框架和解决方案。\n"
     ]
    }
   ],
   "source": [
    "\"\"\"\n",
    "RunnableBranch 条件分支演示\n",
    "\"\"\"\n",
    "from langchain_core.runnables import RunnableBranch, RunnableLambda\n",
    "from langchain_core.prompts import PromptTemplate\n",
    "from langchain_community.llms import Tongyi\n",
    "from typing import Dict, Any\n",
    "\n",
    "# === 前面的准备工作 ===\n",
    "\n",
    "# 1. 创建不同类型的处理链\n",
    "# 问答链\n",
    "qa_prompt = PromptTemplate(\n",
    "    input_variables=[\"content\"],\n",
    "    template=\"请回答以下问题：{content}\"\n",
    ")\n",
    "qa_llm = Tongyi(model_name=\"qwen-max\", temperature=0.1)\n",
    "qa_chain = qa_prompt | qa_llm\n",
    "\n",
    "# 摘要链\n",
    "summary_prompt = PromptTemplate(\n",
    "    input_variables=[\"content\"], \n",
    "    template=\"请总结以下内容：{content}\"\n",
    ")\n",
    "summary_llm = Tongyi(model_name=\"qwen-max\", temperature=0.3)\n",
    "summary_chain = summary_prompt | summary_llm\n",
    "\n",
    "# 默认处理链\n",
    "default_prompt = PromptTemplate(\n",
    "    input_variables=[\"content\"],\n",
    "    template=\"请分析以下内容：{content}\"\n",
    ")\n",
    "default_llm = Tongyi(model_name=\"qwen-max\", temperature=0.5)\n",
    "default_chain = default_prompt | default_llm\n",
    "\n",
    "# === 核心代码：创建条件分支 ===\n",
    "branch = RunnableBranch(\n",
    "    (lambda x: x[\"type\"] == \"question\", qa_chain),\n",
    "    (lambda x: x[\"type\"] == \"summary\", summary_chain),\n",
    "    default_chain  # 默认分支\n",
    ")\n",
    "\n",
    "# === 后面的使用方法 ===\n",
    "\n",
    "# 使用示例\n",
    "def demo_branch():\n",
    "    \"\"\"演示条件分支的使用\"\"\"\n",
    "    \n",
    "    # 测试数据\n",
    "    test_cases = [\n",
    "        {\n",
    "            \"type\": \"question\",\n",
    "            \"content\": \"什么是人工智能？\"\n",
    "        },\n",
    "        {\n",
    "            \"type\": \"summary\", \n",
    "            \"content\": \"人工智能是计算机科学的一个分支，致力于创建能够执行通常需要人类智能的任务的系统。它包括机器学习、深度学习、自然语言处理等多个子领域。\"\n",
    "        },\n",
    "        {\n",
    "            \"type\": \"other\",\n",
    "            \"content\": \"今天天气很好\"\n",
    "        }\n",
    "    ]\n",
    "    \n",
    "    print(\"=== RunnableBranch 条件分支演示 ===\")\n",
    "    \n",
    "    for i, test_case in enumerate(test_cases, 1):\n",
    "        print(f\"\\n--- 测试用例 {i} ---\")\n",
    "        print(f\"类型: {test_case['type']}\")\n",
    "        print(f\"内容: {test_case['content']}\")\n",
    "        \n",
    "        # 执行条件分支\n",
    "        result = branch.invoke(test_case)\n",
    "        print(f\"处理结果: {result}\")\n",
    "\n",
    "# 更复杂的条件分支示例\n",
    "def create_advanced_branch():\n",
    "    \"\"\"创建更复杂的条件分支\"\"\"\n",
    "    \n",
    "    # 定义更多条件\n",
    "    advanced_branch = RunnableBranch(\n",
    "        # 条件1：问题类型\n",
    "        (lambda x: x[\"type\"] == \"question\", qa_chain),\n",
    "        \n",
    "        # 条件2：摘要类型\n",
    "        (lambda x: x[\"type\"] == \"summary\", summary_chain),\n",
    "        \n",
    "        # 条件3：长文本（超过100字）\n",
    "        (lambda x: len(x[\"content\"]) > 100, \n",
    "         PromptTemplate(input_variables=[\"content\"], template=\"这是长文本，请详细分析：{content}\") | default_llm),\n",
    "        \n",
    "        # 条件4：包含特定关键词\n",
    "        (lambda x: \"紧急\" in x[\"content\"], \n",
    "         PromptTemplate(input_variables=[\"content\"], template=\"紧急处理：{content}\") | default_llm),\n",
    "        \n",
    "        # 默认分支\n",
    "        default_chain\n",
    "    )\n",
    "    \n",
    "    return advanced_branch\n",
    "\n",
    "# 异步使用\n",
    "async def demo_async_branch():\n",
    "    \"\"\"异步使用条件分支\"\"\"\n",
    "    test_input = {\n",
    "        \"type\": \"question\",\n",
    "        \"content\": \"如何使用 LangChain？\"\n",
    "    }\n",
    "    \n",
    "    # 异步调用\n",
    "    result = await branch.ainvoke(test_input)\n",
    "    print(f\"异步结果: {result}\")\n",
    "\n",
    "# 批量处理\n",
    "def demo_batch_branch():\n",
    "    \"\"\"批量处理演示\"\"\"\n",
    "    batch_inputs = [\n",
    "        {\"type\": \"question\", \"content\": \"什么是机器学习？\"},\n",
    "        {\"type\": \"summary\", \"content\": \"机器学习是人工智能的一个重要分支...\"},\n",
    "        {\"type\": \"other\", \"content\": \"随机内容\"}\n",
    "    ]\n",
    "    \n",
    "    # 批量处理\n",
    "    results = branch.batch(batch_inputs)\n",
    "    \n",
    "    for i, result in enumerate(results):\n",
    "        print(f\"批量结果 {i+1}: {result}\")\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "    # 运行演示\n",
    "    demo_branch()\n",
    "    \n",
    "    # 高级分支演示\n",
    "    advanced_branch = create_advanced_branch()\n",
    "    \n",
    "    # 批量处理演示\n",
    "    demo_batch_branch()\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "125f9508",
   "metadata": {},
   "source": [
    "### 输入输出处理\n",
    "\n",
    "RunnableLambda（自定义函数）\n",
    "```python\n",
    "from langchain_core.runnables import RunnableLambda\n",
    "\n",
    "# 将普通函数转换为 Runnable，实现与 LCEL 组件兼容\n",
    "def preprocess(text):\n",
    "    return text.upper().strip()\n",
    "\n",
    "preprocessor = RunnableLambda(preprocess)\n",
    "chain = preprocessor | model\n",
    "```\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fe066531",
   "metadata": {},
   "source": [
    "### 错误处理和容错\n",
    "重试机制\n",
    "```python\n",
    "# 自动重试：任何 Runnable 都可以用 with_retry 方法包装\n",
    "retry_runnable = model.with_retry(\n",
    "    stop_after_attempt=3,\n",
    "    wait_exponential_jitter=True\n",
    ")\n",
    "```\n",
    "\n",
    "回退机制\n",
    "```python\n",
    "# 主要方法失败时使用备用方法\n",
    "fallback_chain = primary_model.with_fallbacks([backup_model, simple_template])\n",
    "```\n",
    "\n",
    "超时控制\n",
    "```python\n",
    "# Runnable 没有内置的 with_timeout 方法；超时一般在模型客户端上配置\n",
    "# （如初始化时的 request_timeout 等参数），或在异步调用时用 asyncio.wait_for 包装\n",
    "result = await asyncio.wait_for(model.ainvoke(input), timeout=30.0)\n",
    "```\n"
   ]
  },
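  {
   "cell_type": "markdown",
   "id": "d4c6b8a0",
   "metadata": {},
   "source": [
    "with_fallbacks 的语义可以用纯 Python 勾勒：主链抛异常时按顺序尝试备用链，全部失败才向上抛出（`with_fallbacks_sketch` 是示意函数，不是 LangChain 源码）：\n",
    "\n",
    "```python\n",
    "def with_fallbacks_sketch(primary, fallbacks):\n",
    "    \"\"\"with_fallbacks 行为草图：依次尝试主函数和各备用函数\"\"\"\n",
    "    def run(x):\n",
    "        last_error = None\n",
    "        for fn in (primary, *fallbacks):\n",
    "            try:\n",
    "                return fn(x)\n",
    "            except Exception as e:\n",
    "                last_error = e\n",
    "        raise last_error\n",
    "    return run\n",
    "\n",
    "chain = with_fallbacks_sketch(\n",
    "    lambda x: 1 / 0,                    # 主链：总是失败\n",
    "    [lambda x: f\"备用结果: {x}\"],        # 备用链：兜底成功\n",
    ")\n",
    "print(chain(\"输入\"))  # -> 备用结果: 输入\n",
    "```"
   ]
  },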
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a8a01f7c",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "INFO:__main__:处理客户咨询: 我的API调用出现500错误，怎么解决？...\n",
      "WARNING:__main__:第1次尝试失败，1.0秒后重试: module 'signal' has no attribute 'SIGALRM'\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "=== 企业级 LangChain 客服系统演示 ===\n",
      "\n",
      "--- 单个处理演示 ---\n",
      "\n",
      "客户咨询 1:\n",
      "问题: 我的API调用出现500错误，怎么解决？\n",
      "用户信息: {'user_id': '12345', 'plan': '企业版', 'region': '北京'}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "WARNING:__main__:第2次尝试失败，2.0秒后重试: module 'signal' has no attribute 'SIGALRM'\n",
      "ERROR:__main__:重试失败，已达到最大尝试次数: module 'signal' has no attribute 'SIGALRM'\n",
      "ERROR:__main__:处理失败: module 'signal' has no attribute 'SIGALRM'\n",
      "INFO:__main__:处理客户咨询: 我想查看本月的账单详情...\n",
      "WARNING:__main__:第1次尝试失败，1.0秒后重试: module 'signal' has no attribute 'SIGALRM'\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "回复: 系统出现异常，请联系技术支持。\n",
      "类别: system_error\n",
      "置信度: 0.0\n",
      "需要人工: True\n",
      "处理时间: 3.0秒\n",
      "状态: error\n",
      "\n",
      "客户咨询 2:\n",
      "问题: 我想查看本月的账单详情\n",
      "用户信息: {'user_id': '67890', 'plan': '标准版', 'region': '上海'}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "WARNING:__main__:第2次尝试失败，2.0秒后重试: module 'signal' has no attribute 'SIGALRM'\n",
      "ERROR:__main__:重试失败，已达到最大尝试次数: module 'signal' has no attribute 'SIGALRM'\n",
      "ERROR:__main__:处理失败: module 'signal' has no attribute 'SIGALRM'\n",
      "INFO:__main__:处理客户咨询: 你们的服务怎么样？...\n",
      "WARNING:__main__:第1次尝试失败，1.0秒后重试: module 'signal' has no attribute 'SIGALRM'\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "回复: 系统出现异常，请联系技术支持。\n",
      "类别: system_error\n",
      "置信度: 0.0\n",
      "需要人工: True\n",
      "处理时间: 3.01秒\n",
      "状态: error\n",
      "\n",
      "客户咨询 3:\n",
      "问题: 你们的服务怎么样？\n",
      "用户信息: {'user_id': '11111', 'plan': '免费版', 'region': '深圳'}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "WARNING:__main__:第2次尝试失败，2.0秒后重试: module 'signal' has no attribute 'SIGALRM'\n",
      "ERROR:__main__:重试失败，已达到最大尝试次数: module 'signal' has no attribute 'SIGALRM'\n",
      "ERROR:__main__:处理失败: module 'signal' has no attribute 'SIGALRM'\n",
      "INFO:__main__:开始批量处理 3 个咨询\n",
      "INFO:__main__:处理客户咨询: 我的API调用出现500错误，怎么解决？...\n",
      "WARNING:__main__:第1次尝试失败，1.0秒后重试: module 'signal' has no attribute 'SIGALRM'\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "回复: 系统出现异常，请联系技术支持。\n",
      "类别: system_error\n",
      "置信度: 0.0\n",
      "需要人工: True\n",
      "处理时间: 3.01秒\n",
      "状态: error\n",
      "\n",
      "--- 批量处理演示 ---\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "WARNING:__main__:第2次尝试失败，2.0秒后重试: module 'signal' has no attribute 'SIGALRM'\n",
      "ERROR:__main__:重试失败，已达到最大尝试次数: module 'signal' has no attribute 'SIGALRM'\n",
      "ERROR:__main__:处理失败: module 'signal' has no attribute 'SIGALRM'\n",
      "INFO:__main__:处理客户咨询: 我想查看本月的账单详情...\n",
      "WARNING:__main__:第1次尝试失败，1.0秒后重试: module 'signal' has no attribute 'SIGALRM'\n",
      "WARNING:__main__:第2次尝试失败，2.0秒后重试: module 'signal' has no attribute 'SIGALRM'\n",
      "ERROR:__main__:重试失败，已达到最大尝试次数: module 'signal' has no attribute 'SIGALRM'\n",
      "ERROR:__main__:处理失败: module 'signal' has no attribute 'SIGALRM'\n",
      "INFO:__main__:处理客户咨询: 你们的服务怎么样？...\n",
      "WARNING:__main__:第1次尝试失败，1.0秒后重试: module 'signal' has no attribute 'SIGALRM'\n",
      "WARNING:__main__:第2次尝试失败，2.0秒后重试: module 'signal' has no attribute 'SIGALRM'\n",
      "ERROR:__main__:重试失败，已达到最大尝试次数: module 'signal' has no attribute 'SIGALRM'\n",
      "ERROR:__main__:处理失败: module 'signal' has no attribute 'SIGALRM'\n",
      "INFO:__main__:批量处理完成\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "批量处理了 3 个咨询\n",
      "结果 1: system_error - 系统出现异常，请联系技术支持。...\n",
      "结果 2: system_error - 系统出现异常，请联系技术支持。...\n",
      "结果 3: system_error - 系统出现异常，请联系技术支持。...\n",
      "\n",
      "--- 性能统计 ---\n",
      "总请求数: 6\n",
      "成功请求数: 0\n",
      "失败请求数: 6\n",
      "成功率: 0.0%\n",
      "平均响应时间: 0.0秒\n"
     ]
    }
   ],
   "source": [
    "\"\"\"\n",
    "企业级 LangChain 应用 - 智能客服系统\n",
    "兼容当前版本，包含完整的错误处理和容错机制\n",
    "\"\"\"\n",
    "from langchain_core.runnables import RunnableBranch, RunnableLambda\n",
    "from langchain_core.prompts import PromptTemplate\n",
    "from langchain_community.llms import Tongyi\n",
    "from langchain_core.output_parsers import BaseOutputParser\n",
    "from langchain_core.exceptions import OutputParserException\n",
    "from typing import Dict, List, Optional, Any\n",
    "import json\n",
    "import logging\n",
    "import time\n",
    "import asyncio\n",
    "from functools import wraps\n",
    "\n",
    "# 配置日志\n",
    "logging.basicConfig(level=logging.INFO)\n",
    "logger = logging.getLogger(__name__)\n",
    "\n",
    "def retry_with_backoff(max_attempts=3, base_delay=1.0):\n",
    "    \"\"\"自定义重试装饰器，实现指数退避\"\"\"\n",
    "    def decorator(func):\n",
    "        @wraps(func)\n",
    "        def wrapper(*args, **kwargs):\n",
    "            for attempt in range(max_attempts):\n",
    "                try:\n",
    "                    return func(*args, **kwargs)\n",
    "                except Exception as e:\n",
    "                    if attempt == max_attempts - 1:\n",
    "                        logger.error(f\"重试失败，已达到最大尝试次数: {e}\")\n",
    "                        raise e\n",
    "                    \n",
    "                    delay = base_delay * (2 ** attempt)  # 指数退避\n",
    "                    logger.warning(f\"第{attempt + 1}次尝试失败，{delay}秒后重试: {e}\")\n",
    "                    time.sleep(delay)\n",
    "            return None\n",
    "        return wrapper\n",
    "    return decorator\n",
    "\n",
    "def timeout_handler(timeout_seconds=30.0):\n",
    "    \"\"\"超时控制装饰器\"\"\"\n",
    "    def decorator(func):\n",
    "        @wraps(func)\n",
    "        def wrapper(*args, **kwargs):\n",
    "            import signal\n",
    "            \n",
    "            def timeout_signal_handler(signum, frame):\n",
    "                raise TimeoutError(f\"操作超时 ({timeout_seconds}秒)\")\n",
    "            \n",
    "            # 设置超时信号\n",
    "            old_handler = signal.signal(signal.SIGALRM, timeout_signal_handler)\n",
    "            signal.alarm(int(timeout_seconds))\n",
    "            \n",
    "            try:\n",
    "                result = func(*args, **kwargs)\n",
    "                signal.alarm(0)  # 取消超时\n",
    "                return result\n",
    "            except TimeoutError:\n",
    "                logger.error(f\"操作超时: {timeout_seconds}秒\")\n",
    "                raise\n",
    "            finally:\n",
    "                signal.signal(signal.SIGALRM, old_handler)\n",
    "        return wrapper\n",
    "    return decorator\n",
    "\n",
    "class CustomerServiceResponse(BaseOutputParser[Dict]):\n",
    "    \"\"\"客服响应解析器\"\"\"\n",
    "    \n",
    "    def parse(self, text: str) -> Dict:\n",
    "        try:\n",
    "            # 尝试解析JSON格式\n",
    "            if '{' in text and '}' in text:\n",
    "                start = text.find('{')\n",
    "                end = text.rfind('}') + 1\n",
    "                json_str = text[start:end]\n",
    "                return json.loads(json_str)\n",
    "            \n",
    "            # 如果不是JSON，返回简单格式\n",
    "            return {\n",
    "                \"response\": text.strip(),\n",
    "                \"category\": \"general\",\n",
    "                \"confidence\": 0.8,\n",
    "                \"requires_human\": False\n",
    "            }\n",
    "        except Exception as e:\n",
    "            raise OutputParserException(f\"解析失败: {e}\")\n",
    "    \n",
    "    def get_format_instructions(self) -> str:\n",
    "        return \"\"\"请以JSON格式回复：\n",
    "{\n",
    "  \"response\": \"回复内容\",\n",
    "  \"category\": \"问题类别(technical/billing/general)\",\n",
    "  \"confidence\": 0.9,\n",
    "  \"requires_human\": false\n",
    "}\"\"\"\n",
    "\n",
    "class EnterpriseCustomerService:\n",
    "    \"\"\"企业级客服系统\"\"\"\n",
    "    \n",
    "    def __init__(self):\n",
    "        self.setup_models()  # 设置AI模型\n",
    "        self.setup_chains()   # 设置处理链\n",
    "        self.setup_fallback_system() # 设置容错系统\n",
    "        self.performance_stats = {    # 性能统计\n",
    "            \"total_requests\": 0,\n",
    "            \"successful_requests\": 0,\n",
    "            \"failed_requests\": 0,\n",
    "            \"average_response_time\": 0.0\n",
    "        }\n",
    "    \n",
    "    def setup_models(self):\n",
    "        \"\"\"设置模型\"\"\"\n",
    "        # 主要模型 - 高性能\n",
    "        self.primary_model = Tongyi(\n",
    "            model_name=\"qwen-max\",\n",
    "            temperature=0.3,\n",
    "            max_tokens=500\n",
    "        )\n",
    "        \n",
    "        # 备用模型 - 稳定性优先\n",
    "        self.backup_model = Tongyi(\n",
    "            model_name=\"qwen-plus\", \n",
    "            temperature=0.1,\n",
    "            max_tokens=300\n",
    "        )\n",
    "    \n",
    "    def setup_chains(self):\n",
    "        \"\"\"设置处理链\"\"\"\n",
    "        self.parser = CustomerServiceResponse()\n",
    "        \n",
    "        # 技术问题处理链\n",
    "        tech_prompt = PromptTemplate(\n",
    "            input_variables=[\"question\", \"user_info\"],\n",
    "            template=\"\"\"你是技术支持专家，请回答用户的技术问题。\n",
    "\n",
    "                用户信息：{user_info}\n",
    "                问题：{question}\n",
    "\n",
    "                {format_instructions}\n",
    "\n",
    "                请提供专业的技术解答。\"\"\",\n",
    "            partial_variables={\"format_instructions\": self.parser.get_format_instructions()}\n",
    "        )\n",
    "        \n",
    "        # 账单问题处理链\n",
    "        billing_prompt = PromptTemplate(\n",
    "            input_variables=[\"question\", \"user_info\"],\n",
    "            template=\"\"\"你是账单客服专员，请处理用户的账单相关问题。\n",
    "\n",
    "            用户信息：{user_info}\n",
    "            问题：{question}\n",
    "\n",
    "            {format_instructions}\n",
    "\n",
    "            请提供准确的账单信息和解决方案。\"\"\",\n",
    "            partial_variables={\"format_instructions\": self.parser.get_format_instructions()}\n",
    "        )\n",
    "        \n",
    "        # 通用问题处理链\n",
    "        general_prompt = PromptTemplate(\n",
    "            input_variables=[\"question\", \"user_info\"],\n",
    "            template=\"\"\"你是客服代表，请友好地回答用户问题。\n",
    "\n",
    "        用户信息：{user_info}\n",
    "        问题：{question}\n",
    "\n",
    "        {format_instructions}\n",
    "\n",
    "        请提供有帮助的回复。\"\"\",\n",
    "            partial_variables={\"format_instructions\": self.parser.get_format_instructions()}\n",
    "        )\n",
    "        \n",
    "        # 创建处理链\n",
    "        self.tech_chain = tech_prompt | self.primary_model | self.parser\n",
    "        self.billing_chain = billing_prompt | self.primary_model | self.parser\n",
    "        self.general_chain = general_prompt | self.primary_model | self.parser\n",
    "    \n",
    "    def setup_fallback_system(self):\n",
    "        \"\"\"设置容错系统\"\"\"\n",
    "        \n",
    "        # 创建带有回退机制的处理函数\n",
    "        def create_fallback_chain(primary_chain, chain_name):\n",
    "            def fallback_processor(input_data):\n",
    "                try:\n",
    "                    # 第一层：主要链处理\n",
    "                    return primary_chain.invoke(input_data)\n",
    "                except Exception as e:\n",
    "                    logger.warning(f\"{chain_name} 主链失败，尝试备用模型: {e}\")\n",
    "                    try:\n",
    "                        # 第二层：备用模型处理\n",
    "                        backup_chain = primary_chain.first | self.backup_model | self.parser\n",
    "                        return backup_chain.invoke(input_data)\n",
    "                    except Exception as e2:\n",
    "                        logger.error(f\"{chain_name} 备用模型失败，使用简单响应: {e2}\")\n",
    "                        # 第三层：简单响应\n",
    "                        return {\n",
    "                            \"response\": \"抱歉，系统暂时繁忙，请稍后重试或联系人工客服。\",\n",
    "                            \"category\": \"system_error\",\n",
    "                            \"confidence\": 1.0,\n",
    "                            \"requires_human\": True\n",
    "                        }\n",
    "            \n",
    "            return RunnableLambda(fallback_processor)\n",
    "        \n",
    "        # 为每个链添加回退机制\n",
    "        self.tech_chain_with_fallback = create_fallback_chain(self.tech_chain, \"技术支持\")\n",
    "        self.billing_chain_with_fallback = create_fallback_chain(self.billing_chain, \"账单服务\")\n",
    "        self.general_chain_with_fallback = create_fallback_chain(self.general_chain, \"通用服务\")\n",
    "        \n",
    "        # 创建智能路由分支\n",
    "        self.smart_router = RunnableBranch(\n",
    "            (self._is_technical_question, self.tech_chain_with_fallback),\n",
    "            (self._is_billing_question, self.billing_chain_with_fallback),\n",
    "            self.general_chain_with_fallback  # 默认分支\n",
    "        )\n",
    "    \n",
    "    def _is_technical_question(self, x: Dict) -> bool:\n",
    "        \"\"\"判断是否为技术问题\"\"\"\n",
    "        question = x.get(\"question\", \"\").lower()\n",
    "        tech_keywords = [\"bug\", \"错误\", \"故障\", \"技术\", \"API\", \"代码\", \"系统\", \"登录\", \"密码\"]\n",
    "        return any(keyword in question for keyword in tech_keywords)\n",
    "    \n",
    "    def _is_billing_question(self, x: Dict) -> bool:\n",
    "        \"\"\"判断是否为账单问题\"\"\"\n",
    "        question = x.get(\"question\", \"\").lower()\n",
    "        billing_keywords = [\"账单\", \"费用\", \"付款\", \"充值\", \"退款\", \"价格\", \"订单\"]\n",
    "        return any(keyword in question for keyword in billing_keywords)\n",
    "    \n",
    "    @retry_with_backoff(max_attempts=3, base_delay=1.0)  # 重试机制\n",
    "    @timeout_handler(timeout_seconds=30.0)               # 超时控制\n",
    "    def _process_with_retry_and_timeout(self, input_data: Dict) -> Dict:\n",
    "        \"\"\"带重试和超时的处理方法\"\"\"\n",
    "        return self.smart_router.invoke(input_data)\n",
    "    \n",
    "    def process_customer_inquiry(self, question: str, user_info: Dict) -> Dict:\n",
    "        \"\"\"处理客户咨询\"\"\"\n",
    "        start_time = time.time()\n",
    "        self.performance_stats[\"total_requests\"] += 1\n",
    "        \n",
    "        try:\n",
    "            logger.info(f\"处理客户咨询: {question[:50]}...\")\n",
    "            \n",
    "            # 准备输入\n",
    "            input_data = {\n",
    "                \"question\": question,\n",
    "                \"user_info\": json.dumps(user_info, ensure_ascii=False)\n",
    "            }\n",
    "            \n",
    "            # 执行带重试和超时的处理\n",
    "            result = self._process_with_retry_and_timeout(input_data)\n",
    "            \n",
    "            # 添加处理时间和状态\n",
    "            processing_time = round(time.time() - start_time, 2)\n",
    "            result[\"processing_time\"] = processing_time\n",
    "            result[\"status\"] = \"success\"\n",
    "            \n",
    "            # 更新性能统计\n",
    "            self.performance_stats[\"successful_requests\"] += 1\n",
    "            self._update_average_response_time(processing_time)\n",
    "            \n",
    "            logger.info(f\"处理完成，耗时: {processing_time}秒\")\n",
    "            return result\n",
    "            \n",
    "        except Exception as e:\n",
    "            processing_time = round(time.time() - start_time, 2)\n",
    "            self.performance_stats[\"failed_requests\"] += 1\n",
    "            \n",
    "            logger.error(f\"处理失败: {e}\")\n",
    "            return {\n",
    "                \"response\": \"系统出现异常，请联系技术支持。\",\n",
    "                \"category\": \"system_error\",\n",
    "                \"confidence\": 0.0,\n",
    "                \"requires_human\": True,\n",
    "                \"status\": \"error\",\n",
    "                \"error\": str(e),\n",
    "                \"processing_time\": processing_time\n",
    "            }\n",
    "    \n",
    "    def _update_average_response_time(self, new_time: float):\n",
    "        \"\"\"更新平均响应时间\"\"\"\n",
    "        total_successful = self.performance_stats[\"successful_requests\"]\n",
    "        current_avg = self.performance_stats[\"average_response_time\"]\n",
    "        \n",
    "        # 计算新的平均值\n",
    "        new_avg = ((current_avg * (total_successful - 1)) + new_time) / total_successful\n",
    "        self.performance_stats[\"average_response_time\"] = round(new_avg, 2)\n",
    "    \n",
    "    def batch_process_inquiries(self, inquiries: List[Dict]) -> List[Dict]:\n",
    "        \"\"\"批量处理客户咨询\"\"\"\n",
    "        logger.info(f\"开始批量处理 {len(inquiries)} 个咨询\")\n",
    "        \n",
    "        results = []\n",
    "        for inquiry in inquiries:\n",
    "            result = self.process_customer_inquiry(\n",
    "                inquiry[\"question\"],\n",
    "                inquiry.get(\"user_info\", {})\n",
    "            )\n",
    "            results.append(result)\n",
    "        \n",
    "        logger.info(f\"批量处理完成\")\n",
    "        return results\n",
    "    \n",
    "    def get_performance_stats(self) -> Dict:\n",
    "        \"\"\"获取性能统计\"\"\"\n",
    "        stats = self.performance_stats.copy()\n",
    "        if stats[\"total_requests\"] > 0:\n",
    "            stats[\"success_rate\"] = round(\n",
    "                (stats[\"successful_requests\"] / stats[\"total_requests\"]) * 100, 2\n",
    "            )\n",
    "        else:\n",
    "            stats[\"success_rate\"] = 0.0\n",
    "        \n",
    "        return stats\n",
    "\n",
    "def demo_enterprise_application():\n",
    "    \"\"\"企业应用演示\"\"\"\n",
    "    print(\"=== 企业级 LangChain 客服系统演示 ===\")\n",
    "    \n",
    "    # 初始化系统\n",
    "    customer_service = EnterpriseCustomerService()\n",
    "    \n",
    "    # 测试用例\n",
    "    test_cases = [\n",
    "        {\n",
    "            \"question\": \"我的API调用出现500错误，怎么解决？\",\n",
    "            \"user_info\": {\"user_id\": \"12345\", \"plan\": \"企业版\", \"region\": \"北京\"}\n",
    "        },\n",
    "        {\n",
    "            \"question\": \"我想查看本月的账单详情\",\n",
    "            \"user_info\": {\"user_id\": \"67890\", \"plan\": \"标准版\", \"region\": \"上海\"}\n",
    "        },\n",
    "        {\n",
    "            \"question\": \"你们的服务怎么样？\",\n",
    "            \"user_info\": {\"user_id\": \"11111\", \"plan\": \"免费版\", \"region\": \"深圳\"}\n",
    "        }\n",
    "    ]\n",
    "    \n",
    "    # 单个处理演示\n",
    "    print(\"\\n--- 单个处理演示 ---\")\n",
    "    for i, case in enumerate(test_cases, 1):\n",
    "        print(f\"\\n客户咨询 {i}:\")\n",
    "        print(f\"问题: {case['question']}\")\n",
    "        print(f\"用户信息: {case['user_info']}\")\n",
    "        \n",
    "        result = customer_service.process_customer_inquiry(\n",
    "            case[\"question\"], \n",
    "            case[\"user_info\"]\n",
    "        )\n",
    "        \n",
    "        print(f\"回复: {result['response']}\")\n",
    "        print(f\"类别: {result['category']}\")\n",
    "        print(f\"置信度: {result['confidence']}\")\n",
    "        print(f\"需要人工: {result['requires_human']}\")\n",
    "        print(f\"处理时间: {result['processing_time']}秒\")\n",
    "        print(f\"状态: {result['status']}\")\n",
    "    \n",
    "    # 批量处理演示\n",
    "    print(f\"\\n--- 批量处理演示 ---\")\n",
    "    batch_results = customer_service.batch_process_inquiries(test_cases)\n",
    "    \n",
    "    print(f\"批量处理了 {len(batch_results)} 个咨询\")\n",
    "    for i, result in enumerate(batch_results, 1):\n",
    "        print(f\"结果 {i}: {result['category']} - {result['response'][:50]}...\")\n",
    "    \n",
    "    # 性能统计\n",
    "    print(f\"\\n--- 性能统计 ---\")\n",
    "    stats = customer_service.get_performance_stats()\n",
    "    print(f\"总请求数: {stats['total_requests']}\")\n",
    "    print(f\"成功请求数: {stats['successful_requests']}\")\n",
    "    print(f\"失败请求数: {stats['failed_requests']}\")\n",
    "    print(f\"成功率: {stats['success_rate']}%\")\n",
    "    print(f\"平均响应时间: {stats['average_response_time']}秒\")\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "    demo_enterprise_application()\n"
   ]
  },
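   {
    "cell_type": "markdown",
    "id": "f1a2b3c4",
    "metadata": {},
    "source": [
     "上面 `retry_with_backoff` 的等待时间按 `base_delay * (2 ** attempt)` 指数增长（1 秒、2 秒、4 秒……）。下面是去掉日志后的最小可运行示意（纯标准库，与 LangChain 无关；`flaky` 是假设的演示函数）：\n",
     "\n",
     "```python\n",
     "import time\n",
     "from functools import wraps\n",
     "\n",
     "def retry_with_backoff(max_attempts=3, base_delay=0.01):\n",
     "    # 失败后按 base_delay * 2**attempt 的间隔重试，最后一次失败则抛出异常\n",
     "    def decorator(func):\n",
     "        @wraps(func)\n",
     "        def wrapper(*args, **kwargs):\n",
     "            for attempt in range(max_attempts):\n",
     "                try:\n",
     "                    return func(*args, **kwargs)\n",
     "                except Exception:\n",
     "                    if attempt == max_attempts - 1:\n",
     "                        raise\n",
     "                    time.sleep(base_delay * (2 ** attempt))  # 指数退避\n",
     "        return wrapper\n",
     "    return decorator\n",
     "\n",
     "calls = []\n",
     "\n",
     "@retry_with_backoff(max_attempts=3, base_delay=0.01)\n",
     "def flaky():\n",
     "    # 前两次调用抛出异常，第三次成功\n",
     "    calls.append(1)\n",
     "    if len(calls) < 3:\n",
     "        raise RuntimeError('临时故障')\n",
     "    return 'ok'\n",
     "\n",
     "print(flaky(), len(calls))\n",
     "```\n",
     "\n",
     "正文中的版本在此基础上增加了日志输出，两者的退避节奏一致。"
    ]
   },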
  {
   "cell_type": "markdown",
   "id": "375457a7",
   "metadata": {},
   "source": [
    "### 5. 配置和定制\n",
    "运行时配置\n",
    "\n",
    "```python\n",
    "# 传递配置参数\n",
    "config = {\"temperature\": 0.7, \"max_tokens\": 100}\n",
    "result = model.invoke(input, config=config)\n",
    "\n",
    "# 绑定默认配置\n",
    "bound_model = model.bind(temperature=0.7)\n",
    "```\n",
    "\n",
    "\n",
    "标签和元数据\n",
    "\n",
    "```python\n",
    "# 添加标签用于追踪\n",
    "tagged_chain = chain.with_tags([\"production\", \"v1.0\"])\n",
    "\n",
    "# 添加元数据\n",
    "chain_with_metadata = chain.with_metadata({\"version\": \"1.0\", \"author\": \"team\"})\n",
    "```"
   ]
  },
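   {
    "cell_type": "markdown",
    "id": "c9d8e7f6",
    "metadata": {},
    "source": [
     "`bind` 的效果可以类比标准库的 `functools.partial`：把部分参数固定下来，得到一个少传几个参数的新可调用对象（这只是帮助理解的类比，并非 LangChain 的真实实现）：\n",
     "\n",
     "```python\n",
     "from functools import partial\n",
     "\n",
     "def generate(prompt, temperature, max_tokens):\n",
     "    # 假设的生成函数，仅用于演示参数绑定\n",
     "    return f'prompt={prompt}, temperature={temperature}, max_tokens={max_tokens}'\n",
     "\n",
     "# 类似 model.bind(temperature=0.7)：先固定默认参数\n",
     "bound = partial(generate, temperature=0.7, max_tokens=100)\n",
     "\n",
     "# 调用时只需传入剩余参数\n",
     "print(bound('你好'))\n",
     "```"
    ]
   },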
  {
   "cell_type": "markdown",
   "id": "ed3755c2",
   "metadata": {},
   "source": [
    "6. 调试和监控\n",
    "中间结果获取\n",
    "\n",
    "```python\n",
    "# 获取每一步的输出\n",
    "def debug_step(step_output):\n",
    "    print(f\"步骤输出: {step_output}\")\n",
    "    return step_output\n",
    "\n",
    "debug_chain = prompt | RunnableLambda(debug_step) | model\n",
    "```\n",
    "\n",
    "性能监控\n",
    "\n",
    "```python\n",
    "import time\n",
    "\n",
    "def timing_wrapper(runnable):\n",
    "    def timed_invoke(input):\n",
    "        start = time.time()\n",
    "        result = runnable.invoke(input)\n",
    "        print(f\"执行时间: {time.time() - start:.2f}秒\")\n",
    "        return result\n",
    "    return RunnableLambda(timed_invoke)\n",
    "```"
   ]
  },
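   {
    "cell_type": "markdown",
    "id": "d4e5f6a7",
    "metadata": {},
    "source": [
     "`timing_wrapper` 的计时逻辑可以先脱离 LangChain 验证：用一个普通函数代替 `runnable.invoke`，包装前后行为不变，只是多打印一行耗时（纯标准库示意）：\n",
     "\n",
     "```python\n",
     "import time\n",
     "\n",
     "def timing_wrapper(func):\n",
     "    # 对应正文中包装 runnable.invoke 的思路\n",
     "    def timed_invoke(input):\n",
     "        start = time.time()\n",
     "        result = func(input)\n",
     "        print(f'执行时间: {time.time() - start:.2f}秒')\n",
     "        return result\n",
     "    return timed_invoke\n",
     "\n",
     "slow_upper = timing_wrapper(lambda s: s.upper())\n",
     "print(slow_upper('hello'))  # HELLO\n",
     "```"
    ]
   },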
  {
   "cell_type": "markdown",
   "id": "0853753d",
   "metadata": {},
   "source": [
    "7. 自定义 Runnable\n",
    "基本实现\n",
    "\n",
    "```python\n",
    "from langchain_core.runnables import Runnable\n",
    "\n",
    "class CustomProcessor(Runnable[str, Dict]):\n",
    "    def invoke(self, input: str, config=None) -> Dict:\n",
    "        # 自定义处理逻辑\n",
    "        return {\"processed\": input.upper(), \"length\": len(input)}\n",
    "    \n",
    "    async def ainvoke(self, input: str, config=None) -> Dict:\n",
    "        # 异步版本\n",
    "        return self.invoke(input, config)\n",
    "```\n",
    "\n",
    "\n",
    "完整实现模板\n",
    "```python\n",
    "class AdvancedRunnable(Runnable[InputType, OutputType]):\n",
    "    def __init__(self, param1, param2):\n",
    "        self.param1 = param1\n",
    "        self.param2 = param2\n",
    "    \n",
    "    def invoke(self, input: InputType, config=None) -> OutputType:\n",
    "        # 同步执行逻辑\n",
    "        pass\n",
    "    \n",
    "    async def ainvoke(self, input: InputType, config=None) -> OutputType:\n",
    "        # 异步执行逻辑\n",
    "        pass\n",
    "    \n",
    "    def batch(self, inputs: List[InputType], config=None) -> List[OutputType]:\n",
    "        # 批量处理逻辑\n",
    "        return [self.invoke(inp, config) for inp in inputs]\n",
    "    \n",
    "    def stream(self, input: InputType, config=None):\n",
    "        # 流式处理逻辑\n",
    "        yield from self._stream_generator(input)\n",
    "\n",
    "```"
   ]
  },
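   {
    "cell_type": "markdown",
    "id": "e8f9a0b1",
    "metadata": {},
    "source": [
     "自定义 Runnable 之所以能用 `|` 组合，靠的就是第 2 节介绍的 `__or__`。下面把那段伪代码补成一个可运行的最小版本（纯 Python，不依赖 LangChain；`MiniRunnable` 等类名为演示用，真实的 RunnableSequence 功能远比这丰富）：\n",
     "\n",
     "```python\n",
     "class MiniRunnable:\n",
     "    # 统一的调用入口，子类各自实现\n",
     "    def invoke(self, input):\n",
     "        raise NotImplementedError\n",
     "\n",
     "    def __or__(self, other):\n",
     "        # a | b 返回一个按顺序执行的组合\n",
     "        return MiniSequence(self, other)\n",
     "\n",
     "class MiniSequence(MiniRunnable):\n",
     "    def __init__(self, first, last):\n",
     "        self.first = first\n",
     "        self.last = last\n",
     "\n",
     "    def invoke(self, input):\n",
     "        # 前一步的输出作为后一步的输入\n",
     "        return self.last.invoke(self.first.invoke(input))\n",
     "\n",
     "class Upper(MiniRunnable):\n",
     "    def invoke(self, input):\n",
     "        return input.upper()\n",
     "\n",
     "class Exclaim(MiniRunnable):\n",
     "    def invoke(self, input):\n",
     "        return input + '!'\n",
     "\n",
     "chain = Upper() | Exclaim()\n",
     "print(chain.invoke('hello'))  # HELLO!\n",
     "```"
    ]
   },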
  {
   "cell_type": "markdown",
   "id": "a0c99e6f",
   "metadata": {},
   "source": [
    "### 最佳实践\n",
    "\n",
    "```python\n",
    "# 1. 明确类型定义\n",
    "class MyChain(Runnable[Dict[str, str], Dict[str, Any]]):\n",
    "    pass\n",
    "\n",
    "# 2. 合理使用并行\n",
    "parallel_processing = RunnableParallel({\n",
    "    \"fast_analysis\": quick_model,\n",
    "    \"detailed_analysis\": detailed_model\n",
    "})\n",
    "\n",
    "# 3. 添加错误处理\n",
    "robust_chain = (\n",
    "    preprocessing |\n",
    "    model.with_fallbacks([backup_model]) |\n",
    "    postprocessing\n",
    ")\n",
    "\n",
    "# 4. 使用配置管理\n",
    "configurable_chain = model.configurable_fields(\n",
    "    temperature=ConfigurableField(id=\"temp\", name=\"Temperature\")\n",
    ")\n",
    "```"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "MLOps",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
