{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "ce925e1ba65bdea6",
   "metadata": {
    "collapsed": false
   },
   "source": [
     "# Input and output types\n",
     "https://python.langchain.com.cn/docs/expression_language/interface\n",
     "The Runnable protocol is a standard interface that makes it easy to define custom chains and invoke them in a standard way. The standard interface includes:\n",
     "\n",
     "- [`stream`](https://python.langchain.com.cn/docs/expression_language/interface#stream): stream back chunks of the response\n",
     "- [`invoke`](https://python.langchain.com.cn/docs/expression_language/interface#invoke): call the chain on an input\n",
     "- [`batch`](https://python.langchain.com.cn/docs/expression_language/interface#batch): call the chain on a list of inputs\n",
     "\n",
     "These methods also have corresponding async counterparts:\n",
     "\n",
     "- [`astream`](https://python.langchain.com.cn/docs/expression_language/interface#async-stream): stream back chunks of the response asynchronously\n",
     "- [`ainvoke`](https://python.langchain.com.cn/docs/expression_language/interface#async-invoke): call the chain on an input asynchronously\n",
     "- [`abatch`](https://python.langchain.com.cn/docs/expression_language/interface#async-batch): call the chain on a list of inputs asynchronously\n",
     "- [`astream_log`](https://python.langchain.com.cn/docs/expression_language/interface#async-stream-intermediate-steps): stream back intermediate steps asynchronously, as well as the final response\n",
     "- [`astream_events`](https://python.langchain.com.cn/docs/expression_language/interface#async-stream-events): **beta** stream events as they happen in the chain asynchronously (introduced in `langchain-core` 0.1.14)\n",
     "\n",
     "| Component | Input type | Output type |\n",
     "| --- | --- | --- |\n",
     "| Prompt | dictionary | PromptValue |\n",
     "| ChatModel | single string, list of chat messages, or PromptValue | ChatMessage |\n",
     "| LLM | single string, list of chat messages, or PromptValue | string |\n",
     "| OutputParser | output of an LLM or ChatModel | depends on the parser |\n",
     "| Retriever | single string | list of Documents |\n",
     "| Tool | single string or dictionary, depending on the tool | depends on the tool |\n",
     "\n",
     "All runnables expose input and output **schemas** so their inputs and outputs can be inspected:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "fe86fa22ff348e4a",
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-10-25T05:12:48.934843Z",
     "start_time": "2024-10-25T05:12:48.846264Z"
    }
   },
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "from dotenv import load_dotenv\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "load_dotenv()\n",
    "\n",
    "llm = ChatOpenAI(\n",
     "    # If the environment variable is not configured, replace the next line with your Bailian API key: api_key=\"sk-xxx\",\n",
    "    openai_api_key=os.getenv(\"DASHSCOPE_API_KEY\"),\n",
    "    openai_api_base=\"https://dashscope.aliyuncs.com/compatible-mode/v1\",\n",
    "    model_name=\"qwen-max\"\n",
    ")\n",
    "template = \"Tell me a joke about {topic}\"\n",
    "prompt = ChatPromptTemplate.from_template(template).with_config({\"run_name\": \"my_template\", \"tags\": [\"my_template\"]})\n",
    "chain = prompt | llm"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9ac38ab4158f4e6d",
   "metadata": {
    "collapsed": false
   },
   "source": [
     "## Input schema\n",
     "A description of the inputs accepted by a Runnable. This is a Pydantic model dynamically generated from the structure of any Runnable. You can call `.schema()` on it to obtain a JSONSchema representation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "db4bb3648f773ba5",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-10-25T03:19:27.737700Z",
     "start_time": "2024-10-25T03:19:27.715535Z"
    },
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'properties': {'topic': {'title': 'Topic', 'type': 'string'}},\n",
       " 'required': ['topic'],\n",
       " 'title': 'PromptInput',\n",
       " 'type': 'object'}"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# The input schema of the chain is the input schema of its first part, the prompt.\n",
    "chain.input_schema.schema()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d2a56736a2a8e821",
   "metadata": {
    "collapsed": false
   },
   "source": [
     "## Output schema\n",
     "A description of the outputs produced by a Runnable. This is a Pydantic model dynamically generated from the structure of any Runnable. You can call `.schema()` on it to obtain a JSONSchema representation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "9244ae2650312ab9",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-10-25T03:23:13.387851Z",
     "start_time": "2024-10-25T03:23:13.332915Z"
    },
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'$defs': {'AIMessage': {'additionalProperties': True,\n",
       "   'description': 'Message from an AI.\\n\\nAIMessage is returned from a chat model as a response to a prompt.\\n\\nThis message represents the output of the model and consists of both\\nthe raw output as returned by the model together standardized fields\\n(e.g., tool calls, usage metadata) added by the LangChain framework.',\n",
       "   'properties': {'content': {'anyOf': [{'type': 'string'},\n",
       "      {'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]},\n",
       "       'type': 'array'}],\n",
       "     'title': 'Content'},\n",
       "    'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},\n",
       "    'response_metadata': {'title': 'Response Metadata', 'type': 'object'},\n",
       "    'type': {'const': 'ai',\n",
       "     'default': 'ai',\n",
       "     'enum': ['ai'],\n",
       "     'title': 'Type',\n",
       "     'type': 'string'},\n",
       "    'name': {'anyOf': [{'type': 'string'}, {'type': 'null'}],\n",
       "     'default': None,\n",
       "     'title': 'Name'},\n",
       "    'id': {'anyOf': [{'type': 'string'}, {'type': 'null'}],\n",
       "     'default': None,\n",
       "     'title': 'Id'},\n",
       "    'example': {'default': False, 'title': 'Example', 'type': 'boolean'},\n",
       "    'tool_calls': {'default': [],\n",
       "     'items': {'$ref': '#/$defs/ToolCall'},\n",
       "     'title': 'Tool Calls',\n",
       "     'type': 'array'},\n",
       "    'invalid_tool_calls': {'default': [],\n",
       "     'items': {'$ref': '#/$defs/InvalidToolCall'},\n",
       "     'title': 'Invalid Tool Calls',\n",
       "     'type': 'array'},\n",
       "    'usage_metadata': {'anyOf': [{'$ref': '#/$defs/UsageMetadata'},\n",
       "      {'type': 'null'}],\n",
       "     'default': None}},\n",
       "   'required': ['content'],\n",
       "   'title': 'AIMessage',\n",
       "   'type': 'object'},\n",
       "  'AIMessageChunk': {'additionalProperties': True,\n",
       "   'description': 'Message chunk from an AI.',\n",
       "   'properties': {'content': {'anyOf': [{'type': 'string'},\n",
       "      {'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]},\n",
       "       'type': 'array'}],\n",
       "     'title': 'Content'},\n",
       "    'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},\n",
       "    'response_metadata': {'title': 'Response Metadata', 'type': 'object'},\n",
       "    'type': {'const': 'AIMessageChunk',\n",
       "     'default': 'AIMessageChunk',\n",
       "     'enum': ['AIMessageChunk'],\n",
       "     'title': 'Type',\n",
       "     'type': 'string'},\n",
       "    'name': {'anyOf': [{'type': 'string'}, {'type': 'null'}],\n",
       "     'default': None,\n",
       "     'title': 'Name'},\n",
       "    'id': {'anyOf': [{'type': 'string'}, {'type': 'null'}],\n",
       "     'default': None,\n",
       "     'title': 'Id'},\n",
       "    'example': {'default': False, 'title': 'Example', 'type': 'boolean'},\n",
       "    'tool_calls': {'default': [],\n",
       "     'items': {'$ref': '#/$defs/ToolCall'},\n",
       "     'title': 'Tool Calls',\n",
       "     'type': 'array'},\n",
       "    'invalid_tool_calls': {'default': [],\n",
       "     'items': {'$ref': '#/$defs/InvalidToolCall'},\n",
       "     'title': 'Invalid Tool Calls',\n",
       "     'type': 'array'},\n",
       "    'usage_metadata': {'anyOf': [{'$ref': '#/$defs/UsageMetadata'},\n",
       "      {'type': 'null'}],\n",
       "     'default': None},\n",
       "    'tool_call_chunks': {'default': [],\n",
       "     'items': {'$ref': '#/$defs/ToolCallChunk'},\n",
       "     'title': 'Tool Call Chunks',\n",
       "     'type': 'array'}},\n",
       "   'required': ['content'],\n",
       "   'title': 'AIMessageChunk',\n",
       "   'type': 'object'},\n",
       "  'ChatMessage': {'additionalProperties': True,\n",
       "   'description': 'Message that can be assigned an arbitrary speaker (i.e. role).',\n",
       "   'properties': {'content': {'anyOf': [{'type': 'string'},\n",
       "      {'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]},\n",
       "       'type': 'array'}],\n",
       "     'title': 'Content'},\n",
       "    'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},\n",
       "    'response_metadata': {'title': 'Response Metadata', 'type': 'object'},\n",
       "    'type': {'const': 'chat',\n",
       "     'default': 'chat',\n",
       "     'enum': ['chat'],\n",
       "     'title': 'Type',\n",
       "     'type': 'string'},\n",
       "    'name': {'anyOf': [{'type': 'string'}, {'type': 'null'}],\n",
       "     'default': None,\n",
       "     'title': 'Name'},\n",
       "    'id': {'anyOf': [{'type': 'string'}, {'type': 'null'}],\n",
       "     'default': None,\n",
       "     'title': 'Id'},\n",
       "    'role': {'title': 'Role', 'type': 'string'}},\n",
       "   'required': ['content', 'role'],\n",
       "   'title': 'ChatMessage',\n",
       "   'type': 'object'},\n",
       "  'ChatMessageChunk': {'additionalProperties': True,\n",
       "   'description': 'Chat Message chunk.',\n",
       "   'properties': {'content': {'anyOf': [{'type': 'string'},\n",
       "      {'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]},\n",
       "       'type': 'array'}],\n",
       "     'title': 'Content'},\n",
       "    'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},\n",
       "    'response_metadata': {'title': 'Response Metadata', 'type': 'object'},\n",
       "    'type': {'const': 'ChatMessageChunk',\n",
       "     'default': 'ChatMessageChunk',\n",
       "     'enum': ['ChatMessageChunk'],\n",
       "     'title': 'Type',\n",
       "     'type': 'string'},\n",
       "    'name': {'anyOf': [{'type': 'string'}, {'type': 'null'}],\n",
       "     'default': None,\n",
       "     'title': 'Name'},\n",
       "    'id': {'anyOf': [{'type': 'string'}, {'type': 'null'}],\n",
       "     'default': None,\n",
       "     'title': 'Id'},\n",
       "    'role': {'title': 'Role', 'type': 'string'}},\n",
       "   'required': ['content', 'role'],\n",
       "   'title': 'ChatMessageChunk',\n",
       "   'type': 'object'},\n",
       "  'FunctionMessage': {'additionalProperties': True,\n",
       "   'description': 'Message for passing the result of executing a tool back to a model.\\n\\nFunctionMessage are an older version of the ToolMessage schema, and\\ndo not contain the tool_call_id field.\\n\\nThe tool_call_id field is used to associate the tool call request with the\\ntool call response. This is useful in situations where a chat model is able\\nto request multiple tool calls in parallel.',\n",
       "   'properties': {'content': {'anyOf': [{'type': 'string'},\n",
       "      {'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]},\n",
       "       'type': 'array'}],\n",
       "     'title': 'Content'},\n",
       "    'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},\n",
       "    'response_metadata': {'title': 'Response Metadata', 'type': 'object'},\n",
       "    'type': {'const': 'function',\n",
       "     'default': 'function',\n",
       "     'enum': ['function'],\n",
       "     'title': 'Type',\n",
       "     'type': 'string'},\n",
       "    'name': {'title': 'Name', 'type': 'string'},\n",
       "    'id': {'anyOf': [{'type': 'string'}, {'type': 'null'}],\n",
       "     'default': None,\n",
       "     'title': 'Id'}},\n",
       "   'required': ['content', 'name'],\n",
       "   'title': 'FunctionMessage',\n",
       "   'type': 'object'},\n",
       "  'FunctionMessageChunk': {'additionalProperties': True,\n",
       "   'description': 'Function Message chunk.',\n",
       "   'properties': {'content': {'anyOf': [{'type': 'string'},\n",
       "      {'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]},\n",
       "       'type': 'array'}],\n",
       "     'title': 'Content'},\n",
       "    'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},\n",
       "    'response_metadata': {'title': 'Response Metadata', 'type': 'object'},\n",
       "    'type': {'const': 'FunctionMessageChunk',\n",
       "     'default': 'FunctionMessageChunk',\n",
       "     'enum': ['FunctionMessageChunk'],\n",
       "     'title': 'Type',\n",
       "     'type': 'string'},\n",
       "    'name': {'title': 'Name', 'type': 'string'},\n",
       "    'id': {'anyOf': [{'type': 'string'}, {'type': 'null'}],\n",
       "     'default': None,\n",
       "     'title': 'Id'}},\n",
       "   'required': ['content', 'name'],\n",
       "   'title': 'FunctionMessageChunk',\n",
       "   'type': 'object'},\n",
       "  'HumanMessage': {'additionalProperties': True,\n",
       "   'description': 'Message from a human.\\n\\nHumanMessages are messages that are passed in from a human to the model.\\n\\nExample:\\n\\n    .. code-block:: python\\n\\n        from langchain_core.messages import HumanMessage, SystemMessage\\n\\n        messages = [\\n            SystemMessage(\\n                content=\"You are a helpful assistant! Your name is Bob.\"\\n            ),\\n            HumanMessage(\\n                content=\"What is your name?\"\\n            )\\n        ]\\n\\n        # Instantiate a chat model and invoke it with the messages\\n        model = ...\\n        print(model.invoke(messages))',\n",
       "   'properties': {'content': {'anyOf': [{'type': 'string'},\n",
       "      {'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]},\n",
       "       'type': 'array'}],\n",
       "     'title': 'Content'},\n",
       "    'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},\n",
       "    'response_metadata': {'title': 'Response Metadata', 'type': 'object'},\n",
       "    'type': {'const': 'human',\n",
       "     'default': 'human',\n",
       "     'enum': ['human'],\n",
       "     'title': 'Type',\n",
       "     'type': 'string'},\n",
       "    'name': {'anyOf': [{'type': 'string'}, {'type': 'null'}],\n",
       "     'default': None,\n",
       "     'title': 'Name'},\n",
       "    'id': {'anyOf': [{'type': 'string'}, {'type': 'null'}],\n",
       "     'default': None,\n",
       "     'title': 'Id'},\n",
       "    'example': {'default': False, 'title': 'Example', 'type': 'boolean'}},\n",
       "   'required': ['content'],\n",
       "   'title': 'HumanMessage',\n",
       "   'type': 'object'},\n",
       "  'HumanMessageChunk': {'additionalProperties': True,\n",
       "   'description': 'Human Message chunk.',\n",
       "   'properties': {'content': {'anyOf': [{'type': 'string'},\n",
       "      {'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]},\n",
       "       'type': 'array'}],\n",
       "     'title': 'Content'},\n",
       "    'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},\n",
       "    'response_metadata': {'title': 'Response Metadata', 'type': 'object'},\n",
       "    'type': {'const': 'HumanMessageChunk',\n",
       "     'default': 'HumanMessageChunk',\n",
       "     'enum': ['HumanMessageChunk'],\n",
       "     'title': 'Type',\n",
       "     'type': 'string'},\n",
       "    'name': {'anyOf': [{'type': 'string'}, {'type': 'null'}],\n",
       "     'default': None,\n",
       "     'title': 'Name'},\n",
       "    'id': {'anyOf': [{'type': 'string'}, {'type': 'null'}],\n",
       "     'default': None,\n",
       "     'title': 'Id'},\n",
       "    'example': {'default': False, 'title': 'Example', 'type': 'boolean'}},\n",
       "   'required': ['content'],\n",
       "   'title': 'HumanMessageChunk',\n",
       "   'type': 'object'},\n",
       "  'InputTokenDetails': {'description': 'Breakdown of input token counts.\\n\\nDoes *not* need to sum to full input token count. Does *not* need to have all keys.\\n\\nExample:\\n\\n    .. code-block:: python\\n\\n        {\\n            \"audio\": 10,\\n            \"cache_creation\": 200,\\n            \"cache_read\": 100,\\n        }\\n\\n.. versionadded:: 0.3.9',\n",
       "   'properties': {'audio': {'title': 'Audio', 'type': 'integer'},\n",
       "    'cache_creation': {'title': 'Cache Creation', 'type': 'integer'},\n",
       "    'cache_read': {'title': 'Cache Read', 'type': 'integer'}},\n",
       "   'title': 'InputTokenDetails',\n",
       "   'type': 'object'},\n",
       "  'InvalidToolCall': {'description': 'Allowance for errors made by LLM.\\n\\nHere we add an `error` key to surface errors made during generation\\n(e.g., invalid JSON arguments.)',\n",
       "   'properties': {'name': {'anyOf': [{'type': 'string'}, {'type': 'null'}],\n",
       "     'title': 'Name'},\n",
       "    'args': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'title': 'Args'},\n",
       "    'id': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'title': 'Id'},\n",
       "    'error': {'anyOf': [{'type': 'string'}, {'type': 'null'}],\n",
       "     'title': 'Error'},\n",
       "    'type': {'const': 'invalid_tool_call',\n",
       "     'enum': ['invalid_tool_call'],\n",
       "     'title': 'Type',\n",
       "     'type': 'string'}},\n",
       "   'required': ['name', 'args', 'id', 'error'],\n",
       "   'title': 'InvalidToolCall',\n",
       "   'type': 'object'},\n",
       "  'OutputTokenDetails': {'description': 'Breakdown of output token counts.\\n\\nDoes *not* need to sum to full output token count. Does *not* need to have all keys.\\n\\nExample:\\n\\n    .. code-block:: python\\n\\n        {\\n            \"audio\": 10,\\n            \"reasoning\": 200,\\n        }\\n\\n.. versionadded:: 0.3.9',\n",
       "   'properties': {'audio': {'title': 'Audio', 'type': 'integer'},\n",
       "    'reasoning': {'title': 'Reasoning', 'type': 'integer'}},\n",
       "   'title': 'OutputTokenDetails',\n",
       "   'type': 'object'},\n",
       "  'SystemMessage': {'additionalProperties': True,\n",
       "   'description': 'Message for priming AI behavior.\\n\\nThe system message is usually passed in as the first of a sequence\\nof input messages.\\n\\nExample:\\n\\n    .. code-block:: python\\n\\n        from langchain_core.messages import HumanMessage, SystemMessage\\n\\n        messages = [\\n            SystemMessage(\\n                content=\"You are a helpful assistant! Your name is Bob.\"\\n            ),\\n            HumanMessage(\\n                content=\"What is your name?\"\\n            )\\n        ]\\n\\n        # Define a chat model and invoke it with the messages\\n        print(model.invoke(messages))',\n",
       "   'properties': {'content': {'anyOf': [{'type': 'string'},\n",
       "      {'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]},\n",
       "       'type': 'array'}],\n",
       "     'title': 'Content'},\n",
       "    'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},\n",
       "    'response_metadata': {'title': 'Response Metadata', 'type': 'object'},\n",
       "    'type': {'const': 'system',\n",
       "     'default': 'system',\n",
       "     'enum': ['system'],\n",
       "     'title': 'Type',\n",
       "     'type': 'string'},\n",
       "    'name': {'anyOf': [{'type': 'string'}, {'type': 'null'}],\n",
       "     'default': None,\n",
       "     'title': 'Name'},\n",
       "    'id': {'anyOf': [{'type': 'string'}, {'type': 'null'}],\n",
       "     'default': None,\n",
       "     'title': 'Id'}},\n",
       "   'required': ['content'],\n",
       "   'title': 'SystemMessage',\n",
       "   'type': 'object'},\n",
       "  'SystemMessageChunk': {'additionalProperties': True,\n",
       "   'description': 'System Message chunk.',\n",
       "   'properties': {'content': {'anyOf': [{'type': 'string'},\n",
       "      {'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]},\n",
       "       'type': 'array'}],\n",
       "     'title': 'Content'},\n",
       "    'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},\n",
       "    'response_metadata': {'title': 'Response Metadata', 'type': 'object'},\n",
       "    'type': {'const': 'SystemMessageChunk',\n",
       "     'default': 'SystemMessageChunk',\n",
       "     'enum': ['SystemMessageChunk'],\n",
       "     'title': 'Type',\n",
       "     'type': 'string'},\n",
       "    'name': {'anyOf': [{'type': 'string'}, {'type': 'null'}],\n",
       "     'default': None,\n",
       "     'title': 'Name'},\n",
       "    'id': {'anyOf': [{'type': 'string'}, {'type': 'null'}],\n",
       "     'default': None,\n",
       "     'title': 'Id'}},\n",
       "   'required': ['content'],\n",
       "   'title': 'SystemMessageChunk',\n",
       "   'type': 'object'},\n",
       "  'ToolCall': {'description': 'Represents a request to call a tool.\\n\\nExample:\\n\\n    .. code-block:: python\\n\\n        {\\n            \"name\": \"foo\",\\n            \"args\": {\"a\": 1},\\n            \"id\": \"123\"\\n        }\\n\\n    This represents a request to call the tool named \"foo\" with arguments {\"a\": 1}\\n    and an identifier of \"123\".',\n",
       "   'properties': {'name': {'title': 'Name', 'type': 'string'},\n",
       "    'args': {'title': 'Args', 'type': 'object'},\n",
       "    'id': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'title': 'Id'},\n",
       "    'type': {'const': 'tool_call',\n",
       "     'enum': ['tool_call'],\n",
       "     'title': 'Type',\n",
       "     'type': 'string'}},\n",
       "   'required': ['name', 'args', 'id'],\n",
       "   'title': 'ToolCall',\n",
       "   'type': 'object'},\n",
       "  'ToolCallChunk': {'description': 'A chunk of a tool call (e.g., as part of a stream).\\n\\nWhen merging ToolCallChunks (e.g., via AIMessageChunk.__add__),\\nall string attributes are concatenated. Chunks are only merged if their\\nvalues of `index` are equal and not None.\\n\\nExample:\\n\\n.. code-block:: python\\n\\n    left_chunks = [ToolCallChunk(name=\"foo\", args=\\'{\"a\":\\', index=0)]\\n    right_chunks = [ToolCallChunk(name=None, args=\\'1}\\', index=0)]\\n\\n    (\\n        AIMessageChunk(content=\"\", tool_call_chunks=left_chunks)\\n        + AIMessageChunk(content=\"\", tool_call_chunks=right_chunks)\\n    ).tool_call_chunks == [ToolCallChunk(name=\\'foo\\', args=\\'{\"a\":1}\\', index=0)]',\n",
       "   'properties': {'name': {'anyOf': [{'type': 'string'}, {'type': 'null'}],\n",
       "     'title': 'Name'},\n",
       "    'args': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'title': 'Args'},\n",
       "    'id': {'anyOf': [{'type': 'string'}, {'type': 'null'}], 'title': 'Id'},\n",
       "    'index': {'anyOf': [{'type': 'integer'}, {'type': 'null'}],\n",
       "     'title': 'Index'},\n",
       "    'type': {'const': 'tool_call_chunk',\n",
       "     'enum': ['tool_call_chunk'],\n",
       "     'title': 'Type',\n",
       "     'type': 'string'}},\n",
       "   'required': ['name', 'args', 'id', 'index'],\n",
       "   'title': 'ToolCallChunk',\n",
       "   'type': 'object'},\n",
       "  'ToolMessage': {'additionalProperties': True,\n",
       "   'description': 'Message for passing the result of executing a tool back to a model.\\n\\nToolMessages contain the result of a tool invocation. Typically, the result\\nis encoded inside the `content` field.\\n\\nExample: A ToolMessage representing a result of 42 from a tool call with id\\n\\n    .. code-block:: python\\n\\n        from langchain_core.messages import ToolMessage\\n\\n        ToolMessage(content=\\'42\\', tool_call_id=\\'call_Jja7J89XsjrOLA5r!MEOW!SL\\')\\n\\n\\nExample: A ToolMessage where only part of the tool output is sent to the model\\n    and the full output is passed in to artifact.\\n\\n    .. versionadded:: 0.2.17\\n\\n    .. code-block:: python\\n\\n        from langchain_core.messages import ToolMessage\\n\\n        tool_output = {\\n            \"stdout\": \"From the graph we can see that the correlation between x and y is ...\",\\n            \"stderr\": None,\\n            \"artifacts\": {\"type\": \"image\", \"base64_data\": \"/9j/4gIcSU...\"},\\n        }\\n\\n        ToolMessage(\\n            content=tool_output[\"stdout\"],\\n            artifact=tool_output,\\n            tool_call_id=\\'call_Jja7J89XsjrOLA5r!MEOW!SL\\',\\n        )\\n\\nThe tool_call_id field is used to associate the tool call request with the\\ntool call response. This is useful in situations where a chat model is able\\nto request multiple tool calls in parallel.',\n",
       "   'properties': {'content': {'anyOf': [{'type': 'string'},\n",
       "      {'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]},\n",
       "       'type': 'array'}],\n",
       "     'title': 'Content'},\n",
       "    'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},\n",
       "    'response_metadata': {'title': 'Response Metadata', 'type': 'object'},\n",
       "    'type': {'const': 'tool',\n",
       "     'default': 'tool',\n",
       "     'enum': ['tool'],\n",
       "     'title': 'Type',\n",
       "     'type': 'string'},\n",
       "    'name': {'anyOf': [{'type': 'string'}, {'type': 'null'}],\n",
       "     'default': None,\n",
       "     'title': 'Name'},\n",
       "    'id': {'anyOf': [{'type': 'string'}, {'type': 'null'}],\n",
       "     'default': None,\n",
       "     'title': 'Id'},\n",
       "    'tool_call_id': {'title': 'Tool Call Id', 'type': 'string'},\n",
       "    'artifact': {'default': None, 'title': 'Artifact'},\n",
       "    'status': {'default': 'success',\n",
       "     'enum': ['success', 'error'],\n",
       "     'title': 'Status',\n",
       "     'type': 'string'}},\n",
       "   'required': ['content', 'tool_call_id'],\n",
       "   'title': 'ToolMessage',\n",
       "   'type': 'object'},\n",
       "  'ToolMessageChunk': {'additionalProperties': True,\n",
       "   'description': 'Tool Message chunk.',\n",
       "   'properties': {'content': {'anyOf': [{'type': 'string'},\n",
       "      {'items': {'anyOf': [{'type': 'string'}, {'type': 'object'}]},\n",
       "       'type': 'array'}],\n",
       "     'title': 'Content'},\n",
       "    'additional_kwargs': {'title': 'Additional Kwargs', 'type': 'object'},\n",
       "    'response_metadata': {'title': 'Response Metadata', 'type': 'object'},\n",
       "    'type': {'const': 'ToolMessageChunk',\n",
       "     'default': 'ToolMessageChunk',\n",
       "     'enum': ['ToolMessageChunk'],\n",
       "     'title': 'Type',\n",
       "     'type': 'string'},\n",
       "    'name': {'anyOf': [{'type': 'string'}, {'type': 'null'}],\n",
       "     'default': None,\n",
       "     'title': 'Name'},\n",
       "    'id': {'anyOf': [{'type': 'string'}, {'type': 'null'}],\n",
       "     'default': None,\n",
       "     'title': 'Id'},\n",
       "    'tool_call_id': {'title': 'Tool Call Id', 'type': 'string'},\n",
       "    'artifact': {'default': None, 'title': 'Artifact'},\n",
       "    'status': {'default': 'success',\n",
       "     'enum': ['success', 'error'],\n",
       "     'title': 'Status',\n",
       "     'type': 'string'}},\n",
       "   'required': ['content', 'tool_call_id'],\n",
       "   'title': 'ToolMessageChunk',\n",
       "   'type': 'object'},\n",
       "  'UsageMetadata': {'description': 'Usage metadata for a message, such as token counts.\\n\\nThis is a standard representation of token usage that is consistent across models.\\n\\nExample:\\n\\n    .. code-block:: python\\n\\n        {\\n            \"input_tokens\": 350,\\n            \"output_tokens\": 240,\\n            \"total_tokens\": 590,\\n            \"input_token_details\": {\\n                \"audio\": 10,\\n                \"cache_creation\": 200,\\n                \"cache_read\": 100,\\n            },\\n            \"output_token_details\": {\\n                \"audio\": 10,\\n                \"reasoning\": 200,\\n            }\\n        }\\n\\n.. versionchanged:: 0.3.9\\n\\n    Added ``input_token_details`` and ``output_token_details``.',\n",
       "   'properties': {'input_tokens': {'title': 'Input Tokens', 'type': 'integer'},\n",
       "    'output_tokens': {'title': 'Output Tokens', 'type': 'integer'},\n",
       "    'total_tokens': {'title': 'Total Tokens', 'type': 'integer'},\n",
       "    'input_token_details': {'$ref': '#/$defs/InputTokenDetails'},\n",
       "    'output_token_details': {'$ref': '#/$defs/OutputTokenDetails'}},\n",
       "   'required': ['input_tokens', 'output_tokens', 'total_tokens'],\n",
       "   'title': 'UsageMetadata',\n",
       "   'type': 'object'}},\n",
       " 'oneOf': [{'$ref': '#/$defs/AIMessage'},\n",
       "  {'$ref': '#/$defs/HumanMessage'},\n",
       "  {'$ref': '#/$defs/ChatMessage'},\n",
       "  {'$ref': '#/$defs/SystemMessage'},\n",
       "  {'$ref': '#/$defs/FunctionMessage'},\n",
       "  {'$ref': '#/$defs/ToolMessage'},\n",
       "  {'$ref': '#/$defs/AIMessageChunk'},\n",
       "  {'$ref': '#/$defs/HumanMessageChunk'},\n",
       "  {'$ref': '#/$defs/ChatMessageChunk'},\n",
       "  {'$ref': '#/$defs/SystemMessageChunk'},\n",
       "  {'$ref': '#/$defs/FunctionMessageChunk'},\n",
       "  {'$ref': '#/$defs/ToolMessageChunk'}],\n",
       " 'title': 'ChatOpenAIOutput'}"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# The output schema of the chain is the output schema of its last part, in this case the ChatModel, which outputs a ChatMessage\n",
    "chain.output_schema.schema()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e333aca8eeadfa84",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## Stream"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "82331986a3efbfe2",
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "for s in chain.stream({\"topic\": \"bears\"}):\n",
    "    print(s.content, end=\"\", flush=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a7c4efa954cfc39c",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## Invoke\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "812a3b1fda204c52",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-10-25T03:27:49.176239Z",
     "start_time": "2024-10-25T03:27:47.182398Z"
    },
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "AIMessage(content=\"Sure, here's a bear-y funny joke for you:\\n\\nWhy don't bears like fast food?\\n\\nBecause they can't stand being in a hurry!\", additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 30, 'prompt_tokens': 24, 'total_tokens': 54, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'qwen-max', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-47dd4a20-6d9c-4dbb-b15f-9a0ac86dc4e8-0', usage_metadata={'input_tokens': 24, 'output_tokens': 30, 'total_tokens': 54, 'input_token_details': {}, 'output_token_details': {}})"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chain.invoke({\"topic\": \"bears\"})"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c28a832b62527388",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## Batch\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "dafaa7e2bea05209",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-10-25T03:29:06.415320Z",
     "start_time": "2024-10-25T03:29:02.856004Z"
    },
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[AIMessage(content='Sure, here\\'s a light-hearted bear joke for you:\\n\\nWhy don\\'t bears ever play hide and seek?\\n\\nBecause every time they try, someone says, \"Ready or not, here I come!\" and all the good hiding spots are already taken by the other bears who just want to take a nap!', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 61, 'prompt_tokens': 24, 'total_tokens': 85, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'qwen-max', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-b0a57694-0ac0-4920-8097-4bb64c816b86-0', usage_metadata={'input_tokens': 24, 'output_tokens': 61, 'total_tokens': 85, 'input_token_details': {}, 'output_token_details': {}}),\n",
       " AIMessage(content=\"Sure, here's a dog joke for you:\\n\\nWhy don't dogs make good dancers?\\n\\nBecause they have two left feet!\", additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 25, 'prompt_tokens': 24, 'total_tokens': 49, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'qwen-max', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-148ed8c4-d546-4176-bcef-6939685da505-0', usage_metadata={'input_tokens': 24, 'output_tokens': 25, 'total_tokens': 49, 'input_token_details': {}, 'output_token_details': {}})]"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chain.batch([{\"topic\": \"bears\"}, {\"topic\": \"dogs\"}])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4bf016b88763cc05",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "Setting the degree of parallelism"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "b9f1049b8e439c33",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-10-25T03:34:44.260355Z",
     "start_time": "2024-10-25T03:34:41.140261Z"
    },
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[AIMessage(content=\"Sure, here's a light-hearted bear joke for you:\\n\\nWhy don't bears ever play hide and seek?\\n\\nBecause they're always found in the paws-ible places!\", additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 35, 'prompt_tokens': 24, 'total_tokens': 59, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'qwen-max', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-43812f6d-1728-486e-a96a-bd7b295a64c1-0', usage_metadata={'input_tokens': 24, 'output_tokens': 35, 'total_tokens': 59, 'input_token_details': {}, 'output_token_details': {}}),\n",
       " AIMessage(content=\"Sure, here's a dog joke for you:\\n\\nWhy don't dogs make good dancers?\\n\\nBecause they have two left feet!\", additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 25, 'prompt_tokens': 24, 'total_tokens': 49, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'qwen-max', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-6144b8f8-d064-4855-8ffb-48aafa5d538e-0', usage_metadata={'input_tokens': 24, 'output_tokens': 25, 'total_tokens': 49, 'input_token_details': {}, 'output_token_details': {}})]"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "chain.batch([{\"topic\": \"bears\"}, {\"topic\": \"dogs\"}], config={\"max_concurrency\": 2})"
   ]
  },
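  {
   "cell_type": "markdown",
   "id": "b2e84c61f9d35a02",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "The `batch` call maps the chain over a list of inputs, and `max_concurrency` caps how many invocations run at once. A minimal pure-Python sketch of those semantics (not LangChain's implementation), with a hypothetical `fake_chain` standing in for `chain.invoke`:\n",
    "\n",
    "```python\n",
    "from concurrent.futures import ThreadPoolExecutor\n",
    "\n",
    "\n",
    "def fake_chain(inputs: dict) -> str:\n",
    "    # Stand-in for chain.invoke: turn the topic into a \"joke\".\n",
    "    return f\"A joke about {inputs['topic']}\"\n",
    "\n",
    "\n",
    "def batch(inputs: list, max_concurrency: int = 2) -> list:\n",
    "    # Run fake_chain over every input, at most max_concurrency at a time,\n",
    "    # preserving input order in the results (as Runnable.batch does).\n",
    "    with ThreadPoolExecutor(max_workers=max_concurrency) as pool:\n",
    "        return list(pool.map(fake_chain, inputs))\n",
    "\n",
    "\n",
    "print(batch([{\"topic\": \"bears\"}, {\"topic\": \"dogs\"}]))\n",
    "# ['A joke about bears', 'A joke about dogs']\n",
    "```"
   ]
  },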
  {
   "cell_type": "markdown",
   "id": "7f34fdfcdb18b97d",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## Async Stream"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "bb4099ee39b94455",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-10-25T03:37:05.687268Z",
     "start_time": "2024-10-25T03:37:03.306388Z"
    },
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Sure, here's a light-hearted bear joke for you:\n",
      "Why don't bears like to use the computer?\n",
      "\n",
      "Because they're afraid of the mouse!"
     ]
    }
   ],
   "source": [
    "async for s in chain.astream({\"topic\": \"bears\"}):\n",
    "    print(s.content, end=\"\", flush=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "af7391cabb5d0110",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## Async Invoke"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "f2bbba2d38ae2df5",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-10-25T03:39:16.006723Z",
     "start_time": "2024-10-25T03:39:15.980290Z"
    },
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "await chain.ainvoke({\"topic\": \"bears\"})"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ac611fb99be2b4b8",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## Async Stream Events\n",
    "\n",
    ">Note: introduced in `langchain-core` 0.1.14\n",
    "\n",
    "For the astream_events API to work properly, make sure that:\n",
    "\n",
    "- You use `async` throughout the code wherever possible (including async tools, etc.)\n",
    "- You propagate callbacks if you define custom functions / runnables\n",
    "- Whenever you use runnables outside of LCEL, you call `.astream()` on LLMs rather than `.ainvoke`, to force the LLM to stream tokens\n",
    "\n",
    "### Event Reference\n",
    "The reference table below shows some of the events that may be emitted by various Runnable objects.  \n",
    "Definitions for some of the Runnables are included after the table.\n",
    "\n",
    "⚠️ When streaming, the inputs of a runnable will not be available until the input stream has been fully consumed. This means that the inputs are available in the corresponding `end` hook rather than in the `start` event.\n",
    "\n",
    "| Event | Name | Chunk | Input | Output |\n",
    "| --- | --- | --- | --- | --- |\n",
    "| on_chat_model_start | [model name] |     | {\"messages\": [[SystemMessage, HumanMessage]]} |     |\n",
    "| on_chat_model_stream | [model name] | AIMessageChunk(content=\"hello\") |     |     |\n",
    "| on_chat_model_end | [model name] |     | {\"messages\": [[SystemMessage, HumanMessage]]} | {\"generations\": [...], \"llm_output\": None, ...} |\n",
    "| on_llm_start | [model name] |     | {'input': 'hello'} |     |\n",
    "| on_llm_stream | [model name] | 'Hello' |     |     |\n",
    "| on_llm_end | [model name] |     | 'Hello human!' |     |\n",
    "| on_chain_start | format_docs |     |     |     |\n",
    "| on_chain_stream | format_docs | \"hello world!, goodbye world!\" |     |     |\n",
    "| on_chain_end | format_docs |     | [Document(...)] | \"hello world!, goodbye world!\" |\n",
    "| on_tool_start | some_tool |     | {\"x\": 1, \"y\": \"2\"} |     |\n",
    "| on_tool_stream | some_tool | {\"x\": 1, \"y\": \"2\"} |     |     |\n",
    "| on_tool_end | some_tool |     |     | {\"x\": 1, \"y\": \"2\"} |\n",
    "| on_retriever_start | [retriever name] |     | {\"query\": \"hello\"} |     |\n",
    "| on_retriever_chunk | [retriever name] | {documents: [...]} |     |     |\n",
    "| on_retriever_end | [retriever name] |     | {\"query\": \"hello\"} | {documents: [...]} |\n",
    "| on_prompt_start | [template_name] |     | {\"question\": \"hello\"} |     |\n",
    "| on_prompt_end | [template_name] |     | {\"question\": \"hello\"} | ChatPromptValue(messages: [SystemMessage, ...]) |\n",
    "\n",
    "Below are the declarations associated with the events shown above:"
   ]
  },
  {
   "cell_type": "code",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "RunnableLambda(format_docs)\n"
     ]
    }
   ],
   "source": [
    "from typing import List\n",
    "\n",
    "from langchain_core.documents import Document\n",
    "from langchain_core.runnables import RunnableLambda\n",
    "\n",
    "\n",
    "def format_docs(docs: List[Document]) -> str:\n",
    "    \"\"\"Join the page contents of the documents with commas.\"\"\"\n",
    "    return \",\".join(doc.page_content for doc in docs)\n",
    "\n",
    "\n",
    "format_docs = RunnableLambda(format_docs)\n",
    "\n",
    "\n",
    "def some_tool(x: int, y: str) -> dict:\n",
    "    \"\"\"Echo the arguments back as a dict (a stand-in tool).\"\"\"\n",
    "    return {\"x\": x, \"y\": y}\n"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-10-25T05:09:49.746100Z",
     "start_time": "2024-10-25T05:09:49.740174Z"
    }
   },
   "id": "d2d2386d2ce43128",
   "execution_count": 15
  },
  {
   "cell_type": "markdown",
   "source": [
    "Define a new chain to make it more interesting to show off the astream_events interface (and, later, the astream_log interface)"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "6eccb70a4c6e2203"
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "from langchain_core.runnables import RunnablePassthrough\n",
    "from langchain_core.output_parsers import StrOutputParser\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "from langchain_openai import ChatOpenAI\n",
    "from langchain_community.vectorstores import FAISS\n",
    "from langchain_community.embeddings.dashscope import DashScopeEmbeddings\n",
    "from dotenv import load_dotenv\n",
    "import os\n",
    "\n",
    "load_dotenv()\n",
    "\n",
    "template = \"\"\"Answer the question based only on the following context:\n",
    "{context}\n",
    "\n",
    "Question: {question}\n",
    "\"\"\"\n",
    "\n",
    "prompt = ChatPromptTemplate.from_template(template)\n",
    "\n",
    "# Initialize the vector store\n",
    "vectorstore = FAISS.from_texts([\"harrison worked at kensho\"],\n",
    "                               embedding=DashScopeEmbeddings(dashscope_api_key=os.getenv(\"DASHSCOPE_API_KEY\"),\n",
    "                                                             model=\"text-embedding-v2\"))\n",
    "# Create the retriever\n",
    "\n",
    "retriever = vectorstore.as_retriever()\n",
    "llm = ChatOpenAI(\n",
    "    # If the environment variable is not set, replace the next line with your Bailian API key: api_key=\"sk-xxx\",\n",
    "    openai_api_key=os.getenv(\"DASHSCOPE_API_KEY\"),\n",
    "    openai_api_base=\"https://dashscope.aliyuncs.com/compatible-mode/v1\",\n",
    "    model_name=\"qwen-max\"\n",
    ")\n",
    "\n",
    "retriever_chain = ({\"context\": retriever.with_config(run_name=\"Docs\"),\n",
    "                    \"question\": RunnablePassthrough()} | prompt | llm.with_config(\n",
    "    run_name=\"my_llm\") | StrOutputParser())\n"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-10-25T06:01:12.362029Z",
     "start_time": "2024-10-25T06:01:12.017967Z"
    }
   },
   "id": "437e0dac77389a34",
   "execution_count": 43
  },
  {
   "cell_type": "markdown",
   "source": [
    "Now let's use astream_events to get events from the retriever and the LLM"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "53be41a54c36bf07"
  },
  {
   "cell_type": "code",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "--\n",
      "Retrieved the following documents:\n",
      "[Document(metadata={}, page_content='harrison worked at kensho')]\n",
      "Streaming LLM:\n",
      "|Based on| the| provided| context, Harrison worked| at Kensho.||\n",
      "Done streaming LLM.\n"
     ]
    }
   ],
   "source": [
    "async for event in retriever_chain.astream_events([\"what did harrison work at?\"], version=\"v1\",\n",
    "                                                  include_names=[\"Docs\", \"my_llm\"]):\n",
    "    kind = event[\"event\"]\n",
    "    if kind == \"on_chat_model_stream\":\n",
    "        print(event[\"data\"][\"chunk\"].content, end=\"|\")\n",
    "    elif kind in {\"on_chat_model_start\"}:\n",
    "        print(\"Streaming LLM:\")\n",
    "    elif kind in {\"on_chat_model_end\"}:\n",
    "        print()\n",
    "        print(\"Done streaming LLM.\")\n",
    "    elif kind == \"on_retriever_end\":\n",
    "        print(\"--\")\n",
    "        print(\"Retrieved the following documents:\")\n",
    "        print(event[\"data\"][\"output\"][\"documents\"])\n",
    "    elif kind == \"on_tool_end\":\n",
    "        print(f\"Ended tool: {event['name']}\")\n",
    "    else:\n",
    "        pass"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-10-25T06:44:10.695220Z",
     "start_time": "2024-10-25T06:44:09.411075Z"
    }
   },
   "id": "5d512dd86f6f00f5",
   "execution_count": 52
  },
  {
   "cell_type": "markdown",
   "source": [
    "## Async Stream Intermediate Steps\n",
    "All runnables also have a method .astream_log(), which streams (as they happen) all or part of the intermediate steps of your chain/sequence.\n",
    "This is useful for showing progress to the user, for using intermediate results, or for debugging your chain.\n",
    "You can stream all steps (the default) or include/exclude steps by name, tags or metadata."
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "f12dcc42ed811481"
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "from typing import TypedDict, Optional, List, Dict, Any\n",
    "\n",
    "\n",
    "class LogEntry(TypedDict):\n",
    "    id: str\n",
    "    \"\"\"ID of the sub-run.\"\"\"\n",
    "    name: str\n",
    "    \"\"\"Name of the object being run.\"\"\"\n",
    "    type: str\n",
    "    \"\"\"Type of the object being run, e.g. prompt, chain, llm, etc.\"\"\"\n",
    "    tags: List[str]\n",
    "    \"\"\"List of tags for the run.\"\"\"\n",
    "    metadata: Dict[str, Any]\n",
    "    \"\"\"Key-value pairs of metadata for the run.\"\"\"\n",
    "    start_time: str\n",
    "    \"\"\"ISO-8601 timestamp of when the run started.\"\"\"\n",
    "\n",
    "    streamed_output_str: List[str]\n",
    "    \"\"\"List of LLM tokens streamed by this run, if applicable.\"\"\"\n",
    "    final_output: Optional[Any]\n",
    "    \"\"\"Final output of this run.\n",
    "    Only available after the run has finished successfully.\"\"\"\n",
    "    end_time: Optional[str]\n",
    "    \"\"\"ISO-8601 timestamp of when the run ended.\n",
    "    Only available after the run has finished successfully.\"\"\"\n",
    "\n",
    "\n",
    "class RunState(TypedDict):\n",
    "    id: str\n",
    "    \"\"\"ID of the run.\"\"\"\n",
    "    streamed_output: List[Any]\n",
    "    \"\"\"List of output chunks streamed by Runnable.stream().\"\"\"\n",
    "    final_output: Optional[Any]\n",
    "    \"\"\"Final output of the run, usually the result of aggregating (`+`) streamed_output.\n",
    "    Only available after the run has finished successfully.\"\"\"\n",
    "\n",
    "    logs: Dict[str, LogEntry]\n",
    "    \"\"\"Map of run names to sub-runs. If filters were supplied, this will contain only the runs that matched the filters.\"\"\""
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-10-25T06:36:45.620179Z",
     "start_time": "2024-10-25T06:36:45.612587Z"
    }
   },
   "id": "861101ddf8fe5ece",
   "execution_count": 45
  },
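  {
   "cell_type": "markdown",
   "id": "c7d80e12f3a45b03",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "`final_output` is typically the result of aggregating the `streamed_output` chunks with `+`. A small sketch of that aggregation over string chunks, using chunk values like those in the astream_log output below:\n",
    "\n",
    "```python\n",
    "import operator\n",
    "from functools import reduce\n",
    "\n",
    "# Chunks as they might appear in RunState.streamed_output for a string chain.\n",
    "streamed_output = [\"\", \"Harrison\", \" worked\", \" at\", \" Kensho.\", \"\"]\n",
    "\n",
    "# final_output is the running aggregate of the chunks under `+`.\n",
    "final_output = reduce(operator.add, streamed_output, \"\")\n",
    "print(final_output)  # Harrison worked at Kensho.\n",
    "```"
   ]
  },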
  {
   "cell_type": "markdown",
   "source": [
    "## Streaming the incremental RunState\n",
    "With the default diff=True, .astream_log() yields JSONPatch chunks, which is useful in an HTTP server: stream the ops to the client and apply them there to rebuild the run state. See LangServe for tooling to build a webserver from any Runnable. The example below passes diff=False, so each chunk is the full accumulated RunState rather than a patch.\n"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "97649e15e1daf7e8"
  },
  {
   "cell_type": "code",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "----------------------------------------\n",
      "RunLog({'final_output': None,\n",
      " 'id': '7c979cd7-e46a-4da8-8da8-32948bc01ef9',\n",
      " 'logs': {},\n",
      " 'name': 'RunnableSequence',\n",
      " 'streamed_output': [],\n",
      " 'type': 'chain'})\n",
      "----------------------------------------\n",
      "RunLog({'final_output': None,\n",
      " 'id': '7c979cd7-e46a-4da8-8da8-32948bc01ef9',\n",
      " 'logs': {'Docs': {'end_time': None,\n",
      "                   'final_output': None,\n",
      "                   'id': '07e97858-d8cd-4fbb-977e-d3a1acb9937a',\n",
      "                   'metadata': {'ls_embedding_provider': 'DashScopeEmbeddings',\n",
      "                                'ls_retriever_name': 'vectorstore',\n",
      "                                'ls_vector_store_provider': 'FAISS'},\n",
      "                   'name': 'Docs',\n",
      "                   'start_time': '2024-10-25T06:44:37.912+00:00',\n",
      "                   'streamed_output': [],\n",
      "                   'streamed_output_str': [],\n",
      "                   'tags': ['map:key:context', 'FAISS', 'DashScopeEmbeddings'],\n",
      "                   'type': 'retriever'}},\n",
      " 'name': 'RunnableSequence',\n",
      " 'streamed_output': [],\n",
      " 'type': 'chain'})\n",
      "----------------------------------------\n",
      "RunLog({'final_output': None,\n",
      " 'id': '7c979cd7-e46a-4da8-8da8-32948bc01ef9',\n",
      " 'logs': {'Docs': {'end_time': '2024-10-25T06:44:38.151+00:00',\n",
      "                   'final_output': {'documents': [Document(metadata={}, page_content='harrison worked at kensho')]},\n",
      "                   'id': '07e97858-d8cd-4fbb-977e-d3a1acb9937a',\n",
      "                   'metadata': {'ls_embedding_provider': 'DashScopeEmbeddings',\n",
      "                                'ls_retriever_name': 'vectorstore',\n",
      "                                'ls_vector_store_provider': 'FAISS'},\n",
      "                   'name': 'Docs',\n",
      "                   'start_time': '2024-10-25T06:44:37.912+00:00',\n",
      "                   'streamed_output': [],\n",
      "                   'streamed_output_str': [],\n",
      "                   'tags': ['map:key:context', 'FAISS', 'DashScopeEmbeddings'],\n",
      "                   'type': 'retriever'}},\n",
      " 'name': 'RunnableSequence',\n",
      " 'streamed_output': [],\n",
      " 'type': 'chain'})\n",
      "----------------------------------------\n",
      "RunLog({'final_output': '',\n",
      " 'id': '7c979cd7-e46a-4da8-8da8-32948bc01ef9',\n",
      " 'logs': {'Docs': {'end_time': '2024-10-25T06:44:38.151+00:00',\n",
      "                   'final_output': {'documents': [Document(metadata={}, page_content='harrison worked at kensho')]},\n",
      "                   'id': '07e97858-d8cd-4fbb-977e-d3a1acb9937a',\n",
      "                   'metadata': {'ls_embedding_provider': 'DashScopeEmbeddings',\n",
      "                                'ls_retriever_name': 'vectorstore',\n",
      "                                'ls_vector_store_provider': 'FAISS'},\n",
      "                   'name': 'Docs',\n",
      "                   'start_time': '2024-10-25T06:44:37.912+00:00',\n",
      "                   'streamed_output': [],\n",
      "                   'streamed_output_str': [],\n",
      "                   'tags': ['map:key:context', 'FAISS', 'DashScopeEmbeddings'],\n",
      "                   'type': 'retriever'}},\n",
      " 'name': 'RunnableSequence',\n",
      " 'streamed_output': [''],\n",
      " 'type': 'chain'})\n",
      "----------------------------------------\n",
      "RunLog({'final_output': 'Harrison',\n",
      " 'id': '7c979cd7-e46a-4da8-8da8-32948bc01ef9',\n",
      " 'logs': {'Docs': {'end_time': '2024-10-25T06:44:38.151+00:00',\n",
      "                   'final_output': {'documents': [Document(metadata={}, page_content='harrison worked at kensho')]},\n",
      "                   'id': '07e97858-d8cd-4fbb-977e-d3a1acb9937a',\n",
      "                   'metadata': {'ls_embedding_provider': 'DashScopeEmbeddings',\n",
      "                                'ls_retriever_name': 'vectorstore',\n",
      "                                'ls_vector_store_provider': 'FAISS'},\n",
      "                   'name': 'Docs',\n",
      "                   'start_time': '2024-10-25T06:44:37.912+00:00',\n",
      "                   'streamed_output': [],\n",
      "                   'streamed_output_str': [],\n",
      "                   'tags': ['map:key:context', 'FAISS', 'DashScopeEmbeddings'],\n",
      "                   'type': 'retriever'}},\n",
      " 'name': 'RunnableSequence',\n",
      " 'streamed_output': ['', 'Harrison'],\n",
      " 'type': 'chain'})\n",
      "----------------------------------------\n",
      "RunLog({'final_output': 'Harrison worked',\n",
      " 'id': '7c979cd7-e46a-4da8-8da8-32948bc01ef9',\n",
      " 'logs': {'Docs': {'end_time': '2024-10-25T06:44:38.151+00:00',\n",
      "                   'final_output': {'documents': [Document(metadata={}, page_content='harrison worked at kensho')]},\n",
      "                   'id': '07e97858-d8cd-4fbb-977e-d3a1acb9937a',\n",
      "                   'metadata': {'ls_embedding_provider': 'DashScopeEmbeddings',\n",
      "                                'ls_retriever_name': 'vectorstore',\n",
      "                                'ls_vector_store_provider': 'FAISS'},\n",
      "                   'name': 'Docs',\n",
      "                   'start_time': '2024-10-25T06:44:37.912+00:00',\n",
      "                   'streamed_output': [],\n",
      "                   'streamed_output_str': [],\n",
      "                   'tags': ['map:key:context', 'FAISS', 'DashScopeEmbeddings'],\n",
      "                   'type': 'retriever'}},\n",
      " 'name': 'RunnableSequence',\n",
      " 'streamed_output': ['', 'Harrison', ' worked'],\n",
      " 'type': 'chain'})\n",
      "----------------------------------------\n",
      "RunLog({'final_output': 'Harrison worked at',\n",
      " 'id': '7c979cd7-e46a-4da8-8da8-32948bc01ef9',\n",
      " 'logs': {'Docs': {'end_time': '2024-10-25T06:44:38.151+00:00',\n",
      "                   'final_output': {'documents': [Document(metadata={}, page_content='harrison worked at kensho')]},\n",
      "                   'id': '07e97858-d8cd-4fbb-977e-d3a1acb9937a',\n",
      "                   'metadata': {'ls_embedding_provider': 'DashScopeEmbeddings',\n",
      "                                'ls_retriever_name': 'vectorstore',\n",
      "                                'ls_vector_store_provider': 'FAISS'},\n",
      "                   'name': 'Docs',\n",
      "                   'start_time': '2024-10-25T06:44:37.912+00:00',\n",
      "                   'streamed_output': [],\n",
      "                   'streamed_output_str': [],\n",
      "                   'tags': ['map:key:context', 'FAISS', 'DashScopeEmbeddings'],\n",
      "                   'type': 'retriever'}},\n",
      " 'name': 'RunnableSequence',\n",
      " 'streamed_output': ['', 'Harrison', ' worked', ' at'],\n",
      " 'type': 'chain'})\n",
      "----------------------------------------\n",
      "RunLog({'final_output': 'Harrison worked at Kensho.',\n",
      " 'id': '7c979cd7-e46a-4da8-8da8-32948bc01ef9',\n",
      " 'logs': {'Docs': {'end_time': '2024-10-25T06:44:38.151+00:00',\n",
      "                   'final_output': {'documents': [Document(metadata={}, page_content='harrison worked at kensho')]},\n",
      "                   'id': '07e97858-d8cd-4fbb-977e-d3a1acb9937a',\n",
      "                   'metadata': {'ls_embedding_provider': 'DashScopeEmbeddings',\n",
      "                                'ls_retriever_name': 'vectorstore',\n",
      "                                'ls_vector_store_provider': 'FAISS'},\n",
      "                   'name': 'Docs',\n",
      "                   'start_time': '2024-10-25T06:44:37.912+00:00',\n",
      "                   'streamed_output': [],\n",
      "                   'streamed_output_str': [],\n",
      "                   'tags': ['map:key:context', 'FAISS', 'DashScopeEmbeddings'],\n",
      "                   'type': 'retriever'}},\n",
      " 'name': 'RunnableSequence',\n",
      " 'streamed_output': ['', 'Harrison', ' worked', ' at', ' Kensho.'],\n",
      " 'type': 'chain'})\n",
      "----------------------------------------\n",
      "RunLog({'final_output': 'Harrison worked at Kensho.',\n",
      " 'id': '7c979cd7-e46a-4da8-8da8-32948bc01ef9',\n",
      " 'logs': {'Docs': {'end_time': '2024-10-25T06:44:38.151+00:00',\n",
      "                   'final_output': {'documents': [Document(metadata={}, page_content='harrison worked at kensho')]},\n",
      "                   'id': '07e97858-d8cd-4fbb-977e-d3a1acb9937a',\n",
      "                   'metadata': {'ls_embedding_provider': 'DashScopeEmbeddings',\n",
      "                                'ls_retriever_name': 'vectorstore',\n",
      "                                'ls_vector_store_provider': 'FAISS'},\n",
      "                   'name': 'Docs',\n",
      "                   'start_time': '2024-10-25T06:44:37.912+00:00',\n",
      "                   'streamed_output': [],\n",
      "                   'streamed_output_str': [],\n",
      "                   'tags': ['map:key:context', 'FAISS', 'DashScopeEmbeddings'],\n",
      "                   'type': 'retriever'}},\n",
      " 'name': 'RunnableSequence',\n",
      " 'streamed_output': ['', 'Harrison', ' worked', ' at', ' Kensho.', ''],\n",
      " 'type': 'chain'})\n"
     ]
    }
   ],
   "source": [
    "async for chunk in retriever_chain.astream_log([\"what did harrison work at?\"], include_names=[\"Docs\"], diff=False):\n",
    "    print(\"-\" * 40)\n",
    "    print(chunk)"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-10-25T06:44:39.055839Z",
     "start_time": "2024-10-25T06:44:37.903176Z"
    }
   },
   "id": "29a486aa09a771c0",
   "execution_count": 54
  },
  {
   "cell_type": "markdown",
   "source": [
    "## Parallelism\n",
    "The LangChain Expression Language supports parallel requests: for example, a RunnableParallel (often written as a dictionary) executes each of its elements in parallel."
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "f5d96b8017ccf43f"
  },
  {
   "cell_type": "code",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "CPU times: user 22 ms, sys: 4.48 ms, total: 26.5 ms\n",
      "Wall time: 4.4 s\n"
     ]
    },
    {
     "data": {
      "text/plain": "{'joke': AIMessage(content='当然可以，这里有一个轻松的猫咪笑话：\\n\\n为什么猫不喜欢玩扑克牌？\\n\\n因为每次它们想数数的时候，总有一只老鼠（ace）跑掉！ \\n\\n希望这个笑话能给你带来一丝微笑！', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 45, 'prompt_tokens': 24, 'total_tokens': 69, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'qwen-max', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-46e537bd-e043-49e6-8a89-5a60e40729d1-0', usage_metadata={'input_tokens': 24, 'output_tokens': 45, 'total_tokens': 69, 'input_token_details': {}, 'output_token_details': {}}),\n 'poem': AIMessage(content='夜幕低垂星点点，猫咪轻跃过篱墙。', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 15, 'prompt_tokens': 29, 'total_tokens': 44, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'qwen-max', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-2d1cbc1f-c7fb-4231-936a-438fbf2abea9-0', usage_metadata={'input_tokens': 29, 'output_tokens': 15, 'total_tokens': 44, 'input_token_details': {}, 'output_token_details': {}})}"
     },
     "execution_count": 60,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_core.runnables import RunnablePassthrough, RunnableParallel\n",
    "from langchain_core.output_parsers import StrOutputParser\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "from langchain_openai import ChatOpenAI\n",
    "from dotenv import load_dotenv\n",
    "import os\n",
    "\n",
    "load_dotenv()\n",
    "\n",
    "retriever = vectorstore.as_retriever()\n",
    "llm = ChatOpenAI(\n",
    "    # If the environment variable is not set, replace the next line with your Bailian API key: api_key=\"sk-xxx\",\n",
    "    openai_api_key=os.getenv(\"DASHSCOPE_API_KEY\"),\n",
    "    openai_api_base=\"https://dashscope.aliyuncs.com/compatible-mode/v1\",\n",
    "    model_name=\"qwen-max\"\n",
    ")\n",
    "\n",
    "chain1 = ChatPromptTemplate.from_template(\"告诉我一个关于{topic}的笑话\") | llm\n",
    "chain2 = ChatPromptTemplate.from_template(\"写一首关于{topic}的短诗(2行)\") | llm\n",
    "\n",
    "combined = RunnableParallel(joke=chain1, poem=chain2)\n",
    "%time combined.invoke({\"topic\": \"猫\"})\n"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-10-25T07:47:10.961402Z",
     "start_time": "2024-10-25T07:47:06.474793Z"
    }
   },
   "id": "1a33e67ddb605859",
   "execution_count": 60
  },
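  {
   "cell_type": "markdown",
   "id": "e5b60a34d2c19f04",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "Because the branches run concurrently, the wall time above is close to the slower branch rather than the sum of both. A minimal asyncio sketch of that behavior (hypothetical `joke`/`poem` coroutines standing in for the two chains):\n",
    "\n",
    "```python\n",
    "import asyncio\n",
    "import time\n",
    "\n",
    "\n",
    "async def joke(topic: str) -> str:\n",
    "    await asyncio.sleep(0.2)  # simulate model-call latency\n",
    "    return f\"joke about {topic}\"\n",
    "\n",
    "\n",
    "async def poem(topic: str) -> str:\n",
    "    await asyncio.sleep(0.2)\n",
    "    return f\"poem about {topic}\"\n",
    "\n",
    "\n",
    "async def combined(topic: str) -> dict:\n",
    "    # Like RunnableParallel(joke=..., poem=...): both branches overlap.\n",
    "    j, p = await asyncio.gather(joke(topic), poem(topic))\n",
    "    return {\"joke\": j, \"poem\": p}\n",
    "\n",
    "\n",
    "start = time.perf_counter()\n",
    "result = asyncio.run(combined(\"cats\"))\n",
    "elapsed = time.perf_counter() - start\n",
    "print(result)\n",
    "print(f\"wall time ~{elapsed:.1f}s\")  # ~0.2s, not 0.4s: the branches overlap\n",
    "```"
   ]
  },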
  {
   "cell_type": "code",
   "outputs": [],
   "source": [],
   "metadata": {
    "collapsed": false
   },
   "id": "cd01cc1dd797a4f9"
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
