{
 "cells": [
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "# Do You Really Understand LangChain's LCEL and Runnables?\n",
    "Most people working with LangChain have heard of LCEL, but what exactly are RunnablePassthrough, RunnableParallel, RunnableBranch, and RunnableLambda, and when should you use each of them?\n",
    "\n",
    "## 1. What LCEL Is and How It Works\n",
    "The core abstraction in LangChain is the Chain: a sequence of calls to multiple components.\n",
    "\n",
    "LCEL (LangChain Expression Language) is an expression language defined by LangChain; it is a more concise and efficient way to invoke a series of components.\n",
    "\n",
    "To use LCEL, you chain together components that implement the Runnable interface with pipe operators (\"|\").\n",
    "\n",
    "For example:"
   ],
   "id": "3d7419c3674fece3"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-13T11:15:38.970016Z",
     "start_time": "2025-04-13T11:15:38.141186Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain_core.output_parsers import CommaSeparatedListOutputParser\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "from langchain_ollama import ChatOllama\n",
    "\n",
    "prompt_tpl = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        (\"system\", \"{parser_instructions}\"),\n",
    "        (\"human\", \"列出{cityName}的{viewPointNum}个著名景点。\"),\n",
    "    ]\n",
    ")\n",
    "\n",
    "output_parser = CommaSeparatedListOutputParser()\n",
    "parser_instructions = output_parser.get_format_instructions()\n",
    "\n",
    "model = ChatOllama(base_url=\"http://10.2.4.31:11434\", model=\"qwen2.5:latest\")\n",
    "\n",
    "chain = prompt_tpl | model | output_parser\n",
    "\n",
    "response = chain.invoke(\n",
    "    {\"cityName\": \"南京\", \"viewPointNum\": 3, \"parser_instructions\": parser_instructions}\n",
    ")\n",
    "response"
   ],
   "id": "fa2b5945c239e708",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['明孝陵', '南京长江大桥', '玄武湖']"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 2
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "So, to let every component be invoked in this concise LCEL style, LangChain plans to have all of its components implement the Runnable interface, including the ones we use all the time, such as PromptTemplate, LLMChain, and StructuredOutputParser.\n",
    "\n",
    "In Python, the pipe character (\"|\") is the bitwise-or operator: `A | B` is executed as `A.__or__(B)`.\n",
    "\n",
    "So how does LangChain's Runnable interface implement this or operation? In the source, `Runnable.__or__` wraps its two operands into a `RunnableSequence`.\n",
    "\n",
    "LangChain strings all the Runnables together via `__or__`, then executes them one after another via invoke, with the output of each component becoming the input of the next.\n",
    "\n",
    "Doesn't this style look a bit like a neural network? The same patterns really do show up everywhere.\n",
    "\n",
    "To sum up: every LangChain component implements Runnable; LCEL chains multiple components together; each component's invoke method then runs in turn, with one component's output feeding the next component's input.\n",
    "\n",
    "## 2. The Runnables: Definitions and Use Cases\n",
    "### 2.1 RunnablePassthrough\n",
    "#### Definition\n",
    "\n",
    "RunnablePassthrough is mainly used to pass data along a chain. It usually sits at the first position of a chain, where it receives the user's input; placed in the middle, it receives the output of the previous step.\n",
    "\n",
    "#### Use case\n",
    "\n",
    "For example, reusing the example above: take the city the user enters, and if it is 南京, replace it with 北京, leaving everything else unchanged. The code is below. Note that the dict literal at the head of the chain is coerced into a RunnableParallel of its values; RunnablePassthrough.assign() gives a similar effect when you want to keep the original input and add or override keys.\n"
   ],
   "id": "ff7aa781c1aa07b7"
  },
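  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "To make the `__or__` mechanism concrete, here is a hand-rolled toy version (an illustration only, not LangChain's actual source):\n",
    "\n",
    "```python\n",
    "# A toy sketch of the LCEL pipe mechanism (not LangChain's real implementation):\n",
    "# each step implements __or__, which bundles both sides into a new step whose\n",
    "# invoke() feeds the left side's output into the right side's input.\n",
    "class Step:\n",
    "    def __init__(self, func):\n",
    "        self.func = func\n",
    "\n",
    "    def invoke(self, value):\n",
    "        return self.func(value)\n",
    "\n",
    "    def __or__(self, other):\n",
    "        # A | B returns a new Step that runs A, then runs B on A's output\n",
    "        return Step(lambda value: other.invoke(self.invoke(value)))\n",
    "\n",
    "add_one = Step(lambda x: x + 1)\n",
    "double = Step(lambda x: x * 2)\n",
    "\n",
    "chain = add_one | double\n",
    "print(chain.invoke(3))  # (3 + 1) * 2 = 8\n",
    "```\n",
    "\n",
    "LangChain's real `__or__` builds a `RunnableSequence` instead, which adds batching, streaming, and async support on top, but the data flow is the same idea."
   ],
   "id": "f1e2d3c4b5a60001"
  },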
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-13T11:17:52.933535Z",
     "start_time": "2025-04-13T11:17:52.237741Z"
    }
   },
   "cell_type": "code",
   "source": [
    "chain = (\n",
    "        {\n",
    "            \"cityName\": lambda x: '北京' if x[\"cityName\"] == '南京' else x[\"cityName\"],\n",
    "            \"viewPointNum\": lambda x: x[\"viewPointNum\"],\n",
    "            \"parser_instructions\": lambda x: x[\"parser_instructions\"],\n",
    "        }\n",
    "        | prompt_tpl\n",
    "        | model\n",
    "        | output_parser\n",
    ")\n",
    "response = chain.invoke(\n",
    "    {\"cityName\": \"南京\", \"viewPointNum\": 3, \"parser_instructions\": parser_instructions}\n",
    ")\n",
    "response"
   ],
   "id": "27e12d46c1f02354",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['故宫', '长城', '天安门广场']"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 5
  },
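  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "The dict of lambdas above works, but `RunnablePassthrough.assign()` expresses the same \"keep the input, override one key\" idea more directly: it forwards the original input dict and merges in the computed keys. A minimal sketch that needs no model call:\n",
    "\n",
    "```python\n",
    "from langchain_core.runnables import RunnablePassthrough\n",
    "\n",
    "# Keep the whole input dict, but override cityName when it is 南京\n",
    "rewrite_city = RunnablePassthrough.assign(\n",
    "    cityName=lambda x: \"北京\" if x[\"cityName\"] == \"南京\" else x[\"cityName\"]\n",
    ")\n",
    "\n",
    "rewrite_city.invoke({\"cityName\": \"南京\", \"viewPointNum\": 3})\n",
    "# -> {'cityName': '北京', 'viewPointNum': 3}\n",
    "```\n",
    "\n",
    "Unlike the dict literal, `assign()` does not require re-listing the keys you want to keep unchanged."
   ],
   "id": "f1e2d3c4b5a60002"
  },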
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "### 2.2 RunnableParallel\n",
    "#### Definition\n",
    "\n",
    "As the Parallel in the name suggests, RunnableParallel is used to run multiple components in parallel. With it, some or all of the components in a chain can execute concurrently.\n",
    "\n",
    "#### Use case\n",
    "\n",
    "For example, running two tasks at the same time: one lists a city's famous attractions, the other lists famous books about the city's history."
   ],
   "id": "27f29251d6340f60"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-13T11:19:38.256840Z",
     "start_time": "2025-04-13T11:19:36.852731Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain_core.runnables import RunnableParallel\n",
    "\n",
    "prompt_tpl_1 = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        (\"system\", \"{parser_instructions}\"),\n",
    "        (\"human\", \"列出{cityName}的{viewPointNum}个著名景点。\"),\n",
    "    ]\n",
    ")\n",
    "prompt_tpl_2 = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        (\"system\", \"{parser_instructions}\"),\n",
    "        (\"human\", \"列出关于{cityName}历史的{viewPointNum}个著名书籍。\"),\n",
    "    ]\n",
    ")\n",
    "\n",
    "output_parser = CommaSeparatedListOutputParser()\n",
    "parser_instructions = output_parser.get_format_instructions()\n",
    "\n",
    "chain_1 = prompt_tpl_1 | model | output_parser\n",
    "chain_2 = prompt_tpl_2 | model | output_parser\n",
    "chain_parallel = RunnableParallel(view_point=chain_1, book=chain_2)\n",
    "\n",
    "response = chain_parallel.invoke(\n",
    "    {\"cityName\": \"南京\", \"viewPointNum\": 3, \"parser_instructions\": parser_instructions}\n",
    ")\n",
    "response"
   ],
   "id": "fe0967c312a463eb",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'view_point': ['紫金山', '南京长江大桥', '莫愁湖'],\n",
       " 'book': ['《南京城纪年史》', '《六朝古都南京》', '《明城墙与南京城市变迁》']}"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 8
  },
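  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "The mechanics of RunnableParallel are easiest to see without a model: every branch receives the same input, and the results come back as a dict keyed by branch name:\n",
    "\n",
    "```python\n",
    "from langchain_core.runnables import RunnableLambda, RunnableParallel\n",
    "\n",
    "# Both branches receive the same input; results are collected into a dict\n",
    "parallel = RunnableParallel(\n",
    "    double=RunnableLambda(lambda x: x * 2),\n",
    "    square=RunnableLambda(lambda x: x * x),\n",
    ")\n",
    "\n",
    "parallel.invoke(3)\n",
    "# -> {'double': 6, 'square': 9}\n",
    "```"
   ],
   "id": "f1e2d3c4b5a60003"
  },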
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "### 2.3 RunnableBranch\n",
    "#### Definition\n",
    "\n",
    "RunnableBranch is for multi-branch sub-chains: it adds routing to chain invocation, much like LangChain's router chains. You create several sub-chains and then choose which one to run based on a condition.\n",
    "\n",
    "#### Use case\n",
    "\n",
    "For example, with several question-answering chains: first classify the question, then answer it with the chain for that category.\n"
   ],
   "id": "c123f2c455ac2b70"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-13T11:59:11.941236Z",
     "start_time": "2025-04-13T11:59:08.811193Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from operator import itemgetter\n",
    "from langchain_core.runnables import RunnableBranch, RunnableLambda\n",
    "from langchain_core.prompts import PromptTemplate\n",
    "from langchain_core.output_parsers import StrOutputParser\n",
    "\n",
    "\n",
    "def print_info(info: str):\n",
    "    print(f\"info: {info}\")\n",
    "    return info\n",
    "\n",
    "\n",
    "output_parser = StrOutputParser()\n",
    "\n",
    "# Prepare the destination chains: a physics chain, a math chain, plus a fallback\n",
    "# 1. Physics chain\n",
    "physics_template = \"\"\"\n",
    "你是一位物理学家，擅长回答物理相关的问题，当你不知道问题的答案时，你就回答不知道。\n",
    "具体问题如下：\n",
    "{input}\n",
    "\"\"\"\n",
    "physics_chain = PromptTemplate.from_template(physics_template) | model | output_parser\n",
    "\n",
    "# 2. Math chain\n",
    "math_template = \"\"\"\n",
    "你是一个数学家，擅长回答数学相关的问题，当你不知道问题的答案时，你就回答不知道。\n",
    "具体问题如下：\n",
    "{input}\n",
    "\"\"\"\n",
    "math_chain = PromptTemplate.from_template(math_template) | model | output_parser\n",
    "\n",
    "# 3. Fallback chain\n",
    "other_template = \"\"\"\n",
    "你是一个AI助手，你会回答以下问题。\n",
    "具体问题如下：\n",
    "{input}\n",
    "\"\"\"\n",
    "other_chain = PromptTemplate.from_template(other_template) | model | output_parser\n",
    "\n",
    "classify_prompt_template = \"\"\"\n",
    "请你对以下问题进行分类，将问题分类为\"数学\"、\"物理\"、\"其它\"，不需要返回多个分类，返回一个即可。\n",
    "具体问题如下：\n",
    "{input}\n",
    "\n",
    "分类结果：\n",
    "\"\"\"\n",
    "classify_chain = PromptTemplate.from_template(classify_prompt_template) | model | output_parser\n",
    "\n",
    "answer_chain = RunnableBranch(\n",
    "    (lambda x: \"数学\" in x[\"topic\"], math_chain),\n",
    "    (lambda x: \"物理\" in x[\"topic\"], physics_chain),\n",
    "    other_chain\n",
    ")\n",
    "\n",
    "# \"input\": lambda x: x[\"input\"]  is equivalent to  itemgetter(\"input\"), which fetches the value of the input key\n",
    "# itemgetter builds a callable that extracts the item at the given key (or index) from its argument\n",
    "text = {\"name\": \"zhangsan\", \"sex\": \"man\"}\n",
    "print(itemgetter(\"name\")(text))\n",
    "\n",
    "final_chain = {\"topic\": classify_chain, \"input\": itemgetter(\"input\")} | RunnableLambda(print_info) | answer_chain\n",
    "# final_chain.invoke({\"input\":\"地球的半径是多少？\"})\n",
    "final_chain.invoke({\"input\": \"对y=x求导的结果是多少？\"})\n"
   ],
   "id": "11fd0e750f466e5",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "zhangsan\n",
      "info: {'topic': '数学', 'input': '对y=x求导的结果是多少？'}\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "'对函数 \\\\( y = x \\\\) 求导的结果是 \\\\( 1 \\\\)。这是因为根据基本的导数规则，一次函数 \\\\( ax + b \\\\) 的导数为 \\\\( a \\\\)，在这里 \\\\( a = 1 \\\\)。所以 \\\\( y = x \\\\) 的导数就是 \\\\( 1 \\\\)。'"
      ]
     },
     "execution_count": 33,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 33
  },
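  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "Stripped of the LLM calls, RunnableBranch behaves like an if/elif/else over runnables; a minimal model-free sketch:\n",
    "\n",
    "```python\n",
    "from langchain_core.runnables import RunnableBranch, RunnableLambda\n",
    "\n",
    "# Each (condition, runnable) pair is checked in order; the last positional\n",
    "# argument is the default branch used when no condition matches.\n",
    "branch = RunnableBranch(\n",
    "    (lambda x: x < 0, RunnableLambda(lambda x: \"negative\")),\n",
    "    (lambda x: x == 0, RunnableLambda(lambda x: \"zero\")),\n",
    "    RunnableLambda(lambda x: \"positive\"),\n",
    ")\n",
    "\n",
    "print(branch.invoke(-5))  # negative\n",
    "print(branch.invoke(7))   # positive\n",
    "```"
   ],
   "id": "f1e2d3c4b5a60004"
  },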
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "### 2.4 RunnableLambda\n",
    "#### Definition\n",
    "\n",
    "The real powerhouse is RunnableLambda: it converts a plain Python function into a Runnable object. That makes any function usable as part of an LCEL chain: wrap whatever functionality you need in a custom function plus RunnableLambda, drop it into the chain, and you can effectively integrate with any external system.\n",
    "\n",
    "#### Use case\n",
    "\n",
    "For example, to insert a custom step (such as logging) in the middle of a chain, implement it as a custom function wrapped in RunnableLambda."
   ],
   "id": "8d2f1b17800ed619"
  },
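  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "The conversion itself can be seen without any model: wrap plain functions in RunnableLambda and they become pipeable like any other component:\n",
    "\n",
    "```python\n",
    "from langchain_core.runnables import RunnableLambda\n",
    "\n",
    "def add_one(x: int) -> int:\n",
    "    return x + 1\n",
    "\n",
    "def double(x: int) -> int:\n",
    "    return x * 2\n",
    "\n",
    "# Each function becomes a Runnable, so | composes them into a chain\n",
    "chain = RunnableLambda(add_one) | RunnableLambda(double)\n",
    "print(chain.invoke(3))  # 8\n",
    "```\n",
    "\n",
    "When one side of | is already a Runnable, LangChain also auto-coerces a plain function on the other side into a RunnableLambda for you."
   ],
   "id": "f1e2d3c4b5a60005"
  },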
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-13T12:32:02.303544Z",
     "start_time": "2025-04-13T12:32:01.635993Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from operator import itemgetter\n",
    "from langchain_core.runnables import RunnableLambda\n",
    "def print_info(info: str):\n",
    "    print(f\"info: {info}\")\n",
    "    return info\n",
    "\n",
    "\n",
    "prompt_tpl_1 = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        (\"system\", \"{parser_instructions}\"),\n",
    "        (\"human\", \"列出{cityName}的{viewPointNum}个著名景点。\"),\n",
    "    ]\n",
    ")\n",
    "\n",
    "output_parser = CommaSeparatedListOutputParser()\n",
    "parser_instructions = output_parser.get_format_instructions()\n",
    "\n",
    "# chain_1 = prompt_tpl_1 | model | RunnableLambda(print_info) | output_parser\n",
    "\n",
    "\n",
    "chain_1 = ({\n",
    "    \"cityName\": itemgetter(\"cityName\") | RunnableLambda(lambda x: \"北京\"),\n",
    "    \"viewPointNum\": itemgetter(\"viewPointNum\"),\n",
    "    \"parser_instructions\": itemgetter(\"parser_instructions\")\n",
    "} | prompt_tpl_1 | model | output_parser)\n",
    "\n",
    "response = chain_1.invoke(\n",
    "    {\"cityName\": \"南京\", \"viewPointNum\": 3, \"parser_instructions\": parser_instructions}\n",
    ")\n",
    "response"
   ],
   "id": "7bb19da5c4dbf374",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['天安门', '长城', '故宫']"
      ]
     },
     "execution_count": 50,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 50
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-13T12:31:16.255486Z",
     "start_time": "2025-04-13T12:31:15.107639Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain_community.vectorstores import DocArrayInMemorySearch\n",
    "from langchain_core.output_parsers import StrOutputParser\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "from langchain_core.runnables import RunnableParallel, RunnablePassthrough\n",
    "# Embedding model\n",
    "from langchain_ollama import OllamaEmbeddings\n",
    "\n",
    "embeddings = OllamaEmbeddings(\n",
    "    model=\"quentinz/bge-large-zh-v1.5:latest\",\n",
    "    base_url=\"http://10.2.4.31:11434\",\n",
    ")\n",
    "vectorstore = DocArrayInMemorySearch.from_texts(\n",
    "    [\"汤姆本周五要去参加同学聚会\", \"杰瑞本周五要去参加生日聚会\"],\n",
    "    embedding=embeddings\n",
    ")\n",
    "\n",
    "retriever = vectorstore.as_retriever()\n",
    "\n",
    "template = \"\"\"基于以下的内容，回答问题:\n",
    "{context}\n",
    "问题: {question}\n",
    "\"\"\"\n",
    "\n",
    "prompt = ChatPromptTemplate.from_template(template)\n",
    "output_parser = StrOutputParser()\n",
    "# RunnableParallel runs both branches on the same input: the retriever fills \"context\",\n",
    "# while RunnablePassthrough forwards the raw question unchanged into \"question\".\n",
    "setup_and_retrieval = RunnableParallel(\n",
    "    {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
    ")\n",
    "\n",
    "chain = setup_and_retrieval | prompt | model | output_parser\n",
    "chain.invoke(\"这周五谁要去参加什么聚会？\")"
   ],
   "id": "3171de10fa0e23bf",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'这周五汤姆要去参加同学聚会，而杰瑞要去参加生日聚会。'"
      ]
     },
     "execution_count": 48,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 48
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-13T12:24:15.129393Z",
     "start_time": "2025-04-13T12:24:14.481041Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from operator import itemgetter\n",
    "\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "from langchain_core.runnables import RunnableLambda\n",
    "\n",
    "def length_function(text):\n",
    "    print(f\"执行length_function方法，获取的内容为：{text}\")\n",
    "    return len(text)\n",
    "\n",
    "def _multiple_length_function(text1, text2):\n",
    "    print(f\"执行_multiple_length_function方法，获取的内容为：{text1} - {text2}\")\n",
    "    return len(text1) * len(text2)\n",
    "\n",
    "def multiple_length_function(_dict):\n",
    "    return _multiple_length_function(_dict[\"text1\"], _dict[\"text2\"])\n",
    "\n",
    "prompt = ChatPromptTemplate.from_template(\"what is {a} + {b}\")\n",
    "\n",
    "chain1 = prompt | model\n",
    "\n",
    "chain = (\n",
    "    {\n",
    "        \"a\": itemgetter(\"foo\") | RunnableLambda(length_function),  # for \"a\": fetch the foo value and run it through length_function\n",
    "        # for \"b\": build a dict from foo and bar, then run it through multiple_length_function\n",
    "        \"b\": {\"text1\": itemgetter(\"foo\"), \"text2\": itemgetter(\"bar\")} | RunnableLambda(multiple_length_function),\n",
    "    }\n",
    "    | prompt\n",
    "    | model\n",
    ")\n",
    "\n",
    "chain.invoke({\"foo\": \"bar\", \"bar\": \"cc\"})"
   ],
   "id": "2cf105d94d0da5aa",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "执行length_function方法，获取的内容为：bar\n",
      "执行_multiple_length_function方法，获取的内容为：bar - cc\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "AIMessage(content='3 + 6 equals 9.', additional_kwargs={}, response_metadata={'model': 'qwen2.5:latest', 'created_at': '2025-04-13T12:24:15.098001936Z', 'done': True, 'done_reason': 'stop', 'total_duration': 492069760, 'load_duration': 48131962, 'prompt_eval_count': 36, 'prompt_eval_duration': 109000000, 'eval_count': 9, 'eval_duration': 327000000, 'message': Message(role='assistant', content='', images=None, tool_calls=None), 'model_name': 'qwen2.5:latest'}, id='run-32d9a997-1759-405b-b97b-c6b0203509c3-0', usage_metadata={'input_tokens': 36, 'output_tokens': 9, 'total_tokens': 45})"
      ]
     },
     "execution_count": 37,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 37
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "## 3. Summary\n",
    "This post covered LangChain's LCEL expression language, how LangChain chains work under the hood, and the definitions and use cases of the most common Runnables. I hope it helps."
   ],
   "id": "b2d50b26af0d1584"
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
