{
 "cells": [
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
     "## 1. Four basic built-in chains\n",
     "- LLMChain: prompt template + model + output formatting; the most commonly used chain\n",
     "- SequentialChain: runs multiple chains in order, feeding the previous LLM's output to the next LLM as input\n",
     "    - SimpleSequentialChain: a simpler variant with a single input/output per step\n",
     "- TransformChain: applies a custom transform function to reformat inputs/outputs\n",
     "- ConversationChain: a conversation chain that keeps dialogue history (memory) across turns\n",
     "- RouterChain: a routing chain for non-deterministic pipelines where the LLM decides the next step; multiple prompt templates inside the chain serve different kinds of requests"
   ],
   "id": "f92f87ab813a3228"
  },
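  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "TransformChain and ConversationChain are listed above but not demonstrated in this notebook, so the next cell is a minimal sketch. The `clean_text` function and its blank-line-collapsing rule are illustrative assumptions, and the commented-out ConversationChain lines would need a reachable model endpoint to run.",
   "id": "b7c1d2e3f4a50617"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "# Minimal TransformChain sketch: wrap a plain Python function as a chain step.\n",
    "from langchain.chains import TransformChain\n",
    "\n",
    "def clean_text(inputs: dict) -> dict:\n",
    "    text = inputs[\"text\"]\n",
    "    # Collapse runs of blank lines (illustrative transform, not from the original notebook)\n",
    "    while \"\\n\\n\\n\" in text:\n",
    "        text = text.replace(\"\\n\\n\\n\", \"\\n\\n\")\n",
    "    return {\"clean_text\": text}\n",
    "\n",
    "transform_chain = TransformChain(\n",
    "    input_variables=[\"text\"],\n",
    "    output_variables=[\"clean_text\"],\n",
    "    transform=clean_text,\n",
    ")\n",
    "transform_chain.invoke({\"text\": \"line1\\n\\n\\n\\nline2\"})[\"clean_text\"]\n",
    "\n",
    "# ConversationChain sketch: a single chain that keeps dialogue history in memory.\n",
    "# from langchain.chains import ConversationChain\n",
    "# from langchain.memory import ConversationBufferMemory\n",
    "# conversation = ConversationChain(llm=llm, memory=ConversationBufferMemory(), verbose=True)\n",
    "# conversation.predict(input=\"...\")"
   ],
   "id": "c8d2e3f4a5b60718",
   "outputs": [],
   "execution_count": null
  },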
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "### 1.1.LLMChain",
   "id": "25c44b75635c70f9"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-13T11:29:33.443808Z",
     "start_time": "2025-04-13T11:29:28.072609Z"
    }
   },
   "cell_type": "code",
   "source": [
    "from langchain_core.output_parsers import StrOutputParser\n",
    "from langchain.chains.llm import LLMChain\n",
    "from langchain_ollama import ChatOllama\n",
    "from langchain_core.prompts import PromptTemplate\n",
    "llm = ChatOllama(base_url=\"http://10.2.4.31:11434\", model=\"qwen2.5:latest\")\n",
    "prompt = \"\"\"\n",
    "帮我为{product}提供3个注册的域名\n",
    "\"\"\"\n",
    "prompt = PromptTemplate(\n",
    "    input_variables=[\"product\"], template=prompt\n",
    ")\n",
    "chain = prompt | llm | StrOutputParser()\n",
     "# Legacy equivalent:\n",
     "# chain = LLMChain(llm=llm,\n",
     "#                  prompt=prompt,\n",
     "#                  verbose=True)  # verbose: print execution logs\n",
    "chain.invoke(\"AI小助手\")\n"
   ],
   "id": "721c3e325b6b0676",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'当然可以，这里为您提供三个适合AI小助手的注册域名建议：\\n\\n1. **BrainMateAI.com** - 结合了“智能伙伴”的概念，简洁明了。\\n2. **SmartPalAI.com** - “Smart”代表智慧，“Pal”意为朋友或同伴，整体显得亲切且易于记忆。\\n3. **IQAssistantAI.com** - 强调人工智能的高智商和辅助功能。\\n\\n请注意，这些域名是否可用以及注册需要通过相关网站（如GoDaddy, Namecheap等）进行查询和购买。在实际操作前，请务必确认域名的可用性及完成相应的注册流程。'"
      ]
     },
     "execution_count": 37,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 37
  },
  {
   "metadata": {},
   "cell_type": "markdown",
    "source": "### 1.2 Sequential chains: SimpleSequentialChain and SequentialChain\n",
   "id": "fc6a3196fe2c3885"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-13T11:29:40.793506Z",
     "start_time": "2025-04-13T11:29:33.447525Z"
    }
   },
   "cell_type": "code",
   "source": [
     "# SimpleSequentialChain supports only a fixed, single-input/single-output pipeline\n",
    "from langchain.chains import SimpleSequentialChain\n",
    "from langchain_ollama import ChatOllama\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "\n",
    "chat_model = ChatOllama(base_url=\"http://10.2.4.31:11434\", model=\"qwen2.5:latest\")\n",
     "# First chain\n",
    "prompt_one = ChatPromptTemplate.from_template(\"帮我给{product}的公司，起一个提供响亮的名字\")\n",
    "chain_one = LLMChain(llm=chat_model, prompt=prompt_one, verbose=True)\n",
    "\n",
     "# Second chain\n",
    "prompt_two = ChatPromptTemplate.from_template(\"用5个词来描述一下这个公司名字: {company_name}\")\n",
    "chain_two = LLMChain(llm=chat_model, prompt=prompt_two, verbose=True)\n",
    "\n",
     "# Compose the chains\n",
    "chain = SimpleSequentialChain(\n",
    "    chains=[chain_one, chain_two], \n",
    "    verbose=True)\n",
     "chain.run(\"AI教育培训机构\")  # run() is legacy; invoke() is the modern equivalent"
   ],
   "id": "5a24f9061c5c1511",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001B[1m> Entering new SimpleSequentialChain chain...\u001B[0m\n",
      "\n",
      "\n",
      "\u001B[1m> Entering new LLMChain chain...\u001B[0m\n",
      "Prompt after formatting:\n",
      "\u001B[32;1m\u001B[1;3mHuman: 帮我给AI教育培训机构的公司，起一个提供响亮的名字\u001B[0m\n",
      "\n",
      "\u001B[1m> Finished chain.\u001B[0m\n",
      "\u001B[36;1m\u001B[1;3m为一家专注于AI教育的培训机构命名时，我们可以考虑体现公司的核心理念、服务特点以及目标受众等要素。这里为您推荐几个名称：\n",
      "\n",
      "1. **智创未来学院**：传达出通过人工智能技术创造美好未来的愿景。\n",
      "2. **慧码空间**：突出了对智慧与编程技能培养的重视。\n",
      "3. **智能领航者教育中心**：强调在AI领域为学员指引方向，成为行业内的领先者。\n",
      "4. **灵动AI学堂**：彰显学习内容生动有趣，激发学生的学习兴趣。\n",
      "5. **未来探索者学院**：鼓励学员勇于探索未知领域，特别是人工智能领域的前沿知识。\n",
      "\n",
      "您可以根据公司定位、文化理念以及目标客户群体等因素进一步选择或定制最适合您机构的名字。希望这些建议能对您有所帮助！\u001B[0m\n",
      "\n",
      "\n",
      "\u001B[1m> Entering new LLMChain chain...\u001B[0m\n",
      "Prompt after formatting:\n",
      "\u001B[32;1m\u001B[1;3mHuman: 用5个词来描述一下这个公司名字: 为一家专注于AI教育的培训机构命名时，我们可以考虑体现公司的核心理念、服务特点以及目标受众等要素。这里为您推荐几个名称：\n",
      "\n",
      "1. **智创未来学院**：传达出通过人工智能技术创造美好未来的愿景。\n",
      "2. **慧码空间**：突出了对智慧与编程技能培养的重视。\n",
      "3. **智能领航者教育中心**：强调在AI领域为学员指引方向，成为行业内的领先者。\n",
      "4. **灵动AI学堂**：彰显学习内容生动有趣，激发学生的学习兴趣。\n",
      "5. **未来探索者学院**：鼓励学员勇于探索未知领域，特别是人工智能领域的前沿知识。\n",
      "\n",
      "您可以根据公司定位、文化理念以及目标客户群体等因素进一步选择或定制最适合您机构的名字。希望这些建议能对您有所帮助！\u001B[0m\n",
      "\n",
      "\u001B[1m> Finished chain.\u001B[0m\n",
      "\u001B[33;1m\u001B[1;3m创新智领未来\u001B[0m\n",
      "\n",
      "\u001B[1m> Finished chain.\u001B[0m\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "'创新智领未来'"
      ]
     },
     "execution_count": 38,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 38
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-13T11:29:49.914719Z",
     "start_time": "2025-04-13T11:29:40.797152Z"
    }
   },
   "cell_type": "code",
   "source": [
     "## SequentialChain supports ordered execution of multiple chains with named inputs/outputs\n",
    "from langchain_ollama import ChatOllama\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "from langchain.chains import SequentialChain,LLMChain\n",
    "\n",
    "chat_model = ChatOllama(base_url=\"http://10.2.4.31:11434\", model=\"qwen2.5:latest\")\n",
     "# chain1: translate the content into Chinese\n",
    "prompt_one = ChatPromptTemplate.from_template(\"把下面的内容翻译成中文：\\n\\n {content}\")\n",
    "chain_one = LLMChain(llm=chat_model, prompt=prompt_one, verbose=True, output_key=\"chinese_key\")\n",
    "\n",
     "# chain2: summarize the translated Chinese text\n",
    "prompt_two = ChatPromptTemplate.from_template(\"把下面的内容总结一下：\\n\\n {chinese_key}\")\n",
    "chain_two = LLMChain(llm=chat_model, prompt=prompt_two, verbose=True, output_key=\"summary_key\")\n",
    "\n",
     "# chain3: detect the language of the summary\n",
    "prompt_three = ChatPromptTemplate.from_template(\"请智能识别下面的内容是什么语言：\\n\\n {summary_key}\")\n",
    "chain_three = LLMChain(llm=chat_model, prompt=prompt_three, verbose=True, output_key=\"language_key\")\n",
    "\n",
     "# chain4: reply to the summary in the detected language\n",
    "prompt_four = ChatPromptTemplate.from_template(\"请使用指定语言对以下的内容进行回复：\\n\\n {summary_key} \\n\\n 语言：{language_key}\")\n",
    "chain_four = LLMChain(llm=chat_model, prompt=prompt_four, verbose=True, output_key=\"comment_key\")\n",
    "\n",
     "# Compose the chains\n",
     "chain = SequentialChain(\n",
     "    chains=[chain_one, chain_two, chain_three, chain_four], # chains run in this order\n",
     "    input_variables=[\"content\"], # input variables\n",
     "    output_variables=[\"chinese_key\", \"summary_key\", \"language_key\", \"comment_key\"], # output variables\n",
    "    verbose=True\n",
    ")\n",
    "\n",
    "content = \"\"\"\n",
    "Spring’s flexible libraries are trusted by developers all over the world. Spring delivers delightful experiences to millions of end-users every day—whether that’s streaming TV, online shopping, or countless other innovative solutions. Spring also has contributions from all the big names in tech, including Alibaba, Amazon, Google, Microsoft, and more.\n",
    "\"\"\"\n",
    "chain.invoke(content)"
   ],
   "id": "b03f0b4df319480b",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001B[1m> Entering new SequentialChain chain...\u001B[0m\n",
      "\n",
      "\n",
      "\u001B[1m> Entering new LLMChain chain...\u001B[0m\n",
      "Prompt after formatting:\n",
      "\u001B[32;1m\u001B[1;3mHuman: 把下面的内容翻译成中文：\n",
      "\n",
      " \n",
      "Spring’s flexible libraries are trusted by developers all over the world. Spring delivers delightful experiences to millions of end-users every day—whether that’s streaming TV, online shopping, or countless other innovative solutions. Spring also has contributions from all the big names in tech, including Alibaba, Amazon, Google, Microsoft, and more.\n",
      "\u001B[0m\n",
      "\n",
      "\u001B[1m> Finished chain.\u001B[0m\n",
      "\n",
      "\n",
      "\u001B[1m> Entering new LLMChain chain...\u001B[0m\n",
      "Prompt after formatting:\n",
      "\u001B[32;1m\u001B[1;3mHuman: 把下面的内容总结一下：\n",
      "\n",
      " Spring 的灵活库得到了全世界开发者的信任。每天，数以百万计的最终用户都能从 Spring 中获得愉悦的体验——无论是在线观看电视、网络购物，还是无数其他创新解决方案。Spring 也获得了科技界各大公司的贡献，包括阿里巴巴、亚马逊、谷歌和微软等。\u001B[0m\n",
      "\n",
      "\u001B[1m> Finished chain.\u001B[0m\n",
      "\n",
      "\n",
      "\u001B[1m> Entering new LLMChain chain...\u001B[0m\n",
      "Prompt after formatting:\n",
      "\u001B[32;1m\u001B[1;3mHuman: 请智能识别下面的内容是什么语言：\n",
      "\n",
      " 总结如下：\n",
      "\n",
      "Spring 的灵活库深受全球开发者的信赖，每天有数以百万计的最终用户通过它获得优质的体验，应用范围涵盖了在线娱乐（如电视观看）、电子商务（如网络购物）等多个领域。此外，许多科技巨头如阿里巴巴、亚马逊、谷歌和微软等都对 Spring 进行了贡献。\u001B[0m\n",
      "\n",
      "\u001B[1m> Finished chain.\u001B[0m\n",
      "\n",
      "\n",
      "\u001B[1m> Entering new LLMChain chain...\u001B[0m\n",
      "Prompt after formatting:\n",
      "\u001B[32;1m\u001B[1;3mHuman: 请使用指定语言对以下的内容进行回复：\n",
      "\n",
      " 总结如下：\n",
      "\n",
      "Spring 的灵活库深受全球开发者的信赖，每天有数以百万计的最终用户通过它获得优质的体验，应用范围涵盖了在线娱乐（如电视观看）、电子商务（如网络购物）等多个领域。此外，许多科技巨头如阿里巴巴、亚马逊、谷歌和微软等都对 Spring 进行了贡献。 \n",
      "\n",
      " 语言：这段文字是中文。内容总结了Spring框架的广泛应用及其受到的信任，并提到了一些具体的应用领域和技术公司。\u001B[0m\n",
      "\n",
      "\u001B[1m> Finished chain.\u001B[0m\n",
      "\n",
      "\u001B[1m> Finished chain.\u001B[0m\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'content': '\\nSpring’s flexible libraries are trusted by developers all over the world. Spring delivers delightful experiences to millions of end-users every day—whether that’s streaming TV, online shopping, or countless other innovative solutions. Spring also has contributions from all the big names in tech, including Alibaba, Amazon, Google, Microsoft, and more.\\n',\n",
       " 'chinese_key': 'Spring 的灵活库得到了全世界开发者的信任。每天，数以百万计的最终用户都能从 Spring 中获得愉悦的体验——无论是在线观看电视、网络购物，还是无数其他创新解决方案。Spring 也获得了科技界各大公司的贡献，包括阿里巴巴、亚马逊、谷歌和微软等。',\n",
       " 'summary_key': '总结如下：\\n\\nSpring 的灵活库深受全球开发者的信赖，每天有数以百万计的最终用户通过它获得优质的体验，应用范围涵盖了在线娱乐（如电视观看）、电子商务（如网络购物）等多个领域。此外，许多科技巨头如阿里巴巴、亚马逊、谷歌和微软等都对 Spring 进行了贡献。',\n",
       " 'language_key': '这段文字是中文。内容总结了Spring框架的广泛应用及其受到的信任，并提到了一些具体的应用领域和技术公司。',\n",
       " 'comment_key': '这段文字是中文。内容总结了Spring框架的广泛应用及其受到的信任，并提到了其在在线娱乐、电子商务等多个领域的应用实例，同时还提及了阿里巴巴、亚马逊、谷歌和微软等技术公司的贡献。'}"
      ]
     },
     "execution_count": 39,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 39
  },
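  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "The named-key plumbing SequentialChain does above can also be expressed in LCEL with `RunnablePassthrough.assign`, which keeps the running dict and adds each step's output under a chosen key. The cell below is a sketch with stand-in lambdas in place of the real `prompt | model | parser` sub-chains, so it runs without a model endpoint.",
   "id": "d9e3f4a5b6c70819"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
    "\n",
    "# Stand-ins for the translate/summarize sub-chains above (assumption: each real\n",
    "# step would be a prompt | chat_model | StrOutputParser() pipeline instead).\n",
    "translate = RunnableLambda(lambda x: f\"translated({x['content']})\")\n",
    "summarize = RunnableLambda(lambda x: f\"summary({x['chinese_key']})\")\n",
    "\n",
    "# Each .assign keeps the incoming dict and adds one output key,\n",
    "# mirroring SequentialChain's output_key wiring.\n",
    "lcel_chain = (\n",
    "    RunnablePassthrough.assign(chinese_key=translate)\n",
    "    | RunnablePassthrough.assign(summary_key=summarize)\n",
    ")\n",
    "lcel_chain.invoke({\"content\": \"hello\"})\n",
    "# -> {'content': 'hello', 'chinese_key': 'translated(hello)', 'summary_key': 'summary(translated(hello))'}"
   ],
   "id": "e0f4a5b6c7d80920",
   "outputs": [],
   "execution_count": null
  },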
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "### 1.3 RouterChain\n",
   "id": "839feba656dc746a"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-13T12:02:22.209112Z",
     "start_time": "2025-04-13T12:02:15.270072Z"
    }
   },
   "cell_type": "code",
   "source": [
     "from operator import itemgetter\n",
     "from langchain_core.runnables import RunnableBranch, RunnableLambda\n",
     "from langchain_core.prompts import PromptTemplate\n",
     "from langchain_core.output_parsers import StrOutputParser\n",
     "from langchain_ollama import ChatOllama\n",
     "from langchain.globals import set_debug\n",
     "set_debug(True)\n",
    "\n",
    "def print_info(info: str):\n",
    "    print(f\"info: {info}\")\n",
    "    return info\n",
    "\n",
    "output_parser = StrOutputParser()\n",
    "model = ChatOllama(base_url=\"http://10.2.4.31:11434\", model=\"qwen2.5:latest\")\n",
     "# Prepare the destination chains: physics, math, and a fallback\n",
     "# 1. Physics chain\n",
    "physics_template = \"\"\"\n",
    "你是一位物理学家，擅长回答物理相关的问题，当你不知道问题的答案时，你就回答不知道。\n",
    "具体问题如下：\n",
    "{input}\n",
    "\"\"\"\n",
    "physics_chain = PromptTemplate.from_template(physics_template) | model | output_parser\n",
    "\n",
     "# 2. Math chain\n",
    "math_template = \"\"\"\n",
    "你是一个数学家，擅长回答数学相关的问题，当你不知道问题的答案时，你就回答不知道。\n",
    "具体问题如下：\n",
    "{input}\n",
    "\"\"\"\n",
    "math_chain = PromptTemplate.from_template(math_template) | model | output_parser\n",
    "\n",
     "# 3. Fallback chain\n",
    "other_template = \"\"\"\n",
    "你是一个AI助手，你会回答一下问题。\n",
    "具体问题如下：\n",
    "{input}\n",
    "\"\"\"\n",
    "other_chain = PromptTemplate.from_template(other_template) | model | output_parser\n",
    "\n",
    "\n",
    "classify_prompt_template = \"\"\"\n",
    "请你对以下问题进行分类，将问题分类为\"数学\"、\"物理\"、\"其它\"，不需要返回多个分类，返回一个即可。\n",
    "具体问题如下：\n",
    "{input}\n",
    "\n",
    "分类结果：\n",
    "\"\"\"\n",
    "classify_prompt = PromptTemplate.from_template(classify_prompt_template) | model | output_parser\n",
    "\n",
     "# RunnableBranch routes between multiple sub-chains, similar to LangChain's legacy router chains: define several sub-chains, then select one to run based on a condition (here, the classified topic).\n",
    "answer_chain = RunnableBranch(\n",
    "    (lambda x: \"数学\" in x[\"topic\"], math_chain),\n",
    "    (lambda x: \"物理\" in x[\"topic\"], physics_chain),\n",
    "    other_chain\n",
    ")\n",
    "\n",
    "final_chain =  {\n",
    "   \"topic\": classify_prompt, \n",
    "   \"input\": itemgetter(\"input\")\n",
    "} | RunnableLambda(print_info) | answer_chain\n",
    "# final_chain.invoke({\"input\":\"地球的半径是多少？\"})\n",
    "final_chain.invoke({\"input\":\"对y=x求导的结果是多少？\"})\n",
    "final_chain.invoke({\"input\":\"两个黄鹂鸣翠柳的下一句是什么？\"})"
   ],
   "id": "2f0e8bee10c5992f",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001B[32;1m\u001B[1;3m[chain/start]\u001B[0m \u001B[1m[chain:RunnableSequence] Entering Chain run with input:\n",
      "\u001B[0m{\n",
      "  \"input\": \"对y=x求导的结果是多少？\"\n",
      "}\n",
      "\u001B[32;1m\u001B[1;3m[chain/start]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableParallel<topic,input>] Entering Chain run with input:\n",
      "\u001B[0m{\n",
      "  \"input\": \"对y=x求导的结果是多少？\"\n",
      "}\n",
      "\u001B[32;1m\u001B[1;3m[chain/start]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableParallel<topic,input> > chain:RunnableSequence] Entering Chain run with input:\n",
      "\u001B[0m{\n",
      "  \"input\": \"对y=x求导的结果是多少？\"\n",
      "}\n",
      "\u001B[32;1m\u001B[1;3m[chain/start]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableParallel<topic,input> > chain:RunnableLambda] Entering Chain run with input:\n",
      "\u001B[0m{\n",
      "  \"input\": \"对y=x求导的结果是多少？\"\n",
      "}\n",
      "\u001B[36;1m\u001B[1;3m[chain/end]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableParallel<topic,input> > chain:RunnableLambda] [1ms] Exiting Chain run with output:\n",
      "\u001B[0m{\n",
      "  \"output\": \"对y=x求导的结果是多少？\"\n",
      "}\n",
      "\u001B[32;1m\u001B[1;3m[chain/start]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableParallel<topic,input> > chain:RunnableSequence > prompt:PromptTemplate] Entering Prompt run with input:\n",
      "\u001B[0m{\n",
      "  \"input\": \"对y=x求导的结果是多少？\"\n",
      "}\n",
      "\u001B[36;1m\u001B[1;3m[chain/end]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableParallel<topic,input> > chain:RunnableSequence > prompt:PromptTemplate] [1ms] Exiting Prompt run with output:\n",
      "\u001B[0m[outputs]\n",
      "\u001B[32;1m\u001B[1;3m[llm/start]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableParallel<topic,input> > chain:RunnableSequence > llm:ChatOllama] Entering LLM run with input:\n",
      "\u001B[0m{\n",
      "  \"prompts\": [\n",
      "    \"Human: \\n请你对以下问题进行分类，将问题分类为\\\"数学\\\"、\\\"物理\\\"、\\\"其它\\\"，不需要返回多个分类，返回一个即可。\\n具体问题如下：\\n对y=x求导的结果是多少？\\n\\n分类结果：\"\n",
      "  ]\n",
      "}\n",
      "\u001B[36;1m\u001B[1;3m[llm/end]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableParallel<topic,input> > chain:RunnableSequence > llm:ChatOllama] [341ms] Exiting LLM run with output:\n",
      "\u001B[0m{\n",
      "  \"generations\": [\n",
      "    [\n",
      "      {\n",
      "        \"text\": \"数学\",\n",
      "        \"generation_info\": {\n",
      "          \"model\": \"qwen2.5:latest\",\n",
      "          \"created_at\": \"2025-04-13T12:02:15.874894135Z\",\n",
      "          \"done\": true,\n",
      "          \"done_reason\": \"stop\",\n",
      "          \"total_duration\": 263402681,\n",
      "          \"load_duration\": 53384779,\n",
      "          \"prompt_eval_count\": 76,\n",
      "          \"prompt_eval_duration\": 130000000,\n",
      "          \"eval_count\": 2,\n",
      "          \"eval_duration\": 68000000,\n",
      "          \"message\": {\n",
      "            \"role\": \"assistant\",\n",
      "            \"content\": \"\",\n",
      "            \"images\": null,\n",
      "            \"tool_calls\": null\n",
      "          },\n",
      "          \"model_name\": \"qwen2.5:latest\"\n",
      "        },\n",
      "        \"type\": \"ChatGeneration\",\n",
      "        \"message\": {\n",
      "          \"lc\": 1,\n",
      "          \"type\": \"constructor\",\n",
      "          \"id\": [\n",
      "            \"langchain\",\n",
      "            \"schema\",\n",
      "            \"messages\",\n",
      "            \"AIMessage\"\n",
      "          ],\n",
      "          \"kwargs\": {\n",
      "            \"content\": \"数学\",\n",
      "            \"response_metadata\": {\n",
      "              \"model\": \"qwen2.5:latest\",\n",
      "              \"created_at\": \"2025-04-13T12:02:15.874894135Z\",\n",
      "              \"done\": true,\n",
      "              \"done_reason\": \"stop\",\n",
      "              \"total_duration\": 263402681,\n",
      "              \"load_duration\": 53384779,\n",
      "              \"prompt_eval_count\": 76,\n",
      "              \"prompt_eval_duration\": 130000000,\n",
      "              \"eval_count\": 2,\n",
      "              \"eval_duration\": 68000000,\n",
      "              \"message\": {\n",
      "                \"lc\": 1,\n",
      "                \"type\": \"not_implemented\",\n",
      "                \"id\": [\n",
      "                  \"ollama\",\n",
      "                  \"_types\",\n",
      "                  \"Message\"\n",
      "                ],\n",
      "                \"repr\": \"Message(role='assistant', content='', images=None, tool_calls=None)\"\n",
      "              },\n",
      "              \"model_name\": \"qwen2.5:latest\"\n",
      "            },\n",
      "            \"type\": \"ai\",\n",
      "            \"id\": \"run-344e0a08-1095-41c2-a7bf-3e3349d7a1f6-0\",\n",
      "            \"usage_metadata\": {\n",
      "              \"input_tokens\": 76,\n",
      "              \"output_tokens\": 2,\n",
      "              \"total_tokens\": 78\n",
      "            },\n",
      "            \"tool_calls\": [],\n",
      "            \"invalid_tool_calls\": []\n",
      "          }\n",
      "        }\n",
      "      }\n",
      "    ]\n",
      "  ],\n",
      "  \"llm_output\": null,\n",
      "  \"run\": null,\n",
      "  \"type\": \"LLMResult\"\n",
      "}\n",
      "\u001B[32;1m\u001B[1;3m[chain/start]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableParallel<topic,input> > chain:RunnableSequence > parser:StrOutputParser] Entering Parser run with input:\n",
      "\u001B[0m[inputs]\n",
      "\u001B[36;1m\u001B[1;3m[chain/end]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableParallel<topic,input> > chain:RunnableSequence > parser:StrOutputParser] [1ms] Exiting Parser run with output:\n",
      "\u001B[0m{\n",
      "  \"output\": \"数学\"\n",
      "}\n",
      "\u001B[36;1m\u001B[1;3m[chain/end]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableParallel<topic,input> > chain:RunnableSequence] [369ms] Exiting Chain run with output:\n",
      "\u001B[0m{\n",
      "  \"output\": \"数学\"\n",
      "}\n",
      "\u001B[36;1m\u001B[1;3m[chain/end]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableParallel<topic,input>] [372ms] Exiting Chain run with output:\n",
      "\u001B[0m{\n",
      "  \"topic\": \"数学\",\n",
      "  \"input\": \"对y=x求导的结果是多少？\"\n",
      "}\n",
      "\u001B[32;1m\u001B[1;3m[chain/start]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:print_info] Entering Chain run with input:\n",
      "\u001B[0m{\n",
      "  \"topic\": \"数学\",\n",
      "  \"input\": \"对y=x求导的结果是多少？\"\n",
      "}\n",
      "info: {'topic': '数学', 'input': '对y=x求导的结果是多少？'}\n",
      "\u001B[36;1m\u001B[1;3m[chain/end]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:print_info] [2ms] Exiting Chain run with output:\n",
      "\u001B[0m{\n",
      "  \"topic\": \"数学\",\n",
      "  \"input\": \"对y=x求导的结果是多少？\"\n",
      "}\n",
      "\u001B[32;1m\u001B[1;3m[chain/start]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableBranch] Entering Chain run with input:\n",
      "\u001B[0m{\n",
      "  \"topic\": \"数学\",\n",
      "  \"input\": \"对y=x求导的结果是多少？\"\n",
      "}\n",
      "\u001B[32;1m\u001B[1;3m[chain/start]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableBranch > chain:RunnableLambda] Entering Chain run with input:\n",
      "\u001B[0m{\n",
      "  \"topic\": \"数学\",\n",
      "  \"input\": \"对y=x求导的结果是多少？\"\n",
      "}\n",
      "\u001B[36;1m\u001B[1;3m[chain/end]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableBranch > chain:RunnableLambda] [1ms] Exiting Chain run with output:\n",
      "\u001B[0m{\n",
      "  \"output\": true\n",
      "}\n",
      "\u001B[32;1m\u001B[1;3m[chain/start]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableBranch > chain:RunnableSequence] Entering Chain run with input:\n",
      "\u001B[0m{\n",
      "  \"topic\": \"数学\",\n",
      "  \"input\": \"对y=x求导的结果是多少？\"\n",
      "}\n",
      "\u001B[32;1m\u001B[1;3m[chain/start]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableBranch > chain:RunnableSequence > prompt:PromptTemplate] Entering Prompt run with input:\n",
      "\u001B[0m{\n",
      "  \"topic\": \"数学\",\n",
      "  \"input\": \"对y=x求导的结果是多少？\"\n",
      "}\n",
      "\u001B[36;1m\u001B[1;3m[chain/end]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableBranch > chain:RunnableSequence > prompt:PromptTemplate] [0ms] Exiting Prompt run with output:\n",
      "\u001B[0m[outputs]\n",
      "\u001B[32;1m\u001B[1;3m[llm/start]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableBranch > chain:RunnableSequence > llm:ChatOllama] Entering LLM run with input:\n",
      "\u001B[0m{\n",
      "  \"prompts\": [\n",
      "    \"Human: \\n你是一个数学家，擅长回答数学相关的问题，当你不知道问题的答案时，你就回答不知道。\\n具体问题如下：\\n对y=x求导的结果是多少？\"\n",
      "  ]\n",
      "}\n",
      "\u001B[36;1m\u001B[1;3m[llm/end]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableBranch > chain:RunnableSequence > llm:ChatOllama] [2.16s] Exiting LLM run with output:\n",
      "\u001B[0m{\n",
      "  \"generations\": [\n",
      "    [\n",
      "      {\n",
      "        \"text\": \"对函数 \\\\( y = x \\\\) 求导的结果是 \\\\( \\\\frac{dy}{dx} = 1 \\\\)。这是因为 \\\\( y = x \\\\) 是一个一次函数，其斜率（即导数）恒为1。\",\n",
      "        \"generation_info\": {\n",
      "          \"model\": \"qwen2.5:latest\",\n",
      "          \"created_at\": \"2025-04-13T12:02:18.108911196Z\",\n",
      "          \"done\": true,\n",
      "          \"done_reason\": \"stop\",\n",
      "          \"total_duration\": 2147724053,\n",
      "          \"load_duration\": 52211374,\n",
      "          \"prompt_eval_count\": 62,\n",
      "          \"prompt_eval_duration\": 93000000,\n",
      "          \"eval_count\": 53,\n",
      "          \"eval_duration\": 1989000000,\n",
      "          \"message\": {\n",
      "            \"role\": \"assistant\",\n",
      "            \"content\": \"\",\n",
      "            \"images\": null,\n",
      "            \"tool_calls\": null\n",
      "          },\n",
      "          \"model_name\": \"qwen2.5:latest\"\n",
      "        },\n",
      "        \"type\": \"ChatGeneration\",\n",
      "        \"message\": {\n",
      "          \"lc\": 1,\n",
      "          \"type\": \"constructor\",\n",
      "          \"id\": [\n",
      "            \"langchain\",\n",
      "            \"schema\",\n",
      "            \"messages\",\n",
      "            \"AIMessage\"\n",
      "          ],\n",
      "          \"kwargs\": {\n",
      "            \"content\": \"对函数 \\\\( y = x \\\\) 求导的结果是 \\\\( \\\\frac{dy}{dx} = 1 \\\\)。这是因为 \\\\( y = x \\\\) 是一个一次函数，其斜率（即导数）恒为1。\",\n",
      "            \"response_metadata\": {\n",
      "              \"model\": \"qwen2.5:latest\",\n",
      "              \"created_at\": \"2025-04-13T12:02:18.108911196Z\",\n",
      "              \"done\": true,\n",
      "              \"done_reason\": \"stop\",\n",
      "              \"total_duration\": 2147724053,\n",
      "              \"load_duration\": 52211374,\n",
      "              \"prompt_eval_count\": 62,\n",
      "              \"prompt_eval_duration\": 93000000,\n",
      "              \"eval_count\": 53,\n",
      "              \"eval_duration\": 1989000000,\n",
      "              \"message\": {\n",
      "                \"lc\": 1,\n",
      "                \"type\": \"not_implemented\",\n",
      "                \"id\": [\n",
      "                  \"ollama\",\n",
      "                  \"_types\",\n",
      "                  \"Message\"\n",
      "                ],\n",
      "                \"repr\": \"Message(role='assistant', content='', images=None, tool_calls=None)\"\n",
      "              },\n",
      "              \"model_name\": \"qwen2.5:latest\"\n",
      "            },\n",
      "            \"type\": \"ai\",\n",
      "            \"id\": \"run-3562e66c-98f6-4bd2-b21f-796368f7c427-0\",\n",
      "            \"usage_metadata\": {\n",
      "              \"input_tokens\": 62,\n",
      "              \"output_tokens\": 53,\n",
      "              \"total_tokens\": 115\n",
      "            },\n",
      "            \"tool_calls\": [],\n",
      "            \"invalid_tool_calls\": []\n",
      "          }\n",
      "        }\n",
      "      }\n",
      "    ]\n",
      "  ],\n",
      "  \"llm_output\": null,\n",
      "  \"run\": null,\n",
      "  \"type\": \"LLMResult\"\n",
      "}\n",
      "\u001B[32;1m\u001B[1;3m[chain/start]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableBranch > chain:RunnableSequence > parser:StrOutputParser] Entering Parser run with input:\n",
      "\u001B[0m[inputs]\n",
      "\u001B[36;1m\u001B[1;3m[chain/end]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableBranch > chain:RunnableSequence > parser:StrOutputParser] [1ms] Exiting Parser run with output:\n",
      "\u001B[0m{\n",
      "  \"output\": \"对函数 \\\\( y = x \\\\) 求导的结果是 \\\\( \\\\frac{dy}{dx} = 1 \\\\)。这是因为 \\\\( y = x \\\\) 是一个一次函数，其斜率（即导数）恒为1。\"\n",
      "}\n",
      "\u001B[36;1m\u001B[1;3m[chain/end]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableBranch > chain:RunnableSequence] [2.16s] Exiting Chain run with output:\n",
      "\u001B[0m{\n",
      "  \"output\": \"对函数 \\\\( y = x \\\\) 求导的结果是 \\\\( \\\\frac{dy}{dx} = 1 \\\\)。这是因为 \\\\( y = x \\\\) 是一个一次函数，其斜率（即导数）恒为1。\"\n",
      "}\n",
      "\u001B[36;1m\u001B[1;3m[chain/end]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableBranch] [2.16s] Exiting Chain run with output:\n",
      "\u001B[0m{\n",
      "  \"output\": \"对函数 \\\\( y = x \\\\) 求导的结果是 \\\\( \\\\frac{dy}{dx} = 1 \\\\)。这是因为 \\\\( y = x \\\\) 是一个一次函数，其斜率（即导数）恒为1。\"\n",
      "}\n",
      "\u001B[36;1m\u001B[1;3m[chain/end]\u001B[0m \u001B[1m[chain:RunnableSequence] [2.56s] Exiting Chain run with output:\n",
      "\u001B[0m{\n",
      "  \"output\": \"对函数 \\\\( y = x \\\\) 求导的结果是 \\\\( \\\\frac{dy}{dx} = 1 \\\\)。这是因为 \\\\( y = x \\\\) 是一个一次函数，其斜率（即导数）恒为1。\"\n",
      "}\n",
      "\u001B[32;1m\u001B[1;3m[chain/start]\u001B[0m \u001B[1m[chain:RunnableSequence] Entering Chain run with input:\n",
      "\u001B[0m{\n",
      "  \"input\": \"两个黄鹂鸣翠柳的下一句是什么？\"\n",
      "}\n",
      "\u001B[32;1m\u001B[1;3m[chain/start]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableParallel<topic,input>] Entering Chain run with input:\n",
      "\u001B[0m{\n",
      "  \"input\": \"两个黄鹂鸣翠柳的下一句是什么？\"\n",
      "}\n",
      "\u001B[32;1m\u001B[1;3m[chain/start]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableParallel<topic,input> > chain:RunnableSequence] Entering Chain run with input:\n",
      "\u001B[0m{\n",
      "  \"input\": \"两个黄鹂鸣翠柳的下一句是什么？\"\n",
      "}\n",
      "\u001B[32;1m\u001B[1;3m[chain/start]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableParallel<topic,input> > chain:RunnableSequence > prompt:PromptTemplate] Entering Prompt run with input:\n",
      "\u001B[0m{\n",
      "  \"input\": \"两个黄鹂鸣翠柳的下一句是什么？\"\n",
      "}\n",
      "\u001B[36;1m\u001B[1;3m[chain/end]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableParallel<topic,input> > chain:RunnableSequence > prompt:PromptTemplate] [0ms] Exiting Prompt run with output:\n",
      "\u001B[0m[outputs]\n",
      "\u001B[32;1m\u001B[1;3m[llm/start]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableParallel<topic,input> > chain:RunnableSequence > llm:ChatOllama] Entering LLM run with input:\n",
      "\u001B[0m{\n",
      "  \"prompts\": [\n",
      "    \"Human: \\n请你对以下问题进行分类，将问题分类为\\\"数学\\\"、\\\"物理\\\"、\\\"其它\\\"，不需要返回多个分类，返回一个即可。\\n具体问题如下：\\n两个黄鹂鸣翠柳的下一句是什么？\\n\\n分类结果：\"\n",
      "  ]\n",
      "}\n",
      "\u001B[32;1m\u001B[1;3m[chain/start]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableParallel<topic,input> > chain:RunnableLambda] Entering Chain run with input:\n",
      "\u001B[0m{\n",
      "  \"input\": \"两个黄鹂鸣翠柳的下一句是什么？\"\n",
      "}\n",
      "\u001B[36;1m\u001B[1;3m[chain/end]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableParallel<topic,input> > chain:RunnableLambda] [1ms] Exiting Chain run with output:\n",
      "\u001B[0m{\n",
      "  \"output\": \"两个黄鹂鸣翠柳的下一句是什么？\"\n",
      "}\n",
      "\u001B[36;1m\u001B[1;3m[llm/end]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableParallel<topic,input> > chain:RunnableSequence > llm:ChatOllama] [251ms] Exiting LLM run with output:\n",
      "\u001B[0m{\n",
      "  \"generations\": [\n",
      "    [\n",
      "      {\n",
      "        \"text\": \"其它\",\n",
      "        \"generation_info\": {\n",
      "          \"model\": \"qwen2.5:latest\",\n",
      "          \"created_at\": \"2025-04-13T12:02:18.36674172Z\",\n",
      "          \"done\": true,\n",
      "          \"done_reason\": \"stop\",\n",
      "          \"total_duration\": 239888572,\n",
      "          \"load_duration\": 52746801,\n",
      "          \"prompt_eval_count\": 79,\n",
      "          \"prompt_eval_duration\": 106000000,\n",
      "          \"eval_count\": 2,\n",
      "          \"eval_duration\": 69000000,\n",
      "          \"message\": {\n",
      "            \"role\": \"assistant\",\n",
      "            \"content\": \"\",\n",
      "            \"images\": null,\n",
      "            \"tool_calls\": null\n",
      "          },\n",
      "          \"model_name\": \"qwen2.5:latest\"\n",
      "        },\n",
      "        \"type\": \"ChatGeneration\",\n",
      "        \"message\": {\n",
      "          \"lc\": 1,\n",
      "          \"type\": \"constructor\",\n",
      "          \"id\": [\n",
      "            \"langchain\",\n",
      "            \"schema\",\n",
      "            \"messages\",\n",
      "            \"AIMessage\"\n",
      "          ],\n",
      "          \"kwargs\": {\n",
      "            \"content\": \"其它\",\n",
      "            \"response_metadata\": {\n",
      "              \"model\": \"qwen2.5:latest\",\n",
      "              \"created_at\": \"2025-04-13T12:02:18.36674172Z\",\n",
      "              \"done\": true,\n",
      "              \"done_reason\": \"stop\",\n",
      "              \"total_duration\": 239888572,\n",
      "              \"load_duration\": 52746801,\n",
      "              \"prompt_eval_count\": 79,\n",
      "              \"prompt_eval_duration\": 106000000,\n",
      "              \"eval_count\": 2,\n",
      "              \"eval_duration\": 69000000,\n",
      "              \"message\": {\n",
      "                \"lc\": 1,\n",
      "                \"type\": \"not_implemented\",\n",
      "                \"id\": [\n",
      "                  \"ollama\",\n",
      "                  \"_types\",\n",
      "                  \"Message\"\n",
      "                ],\n",
      "                \"repr\": \"Message(role='assistant', content='', images=None, tool_calls=None)\"\n",
      "              },\n",
      "              \"model_name\": \"qwen2.5:latest\"\n",
      "            },\n",
      "            \"type\": \"ai\",\n",
      "            \"id\": \"run-b1682187-0fce-4061-9834-339c2e27dabc-0\",\n",
      "            \"usage_metadata\": {\n",
      "              \"input_tokens\": 79,\n",
      "              \"output_tokens\": 2,\n",
      "              \"total_tokens\": 81\n",
      "            },\n",
      "            \"tool_calls\": [],\n",
      "            \"invalid_tool_calls\": []\n",
      "          }\n",
      "        }\n",
      "      }\n",
      "    ]\n",
      "  ],\n",
      "  \"llm_output\": null,\n",
      "  \"run\": null,\n",
      "  \"type\": \"LLMResult\"\n",
      "}\n",
      "\u001B[32;1m\u001B[1;3m[chain/start]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableParallel<topic,input> > chain:RunnableSequence > parser:StrOutputParser] Entering Parser run with input:\n",
      "\u001B[0m[inputs]\n",
      "\u001B[36;1m\u001B[1;3m[chain/end]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableParallel<topic,input> > chain:RunnableSequence > parser:StrOutputParser] [0ms] Exiting Parser run with output:\n",
      "\u001B[0m{\n",
      "  \"output\": \"其它\"\n",
      "}\n",
      "\u001B[36;1m\u001B[1;3m[chain/end]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableParallel<topic,input> > chain:RunnableSequence] [254ms] Exiting Chain run with output:\n",
      "\u001B[0m{\n",
      "  \"output\": \"其它\"\n",
      "}\n",
      "\u001B[36;1m\u001B[1;3m[chain/end]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableParallel<topic,input>] [255ms] Exiting Chain run with output:\n",
      "\u001B[0m{\n",
      "  \"topic\": \"其它\",\n",
      "  \"input\": \"两个黄鹂鸣翠柳的下一句是什么？\"\n",
      "}\n",
      "\u001B[32;1m\u001B[1;3m[chain/start]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:print_info] Entering Chain run with input:\n",
      "\u001B[0m{\n",
      "  \"topic\": \"其它\",\n",
      "  \"input\": \"两个黄鹂鸣翠柳的下一句是什么？\"\n",
      "}\n",
      "info: {'topic': '其它', 'input': '两个黄鹂鸣翠柳的下一句是什么？'}\n",
      "\u001B[36;1m\u001B[1;3m[chain/end]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:print_info] [0ms] Exiting Chain run with output:\n",
      "\u001B[0m{\n",
      "  \"topic\": \"其它\",\n",
      "  \"input\": \"两个黄鹂鸣翠柳的下一句是什么？\"\n",
      "}\n",
      "\u001B[32;1m\u001B[1;3m[chain/start]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableBranch] Entering Chain run with input:\n",
      "\u001B[0m{\n",
      "  \"topic\": \"其它\",\n",
      "  \"input\": \"两个黄鹂鸣翠柳的下一句是什么？\"\n",
      "}\n",
      "\u001B[32;1m\u001B[1;3m[chain/start]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableBranch > chain:RunnableLambda] Entering Chain run with input:\n",
      "\u001B[0m{\n",
      "  \"topic\": \"其它\",\n",
      "  \"input\": \"两个黄鹂鸣翠柳的下一句是什么？\"\n",
      "}\n",
      "\u001B[36;1m\u001B[1;3m[chain/end]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableBranch > chain:RunnableLambda] [2ms] Exiting Chain run with output:\n",
      "\u001B[0m{\n",
      "  \"output\": false\n",
      "}\n",
      "\u001B[32;1m\u001B[1;3m[chain/start]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableBranch > chain:RunnableLambda] Entering Chain run with input:\n",
      "\u001B[0m{\n",
      "  \"topic\": \"其它\",\n",
      "  \"input\": \"两个黄鹂鸣翠柳的下一句是什么？\"\n",
      "}\n",
      "\u001B[36;1m\u001B[1;3m[chain/end]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableBranch > chain:RunnableLambda] [1ms] Exiting Chain run with output:\n",
      "\u001B[0m{\n",
      "  \"output\": false\n",
      "}\n",
      "\u001B[32;1m\u001B[1;3m[chain/start]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableBranch > chain:RunnableSequence] Entering Chain run with input:\n",
      "\u001B[0m{\n",
      "  \"topic\": \"其它\",\n",
      "  \"input\": \"两个黄鹂鸣翠柳的下一句是什么？\"\n",
      "}\n",
      "\u001B[32;1m\u001B[1;3m[chain/start]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableBranch > chain:RunnableSequence > prompt:PromptTemplate] Entering Prompt run with input:\n",
      "\u001B[0m{\n",
      "  \"topic\": \"其它\",\n",
      "  \"input\": \"两个黄鹂鸣翠柳的下一句是什么？\"\n",
      "}\n",
      "\u001B[36;1m\u001B[1;3m[chain/end]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableBranch > chain:RunnableSequence > prompt:PromptTemplate] [1ms] Exiting Prompt run with output:\n",
      "\u001B[0m[outputs]\n",
      "\u001B[32;1m\u001B[1;3m[llm/start]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableBranch > chain:RunnableSequence > llm:ChatOllama] Entering LLM run with input:\n",
      "\u001B[0m{\n",
      "  \"prompts\": [\n",
      "    \"Human: \\n你是一个AI助手，你会回答一下问题。\\n具体问题如下：\\n两个黄鹂鸣翠柳的下一句是什么？\"\n",
      "  ]\n",
      "}\n",
      "\u001B[36;1m\u001B[1;3m[llm/end]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableBranch > chain:RunnableSequence > llm:ChatOllama] [3.87s] Exiting LLM run with output:\n",
      "\u001B[0m{\n",
      "  \"generations\": [\n",
      "    [\n",
      "      {\n",
      "        \"text\": \"两个黄鹂鸣翠柳的下一句是“一行白鹭上青天”。这句诗出自唐朝诗人杜甫的《绝句》。全诗为：\\n\\n两只黄鹂在翠绿的柳树间婉转地歌唱，\\n一队整齐的白鹭直冲向蔚蓝的天空。\\n我坐在窗前，可以望见西岭上堆积着终年不化的积雪，\\n还可以看到东边门外那棵数株芭蕉树。\",\n",
      "        \"generation_info\": {\n",
      "          \"model\": \"qwen2.5:latest\",\n",
      "          \"created_at\": \"2025-04-13T12:02:22.244495411Z\",\n",
      "          \"done\": true,\n",
      "          \"done_reason\": \"stop\",\n",
      "          \"total_duration\": 3856285758,\n",
      "          \"load_duration\": 55785928,\n",
      "          \"prompt_eval_count\": 54,\n",
      "          \"prompt_eval_duration\": 87000000,\n",
      "          \"eval_count\": 99,\n",
      "          \"eval_duration\": 3701000000,\n",
      "          \"message\": {\n",
      "            \"role\": \"assistant\",\n",
      "            \"content\": \"\",\n",
      "            \"images\": null,\n",
      "            \"tool_calls\": null\n",
      "          },\n",
      "          \"model_name\": \"qwen2.5:latest\"\n",
      "        },\n",
      "        \"type\": \"ChatGeneration\",\n",
      "        \"message\": {\n",
      "          \"lc\": 1,\n",
      "          \"type\": \"constructor\",\n",
      "          \"id\": [\n",
      "            \"langchain\",\n",
      "            \"schema\",\n",
      "            \"messages\",\n",
      "            \"AIMessage\"\n",
      "          ],\n",
      "          \"kwargs\": {\n",
      "            \"content\": \"两个黄鹂鸣翠柳的下一句是“一行白鹭上青天”。这句诗出自唐朝诗人杜甫的《绝句》。全诗为：\\n\\n两只黄鹂在翠绿的柳树间婉转地歌唱，\\n一队整齐的白鹭直冲向蔚蓝的天空。\\n我坐在窗前，可以望见西岭上堆积着终年不化的积雪，\\n还可以看到东边门外那棵数株芭蕉树。\",\n",
      "            \"response_metadata\": {\n",
      "              \"model\": \"qwen2.5:latest\",\n",
      "              \"created_at\": \"2025-04-13T12:02:22.244495411Z\",\n",
      "              \"done\": true,\n",
      "              \"done_reason\": \"stop\",\n",
      "              \"total_duration\": 3856285758,\n",
      "              \"load_duration\": 55785928,\n",
      "              \"prompt_eval_count\": 54,\n",
      "              \"prompt_eval_duration\": 87000000,\n",
      "              \"eval_count\": 99,\n",
      "              \"eval_duration\": 3701000000,\n",
      "              \"message\": {\n",
      "                \"lc\": 1,\n",
      "                \"type\": \"not_implemented\",\n",
      "                \"id\": [\n",
      "                  \"ollama\",\n",
      "                  \"_types\",\n",
      "                  \"Message\"\n",
      "                ],\n",
      "                \"repr\": \"Message(role='assistant', content='', images=None, tool_calls=None)\"\n",
      "              },\n",
      "              \"model_name\": \"qwen2.5:latest\"\n",
      "            },\n",
      "            \"type\": \"ai\",\n",
      "            \"id\": \"run-eff28468-6910-4d53-98fd-f68a3d74d2ed-0\",\n",
      "            \"usage_metadata\": {\n",
      "              \"input_tokens\": 54,\n",
      "              \"output_tokens\": 99,\n",
      "              \"total_tokens\": 153\n",
      "            },\n",
      "            \"tool_calls\": [],\n",
      "            \"invalid_tool_calls\": []\n",
      "          }\n",
      "        }\n",
      "      }\n",
      "    ]\n",
      "  ],\n",
      "  \"llm_output\": null,\n",
      "  \"run\": null,\n",
      "  \"type\": \"LLMResult\"\n",
      "}\n",
      "\u001B[32;1m\u001B[1;3m[chain/start]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableBranch > chain:RunnableSequence > parser:StrOutputParser] Entering Parser run with input:\n",
      "\u001B[0m[inputs]\n",
      "\u001B[36;1m\u001B[1;3m[chain/end]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableBranch > chain:RunnableSequence > parser:StrOutputParser] [1ms] Exiting Parser run with output:\n",
      "\u001B[0m{\n",
      "  \"output\": \"两个黄鹂鸣翠柳的下一句是“一行白鹭上青天”。这句诗出自唐朝诗人杜甫的《绝句》。全诗为：\\n\\n两只黄鹂在翠绿的柳树间婉转地歌唱，\\n一队整齐的白鹭直冲向蔚蓝的天空。\\n我坐在窗前，可以望见西岭上堆积着终年不化的积雪，\\n还可以看到东边门外那棵数株芭蕉树。\"\n",
      "}\n",
      "\u001B[36;1m\u001B[1;3m[chain/end]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableBranch > chain:RunnableSequence] [3.88s] Exiting Chain run with output:\n",
      "\u001B[0m{\n",
      "  \"output\": \"两个黄鹂鸣翠柳的下一句是“一行白鹭上青天”。这句诗出自唐朝诗人杜甫的《绝句》。全诗为：\\n\\n两只黄鹂在翠绿的柳树间婉转地歌唱，\\n一队整齐的白鹭直冲向蔚蓝的天空。\\n我坐在窗前，可以望见西岭上堆积着终年不化的积雪，\\n还可以看到东边门外那棵数株芭蕉树。\"\n",
      "}\n",
      "\u001B[36;1m\u001B[1;3m[chain/end]\u001B[0m \u001B[1m[chain:RunnableSequence > chain:RunnableBranch] [3.88s] Exiting Chain run with output:\n",
      "\u001B[0m{\n",
      "  \"output\": \"两个黄鹂鸣翠柳的下一句是“一行白鹭上青天”。这句诗出自唐朝诗人杜甫的《绝句》。全诗为：\\n\\n两只黄鹂在翠绿的柳树间婉转地歌唱，\\n一队整齐的白鹭直冲向蔚蓝的天空。\\n我坐在窗前，可以望见西岭上堆积着终年不化的积雪，\\n还可以看到东边门外那棵数株芭蕉树。\"\n",
      "}\n",
      "\u001B[36;1m\u001B[1;3m[chain/end]\u001B[0m \u001B[1m[chain:RunnableSequence] [4.14s] Exiting Chain run with output:\n",
      "\u001B[0m{\n",
      "  \"output\": \"两个黄鹂鸣翠柳的下一句是“一行白鹭上青天”。这句诗出自唐朝诗人杜甫的《绝句》。全诗为：\\n\\n两只黄鹂在翠绿的柳树间婉转地歌唱，\\n一队整齐的白鹭直冲向蔚蓝的天空。\\n我坐在窗前，可以望见西岭上堆积着终年不化的积雪，\\n还可以看到东边门外那棵数株芭蕉树。\"\n",
      "}\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "'两个黄鹂鸣翠柳的下一句是“一行白鹭上青天”。这句诗出自唐朝诗人杜甫的《绝句》。全诗为：\\n\\n两只黄鹂在翠绿的柳树间婉转地歌唱，\\n一队整齐的白鹭直冲向蔚蓝的天空。\\n我坐在窗前，可以望见西岭上堆积着终年不化的积雪，\\n还可以看到东边门外那棵数株芭蕉树。'"
      ]
     },
     "execution_count": 41,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 41
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "### 1.4.TransformChain",
   "id": "319e445d61293aab"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-04-14T13:14:23.562217Z",
     "start_time": "2025-04-14T13:14:22.743032Z"
    }
   },
   "cell_type": "code",
   "source": [
    "# TransformChain demo: run the input through a transform function, then hand the result to an LLMChain for summarization\n",
    "from langchain_ollama import ChatOllama\n",
    "from langchain_core.prompts import PromptTemplate\n",
    "from langchain.chains import (\n",
    "    LLMChain,\n",
    "    TransformChain,\n",
    "    SimpleSequentialChain\n",
    ")\n",
    "\n",
    "# TransformChain maps the \"output_text\" input to a \"summary\" output via the transform function;\n",
    "# here the transform is a simple pass-through, but it can run arbitrary Python\n",
    "transform_chain = TransformChain(\n",
    "    input_variables=[\"output_text\"], \n",
    "    output_variables=[\"summary\"], \n",
    "    transform=lambda input_dict: {\"summary\": input_dict[\"output_text\"]},\n",
    ")\n",
    "\n",
    "llm_chain = LLMChain(\n",
    "    prompt=PromptTemplate(template=\"总结一下内容：{summary} 总结：\", input_variables=[\"summary\"]), \n",
    "    llm=ChatOllama(base_url=\"http://10.2.4.31:11434\", model=\"qwen2.5:latest\"),\n",
    "    verbose=True\n",
    ")\n",
    "\n",
    "chain = SimpleSequentialChain(\n",
    "    chains=[transform_chain, llm_chain], \n",
    "    verbose=True\n",
    ")\n",
    "chain.invoke(\"你是谁？我的名字是张三，特别喜欢打球\")\n"
   ],
   "id": "2179989963b640de",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001B[1m> Entering new SimpleSequentialChain chain...\u001B[0m\n",
      "\u001B[36;1m\u001B[1;3m你是谁？我的名字是张三，特别喜欢打球\u001B[0m\n",
      "\n",
      "\n",
      "\u001B[1m> Entering new LLMChain chain...\u001B[0m\n",
      "Prompt after formatting:\n",
      "\u001B[32;1m\u001B[1;3m总结一下内容：你是谁？我的名字是张三，特别喜欢打球 总结：\u001B[0m\n",
      "\n",
      "\u001B[1m> Finished chain.\u001B[0m\n",
      "\u001B[33;1m\u001B[1;3m你叫张三，特别喜欢打篮球或其它球类运动。\u001B[0m\n",
      "\n",
      "\u001B[1m> Finished chain.\u001B[0m\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'input': '你是谁？我的名字是张三，特别喜欢打球', 'output': '你叫张三，特别喜欢打篮球或其它球类运动。'}"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 6
  },
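  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "The transform above is a plain pass-through. As a minimal sketch (illustrative names, not executed here), the transform function can run arbitrary Python, e.g. collapsing whitespace and truncating the input before it reaches the LLM:",
   "id": "9f3c1a7b2d8e4f50"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "# Sketch only: clean the text before handing it to the LLM.\n",
    "# clean_and_truncate / raw_text / clean_text are illustrative names, not from the lesson above.\n",
    "from langchain.chains import TransformChain\n",
    "\n",
    "def clean_and_truncate(inputs: dict) -> dict:\n",
    "    text = \" \".join(inputs[\"raw_text\"].split())  # collapse runs of whitespace\n",
    "    return {\"clean_text\": text[:100]}  # cap the length at 100 characters\n",
    "\n",
    "clean_chain = TransformChain(\n",
    "    input_variables=[\"raw_text\"],\n",
    "    output_variables=[\"clean_text\"],\n",
    "    transform=clean_and_truncate,\n",
    ")\n",
    "clean_chain.invoke({\"raw_text\": \"  你是谁？   我的名字是张三  \"})\n"
   ],
   "id": "4b8d2e6a1c3f7950",
   "outputs": [],
   "execution_count": null
  },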
  {
   "metadata": {},
   "cell_type": "code",
   "outputs": [],
   "execution_count": null,
   "source": "",
   "id": "1484d6d68352be47"
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "langchain",
   "language": "python",
   "name": "langchain"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
     "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
    "version": "3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
