{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "de0c09d10f688a1",
   "metadata": {
    "collapsed": false
   },
   "source": [
     "# Passing data through\n",
     "`RunnablePassthrough` **passes its input through**, either unchanged or with extra keys added. It is typically used together with `RunnableParallel` to assign data to a new key in a map.\n",
     "Called on its own, `RunnablePassthrough()` simply takes the input and passes it along. Its parameters are:\n",
    "- func (Callable[[Other], None], optional) – Function to be called with the input.\n",
    "- afunc (Callable[[Other], Awaitable[None]], optional) – Async function to be called with the input.\n",
    "- input_type (Optional[Type[Other]], optional) – Type of the input.\n",
    "- **kwargs (Any) – Additional keyword arguments.\n",
    "\n",
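     "A minimal sketch of the standalone behavior (assuming `langchain_core` is installed):\n",
     "\n",
     "```python\n",
     "from langchain_core.runnables import RunnablePassthrough\n",
     "\n",
     "# The input is returned unchanged\n",
     "RunnablePassthrough().invoke({\"num\": 1})  # {'num': 1}\n",
     "```\n",
     "\n",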
     "## RunnableParallel example 1:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "dc2bd363b68fc9f3",
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-10-28T05:50:05.252643Z",
     "start_time": "2024-10-28T05:50:05.238856Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": "{'origin': {'num': 1}, 'modified': 2}"
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_core.runnables import RunnableParallel, RunnablePassthrough, RunnableLambda\n",
    "\n",
    "runnable = RunnableParallel(\n",
    "    origin=RunnablePassthrough(),\n",
    "    modified=lambda x: x[\"num\"] + 1,\n",
    ")\n",
    "\n",
    "runnable.invoke({\"num\": 1})  # {'origin': {'num': 1}, 'modified': 2}"
   ]
  },
  {
   "cell_type": "markdown",
   "source": [
     "## RunnableParallel chain example\n"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "d37ae0ff58172ef8"
  },
  {
   "cell_type": "code",
   "outputs": [
    {
     "data": {
      "text/plain": "{'original': 'Hello', 'parsed': 'olleH'}"
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "## Example 2\n",
    "\n",
    "def fake_llm(prompt: str) -> str:  # Fake LLM for the example\n",
    "    return prompt\n",
    "\n",
    "\n",
    "chain = RunnableLambda(fake_llm) | {\n",
    "    \"original\": RunnablePassthrough(),  # Original LLM output\n",
    "    \"parsed\": lambda text: text[::-1]  # Parsing logic\n",
    "}\n",
     "chain.invoke(\"Hello\")  # {'original': 'Hello', 'parsed': 'olleH'}"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-10-28T05:56:24.422573Z",
     "start_time": "2024-10-28T05:56:24.413294Z"
    }
   },
   "id": "d11ec840a70c3c6a",
   "execution_count": 9
  },
  {
   "cell_type": "markdown",
   "source": [
     "## Passing the input through while adding an output key\n",
     "`RunnablePassthrough.assign(...)` takes the input and adds the extra keys computed by the functions passed to `assign`."
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "8eae64b8c03d63fd"
  },
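  {
   "cell_type": "markdown",
   "id": "f3a9c1d2e4b5a6c7",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "A minimal standalone sketch of `assign` (the key names and values here are illustrative):\n",
    "\n",
    "```python\n",
    "from langchain_core.runnables import RunnablePassthrough\n",
    "\n",
    "# assign keeps the input dict and adds new keys computed from it\n",
    "runnable = RunnablePassthrough.assign(doubled=lambda d: d[\"num\"] * 2)\n",
    "runnable.invoke({\"num\": 3})  # {'num': 3, 'doubled': 6}\n",
    "```"
   ]
  },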
  {
   "cell_type": "code",
   "outputs": [
    {
     "data": {
      "text/plain": "{'llm1': 'hello', 'llm2': 'hello', 'total_chars': 10}"
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "runnable = {\n",
    "               'llm1': fake_llm,\n",
    "               'llm2': fake_llm,\n",
    "           } | RunnablePassthrough.assign(total_chars=lambda inputs: len(inputs['llm1'] + inputs['llm2']))\n",
    "\n",
    "runnable.invoke('hello')  #{'llm1': 'hello', 'llm2': 'hello', 'total_chars': 10}"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-10-28T06:07:24.172748Z",
     "start_time": "2024-10-28T06:07:24.162539Z"
    }
   },
   "id": "be2b4f2566b4647a",
   "execution_count": 10
  },
  {
   "cell_type": "markdown",
   "source": [
     "# Retrieval\n",
     "\n",
     "Here the prompt expects a map with \"context\" and \"question\" keys, while the user input is just the question. We therefore use our retriever to fetch the context and pass the user input through under the \"question\" key. In this setup, RunnablePassthrough lets us forward the user's question to the prompt and the model."
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "366eae69e2f53fda"
  },
  {
   "cell_type": "code",
   "outputs": [
    {
     "data": {
      "text/plain": "'Based on the provided context, Harrison worked at Kensho.'"
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_core.output_parsers import StrOutputParser\n",
    "from langchain_openai import ChatOpenAI\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "from langchain_community.vectorstores import FAISS\n",
    "from langchain_community.embeddings import DashScopeEmbeddings\n",
    "from langchain_community.llms.tongyi import Tongyi\n",
    "from dotenv import load_dotenv\n",
    "import os\n",
    "\n",
    "load_dotenv()\n",
    "\n",
    "vectorstore = FAISS.from_texts([\"harrison worked at kensho\", \"tomcat worked at it\"], DashScopeEmbeddings(\n",
    "    dashscope_api_key=os.getenv(\"DASHSCOPE_API_KEY\"),\n",
    "))\n",
    "\n",
     "# Build a retriever over the vector store\n",
    "retriever = vectorstore.as_retriever()\n",
     "template = \"\"\"Answer the question based only on the following context:\n",
     "{context}\n",
     "\n",
     "Question: {question}\n",
     "\"\"\"\n",
    "\n",
    "prompt = ChatPromptTemplate.from_template(template)\n",
    "llm = ChatOpenAI(\n",
     "    # If the environment variable is not set, replace the next line with your Bailian (Model Studio) API key: api_key=\"sk-xxx\",\n",
    "    openai_api_key=os.getenv(\"DASHSCOPE_API_KEY\"),\n",
    "    openai_api_base=\"https://dashscope.aliyuncs.com/compatible-mode/v1\",\n",
    "    model_name=\"qwen-max\"\n",
    ")\n",
     "# Or, equivalently:\n",
    "llm2 = Tongyi()\n",
    "\n",
    "retriever_chain = {\"context\": retriever, \"question\": RunnablePassthrough()} | prompt | llm | StrOutputParser()\n",
    "\n",
    "retriever_chain.invoke(\"where did harrison work?\")\n",
    "\n"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-10-28T06:28:25.863991Z",
     "start_time": "2024-10-28T06:28:24.008163Z"
    }
   },
   "id": "308e10201591f481",
   "execution_count": 14
  },
  {
   "cell_type": "markdown",
   "source": [
     "# Binding runtime arguments\n",
     "Sometimes we want to invoke a Runnable inside a sequence with constant arguments that are neither part of the preceding Runnable's output nor part of the user input. We can use `Runnable.bind()` to pass these arguments along.\n",
     "\n",
     "For example, we first build a chain without any bound arguments:"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "1f92fc4ee7e59e4a"
  },
  {
   "cell_type": "code",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "EQUATION: \\( x^3 + 7 = 12 \\)\n",
      "\n",
      "SOLUTION: \n",
      "1. Subtract 7 from both sides:\n",
      "   \\[\n",
      "   x^3 + 7 - 7 = 12 - 7\n",
      "   \\]\n",
      "   Simplifying, we get:\n",
      "   \\[\n",
      "   x^3 = 5\n",
      "   \\]\n",
      "\n",
      "2. Take the cube root of both sides to solve for \\( x \\):\n",
      "   \\[\n",
      "   x = \\sqrt[3]{5}\n",
      "   \\]\n",
      "\n",
      "So, the solution is:\n",
      "\\[\n",
      "x = \\sqrt[3]{5}\n",
      "\\]\n"
     ]
    }
   ],
   "source": [
    "\n",
    "from langchain_core.messages import SystemMessage\n",
    "from langchain_core.prompts import HumanMessagePromptTemplate\n",
    "\n",
    "prompt = ChatPromptTemplate.from_messages([\n",
    "    SystemMessage(\n",
    "        content=\"Write out the following equation using algebraic symbols then solve it. Use the format\\n\\nEQUATION:...\\nSOLUTION:...\\n\\n\",\n",
    "    ),\n",
    "    HumanMessagePromptTemplate.from_template(\"{equation_statement}\"),\n",
    "])\n",
    "llm = ChatOpenAI(\n",
     "    # If the environment variable is not set, replace the next line with your Bailian (Model Studio) API key: api_key=\"sk-xxx\",\n",
    "    openai_api_key=os.getenv(\"DASHSCOPE_API_KEY\"),\n",
    "    openai_api_base=\"https://dashscope.aliyuncs.com/compatible-mode/v1\",\n",
    "    model_name=\"qwen-max\", temperature=0\n",
    ")\n",
    "\n",
    "chain = {\"equation_statement\": RunnablePassthrough()} | prompt | llm | StrOutputParser()\n",
    "\n",
    "print(chain.invoke(\"x raised to the third plus seven equals 12\"))"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-10-28T06:47:41.199710Z",
     "start_time": "2024-10-28T06:47:32.294434Z"
    }
   },
   "id": "2a3ef328f61b195e",
   "execution_count": 17
  },
  {
   "cell_type": "markdown",
   "source": [
     "Now suppose we want to call the model with a specific stop sequence:"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "ebebb872fe574352"
  },
  {
   "cell_type": "code",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "EQUATION: \\( x^3 + 7 = 12 \\)\n",
      "\n"
     ]
    }
   ],
   "source": [
    "chain = {\"equation_statement\": RunnablePassthrough()} | prompt | llm.bind(stop=\"SOLUTION\") | StrOutputParser()\n",
    "\n",
    "print(chain.invoke(\"x raised to the third plus seven equals 12\"))"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-10-28T06:48:49.924354Z",
     "start_time": "2024-10-28T06:48:48.167375Z"
    }
   },
   "id": "487fd8b1acd81217",
   "execution_count": 18
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
