{
 "cells": [
  {
   "cell_type": "markdown",
   "source": [
    "# 根据输入动态路由逻辑\n",
    "路由允许您创建非确定性链，其中前一步的输出定义了下一步。路由有助于在与LLMs的交互中提供结构和一致性。\n",
    "\n",
    "有两种方法可以执行路由：\n",
    "\n",
    "1. 使用`RunnableBranch`。\n",
    "2. Conditionally return runnables from a RunnableLambda (recommended)。\n",
    "\n",
    "我们将使用一个两步序列来说明这两种方法，其中第一步将将输入问题分类为`LangChain`，`Anthropic`或`Other`，然后路由到相应的提示链。\n",
    "\n",
    "## 使用RunnableBranch\n",
    "`RunnableBranch` 是 `LangChain` 库中的一个组件，用于在工作流中实现条件分支逻辑。它允许根据特定条件选择不同的执行路径。下面是 RunnableBranch 的基本用法和示例：\n",
    "\n",
    "基本用法**\n",
    "\n",
    "1. 定义条件：定义一个条件函数，该函数返回一个布尔值。\n",
    "  \n",
    "2. 定义分支：定义两个分支，一个用于条件为真时执行，另一个用于条件为假时执行。\n",
    "  \n",
    "3. 创建 RunnableBranch：使用 RunnableBranch 类将条件和分支组合起来。\n",
    "\n",
    "> 如果没有提供的条件匹配，则运行默认的可运行。\n",
    "\n",
    "简单样例："
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "fc91227873ccc458"
  },
  {
   "cell_type": "code",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+-------------+  \n",
      "| BranchInput |  \n",
      "+-------------+  \n",
      "        *        \n",
      "        *        \n",
      "        *        \n",
      "   +--------+    \n",
      "   | Branch |    \n",
      "   +--------+    \n",
      "        *        \n",
      "        *        \n",
      "        *        \n",
      "+--------------+ \n",
      "| BranchOutput | \n",
      "+--------------+ \n"
     ]
    }
   ],
   "source": [
    "from langchain_core.runnables import RunnableBranch\n",
    "\n",
    "branch = RunnableBranch(\n",
    "    (lambda x: isinstance(x, str), lambda x: x.upper()),\n",
    "    (lambda x: isinstance(x, int), lambda x: x + 1),\n",
    "    (lambda x: isinstance(x, float), lambda x: x * 2),\n",
    "    lambda x: \"goodbye\",\n",
    ")\n",
    "\n",
    "branch.invoke(\"hello\")  # \"HELLO\"\n",
    "branch.invoke(None)  # \"goodbye\"\n",
    "branch.get_graph().print_ascii()"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-10-29T07:07:10.278014Z",
     "start_time": "2024-10-29T07:07:10.261658Z"
    }
   },
   "id": "366be709ead02eee",
   "execution_count": 12
  },
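  {
   "cell_type": "markdown",
   "source": [
    "The branch above is equivalent to the following plain-Python dispatch, which makes the first-match-wins semantics explicit (a sketch for illustration only, not part of the LangChain API):"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "1a2b3c4d5e6f7081"
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "def branch_equivalent(x):\n",
    "    # Each (condition, runnable) pair is tried in order; the first condition\n",
    "    # that returns True wins. The final positional argument to RunnableBranch\n",
    "    # plays the role of the default, used when no condition matches.\n",
    "    pairs = [\n",
    "        (lambda v: isinstance(v, str), lambda v: v.upper()),\n",
    "        (lambda v: isinstance(v, int), lambda v: v + 1),\n",
    "        (lambda v: isinstance(v, float), lambda v: v * 2),\n",
    "    ]\n",
    "    for condition, runnable in pairs:\n",
    "        if condition(x):\n",
    "            return runnable(x)\n",
    "    return \"goodbye\"  # default branch\n",
    "\n",
    "\n",
    "branch_equivalent(\"hello\"), branch_equivalent(3), branch_equivalent(None)"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "1a2b3c4d5e6f7082"
  },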
  {
   "cell_type": "markdown",
   "source": [
    "以下是它在实际操作中的示例,首先，让我们创建一个链，将传入的问题标识为LangChain，Anthropic或Other："
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "ac84c9c56b2f79e2"
  },
  {
   "cell_type": "code",
   "outputs": [
    {
     "data": {
      "text/plain": "'Anthropic'"
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import os\n",
    "\n",
    "from dotenv import load_dotenv\n",
    "from langchain_community.llms.tongyi import Tongyi\n",
    "from langchain_core.output_parsers import StrOutputParser\n",
    "from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder, PromptTemplate\n",
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "load_dotenv()\n",
    "\n",
    "# 请注意，我们将max_retries = 0设置为避免在RateLimits等情况下重试\n",
    "llm1 = ChatOpenAI(\n",
    "    # 若没有配置环境变量，请用百炼API Key将下行替换为：api_key=\"sk-xxx\",\n",
    "    openai_api_key=os.getenv(\"DASHSCOPE_API_KEY\"),\n",
    "    openai_api_base=\"https://dashscope.aliyuncs.com/compatible-mode/v1\",\n",
    "    model_name=\"qwen-max\"\n",
    ")\n",
    "#\n",
    "chain = PromptTemplate.from_template(\n",
    "    \"\"\"Given the user question below, classify it as either being about `LangChain`, `Anthropic`, or `Other`.\n",
    "\n",
    "Do not respond with more than one word.\n",
    "\n",
    "<question>\n",
    "{question}\n",
    "</question>\n",
    "\n",
    "Classification:\"\"\"\n",
    ") | Tongyi() | StrOutputParser()\n",
    "chain.invoke({\"question\": \"how do I call Anthropic?\"})"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-10-29T07:07:11.155037Z",
     "start_time": "2024-10-29T07:07:10.283211Z"
    }
   },
   "id": "f0b2c449fdf46d8b",
   "execution_count": 13
  },
  {
   "cell_type": "markdown",
   "source": [
    "创建三个子链："
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "3fe003e194f65ed6"
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "lang_chain = PromptTemplate.from_template(\n",
    "    \"\"\"\n",
    "    You are an expert in langchain. \\\n",
    "Always answer questions starting with \"As Harrison Chase told me\". \\\n",
    "Respond to the following question:\n",
    "\n",
    "Question: {question}\n",
    "Answer:\n",
    "    \"\"\"\n",
    ") | ChatOpenAI(\n",
    "    # 若没有配置环境变量，请用百炼API Key将下行替换为：api_key=\"sk-xxx\",\n",
    "    openai_api_key=os.getenv(\"DASHSCOPE_API_KEY\"),\n",
    "    openai_api_base=\"https://dashscope.aliyuncs.com/compatible-mode/v1\",\n",
    "    model_name=\"qwen-max\"\n",
    ")\n",
    "\n",
    "anthopic_chain = PromptTemplate.from_template(\n",
    "    \"\"\"\n",
    "    You are an expert in anthropic. \\\n",
    "Always answer questions starting with \"As Dario Amodei told me\". \\\n",
    "Respond to the following question:\n",
    "\n",
    "Question: {question}\n",
    "Answer:\n",
    "\"\"\"\n",
    ") | ChatOpenAI(\n",
    "    # 若没有配置环境变量，请用百炼API Key将下行替换为：api_key=\"sk-xxx\",\n",
    "    openai_api_key=os.getenv(\"DASHSCOPE_API_KEY\"),\n",
    "    openai_api_base=\"https://dashscope.aliyuncs.com/compatible-mode/v1\",\n",
    "    model_name=\"qwen-max\"\n",
    ")\n",
    "\n",
    "general_chain = PromptTemplate.from_template(\n",
    "    \"\"\"\n",
    "    Respond to the following question:\n",
    "\n",
    "Question: {question}\n",
    "Answer:\n",
    "\"\"\"\n",
    ") | ChatOpenAI(\n",
    "    # 若没有配置环境变量，请用百炼API Key将下行替换为：api_key=\"sk-xxx\",\n",
    "    openai_api_key=os.getenv(\"DASHSCOPE_API_KEY\"),\n",
    "    openai_api_base=\"https://dashscope.aliyuncs.com/compatible-mode/v1\",\n",
    "    model_name=\"qwen-max\"\n",
    ")"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-10-29T07:07:46.473247Z",
     "start_time": "2024-10-29T07:07:46.245096Z"
    }
   },
   "id": "79108e2e9c427df2",
   "execution_count": 17
  },
  {
   "cell_type": "code",
   "outputs": [
    {
     "data": {
      "text/plain": "AIMessage(content='To interact with Anthropic\\'s AI models, such as Claude, you typically need to use their API. Here’s a general guide on how to do that:\\n\\n1. **Sign Up and Get Access:**\\n   - First, you need to sign up for access to Anthropic\\'s services. Visit the [Anthropic website](https://www.anthropic.com/) and follow the instructions to request access.\\n   - Once approved, you will receive an API key.\\n\\n2. **Set Up Your Environment:**\\n   - Ensure you have a development environment set up. You can use any programming language that can make HTTP requests, but Python is commonly used for its simplicity and rich library support.\\n\\n3. **Install Required Libraries:**\\n   - If you are using Python, you might want to install the `requests` library to handle HTTP requests. You can install it using pip:\\n     ```bash\\n     pip install requests\\n     ```\\n\\n4. **Make an API Request:**\\n   - Use your API key to authenticate and make a request to the Anthropic API. Here is a basic example in Python:\\n\\n     ```python\\n     import requests\\n\\n     # Replace \\'your_api_key\\' with your actual API key\\n     api_key = \\'your_api_key\\'\\n     \\n     # The URL for the Anthropic API\\n     url = \\'https://api.anthropic.com/v1/complete\\'\\n     \\n     # The prompt you want to send to the AI\\n     prompt = \"Hello, Claude! 
How can you help me today?\"\\n     \\n     # The data payload for the request\\n     data = {\\n         \\'prompt\\': prompt,\\n         \\'model\\': \\'claude-2\\',  # or another model version\\n         \\'max_tokens_to_sample\\': 100  # Adjust as needed\\n     }\\n     \\n     # Set the headers with the API key\\n     headers = {\\n         \\'Content-Type\\': \\'application/json\\',\\n         \\'X-API-Key\\': api_key\\n     }\\n     \\n     # Make the POST request\\n     response = requests.post(url, json=data, headers=headers)\\n     \\n     # Print the response\\n     if response.status_code == 200:\\n         print(response.json())\\n     else:\\n         print(f\"Error: {response.status_code}\")\\n         print(response.text)\\n     ```\\n\\n5. **Handle the Response:**\\n   - The response from the API will be in JSON format. Parse the response to get the generated text or other information.\\n\\n6. **Follow Best Practices:**\\n   - Always handle your API key securely. Do not hard-code it in your scripts or share it publicly.\\n   - Review the API documentation for any additional parameters or features you might need.\\n\\nBy following these steps, you should be able to call Anthropic\\'s AI models and integrate them into your applications.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 569, 'prompt_tokens': 36, 'total_tokens': 605, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'qwen-max', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-461fe7c2-ef0f-4370-a6de-236eb9b4e657-0', usage_metadata={'input_tokens': 36, 'output_tokens': 569, 'total_tokens': 605, 'input_token_details': {}, 'output_token_details': {}})"
     },
     "execution_count": 25,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_core.runnables import RunnableBranch\n",
    "\n",
    "branch = RunnableBranch(\n",
    "    (lambda x: \"langchain\" in x[\"topic\"].lower(), lang_chain),\n",
    "    (lambda x: \"Anthropic\" in x[\"topic\"].lower(), anthopic_chain),\n",
    "    general_chain,\n",
    ")\n",
    "\n",
    "full_chain = {\"topic\": chain, \"question\": lambda x: x[\"question\"]} | branch\n",
    "full_chain.invoke({\"question\": \"how do I call Anthropic?\"})\n",
    "# full_chain.invoke()"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-10-29T07:16:04.176705Z",
     "start_time": "2024-10-29T07:15:27.199041Z"
    }
   },
   "id": "a4e382a7048913b6",
   "execution_count": 25
  },
  {
   "cell_type": "code",
   "outputs": [
    {
     "data": {
      "text/plain": "AIMessage(content='The answer to 2 + 2 is 4.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 12, 'prompt_tokens': 36, 'total_tokens': 48, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'qwen-max', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-5ccdacba-e1c9-4ba8-b2a7-1377ec7e4790-0', usage_metadata={'input_tokens': 36, 'output_tokens': 12, 'total_tokens': 48, 'input_token_details': {}, 'output_token_details': {}})"
     },
     "execution_count": 26,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "full_chain.invoke({\"question\": \"whats 2 + 2\"})"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-10-29T07:16:14.799769Z",
     "start_time": "2024-10-29T07:16:12.338680Z"
    }
   },
   "id": "a8d85322077e4ccb",
   "execution_count": 26
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "full_chain.invoke({\"question\": \"how do I use LangChain?\"})"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "83b914bdbd0d57e2"
  },
  {
   "cell_type": "markdown",
   "source": [
    "## 使用自定义函数（推荐）\n",
    "还可以使用自定义函数在不同的输出之间进行路由。列子如下："
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "136d23ba73d06e35"
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "def route(info: dict):\n",
    "    if \"langchain\" in info[\"topic\"].lower():\n",
    "        return lang_chain\n",
    "    elif \"anthropic\" in info[\"topic\"].lower():\n",
    "        return anthopic_chain\n",
    "    else:\n",
    "        return general_chain"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-10-29T07:18:52.142327Z",
     "start_time": "2024-10-29T07:18:52.137240Z"
    }
   },
   "id": "55496e2b7e97a50d",
   "execution_count": 27
  },
  {
   "cell_type": "code",
   "outputs": [
    {
     "data": {
      "text/plain": "AIMessage(content='As Dario Amodei told me, to call or get in touch with Anthropic, you can visit their official website and look for a contact page. They may have an email address or a form that you can fill out to send them a message. Additionally, for more specific inquiries, such as press, careers, or partnerships, they often provide dedicated contact points or emails on their site. Make sure to check the most current information available on their website, as contact details can change.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 99, 'prompt_tokens': 59, 'total_tokens': 158, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'qwen-max', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-a0cbc5e3-65ce-4dfe-823d-88d2f0c8a58a-0', usage_metadata={'input_tokens': 59, 'output_tokens': 99, 'total_tokens': 158, 'input_token_details': {}, 'output_token_details': {}})"
     },
     "execution_count": 28,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from langchain_core.runnables import RunnableLambda\n",
    "\n",
    "full_chain = {\"topic\": chain, \"question\": lambda x: x[\"question\"]} | RunnableLambda(route)\n",
    "full_chain.invoke({\"question\": \"how do I call Anthropic?\"})"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-10-29T07:20:24.958452Z",
     "start_time": "2024-10-29T07:20:17.770454Z"
    }
   },
   "id": "3301e5ec42b60032",
   "execution_count": 28
  },
  {
   "cell_type": "code",
   "outputs": [
    {
     "data": {
      "text/plain": "AIMessage(content='The answer to 2 + 2 is 4.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 12, 'prompt_tokens': 36, 'total_tokens': 48, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'qwen-max', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-6e382541-4ac2-49d3-896d-b478b813b2a8-0', usage_metadata={'input_tokens': 36, 'output_tokens': 12, 'total_tokens': 48, 'input_token_details': {}, 'output_token_details': {}})"
     },
     "execution_count": 29,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "full_chain.invoke({\"question\": \"whats 2 + 2\"})"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-10-29T07:21:05.959051Z",
     "start_time": "2024-10-29T07:21:04.098520Z"
    }
   },
   "id": "99e78e03342e8df3",
   "execution_count": 29
  },
  {
   "cell_type": "markdown",
   "source": [
    "## 基于语义相似性路由\n",
    "将template嵌入到一个向量空间中，然后根据输入问题使用相似度来决定路由。"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "ac9431034cc9a8f9"
  },
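  {
   "cell_type": "markdown",
   "source": [
    "The routing decision below reduces to: embed the query, compute its cosine similarity to each template embedding, and pick the argmax. A minimal sketch of that decision with toy vectors (hypothetical stand-ins for real embedding output):"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "2b3c4d5e6f708193"
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Toy 3-dimensional stand-ins for the two template embeddings;\n",
    "# real embeddings come from DashScopeEmbeddings in the next cell.\n",
    "physics_vec = np.array([0.9, 0.1, 0.0])\n",
    "math_vec = np.array([0.1, 0.9, 0.0])\n",
    "template_vecs = [physics_vec, math_vec]\n",
    "\n",
    "\n",
    "def cosine(a, b):\n",
    "    # Cosine similarity: dot product normalized by the vector lengths.\n",
    "    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))\n",
    "\n",
    "\n",
    "# A query vector that sits closer to the physics template.\n",
    "query_vec = np.array([0.8, 0.2, 0.1])\n",
    "scores = [cosine(query_vec, t) for t in template_vecs]\n",
    "best = int(np.argmax(scores))  # index 0 selects the physics template\n",
    "best"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "2b3c4d5e6f708194"
  },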
  {
   "cell_type": "code",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Using PHYSICS\n"
     ]
    },
    {
     "data": {
      "text/plain": "'Black body radiation refers to the electromagnetic radiation emitted by an idealized physical body known as a \"black body.\" This body is considered ideal because it absorbs all incident electromagnetic radiation, regardless of frequency or angle. Because it perfectly absorbs all incoming light, it also perfectly emits radiation when heated. The spectrum of this emitted radiation depends only on the body\\'s temperature and not on its composition or the nature of the incoming radiation.\\n\\nA key characteristic of black body radiation is that it follows a specific distribution of energy across different wavelengths, which can be described mathematically by Planck\\'s law. As the temperature of the black body increases, the peak of the emitted radiation shifts to shorter wavelengths (higher frequencies) and the total amount of emitted radiation increases. This is why, for example, objects heated to high temperatures first glow red, then yellow, and eventually white or blue as the temperature rises.\\n\\nExamples of approximate black bodies in nature include stars, including the Sun, and even a simple incandescent light bulb filament.'"
     },
     "execution_count": 31,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "\n",
    "from langchain_core.runnables import RunnablePassthrough\n",
    "from langchain_community.utils.math import cosine_similarity\n",
    "from langchain_community.embeddings import DashScopeEmbeddings\n",
    "\n",
    "physics_template = \"\"\"\n",
    "You are a very smart physics professor. \\\n",
    "You are great at answering questions about physics in a concise and easy to understand manner. \\\n",
    "When you don't know the answer to a question you admit that you don't know.\n",
    "\n",
    "Here is a question:\n",
    "{query}\n",
    "\"\"\"\n",
    "\n",
    "math_template = \"\"\"\n",
    "You are a very good mathematician. You are great at answering math questions. \\\n",
    "You are so good because you are able to break down hard problems into their component parts, \\\n",
    "answer the component parts, and then put them together to answer the broader question.\n",
    "\n",
    "Here is a question:\n",
    "{query}\n",
    "\"\"\"\n",
    "embeddings = DashScopeEmbeddings(\n",
    "    dashscope_api_key=os.getenv(\"DASHSCOPE_API_KEY\"),\n",
    ")\n",
    "prompt_templates = [physics_template, math_template]\n",
    "prompt_embeddings = embeddings.embed_documents(prompt_templates)\n",
    "\n",
    "\n",
    "def prompt_router(input_dict: dict) -> PromptTemplate:\n",
    "    \"\"\"\n",
    "    基于语义相似性路由\n",
    "    :param input_dict: \n",
    "    :return: \n",
    "    \"\"\"\n",
    "    query_embedding = embeddings.embed_query(input_dict[\"query\"])\n",
    "    similarity = cosine_similarity([query_embedding], prompt_embeddings)[0]\n",
    "    most_similar = prompt_templates[similarity.argmax()]\n",
    "    print(\"Using MATH\" if most_similar == math_template else \"Using PHYSICS\")\n",
    "    return PromptTemplate.from_template(most_similar)\n",
    "\n",
    "\n",
    "chain = {\"query\": RunnablePassthrough()} | RunnableLambda(prompt_router) | ChatOpenAI(\n",
    "    # 若没有配置环境变量，请用百炼API Key将下行替换为：api_key=\"sk-xxx\",\n",
    "    openai_api_key=os.getenv(\"DASHSCOPE_API_KEY\"),\n",
    "    openai_api_base=\"https://dashscope.aliyuncs.com/compatible-mode/v1\",\n",
    "    model_name=\"qwen-max\"\n",
    ") | StrOutputParser()\n",
    "\n",
    "chain.invoke(\"What is black body radiation?\")"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-10-29T07:35:50.407374Z",
     "start_time": "2024-10-29T07:35:39.111991Z"
    }
   },
   "id": "23496a069bb6a28d",
   "execution_count": 31
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [],
   "metadata": {
    "collapsed": false
   },
   "id": "97a2abef0f74b21f"
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
