{
 "cells": [
  {
   "cell_type": "markdown",
   "source": [
    "# Adding Fallbacks\n",
    "An LLM application has many potential failure points: the LLM API may go down, the model may produce poor output, another integration may break, and so on. Fallbacks help you handle and isolate these problems gracefully. Importantly, fallbacks can be attached not only at the LLM level but also at the level of an entire runnable.\n",
    "\n",
    "## Handling LLM API errors\n",
    "This is probably the most common use case for fallbacks. A request to an LLM API can fail for any number of reasons: the API may be down, you may have hit a rate limit, and so on. Using a fallback helps guard against these failures.\n",
    "IMPORTANT: by default, the LLM wrapper catches errors and retries. When using fallbacks, you will most likely want to turn that off; otherwise the first wrapper keeps retrying instead of failing over."
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "3ce6d5ca677e8b0e"
  },
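  {
   "cell_type": "markdown",
   "source": [
    "Before the full retrieval example below, here is a minimal, library-free sketch of the fallback idea on plain Python callables (the `flaky_llm` and `backup_llm` functions are illustrative stand-ins, not part of any library): try the primary callable, and on failure try each fallback in order.\n"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "a1f4b2c3d5e60001"
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "# A minimal sketch of the fallback pattern: call the primary function;\n",
    "# if it raises, try each fallback in order; re-raise if everything fails.\n",
    "def with_fallbacks(primary, fallbacks):\n",
    "    def invoke(x):\n",
    "        try:\n",
    "            return primary(x)\n",
    "        except Exception as first_error:\n",
    "            for fb in fallbacks:\n",
    "                try:\n",
    "                    return fb(x)\n",
    "                except Exception:\n",
    "                    continue\n",
    "            raise first_error  # every model failed\n",
    "    return invoke\n",
    "\n",
    "def flaky_llm(prompt):  # illustrative: always fails\n",
    "    raise RuntimeError(\"rate limited\")\n",
    "\n",
    "def backup_llm(prompt):  # illustrative: always succeeds\n",
    "    return f\"backup answer to: {prompt}\"\n",
    "\n",
    "llm = with_fallbacks(flaky_llm, [backup_llm])\n",
    "print(llm(\"hello\"))  # falls back to backup_llm\n"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "a1f4b2c3d5e60002"
  },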
  {
   "cell_type": "code",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Based on the provided context, Tomcat worked at \"it.\" However, this answer seems incomplete or unclear because typically a company name would be expected. The information given doesn't specify what \"it\" refers to, so this is the most accurate response based on the available data.\n"
     ]
    }
   ],
   "source": [
    "from openai import RateLimitError\n",
     "from http.client import error  # 'error' is a backwards-compat alias for http.client.HTTPException\n",
    "from unittest.mock import patch\n",
    "from langchain_core.runnables import RunnablePassthrough\n",
    "from langchain_openai import ChatOpenAI\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "from langchain_community.vectorstores import FAISS\n",
    "from langchain_community.embeddings import DashScopeEmbeddings\n",
    "from langchain_community.llms.tongyi import Tongyi\n",
    "from dotenv import load_dotenv\n",
    "import os\n",
    "\n",
    "load_dotenv()\n",
    "\n",
    "vectorstore = FAISS.from_texts([\"harrison worked at kensho\", \"tomcat worked at it\"], DashScopeEmbeddings(\n",
    "    dashscope_api_key=os.getenv(\"DASHSCOPE_API_KEY\"),\n",
    "))\n",
    "\n",
     "# Build a retriever over the vector store\n",
    "retriever = vectorstore.as_retriever()\n",
     "template = \"\"\"Answer the question based only on the following context:\n",
     "{context}\n",
     "\n",
     "Question: {question}\n",
     "\"\"\"\n",
    "\n",
    "prompt = ChatPromptTemplate.from_template(template)\n",
     "# Note that we set max_retries=0 to avoid retrying on rate limits and similar errors\n",
    "llm1 = ChatOpenAI(\n",
     "    # If the environment variable is not set, replace the next line with your Bailian API key: api_key=\"sk-xxx\"\n",
    "    openai_api_key=os.getenv(\"DASHSCOPE_API_KEY\"),\n",
    "    openai_api_base=\"https://dashscope.aliyuncs.com/compatible-mode/v1\",\n",
    "    model_name=\"qwen-max\",\n",
    "    max_retries=0,\n",
    ")\n",
     "# The fallback model\n",
     "llm2 = Tongyi()\n",
     "# Attach the fallback\n",
    "llm = llm1.with_fallbacks([llm2])\n",
    "\n",
    "retriever_chain = {\"context\": retriever, \"question\": RunnablePassthrough()} | prompt | llm\n",
    "with patch(\"openai.resources.chat.completions.Completions.create\", side_effect=error):\n",
    "    try:\n",
    "        print(retriever_chain.invoke(\"where did tomcat work?\"))\n",
    "    except RateLimitError:\n",
    "        print(\"RateLimitError\")\n",
    "\n",
    "# retriever_chain.invoke(\"where did harrison work?\")\n"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-10-28T07:24:18.745689Z",
     "start_time": "2024-10-28T07:24:14.571460Z"
    }
   },
   "id": "7f0b5c779616534c",
   "execution_count": 12
  },
  {
   "cell_type": "markdown",
   "source": [
     "## Specifying the errors to handle\n",
     "If we want to be more specific about when the fallback is invoked, we can also specify which errors it should handle:\n"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "f0852e70ce7cde88"
  },
  {
   "cell_type": "code",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Based on the provided context, Tomcat worked at \"it.\" However, this answer seems incomplete or unclear. The context does not provide a specific company or organization name for where Tomcat worked. If you have more information, please let me know!\n"
     ]
    }
   ],
   "source": [
     "# Only fall back when one of these exception types is raised\n",
     "llm = llm1.with_fallbacks([llm2], exceptions_to_handle=(error,))\n",
    "retriever_chain = {\"context\": retriever, \"question\": RunnablePassthrough()} | prompt | llm\n",
    "with patch(\"openai.resources.chat.completions.Completions.create\", side_effect=error):\n",
    "    try:\n",
    "        print(retriever_chain.invoke(\"where did tomcat work?\"))\n",
    "    except RateLimitError:\n",
    "        print(\"RateLimitError\")\n"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-10-28T07:24:53.873114Z",
     "start_time": "2024-10-28T07:24:50.798345Z"
    }
   },
   "id": "99f1ae3acd4285f1",
   "execution_count": 14
  },
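  {
   "cell_type": "markdown",
   "source": [
    "As noted at the top, fallbacks can also be attached to an entire runnable, not just the LLM. A sketch, reusing the `retriever`, `prompt`, `llm1`, and `llm2` objects defined above: build two complete chains and fall back from one whole chain to the other, so a failure at any step of the primary chain triggers the backup chain.\n"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "a1f4b2c3d5e60003"
  },
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "# Chain-level fallback: if the whole primary chain fails\n",
    "# (retrieval, prompting, or the model call), run the backup chain instead.\n",
    "primary_chain = {\"context\": retriever, \"question\": RunnablePassthrough()} | prompt | llm1\n",
    "backup_chain = {\"context\": retriever, \"question\": RunnablePassthrough()} | prompt | llm2\n",
    "chain = primary_chain.with_fallbacks([backup_chain])\n",
    "# chain.invoke(\"where did harrison work?\")\n"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "a1f4b2c3d5e60004"
  },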
  {
   "cell_type": "code",
   "outputs": [],
   "source": [],
   "metadata": {
    "collapsed": false
   },
   "id": "516b1d25d18e1509"
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
