{
 "cells": [
  {
   "cell_type": "markdown",
   "source": [
    "https://developer.aliyun.com/article/1586957\n",
    "\n",
    "本文会构建一个有如下功能的 chatbot：\n",
    "• 可以生成图片\n",
    "• 可以回答用户的问题\n",
    "• 可以检索本地文档库中的信息\n",
    "• 可以从互联网进行搜索信息\n",
    "## 什么是多模态\n",
    "在前面的大部分例子中，我们跟 LLM 对话的时候都是使用了文本作为输入和输出。\n",
    "但是除了文本，我们也可以让 LLM 来为我们生成图片。\n",
    "多模态是指同时使用两种或两种以上的信息模式或表现形式。在人工智能和机器学习的背景下，\n",
    "多模态通常指的是能够处理和融合不同类型数据的系统，这些数据可能包括文本、图像、音频、视频或其他传感器数据。\n",
    "## 如和实现对本地文档的 QA\n",
    "\n",
    "在 `langchain` 中，`RetrievalQA` 是一个结合了检索（`Retrieval`）和问答（`QA`）的组件。\n",
    "\n",
    "它允许你构建一个系统，该系统能够根据用户的提问，从提供的文档或知识库中检索相关信息，并回答用户的问题。\n",
    "\n",
    "`RetrievalQA` 的工作流程如下：\n",
    "\n",
    "- 检索（`Retrieval`）：当用户提出一个问题时，`RetrievalQA` 会使用一个检索机制（本文会使用向量数据库做语义检索）\n",
    "- 阅读理解：一旦检索到相关的信息，`RetrievalQA` 会使用一个阅读理解模型来理解这些信息，并回答用户的问题。\n",
    "- 问答：最后，`RetrievalQA` 会使用一个问答模型（`ChatModel`）来生成最终的回答。\n",
    "\n",
    "`RetrievalQA` 的优势在于它能够处理大量复杂的信息，并提供精确的答案。它特别适合那些需要从大量文档中检索信息的场景，例如法律文件、医学文献、技术手册等。\n",
    "\n",
    "> 直接跟 LLM 对话的时候，一般都会有一个上下文大小限制的问题，太大的文档无法全部放入到上下文中。\n",
    "> \n",
    "> 但是可以先分片存入向量数据库中，在跟 LLM 对话之前，再从向量数据库中检索出相关的文档。最终发给 LLM 的数据只有相关的文档，这样就能够更好地回答用户的问题。\n",
    "\n",
    "### 将 pdf 存入向量数据库\n",
    "\n",
    "> 我们可以使用自己的 pdf 文档。\n",
    "\n",
    "在这个例子中，我们将会使用 `langchain` 来将一个 pdf 文档存入向量数据库中："
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "a16416e2c158ff6c"
  },
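  {
   "cell_type": "markdown",
   "source": [
    "The chunk-with-overlap idea described above can be sketched in plain Python. This is a simplified illustration only, not the actual `RecursiveCharacterTextSplitter` logic (which also prefers to split on separators such as paragraph and sentence boundaries):\n",
    "\n",
    "```python\n",
    "def split_with_overlap(text, chunk_size=200, chunk_overlap=10):\n",
    "    # Slide a fixed-size window over the text; consecutive windows share\n",
    "    # chunk_overlap characters, so content cut at a chunk boundary still\n",
    "    # appears intact in one of the chunks.\n",
    "    step = chunk_size - chunk_overlap\n",
    "    return [text[i:i + chunk_size] for i in range(0, len(text), step)]\n",
    "\n",
    "text = ''.join(chr(97 + i % 26) for i in range(500))\n",
    "chunks = split_with_overlap(text)\n",
    "# 3 chunks: each is at most 200 characters, and consecutive chunks\n",
    "# share their last/first 10 characters\n",
    "```"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "c1a2b3d4e5f60718"
  },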
  {
   "cell_type": "code",
   "outputs": [],
   "source": [
    "import os\n",
    "from dotenv import load_dotenv\n",
    "from langchain_community.embeddings import DashScopeEmbeddings\n",
    "from langchain_community.document_loaders import PyPDFLoader\n",
    "from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
    "from langchain_community.vectorstores import Chroma\n",
    "\n",
    "# 加载 pdf 文档\n",
    "loader = PyPDFLoader(\"初中数学知识点总结.pdf\")\n",
    "docs = loader.load()\n",
    "\n",
    "load_dotenv()\n",
    "\n",
    "# 文档分片\n",
    "text_spliter = RecursiveCharacterTextSplitter(chunk_size=200, chunk_overlap=10)\n",
    "splits = text_spliter.split_documents(docs)\n",
    "persist_directory = './data/'\n",
    "\n",
    "embedding = DashScopeEmbeddings(dashscope_api_key=os.getenv(\"DASHSCOPE_API_KEY\"))\n",
    "# 创建向量数据库\n",
    "vectordb = Chroma.from_documents(\n",
    "    documents=splits,\n",
    "    embedding=embedding,\n",
    "    collection_name=\"spotmax\",\n",
    "    persist_directory=persist_directory,\n",
    ")\n",
    "# 持久化向量数据库到磁盘，从 Chroma 0.4.x 版本开始，这个方法已经不再支持，因为文档会自动持久化。\n",
    "# vectordb.persist()"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-11-01T05:08:39.854398Z",
     "start_time": "2024-11-01T05:08:34.973097Z"
    }
   },
   "id": "f1e3d25d17eee3eb",
   "execution_count": 11
  },
  {
   "cell_type": "markdown",
   "source": [
    "说明：\n",
    "• PyPDFLoader 是一个用于加载 pdf 文档的类。\n",
    "• RecursiveCharacterTextSplitter 是一个用于将文档分片的类。\n",
    "• Chroma 是一个向量数据库类，用于存储和检索向量化的文档。\n",
    "• vectordb 是 Chroma 的一个实例，用于存储和检索文档。\n",
    "• vectordb.persist() 用于将向量数据库持久化到磁盘。\n",
    "\n",
    "### 使用 `RetrievalQA` 进行问答\n",
    "\n",
    "在上一步将 pdf 文档存入向量数据库之后，我们就可以通过 `Chroma` 的实例来对其做语义检索了。\n",
    "\n",
    ">关于as_retriever方法参数：\n",
    "> - **search_type**：定义了检索器应该执行哪种类型的搜索。它可以是“similarity”（默认值），\n",
    "> - “mmr”或“similarity_score_threshold”。“similarity”：这可能是基于某种相似度算法（如余弦相似度）来搜索最相似的文档。\n",
    "> - “mmr”：可能是基于最大边际相关性（Maximal Marginal Relevance）算法来搜索最相关的文档。\n",
    "> - “similarity_score_threshold”：这可能是在搜索时设置一个最小相似度阈值，只有得分高于这个阈值的文档才会被返回。\n",
    "> - **search_kwargs**：传递给搜索函数的关键字参数。这可能包括：\n",
    "> - k：要返回的文档数量（默认值：4）。\n",
    "> - score_threshold：对于“similarity_score_threshold”搜索类型，这是最小相关性阈值。\n",
    "> - fetch_k：要传递给MMR算法的文档数量（默认值：20）。\n",
    "> - lambda_mult：MMR返回的结果的多样性；1表示最小多样性，0表示最大（默认值：0.5）。\n",
    "> - filter：根据文档元数据进行过滤。"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "80f19fa221e818e4"
  },
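  {
   "cell_type": "markdown",
   "source": [
    "Under the hood, the \"similarity\" search type ranks the stored embedding vectors against the query embedding using a similarity metric such as cosine similarity. Here is a minimal plain-Python sketch of that ranking step (toy 2-dimensional vectors; real embeddings have hundreds of dimensions):\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "def cosine(a, b):\n",
    "    # Cosine similarity: dot product divided by the product of the norms\n",
    "    dot = sum(x * y for x, y in zip(a, b))\n",
    "    norm_a = math.sqrt(sum(x * x for x in a))\n",
    "    norm_b = math.sqrt(sum(y * y for y in b))\n",
    "    return dot / (norm_a * norm_b)\n",
    "\n",
    "def top_k(query_vec, doc_vecs, k=3):\n",
    "    # Rank documents by similarity to the query, highest first, and return\n",
    "    # the indices of the k best matches (analogous to search_kwargs={'k': k})\n",
    "    scored = sorted(enumerate(doc_vecs), key=lambda iv: cosine(query_vec, iv[1]), reverse=True)\n",
    "    return [i for i, _ in scored[:k]]\n",
    "\n",
    "doc_vecs = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.5, 0.5]]\n",
    "top_k([1.0, 0.0], doc_vecs, k=2)  # -> [0, 1]\n",
    "```"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "d2b3c4e5f6071829"
  },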
  {
   "cell_type": "code",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\u001B[1m> Entering new RetrievalQA chain...\u001B[0m\n",
      "\n",
      "\u001B[1m> Finished chain.\u001B[0m\n",
      "相似三角形的判定方法主要包括以下几种：\n",
      "\n",
      "1. **预备定理**：如果一条直线平行于三角形的一边，并且与其他两边相交，那么所截得的三角形与原三角形三边对应成比例。这意味着两个三角形是相似的。\n",
      "\n",
      "2. **判定定理 3**：如果一个三角形的三条边分别与另一个三角形的三条边对应成比例，那么这两个三角形相似。简而言之，就是“三边对应成比例，则两三角形相似”。\n",
      "\n",
      "3. **判定定理 4**：对于直角三角形而言，当它被斜边上的高分成两个较小的直角三角形时，这两个较小的直角三角形都与原来的直角三角形相似。\n",
      "\n",
      "这些方法提供了一种基于边长比例来判断两个三角形是否相似的方式。请注意，在实际应用中，可能还会使用到其他如角度关系等条件来进行相似性的判断。但根据提供的信息，上述为明确列出的主要判定方法。\n"
     ]
    }
   ],
   "source": [
    "from langchain.chains.retrieval_qa.base import RetrievalQA\n",
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "\n",
    "def qa(question):\n",
    "    # 语义检索\n",
    "    vectordb2 = Chroma(persist_directory='./data',\n",
    "                       embedding_function=DashScopeEmbeddings(dashscope_api_key=os.getenv(\"DASHSCOPE_API_KEY\")),\n",
    "                       collection_name='spotmax')\n",
    "    llm = ChatOpenAI(\n",
    "        # 若没有配置环境变量，请用百炼API Key将下行替换为：api_key=\"sk-xxx\",\n",
    "        openai_api_key=os.getenv(\"DASHSCOPE_API_KEY\"),\n",
    "        openai_api_base=\"https://dashscope.aliyuncs.com/compatible-mode/v1\",\n",
    "        model_name=\"qwen-max\",\n",
    "        temperature=0,\n",
    "    )\n",
    "    retriever = vectordb2.as_retriever(\n",
    "        search_typy=\"bm25\",\n",
    "        search_kwargs={\"k\": 3}\n",
    "    )\n",
    "    qa0 = RetrievalQA.from_chain_type(llm=llm, chain_type=\"stuff\", retriever=retriever, return_source_documents=True,\n",
    "                                      verbose=True)\n",
    "    result = qa0({\"query\": question})\n",
    "    return result['result']\n",
    "\n",
    "\n",
    "print(qa(\"相似三角形的判定方法？\"))"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-11-01T05:08:55.025187Z",
     "start_time": "2024-11-01T05:08:39.871316Z"
    }
   },
   "id": "f47fe6e4097e298d",
   "execution_count": 12
  },
  {
   "cell_type": "markdown",
   "source": [
    "### 让 LLM 生成图片\n",
    "根据用户的 prompt 生成一张 256x256 像素的图片，并且返回一个 markdown 链接形式的图片地址"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "67c04fce1d631a59"
  },
  {
   "cell_type": "code",
   "outputs": [
    {
     "ename": "AttributeError",
     "evalue": "'Completions' object has no attribute 'images'",
     "output_type": "error",
     "traceback": [
      "\u001B[0;31m---------------------------------------------------------------------------\u001B[0m",
      "\u001B[0;31mAttributeError\u001B[0m                            Traceback (most recent call last)",
      "Cell \u001B[0;32mIn[14], line 24\u001B[0m\n\u001B[1;32m     21\u001B[0m     markdown_url \u001B[38;5;241m=\u001B[39m \u001B[38;5;124mf\u001B[39m\u001B[38;5;124m'\u001B[39m\u001B[38;5;124m![image](\u001B[39m\u001B[38;5;132;01m{\u001B[39;00murl\u001B[38;5;132;01m}\u001B[39;00m\u001B[38;5;124m)\u001B[39m\u001B[38;5;124m'\u001B[39m\n\u001B[1;32m     22\u001B[0m     \u001B[38;5;28;01mreturn\u001B[39;00m markdown_url\n\u001B[0;32m---> 24\u001B[0m \u001B[38;5;28mprint\u001B[39m(create_image(\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124m生成三角\u001B[39m\u001B[38;5;124m\"\u001B[39m))\n",
      "Cell \u001B[0;32mIn[14], line 12\u001B[0m, in \u001B[0;36mcreate_image\u001B[0;34m(prompt)\u001B[0m\n\u001B[1;32m      4\u001B[0m \u001B[38;5;28;01mdef\u001B[39;00m \u001B[38;5;21mcreate_image\u001B[39m(prompt):\n\u001B[1;32m      5\u001B[0m     llm \u001B[38;5;241m=\u001B[39m OpenAI(\n\u001B[1;32m      6\u001B[0m         \u001B[38;5;66;03m# 若没有配置环境变量，请用百炼API Key将下行替换为：api_key=\"sk-xxx\",\u001B[39;00m\n\u001B[1;32m      7\u001B[0m         openai_api_key\u001B[38;5;241m=\u001B[39mos\u001B[38;5;241m.\u001B[39mgetenv(\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mDASHSCOPE_API_KEY\u001B[39m\u001B[38;5;124m\"\u001B[39m),\n\u001B[0;32m   (...)\u001B[0m\n\u001B[1;32m     10\u001B[0m         temperature\u001B[38;5;241m=\u001B[39m\u001B[38;5;241m0\u001B[39m,\n\u001B[1;32m     11\u001B[0m     )\n\u001B[0;32m---> 12\u001B[0m     response \u001B[38;5;241m=\u001B[39m llm\u001B[38;5;241m.\u001B[39mclient\u001B[38;5;241m.\u001B[39mimages\u001B[38;5;241m.\u001B[39mgenerate(\n\u001B[1;32m     13\u001B[0m         \u001B[38;5;66;03m# model='dall-e-2',\u001B[39;00m\n\u001B[1;32m     14\u001B[0m         prompt\u001B[38;5;241m=\u001B[39mprompt,\n\u001B[1;32m     15\u001B[0m         size\u001B[38;5;241m=\u001B[39m\u001B[38;5;124m'\u001B[39m\u001B[38;5;124m256x256\u001B[39m\u001B[38;5;124m'\u001B[39m,\n\u001B[1;32m     16\u001B[0m         quality\u001B[38;5;241m=\u001B[39m\u001B[38;5;124m'\u001B[39m\u001B[38;5;124mstandard\u001B[39m\u001B[38;5;124m'\u001B[39m,\n\u001B[1;32m     17\u001B[0m         n\u001B[38;5;241m=\u001B[39m\u001B[38;5;241m1\u001B[39m\n\u001B[1;32m     18\u001B[0m     )\n\u001B[1;32m     20\u001B[0m     url \u001B[38;5;241m=\u001B[39m response[\u001B[38;5;124m'\u001B[39m\u001B[38;5;124mimages\u001B[39m\u001B[38;5;124m'\u001B[39m][\u001B[38;5;241m0\u001B[39m][\u001B[38;5;124m'\u001B[39m\u001B[38;5;124murl\u001B[39m\u001B[38;5;124m'\u001B[39m]\n\u001B[1;32m     21\u001B[0m     markdown_url \u001B[38;5;241m=\u001B[39m 
\u001B[38;5;124mf\u001B[39m\u001B[38;5;124m'\u001B[39m\u001B[38;5;124m![image](\u001B[39m\u001B[38;5;132;01m{\u001B[39;00murl\u001B[38;5;132;01m}\u001B[39;00m\u001B[38;5;124m)\u001B[39m\u001B[38;5;124m'\u001B[39m\n",
      "\u001B[0;31mAttributeError\u001B[0m: 'Completions' object has no attribute 'images'"
     ]
    }
   ],
   "source": [
    "from langchain_openai import OpenAI\n",
    "\n",
    "\n",
    "def create_image(prompt):\n",
    "    llm = OpenAI(\n",
    "        # 若没有配置环境变量，请用百炼API Key将下行替换为：api_key=\"sk-xxx\",\n",
    "        openai_api_key=os.getenv(\"DASHSCOPE_API_KEY\"),\n",
    "        openai_api_base=\"https://dashscope.aliyuncs.com/compatible-mode/v1\",\n",
    "        model_name=\"qwen-max\",\n",
    "        temperature=0,\n",
    "    )\n",
    "    response = llm.client.images.generate(\n",
    "        # model='dall-e-2',\n",
    "        prompt=prompt,\n",
    "        size='256x256',\n",
    "        quality='standard',\n",
    "        n=1\n",
    "    )\n",
    "\n",
    "    url = response['images'][0]['url']\n",
    "    markdown_url = f'![image]({url})'\n",
    "    return markdown_url\n"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-11-01T05:12:17.505252Z",
     "start_time": "2024-11-01T05:12:17.380085Z"
    }
   },
   "id": "f0de5f7fbbb418ec",
   "execution_count": 14
  },
  {
   "cell_type": "markdown",
   "source": [
    "### 从互联网搜索信息\n",
    "我们可以使用 GoogleSerperAPIWrapper 来从互联网搜索信息："
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "1e52a7a1eb19bde3"
  },
  {
   "cell_type": "code",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "百度搜索结果\n"
     ]
    }
   ],
   "source": [
    "\n",
    "def query_web(question):\n",
    "    \"\"\"查询百度搜索结果\"\"\"\n",
    "    # search = GoogleSerperAPIWrapper()\n",
    "    return \"百度搜索结果\"\n"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-11-01T05:20:26.343508Z",
     "start_time": "2024-11-01T05:20:26.336811Z"
    }
   },
   "id": "2e310daf4096d2d9",
   "execution_count": 18
  },
  {
   "cell_type": "markdown",
   "source": [
    "### 让 chatbot 理解不同的操作？\n",
    "可以使用 `Agent` 来让 chatbot 理解不同的操作：\n",
    "\n",
    "1. 将上面提供的几种操作封装成不同的 `Tool`。\n",
    "2. 创建一个 `AgentExecutor`，根据用户的输入，选择合适的 `Tool` 来执行。\n",
    "\n",
    "**ZeroShotAgent** 是一种在自然语言处理（NLP）和机器学习中使用的代理（Agent），它能够在没有特定任务训练的情况下，直接处理和完成新任务。这种代理利用预训练的大型语言模型（如 GPT-3、BERT 等）的强大泛化能力，通过零样本学习（zero-shot learning）来理解和执行任务\n",
    "\n",
    "**ConversationBufferWindowMemory**  是 LangChain 库中用于管理对话历史记录的一种内存管理机制。它通过维护一个固定大小的对话窗口，只保留最近的几轮对话，从而有效地管理和控制对话历史的长度。这对于构建对话系统（如聊天机器人）非常有用，因为它可以帮助模型更好地理解和回应当前的对话上下文，同时减少内存开销。\n",
    "\n",
    "**关键点**\n",
    "\n",
    "- 功能：\n",
    "  维护一个固定大小的对话窗口，只保留最近的几轮对话。\n",
    "  通过限制对话历史的长度，减少内存开销，提高性能。\n",
    "  \n",
    "- 参数：\n",
    "  \n",
    "  - k：指定对话窗口的大小，即保留的最近对话轮数。\n",
    "    \n",
    "  - memory_key：指定在内存中存储对话历史的键名，默认为 \"history\"。\n",
    "    \n",
    "  - input_key：指定输入消息的键名，默认为 \"input\"。\n",
    "    \n",
    "  - output_key：指定输出消息的键名，默认为 \"output\"。\n",
    "    \n",
    "\n",
    "应用场景：\n",
    "    \n",
    " - 聊天机器人：帮助机器人更好地理解和回应当前的对话上下文。\n",
    " - 问答系统：在多轮对话中保持上下文的一致性。\n",
    "      \n",
    " - 对话式推荐系统：根据用户的多轮反馈提供更个性化的推荐。"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "42ff52c2fc20c3e4"
  },
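  {
   "cell_type": "markdown",
   "source": [
    "The windowing behavior of `ConversationBufferWindowMemory` can be illustrated with a toy plain-Python class (an illustration of the idea only, not the langchain implementation):\n",
    "\n",
    "```python\n",
    "from collections import deque\n",
    "\n",
    "class WindowMemory:\n",
    "    def __init__(self, k):\n",
    "        # deque(maxlen=k) silently drops the oldest turn once k is exceeded\n",
    "        self.turns = deque(maxlen=k)\n",
    "\n",
    "    def save_context(self, human, ai):\n",
    "        self.turns.append((human, ai))\n",
    "\n",
    "    def load_history(self):\n",
    "        return [f'Human: {h} / AI: {a}' for h, a in self.turns]\n",
    "\n",
    "mem = WindowMemory(k=2)\n",
    "mem.save_context('hi', 'hello')\n",
    "mem.save_context('what is 2 + 2', '4')\n",
    "mem.save_context('thanks', 'you are welcome')\n",
    "mem.load_history()  # only the 2 most recent turns survive\n",
    "```"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "e3c4d5f607182930"
  },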
  {
   "cell_type": "code",
   "outputs": [
    {
     "ename": "ValueError",
     "evalue": "Prompt missing required variables: {'tool_names'}",
     "output_type": "error",
     "traceback": [
      "\u001B[0;31m---------------------------------------------------------------------------\u001B[0m",
      "\u001B[0;31mValueError\u001B[0m                                Traceback (most recent call last)",
      "Cell \u001B[0;32mIn[46], line 41\u001B[0m\n\u001B[1;32m     30\u001B[0m memory \u001B[38;5;241m=\u001B[39m ConversationBufferWindowMemory(k\u001B[38;5;241m=\u001B[39m\u001B[38;5;241m10\u001B[39m, memory_key\u001B[38;5;241m=\u001B[39m\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mchat_history\u001B[39m\u001B[38;5;124m\"\u001B[39m)\n\u001B[1;32m     31\u001B[0m \u001B[38;5;66;03m# 即将失效 LLMChain，使用 prompt | llm 代替\u001B[39;00m\n\u001B[1;32m     32\u001B[0m \u001B[38;5;66;03m# llm_chain = LLMChain(llm=ChatOpenAI(\u001B[39;00m\n\u001B[1;32m     33\u001B[0m \u001B[38;5;66;03m#     # 若没有配置环境变量，请用百炼API Key将下行替换为：api_key=\"sk-xxx\",\u001B[39;00m\n\u001B[0;32m   (...)\u001B[0m\n\u001B[1;32m     39\u001B[0m \n\u001B[1;32m     40\u001B[0m \u001B[38;5;66;03m# 即将失效ZeroShotAgent 使用create_react_agent代替\u001B[39;00m\n\u001B[0;32m---> 41\u001B[0m agent \u001B[38;5;241m=\u001B[39m create_structured_chat_agent(Tongyi(), tools, prompt)\n",
      "File \u001B[0;32m/opt/anaconda3/envs/ai_312/lib/python3.12/site-packages/langchain/agents/structured_chat/base.py:280\u001B[0m, in \u001B[0;36mcreate_structured_chat_agent\u001B[0;34m(llm, tools, prompt, tools_renderer, stop_sequence)\u001B[0m\n\u001B[1;32m    276\u001B[0m missing_vars \u001B[38;5;241m=\u001B[39m {\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mtools\u001B[39m\u001B[38;5;124m\"\u001B[39m, \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mtool_names\u001B[39m\u001B[38;5;124m\"\u001B[39m, \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124magent_scratchpad\u001B[39m\u001B[38;5;124m\"\u001B[39m}\u001B[38;5;241m.\u001B[39mdifference(\n\u001B[1;32m    277\u001B[0m     prompt\u001B[38;5;241m.\u001B[39minput_variables \u001B[38;5;241m+\u001B[39m \u001B[38;5;28mlist\u001B[39m(prompt\u001B[38;5;241m.\u001B[39mpartial_variables)\n\u001B[1;32m    278\u001B[0m )\n\u001B[1;32m    279\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m missing_vars:\n\u001B[0;32m--> 280\u001B[0m     \u001B[38;5;28;01mraise\u001B[39;00m \u001B[38;5;167;01mValueError\u001B[39;00m(\u001B[38;5;124mf\u001B[39m\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mPrompt missing required variables: \u001B[39m\u001B[38;5;132;01m{\u001B[39;00mmissing_vars\u001B[38;5;132;01m}\u001B[39;00m\u001B[38;5;124m\"\u001B[39m)\n\u001B[1;32m    282\u001B[0m prompt \u001B[38;5;241m=\u001B[39m prompt\u001B[38;5;241m.\u001B[39mpartial(\n\u001B[1;32m    283\u001B[0m     tools\u001B[38;5;241m=\u001B[39mtools_renderer(\u001B[38;5;28mlist\u001B[39m(tools)),\n\u001B[1;32m    284\u001B[0m     tool_names\u001B[38;5;241m=\u001B[39m\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124m, \u001B[39m\u001B[38;5;124m\"\u001B[39m\u001B[38;5;241m.\u001B[39mjoin([t\u001B[38;5;241m.\u001B[39mname \u001B[38;5;28;01mfor\u001B[39;00m t \u001B[38;5;129;01min\u001B[39;00m tools]),\n\u001B[1;32m    285\u001B[0m )\n\u001B[1;32m    286\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m stop_sequence:\n",
      "\u001B[0;31mValueError\u001B[0m: Prompt missing required variables: {'tool_names'}"
     ]
    }
   ],
   "source": [
    "from langchain_core.prompts import PromptTemplate\n",
    "from langchain_community.llms.tongyi import Tongyi\n",
    "from langchain.chains.llm import LLMChain\n",
    "from langchain.memory import ConversationBufferWindowMemory\n",
    "from langchain.agents import ZeroShotAgent, AgentExecutor, create_react_agent, create_structured_chat_agent\n",
    "from langchain_core.tools import Tool, render_text_description\n",
    "\n",
    "tools = [\n",
    "    Tool(name=\"Get current info\", func=query_web, description=\"\"\"\n",
    "    只有在需要实时信息回答时才调用它。\n",
    "    并且输入应该是一个搜索查询。\n",
    "    \"\"\"),\n",
    "    Tool(name=\"查询初中数学知识信息\", func=create_image, description=\"\"\"\n",
    "    根据用户数据如信息回答初中数学知识时调用。\n",
    "    输入应该是一个描述性的句子。\n",
    "    \"\"\")\n",
    "]\n",
    "rendered_tools = render_text_description(tools)\n",
    "\n",
    "prompt = PromptTemplate.from_template(\"\"\"\n",
    "与人类进行对话，尽可能回答以下问题。您可以使用以下工具:\n",
    "{tools}\n",
    "Begin!\n",
    "{chat_history}\n",
    "Question: {input}\n",
    "{agent_scratchpad}\n",
    "\"\"\")\n",
    "\n",
    "# 保留最近10轮会话\n",
    "memory = ConversationBufferWindowMemory(k=10, memory_key=\"chat_history\")\n",
    "# 即将失效 LLMChain，使用 prompt | llm 代替\n",
    "# llm_chain = LLMChain(llm=ChatOpenAI(\n",
    "#     # 若没有配置环境变量，请用百炼API Key将下行替换为：api_key=\"sk-xxx\",\n",
    "#     openai_api_key=os.getenv(\"DASHSCOPE_API_KEY\"),\n",
    "#     openai_api_base=\"https://dashscope.aliyuncs.com/compatible-mode/v1\",\n",
    "#     model_name=\"qwen-max\",\n",
    "#     temperature=0.7,\n",
    "# ), prompt=prompt, memory=memory)\n",
    "\n",
    "# 即将失效ZeroShotAgent 使用create_react_agent代替\n",
    "agent = create_structured_chat_agent(Tongyi(), tools, prompt)  #ZeroShotAgent(tools=tools, llm_chain=llm_chain)\n",
    "# create_react_agent()\n",
    "# agent_chain = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True, handle_parsing_errors=True)\n",
    "# \n",
    "# agent_chain.invoke({\"input\": \"Hello, how can I help you?\"})\n"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-11-01T06:57:23.383264Z",
     "start_time": "2024-11-01T06:57:23.347861Z"
    }
   },
   "id": "9c67107a86c2fd7c",
   "execution_count": 46
  },
  {
   "cell_type": "markdown",
   "source": [
    "## 界面展示\n",
    "可以使用 gradio 来构建一个简单的 web 界面。\n",
    "这个例子中，我们添加了一个 `chatbot` 组件，以及为用户提供了一个输入框和一个提交按钮。\n",
    "\n",
    "> `inputs` 和 `outputs` 参数用于指定输入和输出的组件。`inputs` 会作为参数传递给 `respond` 函数，`respond` 的返回值会被传递给 `outputs` 组件。"
   ],
   "metadata": {
    "collapsed": false
   },
   "id": "1023e7a6f8d00d98"
  },
  {
   "cell_type": "code",
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/opt/anaconda3/envs/ai_312/lib/python3.12/site-packages/gradio/components/chatbot.py:223: UserWarning: You have not specified a value for the `type` parameter. Defaulting to the 'tuples' format for chatbot messages, but this is deprecated and will be removed in a future version of Gradio. Please set type='messages' instead, which uses openai-style 'role' and 'content' keys.\n",
      "  warnings.warn(\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "* Running on local URL:  http://127.0.0.1:7867\n",
      "\n",
      "To create a public link, set `share=True` in `launch()`.\n"
     ]
    },
    {
     "data": {
      "text/plain": "<IPython.core.display.HTML object>",
      "text/html": "<div><iframe src=\"http://127.0.0.1:7867/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": ""
     },
     "execution_count": 47,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import gradio as gr\n",
    "\n",
    "\n",
    "def get_response(message):\n",
    "    res = agent_chain.invoke(message)\n",
    "    return res['output']\n",
    "    # return message\n",
    "\n",
    "\n",
    "def respond(message, chat_history):\n",
    "    \"\"\"对话函数\"\"\"\n",
    "    bot_message = get_response(message)\n",
    "    chat_history.append((message, bot_message))\n",
    "    return \"\", chat_history\n",
    "\n",
    "\n",
    "with gr.Blocks() as demo:\n",
    "    chatbot = gr.Chatbot(height=500)  # 对话框\n",
    "    msg = gr.Textbox(label=\"输入框\")  # 输入框\n",
    "    btn = gr.Button(\"提交\")  # 按钮\n",
    "    clear = gr.ClearButton(components=[msg, chatbot], value=\"清除\")  # 清除按钮\n",
    "    btn.click(respond, inputs=[msg, chatbot], outputs=[msg, chatbot])\n",
    "    msg.submit(respond, inputs=[msg, chatbot], outputs=[msg, chatbot])\n",
    "gr.close_all()\n",
    "demo.launch()"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2024-11-06T00:47:14.038891Z",
     "start_time": "2024-11-06T00:47:08.793310Z"
    }
   },
   "id": "691c488188054d45",
   "execution_count": 47
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
