{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from dotenv import load_dotenv\n",
    "\n",
    "load_dotenv(\".env.local\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain_openai import ChatOpenAI\n",
    "\n",
    "llm = ChatOpenAI(\n",
    "    model=\"qwen-plus\",\n",
     "    # The API key is read from the OPENAI_API_KEY environment variable loaded above.\n",
    "    openai_api_base=\"https://dashscope.aliyuncs.com/compatible-mode/v1\",\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "import bs4\n",
    "from langchain import hub\n",
    "from langchain.chains import create_retrieval_chain\n",
    "from langchain.chains.combine_documents import create_stuff_documents_chain\n",
    "from langchain_chroma import Chroma\n",
    "from langchain_community.document_loaders import WebBaseLoader\n",
    "from langchain_core.prompts import ChatPromptTemplate\n",
    "from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
    "from langchain_community.embeddings import DashScopeEmbeddings\n",
    "\n",
    "# 1. Load, chunk and index the contents of the blog to create a retriever.\n",
    "loader = WebBaseLoader(\n",
    "    web_paths=(\"https://lilianweng.github.io/posts/2023-06-23-agent/\",),\n",
    "    bs_kwargs=dict(\n",
    "        parse_only=bs4.SoupStrainer(\n",
    "            class_=(\"post-content\", \"post-title\", \"post-header\")\n",
    "        )\n",
    "    ),\n",
    ")\n",
    "docs = loader.load()\n",
    "\n",
    "text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)\n",
    "splits = text_splitter.split_documents(docs)\n",
    "vectorstore = Chroma.from_documents(documents=splits, embedding=DashScopeEmbeddings())\n",
    "retriever = vectorstore.as_retriever()\n",
    "\n",
    "\n",
    "# 2. Incorporate the retriever into a question-answering chain.\n",
    "system_prompt = (\n",
    "    \"You are an assistant for question-answering tasks. \"\n",
    "    \"Use the following pieces of retrieved context to answer \"\n",
    "    \"the question. If you don't know the answer, say that you \"\n",
    "    \"don't know. Use three sentences maximum and keep the \"\n",
    "    \"answer concise.\"\n",
    "    \"\\n\\n\"\n",
    "    \"{context}\"\n",
    ")\n",
    "\n",
    "prompt = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        (\"system\", system_prompt),\n",
    "        (\"human\", \"{input}\"),\n",
    "    ]\n",
    ")\n",
    "\n",
    "question_answer_chain = create_stuff_documents_chain(llm, prompt)\n",
    "rag_chain = create_retrieval_chain(retriever, question_answer_chain)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "'Task decomposition is the process of breaking down a complex task into smaller, more manageable steps or subgoals. This approach helps in planning and executing tasks by simplifying decision-making and improving clarity. Techniques like Chain of Thought (CoT) and Tree of Thoughts (ToT) are used to facilitate this decomposition, either through step-by-step reasoning or exploring multiple reasoning paths.'"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "response = rag_chain.invoke({\"input\": \"What is Task Decomposition?\"})\n",
    "response[\"answer\"]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Note that we used the built-in chain constructors `create_stuff_documents_chain` and `create_retrieval_chain`, so the basic ingredients of our solution are:\n",
     "\n",
     "1. retriever;\n",
     "2. prompt;\n",
     "3. LLM.\n",
     "\n",
     "This will simplify the process of incorporating chat history."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Adding chat history"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "The chain we have built uses the input query directly to retrieve relevant context. But in a conversational setting, the user query might require conversational context to be understood. For example, consider this exchange:\n",
     "\n",
     "> Human: \"What is Task Decomposition?\"\n",
     ">\n",
     "> AI: \"Task decomposition involves breaking down a complex task into smaller, simpler steps to make it more manageable for an agent or model.\"\n",
     ">\n",
     "> Human: \"What are common ways of doing it?\"\n",
     "\n",
     "In order to answer the second question, our system needs to understand that \"it\" refers to \"Task Decomposition\".\n",
     "\n",
     "We'll need to update two things about our existing app:\n",
     "\n",
     "1. **Prompt**: Update our prompt to support historical messages as an input.\n",
     "2. **Contextualizing questions**: Add a sub-chain that takes the latest user question and reformulates it in the context of the chat history. This can be thought of simply as building a new \"history-aware\" retriever. Whereas before we had:\n",
     "    * query -> retriever\n",
     "\n",
     "    Now we will have:\n",
     "    * (query, chat history) -> LLM -> rephrased query -> retriever"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Contextualizing the question"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "First we'll need to define a sub-chain that takes historical messages and the latest user question, and reformulates the question if it references any information in the historical messages.\n",
     "\n",
     "We'll use a prompt that includes a `MessagesPlaceholder` variable under the name `chat_history`. This allows us to pass in a list of messages to the prompt using the `chat_history` input key, and these messages will be inserted after the system message and before the human message containing the latest question.\n",
     "\n",
     "Note that we leverage a helper function `create_history_aware_retriever` for this step, which manages the case where `chat_history` is empty, and otherwise applies `prompt | llm | StrOutputParser() | retriever` in sequence.\n",
     "\n",
     "`create_history_aware_retriever` constructs a chain that accepts keys `input` and `chat_history` as input, and has the same output schema as a retriever."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.chains import create_history_aware_retriever\n",
    "from langchain_core.prompts import MessagesPlaceholder\n",
    "\n",
    "contextualize_q_system_prompt = (\n",
    "    \"Given a chat history and the latest user question \"\n",
    "    \"which might reference context in the chat history, \"\n",
    "    \"formulate a standalone question which can be understood \"\n",
    "    \"without the chat history. Do NOT answer the question, \"\n",
    "    \"just reformulate it if needed and otherwise return it as is.\"\n",
    ")\n",
    "\n",
    "contextualize_q_prompt = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        (\"system\", contextualize_q_system_prompt),\n",
    "        MessagesPlaceholder(\"chat_history\"),\n",
    "        (\"human\", \"{input}\"),\n",
    "    ]\n",
    ")\n",
    "history_aware_retriever = create_history_aware_retriever(\n",
    "    llm, retriever, contextualize_q_prompt\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "This chain prepends a rephrasing of the input query to our retriever, so that the retrieval incorporates the context of the conversation.\n",
     "\n",
     "Now we can build our full QA chain. This is as simple as updating the retriever to be our new `history_aware_retriever`.\n",
     "\n",
     "Again, we will use `create_stuff_documents_chain` to generate a `question_answer_chain`, with input keys `context`, `chat_history`, and `input`. It accepts the retrieved context alongside the conversation history and query to generate an answer.\n",
     "\n",
     "We build our final `rag_chain` with `create_retrieval_chain`. This chain applies the `history_aware_retriever` and `question_answer_chain` in sequence, retaining intermediate outputs such as the retrieved context for convenience. It has input keys `input` and `chat_history`, and includes `input`, `chat_history`, `context`, and `answer` in its output."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.chains import create_retrieval_chain\n",
    "from langchain.chains.combine_documents import create_stuff_documents_chain\n",
    "\n",
    "qa_prompt = ChatPromptTemplate.from_messages(\n",
    "    [\n",
    "        (\"system\", system_prompt),\n",
    "        MessagesPlaceholder(\"chat_history\"),\n",
    "        (\"human\", \"{input}\"),\n",
    "    ]\n",
    ")\n",
    "\n",
    "question_answer_chain = create_stuff_documents_chain(llm, qa_prompt)\n",
    "\n",
    "rag_chain = create_retrieval_chain(history_aware_retriever, question_answer_chain)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Let's try this out. Below we ask a question and a follow-up question that requires contextualization to return a sensible response. Because our chain includes a `chat_history` input, the caller needs to manage the chat history. We can achieve this by appending input and output messages to a list:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Task decomposition is the process of breaking down a complex task into smaller, more manageable steps or subgoals. This approach helps in planning and executing tasks efficiently by focusing on one step at a time. Techniques like Chain of Thought (CoT) and Tree of Thoughts (ToT) are used to facilitate this decomposition, either by sequential reasoning or exploring multiple possibilities at each step.\n",
      "Common ways of performing task decomposition include:  \n",
      "1. **Using simple prompts** with large language models (LLMs) to generate step-by-step breakdowns, such as asking \"Steps for XYZ.\"  \n",
      "2. **Task-specific instructions**, like requesting a story outline when writing a novel.  \n",
      "3. **Human input**, where people provide guidance or structure for breaking down the task.\n"
     ]
    }
   ],
   "source": [
    "from langchain_core.messages import AIMessage, HumanMessage\n",
    "\n",
    "chat_history = []\n",
    "\n",
    "question = \"What is Task Decomposition?\"\n",
    "ai_msg_1 = rag_chain.invoke({\"input\": question, \"chat_history\": chat_history})\n",
    "print(ai_msg_1[\"answer\"])\n",
    "\n",
    "chat_history.extend(\n",
    "    [\n",
    "        HumanMessage(content=question),\n",
    "        AIMessage(content=ai_msg_1[\"answer\"]),\n",
    "    ]\n",
    ")\n",
    "\n",
    "second_question = \"What are common ways of doing it?\"\n",
    "ai_msg_2 = rag_chain.invoke({\"input\": second_question, \"chat_history\": chat_history})\n",
    "\n",
    "print(ai_msg_2[\"answer\"])"
   ]
  }
 ],
 "metadata": {
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
