{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# 2.2 Expanding the knowledge scope of the Q&A bot\n",
        "\n",
        "## 🚄 Preface\n",
        "\n",
        "You have already learned that a RAG chatbot is an effective solution for expanding the knowledge scope of LLMs. In this section, you will learn about the workflow of a RAG chatbot and how to create a RAG chatbot application so that it can answer questions based on the company's policy documents.\n",
        "\n",
        "## 🍁 Goals\n",
        "\n",
        "After completing this section, you will be able to:\n",
        "\n",
        "* Understand the workflow of a RAG chatbot\n",
        "* Create a RAG chatbot application\n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 1. How RAG works\n",
        "\n",
        "RAG works by providing reference materials to LLMs, similar to an open-book exam. If a model has not seen certain knowledge points during training, directly asking it related questions may result in inaccurate answers. However, if relevant knowledge is provided as a reference, the quality of the LLM's responses will significantly improve.\n",
        "\n",
        "RAG is a solution that provides reference materials for LLMs. RAG applications typically consist of two parts: **indexing** and **retrieval generation**.\n",
        "\n",
        "### 1.1 Indexing\n",
        "Indexing involves preparing reference materials for efficient retrieval. It's much like marking pages in a book so you can find the information you need during the exam. Indexing includes four steps:<br>\n",
        "1. **Document Parsing**<br>\n",
        "Converting documents into a textual format that an LLM can understand.\n",
        "2. **Text Chunking**<br>\n",
        "Segmenting parsed documents into smaller chunks for faster retrieval.\n",
        "3. **Text Embedding**<br>\n",
        "Using embedding models to represent both the question and the text chunks as numerical vectors, so that their similarity can be compared to find the most relevant content.\n",
        "    > If you're interested in the details of this process, you can explore the extended reading section of this tutorial.\n",
        "4. **Index Storage**<br>\n",
        "Storing vectorized content in a database to avoid repeating the process each time.\n",
        "\n",
        "    <img src=\"https://img.alicdn.com/imgextra/i3/O1CN01h0y0Uy1WH30Q7FRDJ_!!6000000002762-2-tps-1592-503.png\" width=\"1000\"><br>\n",
        "\n",
        "    After indexing, RAG applications can retrieve relevant text segments based on user questions.\n",
        "\n",
        "### 1.2 Retrieval generation\n",
        "Retrieval generation includes two stages: `Retrieval` and `Generation`.<br>\n",
        "1. **Retrieval**<br>\n",
        "The retrieval phase recalls the most relevant text segments. Continuing the open-book test analogy, it's like searching for the answer in your textbook. The question is vectorized using an embedding model, and semantic similarity is compared with the paragraphs in the vector database to identify the most relevant ones. Retrieval is the most critical part of a RAG application. Imagine finding the wrong material during an exam—your answer would be inaccurate. To improve retrieval accuracy, besides using powerful embedding models, techniques like reranking and sentence window retrieval can be applied.\n",
        "2. **Generation**<br>\n",
        "After you've found the information you're looking for, it's time to apply it to answer the question. Similarly, after retrieving relevant text segments, the RAG application generates the final prompt by combining the question and the retrieved text segments through a prompt template. The LLM then generates the response, leveraging its summarization abilities rather than relying solely on its internal knowledge.\n",
        "    > A typical prompt template is: `Please answer the user's question based on the following information: {retrieved text segments}. The user's question is: {question}.`\n",
        "\n",
        "    <img src=\"https://img.alicdn.com/imgextra/i2/O1CN01cA2SmX1ociqCTwjys_!!6000000005246-2-tps-2719-703.png\" width=\"600\"><br>"
      ]
    },
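    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The indexing and retrieval-generation flow described above can be sketched in a few lines of plain Python. This is only an illustrative toy: the `embed` function below is a stand-in word-count \"embedding,\" and the index is a plain list, whereas real RAG applications use trained embedding models and a vector database.\n",
        "\n",
        "```python\n",
        "import math\n",
        "from collections import Counter\n",
        "\n",
        "def embed(text):\n",
        "    # Toy \"embedding\": bag-of-words counts (real systems use learned models)\n",
        "    return Counter(text.lower().replace(\"?\", \"\").replace(\".\", \"\").split())\n",
        "\n",
        "def cosine_similarity(a, b):\n",
        "    dot = sum(a[w] * b[w] for w in a)\n",
        "    norm_a = math.sqrt(sum(v * v for v in a.values()))\n",
        "    norm_b = math.sqrt(sum(v * v for v in b.values()))\n",
        "    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0\n",
        "\n",
        "# Indexing: chunk the \"documents\" and store (vector, chunk) pairs\n",
        "chunks = [\n",
        "    \"Use Asana for coordinating project tasks.\",\n",
        "    \"Annual leave requests go through the HR portal.\",\n",
        "]\n",
        "index = [(embed(c), c) for c in chunks]\n",
        "\n",
        "# Retrieval: embed the question and recall the most similar chunk\n",
        "question = \"What tool should we use for project tasks?\"\n",
        "best_chunk = max(index, key=lambda p: cosine_similarity(embed(question), p[0]))[1]\n",
        "\n",
        "# Generation: combine the question and retrieved text through a prompt template\n",
        "prompt = (f\"Please answer the user's question based on the following information: \"\n",
        "          f\"{best_chunk} The user's question is: {question}\")\n",
        "print(best_chunk)\n",
        "```\n"
      ]
    },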
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 2. Creating a RAG application\n",
        "\n",
        "Building a RAG application requires implementing all of the functionality described above, which is quite complex. However, with LlamaIndex, you can complete these tasks without needing to write much code.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "### 2.1 Confirm your Python environment  \n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Before running the code in this section of the course, make sure you are using the correct Python environment, such as the `Python (llm_learn)` environment created in previous lessons.\n",
        "\n",
        "<img src=\"https://img.alicdn.com/imgextra/i1/O1CN01B9bNMT27MDFvpBmnc_!!6000000007782-2-tps-1944-448.png\" width=\"800\">\n",
        "\n",
        "**Note: In each subsequent lesson, you should check whether you need to manually switch the Notebook environment.**\n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "### 2.2 A simple RAG chatbot\n",
        "\n",
        "As in the previous section, you must configure the Model Studio API key as an environment variable.\n",
        "\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "metadata": {
        "execution": {
          "iopub.execute_input": "2024-11-15T09:00:41.030766Z",
          "iopub.status.busy": "2024-11-15T09:00:41.030362Z",
          "iopub.status.idle": "2024-11-15T09:00:41.236899Z",
          "shell.execute_reply": "2024-11-15T09:00:41.236115Z",
          "shell.execute_reply.started": "2024-11-15T09:00:41.030739Z"
        },
        "tags": []
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Your configured API Key is: sk-98*****\n"
          ]
        }
      ],
      "source": [
        "from config.load_key import load_key\n",
        "import os\n",
        "\n",
        "load_key()\n",
        "# In production environments, do not output the API Key to logs to avoid leakage\n",
        "print(f'Your configured API Key is: {os.environ[\"DASHSCOPE_API_KEY\"][:5]+\"*\"*5}')"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "In the docs folder, you'll find some fictional company policy documents we've prepared. Next, you will create a RAG application based on these documents."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "metadata": {
        "execution": {
          "iopub.execute_input": "2024-11-15T09:00:43.822829Z",
          "iopub.status.busy": "2024-11-15T09:00:43.822278Z",
          "iopub.status.idle": "2024-11-15T09:00:58.744414Z",
          "shell.execute_reply": "2024-11-15T09:00:58.743812Z",
          "shell.execute_reply.started": "2024-11-15T09:00:43.822800Z"
        },
        "tags": []
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Parsing files...\n",
            "Creating index...\n",
            "Creating query engine...\n",
            "Generating response...\n",
            "The answer is:\n",
            "For project management, your company should use the following tools based on the information provided:\n",
            "\n",
            "- **Asana** for coordinating tasks, particularly when working with Instructional Designers.\n",
            "- **Jira** for participating in daily standups and managing interactions with technical teams.\n",
            "- **HubSpot** for aligning launch strategies, particularly in collaboration with marketing efforts.\n",
            "- **GitLab** for content version tracking and maintaining a data-driven version control process.\n",
            "- **Miro whiteboards** for requirement gathering and stakeholder mapping during the needs analysis phase.\n",
            "- **Confluence** for creating and documenting requirement specifications.\n",
            "\n",
            "These tools support efficient task tracking, collaboration, version control, and stakeholder engagement across departments."
          ]
        }
      ],
      "source": [
        "# Import dependencies\n",
        "from llama_index.embeddings.dashscope import DashScopeEmbedding, DashScopeTextEmbeddingModels\n",
        "from llama_index.core import SimpleDirectoryReader, VectorStoreIndex\n",
        "from llama_index.llms.openai_like import OpenAILike\n",
        "\n",
        "# These two lines suppress WARNING messages to avoid cluttering the output while you learn. In a production environment, set the log level as needed.\n",
        "import logging\n",
        "logging.basicConfig(level=logging.ERROR)\n",
        "\n",
        "print(\"Parsing files...\")\n",
        "# LlamaIndex provides the SimpleDirectoryReader method, which can directly load files from a specified folder into document objects, corresponding to the parsing process.\n",
        "documents = SimpleDirectoryReader('./docs').load_data()\n",
        "\n",
        "print(\"Creating index...\")\n",
        "# The from_documents method includes the chunking and index creation steps.\n",
        "index = VectorStoreIndex.from_documents(\n",
        "    documents,\n",
        "    # Specify embedding model\n",
        "    embed_model=DashScopeEmbedding(\n",
        "        # You can also use other embedding models provided by Alibaba Cloud: https://help.aliyun.com/zh/model-studio/getting-started/models#3383780daf8hw\n",
        "        model_name=DashScopeTextEmbeddingModels.TEXT_EMBEDDING_V2\n",
        "    ))\n",
        "print(\"Creating query engine...\")\n",
        "query_engine = index.as_query_engine(\n",
        "    # Set to streaming output\n",
        "    streaming=True,\n",
        "    # Here we use the qwen-plus model. You can also use other Qwen text generation models provided by Alibaba Cloud: https://help.aliyun.com/zh/model-studio/getting-started/models#9f8890ce29g5u\n",
        "    llm=OpenAILike(\n",
        "        model=\"qwen-plus\",\n",
        "        api_base=\"https://dashscope.aliyuncs.com/compatible-mode/v1\",\n",
        "        api_key=os.getenv(\"DASHSCOPE_API_KEY\"),\n",
        "        is_chat_model=True\n",
        "        ))\n",
        "print(\"Generating response...\")\n",
        "streaming_response = query_engine.query('What tools should our company use for project management?')\n",
        "print(\"The answer is:\")\n",
        "# Use streaming output\n",
        "streaming_response.print_response_stream()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "### 2.3 Saving and loading index\n",
        "Creating an index can take time. To avoid rebuilding the index from scratch each time, you can save it locally and load it when needed, which improves response speed. LlamaIndex provides an easy-to-use method for saving and loading indexes.\n",
        "\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 3,
      "metadata": {
        "execution": {
          "iopub.execute_input": "2024-11-15T09:00:59.966266Z",
          "iopub.status.busy": "2024-11-15T09:00:59.965889Z",
          "iopub.status.idle": "2024-11-15T09:01:00.240477Z",
          "shell.execute_reply": "2024-11-15T09:01:00.239682Z",
          "shell.execute_reply.started": "2024-11-15T09:00:59.966241Z"
        },
        "tags": []
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Index files saved to knowledge_base/test\n"
          ]
        }
      ],
      "source": [
        "# Save the index as a local file\n",
        "index.storage_context.persist(\"knowledge_base/test\")\n",
        "print(\"Index files saved to knowledge_base/test\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 4,
      "metadata": {
        "execution": {
          "iopub.execute_input": "2024-11-15T09:01:02.142167Z",
          "iopub.status.busy": "2024-11-15T09:01:02.141798Z",
          "iopub.status.idle": "2024-11-15T09:01:02.675970Z",
          "shell.execute_reply": "2024-11-15T09:01:02.675221Z",
          "shell.execute_reply.started": "2024-11-15T09:01:02.142142Z"
        },
        "tags": []
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Successfully loaded index from knowledge_base/test path\n"
          ]
        }
      ],
      "source": [
        "# Load the local index file as an index\n",
        "from llama_index.core import StorageContext, load_index_from_storage\n",
        "storage_context = StorageContext.from_defaults(persist_dir=\"knowledge_base/test\")\n",
        "index = load_index_from_storage(storage_context, embed_model=DashScopeEmbedding(\n",
        "        model_name=DashScopeTextEmbeddingModels.TEXT_EMBEDDING_V2\n",
        "    ))\n",
        "print(\"Successfully loaded index from knowledge_base/test path\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "After loading the index locally, test it by asking questions."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 5,
      "metadata": {
        "execution": {
          "iopub.execute_input": "2024-11-15T09:01:11.982327Z",
          "iopub.status.busy": "2024-11-15T09:01:11.981943Z",
          "iopub.status.idle": "2024-11-15T09:01:14.921721Z",
          "shell.execute_reply": "2024-11-15T09:01:14.921129Z",
          "shell.execute_reply.started": "2024-11-15T09:01:11.982304Z"
        },
        "tags": []
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Creating the query engine...\n",
            "Generating response...\n",
            "The answer is:\n",
            "For project management, the company should use Asana for coordinating tasks, Jira for participating in daily standups with technical teams, and HubSpot for aligning launch strategies. Additionally, Miro whiteboards can be utilized for stakeholder mapping during requirement gathering, and Confluence can be used for creating requirement specifications."
          ]
        }
      ],
      "source": [
        "print(\"Creating the query engine...\")\n",
        "query_engine = index.as_query_engine(\n",
        "    # Set to streaming output\n",
        "    streaming=True,\n",
        "    # Use the qwen-plus model here. You can also use other text generation models provided by Alibaba Cloud: https://help.aliyun.com/zh/model-studio/getting-started/models#9f8890ce29g5u\n",
        "    llm=OpenAILike(\n",
        "        model=\"qwen-plus\",\n",
        "        api_base=\"https://dashscope.aliyuncs.com/compatible-mode/v1\",\n",
        "        api_key=os.getenv(\"DASHSCOPE_API_KEY\"),\n",
        "        is_chat_model=True\n",
        "        ))\n",
        "print(\"Generating response...\")\n",
        "streaming_response = query_engine.query('What tools should our company use for project management?')\n",
        "print(\"The answer is:\")\n",
        "streaming_response.print_response_stream()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Encapsulate the above code for quick reuse in later sections."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 6,
      "metadata": {
        "execution": {
          "iopub.execute_input": "2024-11-15T09:01:16.991663Z",
          "iopub.status.busy": "2024-11-15T09:01:16.991276Z",
          "iopub.status.idle": "2024-11-15T09:01:20.492123Z",
          "shell.execute_reply": "2024-11-15T09:01:20.491499Z",
          "shell.execute_reply.started": "2024-11-15T09:01:16.991640Z"
        },
        "tags": []
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "For project management, the company should use tools such as Asana for task tracking and Jira for daily standups. These tools facilitate coordination among instructional designers and technical teams, ensuring efficient and organized project execution."
          ]
        }
      ],
      "source": [
        "from chatbot import rag\n",
        "\n",
        "# The documents were already indexed in the previous steps, so the index can be loaded directly here. If you need to rebuild the index, you can add a line of code: rag.indexing()\n",
        "index = rag.load_index(persist_path='./knowledge_base/test')\n",
        "query_engine = rag.create_query_engine(index=index)\n",
        "\n",
        "rag.ask('What tools should our company use for project management?', query_engine=query_engine)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "### 2.4 Multi-round conversation\n",
        "Multi-round conversations in RAG are slightly different from the mechanism of initiating multi-round conversations directly with LLMs. From the tutorial in Section 2.1, you have learned that multi-round conversations allow LLMs to refer to historical dialog information. The method is to include this historical information in the messages list.\n",
        "\n",
        "During the retrieval phase in RAG applications, the system typically compares the semantic similarity between the user's input and the text segments. However, doing so may result in losing important historical dialogue context, leading to inaccurate retrieval results.\n",
        "\n",
        "Suppose a user asks \"Where is Jimmy Peterson's workstation?\" in the first round, then asks \"Who is his supervisor?\" in the second round. If the retrieval system only compares the second question with the text segments, it cannot know who \"his\" refers to, which could lead to retrieving incorrect content.\n",
        "\n",
        "If both the full historical dialogue and the question are fed into the retrieval system, it may struggle with the length of the text (embedding models perform worse on long texts than on short ones). A common industry solution is:\n",
        "\n",
        "1. Use the LLM to rewrite the query based on the historical dialogue, incorporating key information from the conversation.\n",
        "2. Use the rewritten query to follow the original retrieval and generation process.\n",
        "\n",
        "LlamaIndex provides convenient tools that make it easy to implement multi-round conversations in RAG applications.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 8,
      "metadata": {
        "execution": {
          "iopub.execute_input": "2024-11-15T09:01:26.566550Z",
          "iopub.status.busy": "2024-11-15T09:01:26.566171Z",
          "iopub.status.idle": "2024-11-15T09:01:33.772277Z",
          "shell.execute_reply": "2024-11-15T09:01:33.771645Z",
          "shell.execute_reply.started": "2024-11-15T09:01:26.566525Z"
        },
        "tags": []
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Querying with: What are the core responsibilities of content development engineers, including all relevant context from the previous discussion?\n",
            "Content development engineers are responsible for integrating educational theory with technical practice to support learners' growth through high-quality content creation. Their core responsibilities include:\n",
            "\n",
            "1. **Educational Innovation & Market Alignment**: Conducting in-depth research on educational technology trends, learning theories, and market demands. This involves analyzing competitors' products, evaluating existing educational resources, and exploring the integration of emerging technologies like artificial intelligence and virtual reality into educational content. They ensure content remains technologically advanced and aligned with the needs of educators and learners.\n",
            "\n",
            "2. **Curriculum Design & Development**: Designing and developing high-quality educational materials and courses based on research and market feedback. This includes creating syllabi, courseware, and assessment tools while ensuring alignment with educational standards and learning objectives. Consideration is given to diverse learner needs to accommodate various learning styles and levels.\n",
            "\n",
            "3. **Content Optimization and Updates**: Continuously optimizing existing educational materials by tracking learner feedback and evaluations to identify and address issues. Regular updates are made to reflect new research findings, technological advancements, and market changes, ensuring the content remains timely and relevant.\n",
            "\n",
            "4. **Implementation of Advanced Tools and Standards**: Utilizing tools and standards such as adaptive learning engines, NLP-powered chatbots, authoring tools (e.g., Articulate 360, Lectora Inspire, Camtasia), and compliance standards (e.g., SCORM, xAPI, WCAG 2.1) to enhance the quality and accessibility of educational content.\n",
            "\n",
            "These responsibilities ensure that the content development engineer contributes to both learner success and organizational growth through innovative learning solutions."
          ]
        }
      ],
      "source": [
        "from llama_index.core import PromptTemplate\n",
        "from llama_index.core.llms import ChatMessage, MessageRole\n",
        "from llama_index.core.chat_engine import CondenseQuestionChatEngine\n",
        "\n",
        "custom_prompt = PromptTemplate(\n",
        "    \"\"\"\n",
        "    Given a conversation (between a human and an assistant) and a follow-up message from the human,\n",
        "    rewrite the message as a standalone question that includes all relevant context from the conversation.\n",
        "\n",
        "    <Chat History>\n",
        "    {chat_history}\n",
        "\n",
        "    <Follow-up Message>\n",
        "    {question}\n",
        "\n",
        "    <Standalone Question>\n",
        "\"\"\"\n",
        ")\n",
        "\n",
        "# Historical conversation information\n",
        "custom_chat_history = [\n",
        "    ChatMessage(role=MessageRole.USER,content=\"What are the subtypes of content development engineers?\"),\n",
        "    ChatMessage(role=MessageRole.ASSISTANT, content=\"Comprehensive technical positions.\"),\n",
        "]\n",
        "\n",
        "query_engine = index.as_query_engine(\n",
        "    # Set to streaming output\n",
        "    streaming=True,\n",
        "    # Use the qwen-plus model here; you can also use other text generation models provided by Alibaba Cloud: https://help.aliyun.com/zh/model-studio/getting-started/models#9f8890ce29g5u\n",
        "    llm=OpenAILike(\n",
        "        model=\"qwen-plus\",\n",
        "        api_base=\"https://dashscope.aliyuncs.com/compatible-mode/v1\",\n",
        "        api_key=os.getenv(\"DASHSCOPE_API_KEY\"),\n",
        "        is_chat_model=True\n",
        "        ))\n",
        "chat_engine = CondenseQuestionChatEngine.from_defaults(\n",
        "    query_engine=query_engine,\n",
        "    condense_question_prompt=custom_prompt,\n",
        "    chat_history=custom_chat_history,\n",
        "    llm=OpenAILike(\n",
        "        model=\"qwen-plus\",\n",
        "        api_base=\"https://dashscope.aliyuncs.com/compatible-mode/v1\",\n",
        "        api_key=os.getenv(\"DASHSCOPE_API_KEY\"),\n",
        "        is_chat_model=True\n",
        "        ),\n",
        "    verbose=True\n",
        ")\n",
        "\n",
        "streaming_response = chat_engine.stream_chat(\"What are the core responsibilities?\")\n",
        "for token in streaming_response.response_gen:\n",
        "    print(token, end=\"\")\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Although the last question did not mention \"content development engineer,\" the LLM still rewrote the question based on the historical dialog information, rephrasing it as \"What are the core responsibilities of a content development engineer?\" and provided the correct answer."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 📝 3. Summary\n",
        "Here's what we covered in this section:\n",
        "1. **Understanding the working principle of RAG**<br>\n",
        "A complete RAG application typically involves two main phases: index building and retrieval generation.\n",
        "\n",
        "* The index building phase includes four steps: document parsing, text chunking, text embedding, and index storage.\n",
        "* The retrieval generation phase consists of two steps: retrieval and generation.\n",
        "\n",
        "By understanding how RAG works, you can better optimize and iterate on your RAG chatbot.\n",
        "\n",
        "2. **Creating a RAG application**<br>\n",
        "Using the highly integrated tools provided by LlamaIndex, you built a RAG application, and learned how to save and load indexes. You also gained knowledge on implementing multi-round conversation within a RAG application.\n",
        "\n",
        "Although the RAG chatbot can already answer questions like \"What tools should our company use for project management?\" quite well, its current functionality is still quite basic. In upcoming tutorials, we will explore ways to expand the capabilities of the RAG chatbot. The next section will focus on improving the quality of the RAG chatbot's responses through prompt optimization.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "### Further reading\n",
        "\n",
        "#### Text vectorization\n",
        "Computers cannot directly understand the similarity between two sentences such as \"I like apples\" and \"I love apples.\" However, they can understand the similarity between two numerical vectors of the same dimension, usually measured with cosine similarity.\n",
        "\n",
        "Text vectorization uses embedding models to convert natural language into numerical forms that computers can process. These models are trained with a technique called **contrastive learning**, where the input data consists of many text pairs (s1, s2), each labeled as related or unrelated. The model's goal is to maximize the similarity of related text pairs and minimize the similarity of unrelated pairs.\n",
        "\n",
        "During the **indexing** phase, after text segmentation produces n chunks (such as [c1, c2, c3, ..., cn]), an embedding model converts them into corresponding vectors ([v1, v2, v3, ..., vn]), which are then stored in a vector database.\n",
        "\n",
        "In the **retrieval** phase, when a user asks a question q, the embedding model converts it into a vector vq. It then finds the n most similar vectors to vq in the vector database (this number can be adjusted as needed). Through the relationship between these vectors and their corresponding text segments, the relevant text segments are retrieved as search results."
      ]
    },
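    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a small numeric illustration of cosine similarity, the vectors below are made-up 3-dimensional stand-ins for sentence embeddings (real embedding models produce vectors with hundreds or thousands of dimensions):\n",
        "\n",
        "```python\n",
        "import math\n",
        "\n",
        "def cosine_similarity(v1, v2):\n",
        "    dot = sum(a * b for a, b in zip(v1, v2))\n",
        "    norm1 = math.sqrt(sum(a * a for a in v1))\n",
        "    norm2 = math.sqrt(sum(b * b for b in v2))\n",
        "    return dot / (norm1 * norm2)\n",
        "\n",
        "# Hypothetical embeddings for three sentences\n",
        "v_like = [0.9, 0.1, 0.3]   # \"I like apples\"\n",
        "v_love = [0.8, 0.2, 0.35]  # \"I love apples\"\n",
        "v_car  = [0.1, 0.9, 0.0]   # \"I drive a car\"\n",
        "\n",
        "print(cosine_similarity(v_like, v_love))  # close to 1: similar meaning\n",
        "print(cosine_similarity(v_like, v_car))   # much lower: different meaning\n",
        "```\n"
      ]
    },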
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 🔥 Quiz\n",
        "\n",
        "### 🔍 Single choice question\n",
        "\n",
        "<details>\n",
        "<summary style=\"cursor: pointer; padding: 12px; border: 1px solid #dee2e6; border-radius: 6px;\">\n",
        "<b>How should retrieval be conducted during multi-turn conversations in RAG applications❓ (Select 1.)</b>\n",
        "\n",
        "- A. Input the complete historical dialogue information during the retrieval phase<br>\n",
        "- B. Rewrite the input question based on historical dialogue information before entering the retrieval phase<br>\n",
        "- C. Input the latest question during the retrieval phase<br>\n",
        "- D. Migrate the text segments recalled from the previous round<br>\n",
        "\n",
        "**[Click to view the answer]**\n",
        "</summary>\n",
        "\n",
        "<div style=\"margin-top: 10px; padding: 15px;  border: 1px solid #dee2e6; border-radius: 0 0 6px 6px;\">\n",
        "\n",
        "✅ **Reference Answer: B**  \n",
        "📝 **Explanation**:  \n",
        "- In multi-turn conversations, directly using the original question (Option C) or the full history (Option A) can lead to retrieval noise or information redundancy.\n",
        "- Option B dynamically rewrites the current question, maintaining conversational coherence while avoiding the outdated text migration issue of Option D, making it the optimal solution balancing efficiency and accuracy.\n",
        "\n",
        "</div>\n",
        "</details>  \n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": []
    }
  ],
  "metadata": {
    "kernelspec": {
      "display_name": "llm_learn",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.12.10"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 4
}
