{
  "cells": [
    {
      "cell_type": "markdown",
      "id": "39f10553-d93f-496b-a5c1-7e224fbb7f0f",
      "metadata": {},
      "source": [
        "## 2.4 Automated evaluation of the Q&A bot's performance\n",
        "\n",
        "### 🚄 Preface\n",
        "\n",
        "The new Q&A bot may encounter some issues during real-world use, especially when users ask specific questions that require detailed knowledge from internal documents. For example, when a new employee asks, \"How do I request leave?\" the bot might provide a generic response instead of consulting the company’s official policy documents for accurate guidance.\n",
        "\n",
        "Just as conventional software development requires testing and validation, it is equally important to establish an **evaluation system** for your Q&A bot project. This ensures that similar issues can be quickly identified and resolved. Moreover, after implementing any optimization or improvement, you should run a batch of test questions to confirm that the changes positively impact the overall performance of the Q&A bot.\n",
        "\n",
        "In this chapter, you will learn how to **automate evaluation processes** using LLMs and specialized frameworks like **Ragas**, enabling you to measure both the quality of answers and the effectiveness of retrieval.\n",
        "\n",
        "## 🍁 Goals\n",
        "After completing this chapter, you will be able to:\n",
        "\n",
        "- Understand how to automate evaluations for LLM applications.\n",
        "- Evaluate RAG chatbots using automated tools such as Ragas.\n",
        "- Identify and solve problems in your Q&A bot by analyzing evaluation scores.\n",
        "\n",
        "<!-- ## 📖 Course Outline\n",
        "In this chapter, we will first understand some current issues with RAG chatbot through a specific problem. Then, we will attempt to discover the issue by implementing a simple automated test ourselves. Finally, we will learn how to use the more mature RAG application testing framework, Ragas, to assess the performance of RAG chatbot.\n",
        "\n",
        "- 1.&nbsp;Evaluating RAG Application Performance\n",
        "    - 1.1 Issues with the Q&A robot\n",
        "    - 1.2 Checking RAG retrieval results to troubleshoot issues\n",
        "    - 1.3 Attempting to establish an automated testing mechanism\n",
        "\n",
        "- 2.&nbsp;Using Ragas to Evaluate Application Performance\n",
        "     - 2.1 Evaluating the quality of responses from the RAG chatbot\n",
        "       - 2.1.1 Quick start\n",
        "       - 2.1.2 Understanding How Answer Correctness Is Calculated\n",
        "     - 2.2 Evaluating the recall effectiveness of retrieval\n",
        "       - 2.2.1 Quick start\n",
        "       - 2.2.2 Understanding the calculation process of context recall and context precision -->"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "c917f6bc",
      "metadata": {},
      "source": [
        "## 1. Evaluating RAG application performance\n",
        "\n",
        "### 1.1 Issues with the Q&A Bot\n",
        "\n",
        "In the previous section, you completed the development of a Q&A bot and began exploring how to evaluate its performance."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "33d09a89",
      "metadata": {},
      "outputs": [],
      "source": [
        "import os\n",
        "from config.load_key import load_key\n",
        "load_key()\n",
        "print(f'Your configured API Key is: {os.environ[\"DASHSCOPE_API_KEY\"][:5]+\"*\"*5}')"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "947e89be",
      "metadata": {},
      "outputs": [],
      "source": [
        "from chatbot import rag\n",
        "rag.indexing()\n",
        "query_engine = rag.create_query_engine(rag.load_index())\n",
        "print('Question: Which department is Michael Johnson in?')\n",
        "response = query_engine.query('Which department is Michael Johnson in?')\n",
        "print('Answer: ', end='')\n",
        "response.print_response_stream()"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "4fef052a",
      "metadata": {},
      "source": [
        "As part of this exploration, you asked the question:\n",
        "\n",
        "> **\"Which department is Michael Johnson in?\"**\n",
        "\n",
        "And received the following answer from the bot:\n",
        "\n",
        "> **\"Michael Johnson is in the IT Infrastructure Department. He serves as a System Administrator, working under Michael Chen at the 456 Tech Hub.\"**\n",
        "\n",
        "The original document contains multiple individuals named Michael Johnson, none of whom are associated with the **IT Infrastructure Department**. However, the LLM generated a confident and specific response that combined elements from different contexts — creating the illusion of accuracy without being grounded in factual data.\n",
        "\n",
        "<a href=\"https://img.alicdn.com/imgextra/i1/O1CN01C6ZkQG1uGdQbJVw19_!!6000000006010-2-tps-1478-732.png\" target=\"_blank\">\n",
        "<img src=\"https://img.alicdn.com/imgextra/i1/O1CN01C6ZkQG1uGdQbJVw19_!!6000000006010-2-tps-1478-732.png\" width=\"800\">\n",
        "</a>\n",
        "\n",
        "This highlights a critical issue: the answer was not based on accurate or unambiguous context, but rather on the model's aggregation or assumptions—potentially resulting in misleading conclusions.\n",
        "\n",
        "Therefore, the next step is to examine the retrieval results used by the RAG system before generating the final answer, to ensure the context provided is accurate, relevant, and aligned with the user's question."
      ]
    },
    {
      "cell_type": "markdown",
      "id": "793fa40e",
      "metadata": {},
      "source": [
        "### 1.2 Inspecting Retrieval Results for Diagnosis\n",
        "\n",
        "To validate the reasoning behind the answer, we inspect the context chunks retrieved by the RAG system before generating the response.\n",
        "\n",
        "Here is the retrieved context:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "79691f6b",
      "metadata": {},
      "outputs": [],
      "source": [
        "contexts = [node.get_content() for node in response.source_nodes]\n",
        "contexts"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "48bbcc22",
      "metadata": {},
      "source": [
        "From this context, we can see that Michael Johnson is actually listed as a System Administrator in the Course Development Department, not the IT Infrastructure Department that the bot named. The retrieved context therefore contained the information needed for a correct answer; the error was introduced during generation.\n",
        "\n",
        "✅ Conclusion: The retrieval performed well. Relevant and sufficient information was retrieved to support the correct answer; the failure lies in the generation step.\n",
        "\n",
        "While this example shows good retrieval performance, not all queries will yield such clear results. Therefore, it's essential to build an automated evaluation framework to consistently measure retrieval quality and answer accuracy across multiple test cases.\n",
        "\n",
        "### 1.3 Implementing an Automated Evaluation Mechanism\n",
        "\n",
        "Although manual inspection helps understand individual cases, it becomes impractical when dealing with hundreds or thousands of questions. Hence, we aim to build an automated testing mechanism to streamline the evaluation process.\n",
        "\n",
        "#### 1.3.1 Validating answer quality using LLMs\n",
        "\n",
        "LLMs can be used not only to generate answers but also to evaluate them. By providing both the question and the generated answer, we can prompt the LLM to determine whether the answer is valid or invalid based on the reference material.\n",
        "\n",
        "Here is a function that checks if the answer effectively addresses the question:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "b9a20f5b",
      "metadata": {},
      "outputs": [],
      "source": [
        "from chatbot import llm\n",
        "\n",
        "def test_answer(question, answer):\n",
        "    prompt = (\"You are a tester.\\n\"\n",
        "        \"You need to check whether the following answer effectively responds to the user's question.\\n\"\n",
        "        \"The reply can only be: Valid response or Invalid response. Do not provide any other information.\\n\"\n",
        "        \"------\\n\"\n",
        "        f\"The answer is: {answer}\\n\"\n",
        "        \"------\\n\"\n",
        "        f\"The question is: {question}\"\n",
        "    )\n",
        "    return llm.invoke(prompt, model_name=\"qwen-max\")\n",
        "\n",
        "\n",
        "test_answer(\"Which department is Michael Johnson in?\", \"Michael Johnson is in the IT Infrastructure Department. He holds the position of System Administrator and works under the supervision of Michael Chen at the 456 Tech Hub.\")"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "d7c22a73",
      "metadata": {},
      "source": [
        "The LLM reports that the answer is a valid response to the question. Note that this check only confirms that the answer addresses the question; it says nothing about factual accuracy, a limitation we will return to shortly.\n",
        "\n",
        "\n",
        "\n",
        "#### 1.3.2 Validating context relevance\n",
        "\n",
        "Equally important is ensuring that the retrieved context is relevant and useful for answering the question. To do this, we define another function to evaluate whether the context provided aligns with the user's query. This step helps ensure that the model is not only generating a plausible response but also basing it on accurate and contextually appropriate information.\n",
        "\n",
        "This validation process strengthens the reliability of the Q&A bot by filtering out irrelevant or misleading content before the final answer is generated."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "4e20f11a",
      "metadata": {},
      "outputs": [],
      "source": [
        "def test_contexts(question, answer, contexts):\n",
        "    prompt = (\n",
        "        \"You are a tester. Your task is to determine whether the provided reference materials directly support the given answer to the question.\\n\"\n",
        "        \"If the answer can be clearly found or derived from the reference materials, respond with: The reference information is useful.\\n\"\n",
        "        \"Otherwise, respond with: The reference information is not useful.\\n\"\n",
        "        \"Do not provide any other explanation or information.\\n\"\n",
        "        \"------\\n\"\n",
        "        f\"Question: {question}\\n\"\n",
        "        f\"Answer: {answer}\\n\"\n",
        "        f\"Reference materials: {' '.join(contexts)}\\n\"\n",
        "        \"------\"\n",
        "    )\n",
        "    return llm.invoke(prompt, model_name=\"qwen-max\")\n",
        "test_contexts(\n",
        "    \"Which department is Michael Johnson in?\", \n",
        "    \"Michael Johnson is in the IT Infrastructure Department. He holds the position of System Administrator and works under the supervision of Michael Chen at the 456 Tech Hub.\", \n",
        "    [contexts[0], contexts[1]]\n",
        "    )"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "24e550e2",
      "metadata": {},
      "source": [
        "#### 1.3.3 Summary of evaluation logic\n",
        "\n",
        "| Component         | Value / Method                            | Result                   |\n",
        "|-------------------|-------------------------------------------|--------------------------|\n",
        "| Question          | \"Which department is Michael Johnson in?\" | N/A                      |\n",
        "| Generated Answer  | Evaluated using `test_answer`             | Valid response           |\n",
        "| Retrieved Context | Evaluated using `test_contexts`           | Reference info is useful |\n",
        "\n",
        "With the two methods above, you've already taken the first steps in setting up a prototype for an LLM testing project. However, the current implementation is still incomplete and has several limitations:\n",
        "\n",
        "- Hallucination Detection: LLMs can generate responses that sound confident and plausible but are not factually accurate. The `test_answer` method, as currently implemented, may not be able to effectively detect such hallucinations, leading to false positives in evaluation.\n",
        "- Relevance of retrieved context: The quality of a RAG-based system heavily depends on the relevance and accuracy of the retrieved context. A higher signal-to-noise ratio—meaning more relevant information and less irrelevant or misleading content—leads to better answers. However, the current testing approach is relatively simplistic and does not account for this critical factor.\n",
        "\n",
        "\n",
        "To address these issues and improve the robustness of your testing framework, it’s highly recommended to integrate mature evaluation tools such as [Ragas](https://docs.ragas.io/en/stable), a specialized framework designed for evaluating the performance of RAG-based chatbots."
      ]
    },
    {
      "cell_type": "markdown",
      "id": "d3a0e7e9",
      "metadata": {},
      "source": [
        "## 2. Using Ragas to evaluate application performance\n",
        "\n",
        "Ragas offers a comprehensive set of metrics to assess the quality of question answering across the entire application pipeline. These metrics help ensure that both the retrieval and generation phases of a RAG (Retrieval-Augmented Generation) system perform effectively.\n",
        "\n",
        "Here are the key evaluation metrics provided by Ragas:\n",
        "* Overall response quality\n",
        "    * Answer correctness: Measures how accurate the generated answers are in relation to the actual knowledge in the dataset.\n",
        "* Generation phase evaluation\n",
        "    * Answer Relevance: Evaluates whether the generated answer is relevant to the user’s question.\n",
        "    * Faithfulness: Checks if the answer is factually consistent with the retrieved reference materials, ensuring it doesn’t introduce incorrect or fabricated information.\n",
        "* Retrieval phase evaluation\n",
        "    * Context precision: Assesses whether the retrieved context contains a high proportion of relevant information related to the correct answer.\n",
        "    * Context recall: Measures how many of the relevant reference materials are successfully retrieved; a higher score indicates fewer relevant documents are missed.\n",
        "\n",
        "These metrics provide a structured way to evaluate and improve the performance of your Q&A bot, ensuring that it delivers accurate, relevant, and well-supported responses.\n",
        "\n",
        "<a href=\"https://img.alicdn.com/imgextra/i4/O1CN01b2lVQp21JZCJy6Nfe_!!6000000006964-0-tps-739-420.jpg\" target=\"_blank\">\n",
        "<img src=\"https://img.alicdn.com/imgextra/i4/O1CN01b2lVQp21JZCJy6Nfe_!!6000000006964-0-tps-739-420.jpg\" width=\"500\">\n",
        "</a>  \n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "86554952",
      "metadata": {},
      "source": [
        "### 2.1 Evaluating the response quality of RAG applications\n",
        "\n",
        "#### 2.1.1 Quick start  \n",
        "\n",
        "\n",
        "When evaluating the overall response quality of a RAG chatbot, Ragas' Answer Correctness is an excellent metric. To calculate it, you need to prepare the following two types of data:\n",
        "\n",
        "1. question (The question input to the RAG chatbot)\n",
        "2. ground_truth (The correct answer you already know)\n",
        "\n",
        "To illustrate the differences in evaluation metrics for different responses, we have prepared three sets of RAG chatbot responses to the question:\n",
        "\n",
        "**Question:**  \n",
        "\"Which department is Michael Johnson in?\"\n",
        "\n",
        "We will compare each model-generated **answer** against the known **ground truth**.\n",
        "\n",
        "Three sample answers are provided below, each representing a different level of correctness:\n",
        "\n",
        "- **Answer 1:** Based on the provided information, there is no mention of the department Michael Johnson belongs to. If you can provide more information about Michael Johnson, I may be able to help you find the answer.  \n",
        "  ➤ This is considered an **invalid answer**, as it fails to provide the correct response even when context may have been available.\n",
        "\n",
        "- **Answer 2:** Michael Johnson belongs to the Human Resources Department.  \n",
        "  ➤ This is a **hallucinated answer**, as it provides a confident but incorrect response.\n",
        "\n",
        "- **Answer 3:** Michael Johnson belongs to the Course Development Department.  \n",
        "  ➤ This is the **correct answer**, matching the ground truth exactly.\n",
        "\n",
        "We can then run the following code to calculate the score for response accuracy (i.e., Answer Correctness) using Ragas:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "39940304",
      "metadata": {},
      "outputs": [],
      "source": [
        "from tqdm.cli import tqdm as tqdm_cli\n",
        "import tqdm.auto\n",
        "tqdm.auto.tqdm = tqdm_cli\n",
        "\n",
        "from langchain_community.llms.tongyi import Tongyi\n",
        "from langchain_community.embeddings import DashScopeEmbeddings\n",
        "from datasets import Dataset\n",
        "from ragas import evaluate\n",
        "from ragas.metrics import answer_correctness\n",
        "\n",
        "data_samples = {\n",
        "    'question': [\n",
        "        'Which department is Michael Johnson in?',\n",
        "        'Which department is Michael Johnson in?',\n",
        "        'Which department is Michael Johnson in?'\n",
        "    ],\n",
        "    'answer': [\n",
        "        'According to the provided information, there is no mention of the department where Michael Johnson works. If you can provide more information about Michael Johnson, I may be able to help you find the answer.',\n",
        "        'Michael Johnson is in the HR department',\n",
        "        'Michael Johnson is in the Course Development Department'\n",
        "    ],\n",
        "    'ground_truth':[\n",
        "        'Michael Johnson is a member of the Course Development Department',\n",
        "        'Michael Johnson is a member of the Course Development Department',\n",
        "        'Michael Johnson is a member of the Course Development Department'\n",
        "    ]\n",
        "}\n",
        "\n",
        "dataset = Dataset.from_dict(data_samples)\n",
        "score = evaluate(\n",
        "    dataset = dataset,\n",
        "    metrics=[answer_correctness],\n",
        "    llm=Tongyi(model_name=\"qwen-plus-0919\"),\n",
        "    embeddings=DashScopeEmbeddings(model=\"text-embedding-v3\")\n",
        ")\n",
        "score.to_pandas()"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "f7aa2370",
      "metadata": {},
      "source": [
        "This code will generate a score that reflects how well each model-generated answer aligns with the known correct answer. By comparing the scores across different responses, you can identify which answers are accurate, which are incorrect, and which may be hallucinated.\n",
        "\n",
        "\n",
        "\n",
        "<div>\n",
        "<style scoped>\n",
        "    .dataframe tbody tr th:only-of-type {\n",
        "        vertical-align: middle;\n",
        "    }\n",
        "\n",
        "    .dataframe tbody tr th {\n",
        "        vertical-align: top;\n",
        "    }\n",
        "\n",
        "    .dataframe thead th {\n",
        "        text-align: right;\n",
        "    }\n",
        "</style>\n",
        "<table border=\"1\" class=\"dataframe\">\n",
        "  <thead>\n",
        "    <tr style=\"text-align: right;\">\n",
        "      <th></th>\n",
        "      <th>question</th>\n",
        "      <th>answer</th>\n",
        "      <th>ground_truth</th>\n",
        "      <th>answer_correctness</th>\n",
        "    </tr>\n",
        "  </thead>\n",
        "  <tbody>\n",
        "    <tr>\n",
        "      <th>0</th>\n",
        "      <td>Which department is Michael Johnson in?</td>\n",
        "      <td>According to the provided information, there is no mention of the department where Michael Johnson works. If you can provide more information about Michael Johnson, I may be able to help you find the answer.</td>\n",
        "      <td>Michael Johnson is a member of the Course Development Department</td>\n",
        "      <td>0.168191</td>\n",
        "    </tr>\n",
        "    <tr>\n",
        "      <th>1</th>\n",
        "      <td>Which department is Michael Johnson in?</td>\n",
        "      <td>Michael Johnson is in the HR department</td>\n",
        "      <td>Michael Johnson is a member of the Course Development Department</td>\n",
        "      <td>0.496046</td>\n",
        "    </tr>\n",
        "    <tr>\n",
        "      <th>2</th>\n",
        "      <td>Which department is Michael Johnson in?</td>\n",
        "      <td>Michael Johnson is in the Course Development Department</td>\n",
        "      <td>Michael Johnson is a member of the Course Development Department</td>\n",
        "      <td>0.998264</td>\n",
        "    </tr>\n",
        "  </tbody>\n",
        "</table>\n",
        "</div>  \n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "55141147",
      "metadata": {},
      "source": [
        "As you can see, Ragas' Answer Correctness metric accurately reflects the performance of the three responses, with the more factually accurate answers receiving higher scores."
      ]
    },
    {
      "cell_type": "markdown",
      "id": "6bd01530",
      "metadata": {},
      "source": [
        "#### 2.1.2 Understanding How Answer Correctness Is Calculated\n",
        "\n",
        "Intuitively, the scoring of Answer Correctness aligns with your expectations. The scoring process uses an LLM (in the code, `llm=Tongyi(model_name=\"qwen-plus-0919\")`) and an embedding model (in the code, `embeddings=DashScopeEmbeddings(model=\"text-embedding-v3\")`) to calculate the result based on **semantic similarity** and **factual accuracy** between the answer and the ground truth.\n",
        "\n",
        "##### Semantic similarity\n",
        "Semantic similarity is determined by generating text vectors for both the answer and the ground truth using the embedding model. These vectors are then compared with a distance measure; cosine similarity is the most commonly used in Ragas, with Euclidean distance and Manhattan distance as alternatives.\n",
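        "\n",
        "As a rough illustration (not Ragas' internals), cosine similarity between two embedding vectors can be computed as follows. The vectors here are tiny stand-ins; real embedding-model output has hundreds or thousands of dimensions:\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "def cosine_similarity(a, b):\n",
        "    # dot product of the vectors divided by the product of their lengths\n",
        "    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))\n",
        "\n",
        "answer_vec = np.array([0.2, 0.7, 0.1])    # stand-in for the answer embedding\n",
        "truth_vec = np.array([0.25, 0.65, 0.05])  # stand-in for the ground-truth embedding\n",
        "print(cosine_similarity(answer_vec, truth_vec))\n",
        "```\n",
        "\n",
        "Identical texts embed to identical vectors and score 1.0; unrelated texts score near 0.\n",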
        "\n",
        "##### Factual accuracy\n",
        "\n",
        "Factual accuracy measures the consistency of factual information between the answer and the ground truth. For example:\n",
        "\n",
        "* answer: Michael Johnson is a colleague in the Course Development Department responsible for big data direction.\n",
        "* ground_truth: Michael Johnson is a colleague in the Course Development Department responsible for technical writer tasks.\n",
        "\n",
        "While both statements agree on the department (Course Development), they differ in the job role (big data direction versus technical writer tasks). Such differences are not easily captured through simple LLM or embedding model calls.\n",
        "\n",
        "To address this, Ragas uses an LLM to generate lists of assertions for both the answer and the ground truth. It then compares these lists to identify matching and conflicting facts, allowing for a more nuanced evaluation of factual accuracy.\n",
        "\n",
        "The following diagram illustrates how Ragas evaluates factual accuracy:\n",
        "\n",
        "<a href=\"https://img.alicdn.com/imgextra/i1/O1CN01v8tUjW1nsvFM9NJJA_!!6000000005146-2-tps-2298-868.png\" target=\"_blank\">\n",
        "<img src=\"https://img.alicdn.com/imgextra/i1/O1CN01v8tUjW1nsvFM9NJJA_!!6000000005146-2-tps-2298-868.png\" width=\"1000\">\n",
        "</a>\n",
        "\n",
        "1. Generate respective lists of assertions for the answer and the ground truth using an LLM. For example:\n",
        "    - **Generate the assertion list for the answer**: Michael Johnson is a colleague in the Course Development Department responsible for big data direction. ---> [\"*Michael Johnson is in the Course Development Department*\", \"*Michael Johnson is responsible for big data direction*\"]\n",
        "    - **Generate the assertion list for ground_truth**: Michael Johnson is a colleague in the Course Development Department responsible for technical writer tasks. ---> [\"*Michael Johnson is in the Course Development Department*\", \"*Michael Johnson is responsible for technical writer tasks*\"]\n",
        "\n",
        "2. Traverse the assertion lists for the answer and ground_truth, initializing three lists: TP, FP, and FN.\n",
        "    - For the assertions generated from the **answer**:\n",
        "      - If the assertion matches one from the ground_truth, add it to the TP list. For example: \"*Michael Johnson is in the Course Development Department*\".\n",
        "      - If the assertion cannot be found in the ground_truth list, add it to the FP list. For example: \"*Michael Johnson is responsible for big data direction*\".\n",
        "    - For the assertions generated from the **ground_truth**:\n",
        "      - If the assertion cannot be found in the answer list, add it to the FN list. For example: \"*Michael Johnson is responsible for technical writer tasks*\".\n",
        "      > The judgment process in this step is entirely performed by an LLM.\n",
        "\n",
        "3. Count the number of elements in the TP, FP, and FN lists, and calculate the F1 score as follows:\n",
        "\n",
        "\n",
        "\n",
        "```python\n",
        "f1_score = tp / (tp + 0.5 * (fp + fn)) if tp > 0 else 0\n",
        "```\n",
        "\n",
        "Using the example above (where TP=1, FP=1, FN=1), the calculation is: f1_score = 1 / (1 + 0.5 * (1 + 1)) = 0.5\n",
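        "\n",
        "The matching step can be sketched in plain Python. Note that in Ragas an LLM performs the semantic matching between assertions; exact string comparison stands in for it in this toy version:\n",
        "\n",
        "```python\n",
        "answer_claims = [\n",
        "    'Michael Johnson is in the Course Development Department',\n",
        "    'Michael Johnson is responsible for big data direction',\n",
        "]\n",
        "ground_truth_claims = [\n",
        "    'Michael Johnson is in the Course Development Department',\n",
        "    'Michael Johnson is responsible for technical writer tasks',\n",
        "]\n",
        "\n",
        "# TP: claims in the answer that the ground truth confirms\n",
        "tp = [c for c in answer_claims if c in ground_truth_claims]\n",
        "# FP: claims in the answer with no support in the ground truth\n",
        "fp = [c for c in answer_claims if c not in ground_truth_claims]\n",
        "# FN: ground-truth claims the answer failed to state\n",
        "fn = [c for c in ground_truth_claims if c not in answer_claims]\n",
        "\n",
        "f1 = len(tp) / (len(tp) + 0.5 * (len(fp) + len(fn))) if tp else 0.0\n",
        "print(f1)  # 0.5\n",
        "```\n",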
        "\n",
        "##### Final Score Calculation\n",
        "After obtaining the scores for semantic similarity and factual accuracy, a weighted sum of the two can be calculated to obtain the final Answer Correctness score:\n",
        "\n",
        "\n",
        "```\n",
        "Answer Correctness score = 0.25 * Semantic Similarity score + 0.75 * Factual Accuracy score\n",
        "```"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "fe5fa7e6",
      "metadata": {},
      "source": [
        "### 2.2 Evaluating the recall effectiveness of retrieval\n",
        "\n",
        "#### 2.2.1 Quick start\n",
        "\n",
        "The context precision and context recall metrics in Ragas are used to evaluate the effectiveness of retrieval recall in RAG (Retrieval-Augmented Generation) applications.\n",
        "\n",
        "* Context Precision: Measures whether the relevant information from the retrieved context is ranked highly and makes up a large proportion (signal-to-noise ratio). It focuses on relevance.\n",
        "* Context Recall: Assesses how well the retrieved context covers the ground truth, ensuring that important factual information is not missed. It focuses on completeness.\n",
        "\n",
        "In practical applications, these two metrics are often used together to provide a more comprehensive evaluation of the retrieval process.\n",
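        "\n",
        "To build intuition before running Ragas, here is a toy sketch of both scores. The 0/1 relevance and support judgments that an LLM makes inside Ragas are hard-coded flags here:\n",
        "\n",
        "```python\n",
        "def context_precision(relevance_flags):\n",
        "    # Mean precision@k, counted at each position that holds a relevant chunk.\n",
        "    hits, total = 0, 0.0\n",
        "    for k, relevant in enumerate(relevance_flags, start=1):\n",
        "        if relevant:\n",
        "            hits += 1\n",
        "            total += hits / k\n",
        "    return total / hits if hits else 0.0\n",
        "\n",
        "def context_recall(claim_supported_flags):\n",
        "    # Fraction of ground-truth claims attributable to the retrieved context.\n",
        "    return sum(claim_supported_flags) / len(claim_supported_flags)\n",
        "\n",
        "print(context_precision([1, 0]))  # relevant chunk ranked first -> 1.0\n",
        "print(context_precision([0, 1]))  # relevant chunk ranked second -> 0.5\n",
        "print(context_recall([1]))        # the single ground-truth claim is supported -> 1.0\n",
        "```\n",
        "\n",
        "A relevant chunk buried behind irrelevant ones lowers precision even when recall is perfect, which is exactly the pattern you will see in the third test row of the output below.\n",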
        "\n",
        "To calculate these metrics, your dataset should include the following:\n",
        "\n",
        "* question: The question input to the RAG application.\n",
        "* contexts: The retrieved reference information.\n",
        "* ground_truth: The correct answer you already know.\n",
        "\n",
        "You can continue using the question \"Which department is Michael Johnson in?\" and prepare three sets of data for testing. Run the following code to calculate both context precision and context recall scores simultaneously:\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "8fb227cb",
      "metadata": {},
      "outputs": [],
      "source": [
        "from langchain_community.llms.tongyi import Tongyi\n",
        "from datasets import Dataset\n",
        "from ragas import evaluate\n",
        "from ragas.metrics import context_recall, context_precision\n",
        "\n",
        "data_samples = {\n",
        "    'question': [\n",
        "        'Which department is Michael Johnson in?',\n",
        "        'Which department is Michael Johnson in?',\n",
        "        'Which department is Michael Johnson in?'\n",
        "    ],\n",
        "    'answer': [\n",
        "        'Based on the provided information, there is no mention of the department where Michael Johnson works. If you can provide more information about Michael Johnson, I may be able to help you find the answer.',\n",
        "        'Michael Johnson is in the HR department',\n",
        "        'Michael Johnson is in the Course Development Department'\n",
        "    ],\n",
        "    'ground_truth': [\n",
        "        'Michael Johnson is a member of the Course Development Department',\n",
        "        'Michael Johnson is a member of the Course Development Department',\n",
        "        'Michael Johnson is a member of the Course Development Department'\n",
        "    ],\n",
        "    'contexts': [\n",
        "        ['Provides administrative management and coordination support, optimizing administrative workflows.', 'Performance Management Department Robert Carter EID-701 Course Development Department'],\n",
        "        ['Michael Chen, Director of the Course Development Department', 'Newton discovered the law of universal gravitation'],\n",
        "        ['Newton discovered the law of universal gravitation', 'Michael Johnson, engineer in the Course Development Department, has recently been responsible for technical writer tasks.'],\n",
        "    ],\n",
        "}\n",
        "\n",
        "dataset = Dataset.from_dict(data_samples)\n",
        "score = evaluate(\n",
        "    dataset=dataset,\n",
        "    metrics=[context_recall, context_precision],\n",
        "    llm=Tongyi(model_name=\"qwen-plus-0919\"))\n",
        "score.to_pandas()"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "6b678b82",
      "metadata": {},
      "source": [
        "<div>\n",
        "<style scoped>\n",
        "    .dataframe tbody tr th:only-of-type {\n",
        "        vertical-align: middle;\n",
        "    }\n",
        "\n",
        "    .dataframe tbody tr th {\n",
        "        vertical-align: top;\n",
        "    }\n",
        "\n",
        "    .dataframe thead th {\n",
        "        text-align: right;\n",
        "    }\n",
        "</style>\n",
        "<table border=\"1\" class=\"dataframe\">\n",
        "  <thead>\n",
        "    <tr style=\"text-align: right;\">\n",
        "      <th></th>\n",
        "      <th>question</th>\n",
        "      <th>answer</th>\n",
        "      <th>ground_truth</th>\n",
        "      <th>contexts</th>\n",
        "      <th>context_recall</th>\n",
        "      <th>context_precision</th>\n",
        "    </tr>\n",
        "  </thead>\n",
        "  <tbody>\n",
        "    <tr>\n",
        "      <th>0</th>\n",
        "      <td>Which department is Michael Johnson in?</td>\n",
        "      <td>Based on the provided information, there is no mention of the department where Michael Johnson works. If you can provide more information about Michael Johnson, I may be able to help you find the answer.</td>\n",
        "      <td>Michael Johnson is a member of the Course Development Department.</td>\n",
        "      <td>[Provides administrative management and coordination support, optimizing administrative workflows., Performance Management Department Robert Carter EID-701 Course Development Department]</td>\n",
        "      <td>0.0</td>\n",
        "      <td>0.0</td>\n",
        "    </tr>\n",
        "    <tr>\n",
        "      <th>1</th>\n",
        "      <td>Which department is Michael Johnson in?</td>\n",
        "      <td>Michael Johnson is in the HR department</td>\n",
        "      <td>Michael Johnson is a member of the Course Development Department.</td>\n",
        "      <td>[Michael Chen, Director of the Course Development Department, Newton discovered the law of universal gravitation]</td>\n",
        "      <td>0.0</td>\n",
        "      <td>0.0</td>\n",
        "    </tr>\n",
        "    <tr>\n",
        "      <th>2</th>\n",
        "      <td>Which department is Michael Johnson in?</td>\n",
        "      <td>Michael Johnson is in the Course Development Department</td>\n",
        "      <td>Michael Johnson is a member of the Course Development Department.</td>\n",
        "      <td>[Newton discovered the law of universal gravitation, Michael Johnson, engineer in the Course Development Department, has recently been responsible for curriculum development]</td>\n",
        "      <td>1.0</td>\n",
        "      <td>0.5</td>\n",
        "    </tr>\n",
        "  </tbody>\n",
        "</table>\n",
        "</div>"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "ab1e7a36",
      "metadata": {},
      "source": [
        "From the data above, we can see that:\n",
        "- The answer in the last row of data is accurate.\n",
        "- In the final row, the contexts successfully retrieve the key information, \"Michael Johnson, engineer in the Course Development Department,\" which fully supports the ground_truth. This is why its context_recall score is 1.0. In contrast, the first two rows fail to retrieve this information, resulting in a context_recall score of 0.0.\n",
        "- However, not every piece of information in the contexts is relevant to the question and answer. For example, \"Newton discovered gravity.\" This situation is reflected in the context precision score being 0.5."
      ]
    },
    {
      "cell_type": "markdown",
      "id": "e1938bbd",
      "metadata": {},
      "source": [
        "#### 2.2.2 Understanding the calculation process of context recall and context precision\n",
        "\n",
        "##### Context recall\n",
        "\n",
        "You have already learned from the previous text that context recall is a metric used to measure whether retrieved contexts are consistent with the ground truth.\n",
        "\n",
        "In Ragas, context recall evaluates what proportion of viewpoints in the ground truth can be supported by the retrieved contexts. The calculation process is as follows:\n",
        "\n",
        "1. An LLM breaks down the ground truth into a list of statements.\n",
        "   > For example, from the ground truth \"Michael Johnson is a member of the Course Development Department,\" the LLM might generate a list of statements such as [\"Michael Johnson belongs to the Course Development Department\"]..\n",
        "2. The LLM determines whether each statement can find supporting evidence in the retrieved referencecontexts.\n",
        "   > For instance, this statement can find supporting evidence in the third row of data's contexts: \"Michael Johnson, engineer in the Course Development Department, has recently been responsible for curriculum development.\"\n",
        "3. The context recall score is calculated as the proportion of statements in the ground truth list that are supported by the contexts.\n",
        "   > In this case, the score is 1 = 1/1, meaning all statements are supported.\n",
        "\n",
        "##### Context precision\n",
        "\n",
        "Context precision in Ragas not only measures what proportion of the retrieved contexts are relevant to the ground truth but also evaluates the ranking of those contexts. The calculation process is more complex:\n",
        "\n",
        "1. Each context is evaluated sequentially based on whether it is relevant to the question and ground truth.\n",
        "   * If relevant, it scores 1; otherwise, it scores 0.\n",
        "   * For example, in the third row of data:\n",
        "      * Context 1: \"Newton discovered gravity\" → irrelevant → score 0\n",
        "      * Context 2: \"Michael Johnson, engineer in the Course Development Department, has recently been responsible for curriculum development.\" → relevant → score 1\n",
        "\n",
        "\n",
        "2. For each context, the precision score is calculated by dividing the cumulative sum of scores of the current context and all preceding contexts by its position in the sequence .\n",
        "    * For the third row of data:\n",
        "        * Context 1 : 0/1 = 0\n",
        "        * Context 2 : 1/2 = 0.5\n",
        "\n",
        "\n",
        "3. The context precision score is obtained by summing up the precision scores of all contexts and dividing by the number of relevant contexts.\n",
        "    * For the third row of data: context_precision = (0 + 0.5) / 1 = 0.5\n",
        "\n",
        "> If you're still unclear about the calculation process, don’t worry—the key takeaway is that context precision evaluates how well the retrieval system ranks relevant contexts higher than irrelevant ones.\n",
        "If you’re interested, we encourage you to explore [Ragas's source code](https://github.com/explodinggradients/ragas/blob/cc31f65d4b7c7cd6bbf686b9073a0dfaacfbcbc5/src/ragas/metrics/_context_precision.py#L250) for a deeper understanding of how these metrics are implemented."
      ]
    },
    {
      "cell_type": "markdown",
      "id": "9d7cc24c",
      "metadata": {},
      "source": [
        "### 2.3 Other recommended metrics to explore\n",
        "\n",
        "Ragas provides many other evaluation metrics, which are not introduced one by one here. You can visit the Ragas documentation to learn more about their use cases and working principles.\n",
        "\n",
        "The list of metrics supported by Ragas can be found at: https://docs.ragas.io/en/stable/concepts/metrics/available_metrics/  \n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "44edd4a3",
      "metadata": {},
      "source": [
        "## 3. How to optimize based on ragas metrics\n",
        "The ultimate goal of evaluation is not just to obtain scores, but to identify areas for improvement based on these scores. You are now familiar with the concepts behind three key metrics: answer_correctness, context_recall, and context_precision.\n",
        "\n",
        "When you observe that certain metrics have low scores, you should take action to improve them. Below are some optimization strategies based on specific metrics:\n",
        "\n",
        "### 3.1 Context recall\n",
        "This metric evaluates the **retrieval** phase of your RAG application. A low context recall score indicates that the retrieved contexts do not cover enough relevant information from the ground truth.If this metric has a low score, you can try optimizing from the following aspects:\n",
        "\n",
        "- **Check the Knowledge Base**\n",
        "\n",
        "    <img src=\"https://wanx.alicdn.com/wanx/1937257750879544/text_to_image_lite_v2/6e4ca1055a0c467992b6b719b443a33e_0.png?x-oss-process=image/watermark,image_aW1nL3dhdGVybWFyazIwMjQxMTEyLnBuZz94LW9zcy1wcm9jZXNzPWltYWdlL3Jlc2l6ZSxtX2ZpeGVkLHdfMzAzLGhfNTI=,t_80,g_se,x_10,y_10/format,webp\" width=\"300\">\n",
        "\n",
        "    The knowledge base is the source of a RAG application. If the content within it is incomplete or insufficient, it can lead to inadequate reference information being retrieved, which directly impacts context recall—the ability of the system to retrieve relevant information that supports accurate and meaningful answers.\n",
        "\n",
        "    To ensure the knowledge base is comprehensive and effective, you can:\n",
        "\n",
        "    * Compare the knowledge base content with test samples to verify whether the information in the knowledge base is sufficient to support each query.\n",
        "    * Use LLMs to assist in this process: By prompting the model to evaluate whether the knowledge base contains the necessary information to answer a given question, you can identify gaps or inconsistencies.\n",
        "\n",
        "    If you find that some test samples lack relevant information in the knowledge base, it’s essential to supplement the knowledge base with additional data. This ensures that the RAG system has access to the most accurate and complete information, improving both retrieval quality and answer reliability.\n",
        "\n",
        "- **Replace the Embedding Model**\n",
        "\n",
        "    <img src=\"https://img.alicdn.com/imgextra/i4/O1CN01MMsV3b1U2GzviZv6y_!!6000000002459-2-tps-991-320.png\" width=\"750\">\n",
        "\n",
        "    If your knowledge base content is already comprehensive and well-structured,  it may be beneficial to replace the embedding model used for vectorization. A high-quality embedding model can better capture the deep semantic meaning of text, allowing for more accurate and meaningful similarity comparisons between queries and retrieved content.\n",
        "\n",
        "For example, consider the following:\n",
        "* Question: \"Who is responsible for curriculum development?\"\n",
        "* Relevant Knowledge Base Text: \"Michael Johnson is a member of the Course Development Department.\"\n",
        "\n",
        "Even though the two sentences don’t share many surface-level words, a strong embedding model can recognize the semantic relationship between them—understanding that \"responsible for curriculum development\" and \"member of the Course Development Department\" are closely related in meaning.\n",
        "\n",
        "- **query rewriting**\n",
        "\n",
        "    <img src=\"https://img.alicdn.com/imgextra/i1/O1CN01RpktVQ1FEtg8r4QCX_!!6000000000456-2-tps-1704-1322.png\" width=\"800\">\n",
        "\n",
        "    As a developer, it's unrealistic to expect users to phrase their questions in a specific or detailed way. Therefore, you might receive vague or ambiguous queries such as: \"Course Development Department,\" \"Leave Request,\" or \"Project Management.\" If these questions are directly input into a RAG application, they are unlikely to retrieve relevant and effective text segments. To address this, you can design a prompt template by organizing common employee questions and use LLMs to rewrite the queries, improving the accuracy of the retrieval process.\n",
        "\n",
        "\n",
        "### 3.2 Context Precision\n",
        "Similar to context recall, the context precision metric  evaluates the performance of a RAG application during the **retrieval** phase, but it focuses more on whether the most relevant text segments are ranked highly. A low context precision score suggests that while some relevant information may be retrieved, it is mixed with irrelevant or less important content, reducing the effectiveness of the retrieval.\n",
        "\n",
        "If this metric has a low score, you can apply the same optimization measures used for context recall, such as improving the knowledge base or refining the retrieval algorithm. Additionally, you can implement **reranking** during the retrieval phase to improve the ranking of related text segments and ensure that the most relevant information appears first.\n",
        "\n",
        "### 3.3 Answer Correctness\n",
        "The answer correctness metric evaluates the overall comprehensive performance of a RAG system. If this metric has a low score while the previous two metrics have high scores, it indicates that the RAG system performs well in the **retrieval** phase but encounters issues in the **generation** phase. You can try the methods learned in previous tutorials, such as optimizing prompts, adjusting hyperparameters (such as temperature) of LLMs generation, or replacing with a more powerful LLM, and even fine-tuning the LLM (which will be introduced in later tutorials) to enhance the accuracy of generated answers."
      ]
    },
    {
      "cell_type": "markdown",
      "id": "448a70bb-47c0-4933-9ed5-acd3072d18cc",
      "metadata": {},
      "source": [
        "## ✅ Summary\n",
        "\n",
        "Through the study of this section, you have learned how to establish automated testing for RAG chatbots.\n",
        "\n",
        "\n",
        "Automated testing is a crucial tool for engineering optimization. With quantified metrics, it  helps you move from intuitive improvements to data-driven evaluations, ensuring that your RAG chatbot performs better after each enhancement. This not only allows you to evaluate the question-answering quality more efficiently and identify areas for improvement but also enables you to quantify the results of your optimizations.\n",
        "\n",
        "While automated testing is powerful, it does not eliminate the need for human evaluation entirely. In practical applications, it is recommended to involve domain experts who can help build a test set that reflects real-world scenarios. These experts can ensure the test set covers a wide range of typical queries and edge cases, and it should be continuously updated to reflect evolving needs.\n",
        "\n",
        "Additionally, since LLMs are not always 100% accurate, it’s advisable to regularly sample and review the results of automated testing in real-world use. Avoid frequently changing the LLM or embedding models unless necessary, as this can introduce instability. For Ragas, you can improve its performance by adjusting the default prompts—for example, by adding domain-specific reference examples that align with your business context. (For more details, refer to the extended reading materials.)\n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "e7745583",
      "metadata": {},
      "source": [
        "## Further Reading\n",
        "\n",
        "### Changing the Prompt Template in Ragas\n",
        "Many of Ragas’ evaluation metrics rely on large language models to compute scores. Like LlamaIndex, Ragas provides default prompt templates, but it also supports custom prompts that you can modify to suit your specific use case.\n",
        "Example prompt templates are included in the `ragas_prompt` folder to help you customize the prompts used by Ragas for different evaluation metrics. You can refer to the following code to integrate these updated prompts into your workflow.\n",
        "\n",
        "> Note: Ragas includes example cases in its prompts to guide the model on how to make judgments or generate lists of statements. You can replace or modify these examples to better align with your specific domain or use case."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "id": "b8d949c1",
      "metadata": {},
      "outputs": [],
      "source": [
        "# Import prompt templates\n",
        "from ragas_prompt.ragas_test_prompt import ContextRecall, ContextPrecision, AnswerCorrectness\n",
        "\n",
        "# Customize prompt settings for each metric\n",
        "context_recall.context_recall_prompt.instruction = ContextRecall.context_recall_prompt[\"instruction\"]\n",
        "context_recall.context_recall_prompt.output_format_instruction = ContextRecall.context_recall_prompt[\"output_format_instruction\"]\n",
        "context_recall.context_recall_prompt.examples = ContextRecall.context_recall_prompt[\"examples\"]\n",
        "\n",
        "context_precision.context_precision_prompt.instruction = ContextPrecision.context_precision_prompt[\"instruction\"]\n",
        "context_precision.context_precision_prompt.output_format_instruction = ContextPrecision.context_precision_prompt[\"output_format_instruction\"]\n",
        "context_precision.context_precision_prompt.examples = ContextPrecision.context_precision_prompt[\"examples\"]\n",
        "\n",
        "answer_correctness.correctness_prompt.instruction = AnswerCorrectness.correctness_prompt[\"instruction\"]\n",
        "answer_correctness.correctness_prompt.output_format_instruction = AnswerCorrectness.correctness_prompt[\"output_format_instruction\"]\n",
        "answer_correctness.correctness_prompt.examples = AnswerCorrectness.correctness_prompt[\"examples\"]\n",
        "\n",
        "data_samples = {\n",
        "    'question': [\n",
        "        'Which department is Michael Johnson in?',\n",
        "        'Which department is Michael Johnson in?',\n",
        "        'Which department is Michael Johnson in?'\n",
        "    ],\n",
        "    'answer': [\n",
        "        'Based on the provided information, there is no mention of the department where Michael Johnson works. If you can provide more information about Michael Johnson, I may be able to help you find the answer.',\n",
        "        'Michael Johnson is in the HR department',\n",
        "        'Michael Johnson is in the Course Development Department'\n",
        "    ],\n",
        "    'ground_truth': [\n",
        "        'Michael Johnson is a member of the Course Development Department',\n",
        "        'Michael Johnson is a member of the Course Development Department',\n",
        "        'Michael Johnson is a member of the Course Development Department'\n",
        "    ],\n",
        "    'contexts': [\n",
        "        ['Provides administrative management and coordination support, optimizing administrative workflows.', 'Performance Management Department Han Shan Li Fei I902 041 Human Resources'],\n",
        "        ['Li Kai, Director of the Course Development Department', 'Newton discovered the law of universal gravitation'],\n",
        "        ['Newton discovered the law of universal gravitation', 'Michael Johnson, engineer in the Course Development Department, has recently been responsible for curriculum development.'],\n",
        "    ],\n",
        "}\n",
        "\n",
        "dataset = Dataset.from_dict(data_samples)\n",
        "\n",
        "score = evaluate(\n",
        "    dataset=dataset,\n",
        "    metrics=[answer_correctness, context_recall, context_precision],\n",
        "    llm=Tongyi(model_name=\"qwen-plus-0919\"),\n",
        "    embeddings=DashScopeEmbeddings(model=\"text-embedding-v3\"))\n",
        "\n",
        "score.to_pandas()"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "7c1dc0f4",
      "metadata": {},
      "source": [
        "<div>\n",
        "<style scoped>\n",
        "    .dataframe tbody tr th:only-of-type {\n",
        "        vertical-align: middle;\n",
        "    }\n",
        "\n",
        "    .dataframe tbody tr th {\n",
        "        vertical-align: top;\n",
        "    }\n",
        "\n",
        "    .dataframe thead th {\n",
        "        text-align: right;\n",
        "    }\n",
        "</style>\n",
        "<table border=\"1\" class=\"dataframe\">\n",
        "  <thead>\n",
        "    <tr style=\"text-align: right;\">\n",
        "      <th></th>\n",
        "      <th>question</th>\n",
        "      <th>answer</th>\n",
        "      <th>ground_truth</th>\n",
        "      <th>contexts</th>\n",
        "      <th>answer_correctness</th>\n",
        "      <th>context_recall</th>\n",
        "      <th>context_precision</th>\n",
        "    </tr>\n",
        "  </thead>\n",
        "  <tbody>\n",
        "    <tr>\n",
        "      <th>0</th>\n",
        "      <td>Which department is Michael Johnson in?</td>\n",
        "      <td>Based on the provided information, there is no mention of the department where Michael Johnson works. If you can provide more information about Michael Johnson, I may be able to help you find the answer.</td>\n",
        "      <td>Michael Johnson is a member of the Course Development Department.</td>\n",
        "      <td>[Providing administrative management and coordination support, optimizing administrative workflows. , Performance Management Department Han Shan Li Fei I902 041 ...</td>\n",
        "      <td>0.166901</td>\n",
        "      <td>0.0</td>\n",
        "      <td>0.0</td>\n",
        "    </tr>\n",
        "    <tr>\n",
        "      <th>1</th>\n",
        "      <td>Which department is Michael Johnson in?</td>\n",
        "      <td>Michael Johnson is in the Human Resources Department.</td>\n",
        "      <td>Michael Johnson is a member of the Course Development Department.</td>\n",
        "      <td>[Li Kai, Director of the Course Development Department , Newton discovered the law of universal gravitation]</td>\n",
        "      <td>0.196046</td>\n",
        "      <td>0.0</td>\n",
        "      <td>0.0</td>\n",
        "    </tr>\n",
        "    <tr>\n",
        "      <th>2</th>\n",
        "      <td>Which department is Michael Johnson in?</td>\n",
        "      <td>Michael Johnson is in the Course Development Department.</td>\n",
        "      <td>Michael Johnson is a member of the Course Development Department.</td>\n",
        "      <td>[Newton discovered the law of universal gravitation, Michael Johnson, an engineer in the Course Development Department, has recently been responsible for curriculum development]</td>\n",
        "      <td>0.998264</td>\n",
        "      <td>1.0</td>\n",
        "      <td>0.5</td>\n",
        "    </tr>\n",
        "  </tbody>\n",
        "</table>\n",
        "</div>  \n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "2b35c832",
      "metadata": {},
      "source": [
        "### More evaluation metrics\n",
        "In addition to RAG, LLMs and natural language processing (NLP) are applied in a wide range of tasks, such as agents, natural language to SQL conversion, machine translation, text summarization,  andquestion answering. Ragas provides a variety of metrics that can be used to evaluate the performance of these tasks, ensuring that the outputs of LLMs are accurate, relevant, and aligned with the expected results.\n",
        "\n",
        "| Evaluation metric            | Use case | Metric meaning                                                                 |\n",
        "|---------------------|----------|--------------------------------------------------------------------------|\n",
        "| [ToolCallAccuracy](https://docs.ragas.io/en/latest/concepts/metrics/available_metrics/agents/#example)    | Agent    | Evaluates the LLM's performance in identifying and invoking tools required to complete specific tasks. This metric is obtained by comparing reference tool calls with tool calls made by the LLM, with a value range of 0-1. |\n",
        "| [DataCompyScore](https://docs.ragas.io/en/latest/concepts/metrics/available_metrics/sql/)      | natural language to SQL   | Evaluates the difference between the results obtained from database queries using SQL statements generated by the LLM and the correct results. The value ranges from 0 to 1.                     |\n",
        "| [LLMSQLEquivalence](https://docs.ragas.io/en/latest/concepts/metrics/available_metrics/sql/#non-execution-based-metrics)   | natural language to SQL   | LLMSQLEquivalence: Compares the LLM-generated SQL with the ground-truth SQL for semantic equivalence without executing them against a database. The value ranges from 0 to 1.   |\n",
        "| [BleuScore](https://docs.ragas.io/en/latest/concepts/metrics/available_metrics/traditional/#bleu-score)           | General     | Measures the n-gram overlap between the generated response and a reference response. Initially designed for evaluating machine translation systems, this metric does not require the use of an LLM during evaluation, and its value ranges from 0 to 1. In the [2.7 tutorial](2_7_Improve_Model_Accuracy_and_Efficiency_via_Fine_Tuning.ipynb), you will learn how to fine-tune LLMs, and BleuScore can be used to measure the benefits brought by fine-tuning.  \n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "id": "d0ed74ea",
      "metadata": {},
      "source": [
        "## 🔥 Quiz\n",
        "\n",
        "### 🔍 Single-choice question\n",
        "<details>\n",
        "<summary style=\"cursor: pointer; padding: 12px;  border: 1px solid #dee2e6; border-radius: 6px;\">\n",
        "<b>What does the Context Precision metric measure❓(Select 1.)</b>\n",
        "\n",
        "- A. Overall response quality\n",
        "- B. Whether retrieved text segments relevant to the question are ranked higher\n",
        "- C. Whether the generated answer is relevant to the retrieved text segments\n",
        "- D. Whether the generated answer is relevant to the question\n",
        "\n",
        "**[Click to view the answer]**\n",
        "</summary>\n",
        "\n",
        "<div style=\"margin-top: 10px; padding: 15px; border: 1px solid #dee2e6; border-radius: 0 0 6px 6px;\">\n",
        "\n",
        "✅ **Reference Answer: B**  \n",
        "📝 **Explanation**:  \n",
        "- Context Precision specifically evaluates the ranking quality of the retrieval system—i.e., whether relevant documents appear at the top of the retrieved list. It does not measure the quality of the final answer (answer_correctness) or the overall relevance of the answer to the question (answer_relevance).\n",
        "\n",
        "</div>\n",
        "</details>  \n",
        "\n"
      ]
    }
  ],
  "metadata": {
    "kernelspec": {
      "display_name": "llm_learn",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.12.10"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 5
}
