{
 "nbformat": 4,
 "nbformat_minor": 0,
 "metadata": {
  "colab": {
   "provenance": []
  },
  "kernelspec": {
   "name": "python3",
   "display_name": "Python 3"
  },
  "language_info": {
   "name": "python"
  }
 },
 "cells": [
  {
   "cell_type": "markdown",
   "source": [
    "# Deep dive into RAG Evaluation\n",
    "\n",
     "In this notebook, we'll show you how to evaluate the output of a RAG system. At a high level, a RAG flow consists of two phases: Retrieval, which fetches the documents relevant to a query, and Generation, which produces a response grounded in those documents."
   ],
   "metadata": {
    "id": "eXdiZhdNJU6N"
   }
  },
  {
   "cell_type": "markdown",
   "source": [
     "We will focus on the evaluation of **Retrieval** and **Generation** (also called **Response**), and present a set of metrics for each phase. We will deep dive into each metric to give you a full understanding of how we evaluate models and why we do it this way, and provide code so you can reproduce the evaluation on your own data.\n",
     "\n",
     "To demonstrate the metrics, we will use data from Docugami's [KG-RAG](https://github.com/docugami/KG-RAG-datasets/tree/main/sec-10-q/data/v1) dataset, a RAG dataset built on financial 10-Q filings. We will focus only on evaluation, without performing the actual Retrieval and Generation steps."
   ],
   "metadata": {
    "id": "Vi4P1tqgxJxH"
   }
  },
  {
   "cell_type": "markdown",
   "source": [
     "# Table of Contents\n",
    "\n",
    "1. [Getting started](#getting-started)\n",
    "2. [Retrieval Evaluation](#retrieval-evaluation)\n",
    "3. [Generation Evaluation](#generation-evaluation)\n",
    "4. [Final Comments](#final-comments)"
   ],
   "metadata": {
    "id": "5bnEyZUl_KmS"
   }
  },
  {
   "cell_type": "markdown",
   "source": [
    "<a name=\"getting-started\"></a>\n",
    "## Getting Started\n",
    "\n",
    "Let's start by setting the environment and downloading the dataset."
   ],
   "metadata": {
    "id": "rVLK7Bhux8Bs"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "%%capture\n",
    "!pip install llama-index cohere openai\n",
    "!pip install mistralai"
   ],
   "metadata": {
    "id": "VmGk_rOco3m5"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [
    "# required imports\n",
    "import cohere\n",
    "from getpass import getpass\n",
    "import os\n",
    "import re\n",
    "import json\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "from llama_index.core import SimpleDirectoryReader\n",
    "from llama_index.core.llama_dataset import download_llama_dataset, LabelledRagDataset\n",
    "from openai import Client\n",
    "from mistralai.client import MistralClient"
   ],
   "metadata": {
    "id": "SIpPuVxfo_vz"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "For Response evaluation, we will use an LLM as a judge.\n",
     "Any LLM can be used for this goal, but because evaluation is a very challenging task, we recommend using powerful LLMs, possibly an ensemble of models. [Previous work](https://arxiv.org/pdf/2303.16634.pdf) has shown that models tend to assign higher scores to their own output. Since we generated the answers in this notebook with `command-r`, we will not use it for evaluation. We provide two alternatives, `gpt-4` and `mistral`, and set `gpt-4` as the default because, as mentioned above, evaluation is challenging, and `gpt-4` is powerful enough to perform the task effectively."
   ],
   "metadata": {
    "id": "J0ZO3ki_yIJE"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# Get keys\n",
    "openai_api_key = getpass(\"Enter your OpenAI API Key: \")\n",
    "# uncomment if you want to use mistral\n",
     "#mistral_api_key = getpass(\"Enter your Mistral API Key: \")\n",
    "\n",
    "# Define the model you want to use - you can replace gpt-4 with any other gpt version\n",
    "model = \"gpt-4\"\n",
    "# uncomment if you want to use mistral\n",
    "#model = \"mistral-large-latest\"\n"
   ],
   "metadata": {
    "id": "-SdZGWLTtLok",
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "outputId": "46d48939-5547-4e53-de78-9ae16440107d"
   },
   "execution_count": null,
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Enter your OpenAI API Key: ··········\n"
     ]
    }
   ]
  },
  {
   "cell_type": "code",
   "source": [
     "if model.startswith(\"gpt\"):\n",
    "  client = Client(api_key=openai_api_key)\n",
    "else:\n",
    "  client = MistralClient(api_key=mistral_api_key)"
   ],
   "metadata": {
    "id": "Zk02dD_9mc7B"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [
     "# let's define a function to get the model's response for a given input\n",
     "def get_response(model, client, prompt):\n",
     "  messages = [{\"role\": \"user\", \"content\": prompt}]\n",
     "  if model.startswith(\"mistral\"):\n",
     "    # the Mistral client exposes chat() directly instead of chat.completions.create()\n",
     "    response = client.chat(model=model, messages=messages, temperature=0)\n",
     "  else:\n",
     "    response = client.chat.completions.create(model=model, messages=messages, temperature=0)\n",
     "  return response.choices[0].message.content"
   ],
   "metadata": {
    "id": "Kx4cxDGw6LI_"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [
     "# load the DocugamiKgRagSec10Q dataset\n",
     "if os.path.exists(\"./data/source_files\") and os.path.exists(\"./data/rag_dataset.json\"):\n",
     "    rag_dataset = LabelledRagDataset.from_json(\"./data/rag_dataset.json\")\n",
     "    documents = SimpleDirectoryReader(input_dir=\"./data/source_files\").load_data(show_progress=True)\n",
     "else:\n",
     "    rag_dataset, documents = download_llama_dataset(\"DocugamiKgRagSec10Q\", \"./data\")"
   ],
   "metadata": {
    "id": "voJk7dPvuSdN",
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "outputId": "1dd2c527-7d4a-4278-e14b-2c9479ad5caf"
   },
   "execution_count": null,
   "outputs": [
    {
     "output_type": "stream",
     "name": "stderr",
     "text": [
      "Loading files: 100%|██████████| 20/20 [01:44<00:00,  5.21s/file]\n"
     ]
    }
   ]
  },
  {
   "cell_type": "markdown",
   "source": [
    "<a name=\"retrieval-evaluation\"></a>\n",
    "## Retrieval Evaluation\n",
    "\n",
    "In the Retrieval phase, we evaluate the set of **retrieved documents** against the **golden documents** set.\n",
    "\n",
    "We use three standard metrics to evaluate retrieval:\n",
    "\n",
    "*   **Precision**: the proportion of returned documents that are relevant, according to the gold annotation\n",
    "*   **Recall**: the proportion of relevant documents in the gold data found in the retrieved documents\n",
    "*   **Mean Average Precision** (**MAP**): measures the capability of the retriever to return relevant documents at the top of the list\n",
    "\n",
    "We implement these three metrics in the class below:"
   ],
   "metadata": {
    "id": "lB6vO4JvMEkT"
   }
  },
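   {
    "cell_type": "markdown",
    "source": [
     "Before looking at the implementation, here is a quick worked example of MAP, computed by hand on a toy relevance list (made up for illustration only): with retrieved docs whose relevance flags are [1, 0, 1], the precision at the ranks holding a relevant doc is 1/1 (rank 1) and 2/3 (rank 3), so MAP = (1/1 + 2/3) / 2 ≈ 0.83."
    ],
    "metadata": {}
   },
   {
    "cell_type": "code",
    "source": [
     "# toy MAP computation: average the precision at each rank that holds a relevant doc\n",
     "relevant = [1, 0, 1]\n",
     "precisions_at_relevant_ranks = [sum(relevant[: i + 1]) / (i + 1) for i, v in enumerate(relevant) if v == 1]\n",
     "print(round(sum(precisions_at_relevant_ranks) / len(precisions_at_relevant_ranks), 2))  # 0.83\n"
    ],
    "metadata": {},
    "execution_count": null,
    "outputs": []
   },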
  {
   "cell_type": "code",
   "source": [
     "class RetrievalEvaluator:\n",
     "\n",
     "    def compute_precision(self, retrieved_documents, golden_documents):\n",
     "      # compute the proportion of retrieved documents found in the golden docs\n",
     "      return len(set(retrieved_documents).intersection(golden_documents)) / len(retrieved_documents)\n",
     "\n",
     "    def compute_recall(self, retrieved_documents, golden_documents):\n",
     "      # compute the proportion of golden documents found in the retrieved docs\n",
     "      return len(set(retrieved_documents).intersection(golden_documents)) / len(golden_documents)\n",
     "\n",
     "    def compute_mean_average_precision(self, retrieved_documents, golden_documents):\n",
     "      # check which of the retrieved docs appear in the gold set, keeping the order\n",
     "      correct_retrieved_documents = [1 if x in golden_documents else 0 for x in retrieved_documents]\n",
     "      # if no retrieved doc is correct, MAP is 0 (this also avoids taking the mean of an empty list)\n",
     "      if sum(correct_retrieved_documents) == 0:\n",
     "        return 0.0\n",
     "      # average the precision at each rank that holds a correct doc\n",
     "      return np.mean([sum(correct_retrieved_documents[: i + 1]) / (i + 1) for i, v in enumerate(correct_retrieved_documents) if v == 1])\n",
     "\n",
     "    def run_evals(self, retrieved_documents, golden_documents):\n",
     "      precision = round(self.compute_precision(retrieved_documents, golden_documents), 2)\n",
     "      recall = round(self.compute_recall(retrieved_documents, golden_documents), 2)\n",
     "      mean_ap = round(self.compute_mean_average_precision(retrieved_documents, golden_documents), 2)\n",
     "      results = {'precision': [precision],\n",
     "                 'recall': [recall],\n",
     "                 'map': [mean_ap]}\n",
     "      results = pd.DataFrame(results)\n",
     "      return results\n"
   ],
   "metadata": {
    "id": "CooNq035eU6f"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "Let's now see how to use the class above to compute the results on a single datapoint."
   ],
   "metadata": {
    "id": "MWW2VM-Tj8iQ"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# select the index of a single datapoint - the first one in the dataset\n",
    "idx = 0\n",
    "\n",
    "# select the query\n",
    "query = rag_dataset[idx].query\n",
    "\n",
    "# and the golden docs\n",
    "golden_docs = rag_dataset[idx].reference_answer.split('SOURCE(S): ')[1].split(', ')\n",
    "\n",
    "# let's assume we have the following set of retrieved docs\n",
    "retrieved_docs = ['2022 Q3 AAPL.pdf', '2023 Q1 MSFT.pdf', '2023 Q1 AAPL.pdf']\n",
    "\n",
    "print(f'Query: {query}')\n",
    "print(f'Golden docs: {golden_docs}')\n",
    "print(f'Retrieved docs: {retrieved_docs}')"
   ],
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "QyMGDqaqe7fg",
    "outputId": "e95242c7-dcdd-48bb-9da2-52e25a2a2d41"
   },
   "execution_count": null,
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "Query: How has Apple's total net sales changed over time?\n",
      "Golden docs: ['2022 Q3 AAPL.pdf', '2023 Q1 AAPL.pdf', '2023 Q2 AAPL.pdf', '2023 Q3 AAPL.pdf']\n",
      "Retrieved docs: ['2022 Q3 AAPL.pdf', '2023 Q1 MSFT.pdf', '2023 Q1 AAPL.pdf']\n"
     ]
    }
   ]
  },
  {
   "cell_type": "code",
   "source": [
    "# we can now instantiate the evaluator\n",
    "evaluate_retrieval = RetrievalEvaluator()\n",
    "\n",
    "# and run the evaluation\n",
     "evaluate_retrieval.run_evals(retrieved_docs, golden_docs)\n"
   ],
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 81
    },
    "id": "Ron8lm3Z1t1M",
    "outputId": "a37eee30-c318-4a2f-8dec-bfd8f4d8c8f2"
   },
   "execution_count": null,
   "outputs": [
    {
     "output_type": "execute_result",
     "data": {
      "text/plain": [
       "   precision  recall   map\n",
       "0       0.67     0.5  0.83"
      ],
      "text/html": [
       "\n",
       "  <div id=\"df-4d8adbf1-3648-49eb-8108-258daed0afda\" class=\"colab-df-container\">\n",
       "    <div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>precision</th>\n",
       "      <th>recall</th>\n",
       "      <th>map</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>0.67</td>\n",
       "      <td>0.5</td>\n",
       "      <td>0.83</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>\n",
       "    <div class=\"colab-df-buttons\">\n",
       "\n",
       "  <div class=\"colab-df-container\">\n",
       "    <button class=\"colab-df-convert\" onclick=\"convertToInteractive('df-4d8adbf1-3648-49eb-8108-258daed0afda')\"\n",
       "            title=\"Convert this dataframe to an interactive table.\"\n",
       "            style=\"display:none;\">\n",
       "\n",
       "  <svg xmlns=\"http://www.w3.org/2000/svg\" height=\"24px\" viewBox=\"0 -960 960 960\">\n",
       "    <path d=\"M120-120v-720h720v720H120Zm60-500h600v-160H180v160Zm220 220h160v-160H400v160Zm0 220h160v-160H400v160ZM180-400h160v-160H180v160Zm440 0h160v-160H620v160ZM180-180h160v-160H180v160Zm440 0h160v-160H620v160Z\"/>\n",
       "  </svg>\n",
       "    </button>\n",
       "\n",
       "  <style>\n",
       "    .colab-df-container {\n",
       "      display:flex;\n",
       "      gap: 12px;\n",
       "    }\n",
       "\n",
       "    .colab-df-convert {\n",
       "      background-color: #E8F0FE;\n",
       "      border: none;\n",
       "      border-radius: 50%;\n",
       "      cursor: pointer;\n",
       "      display: none;\n",
       "      fill: #1967D2;\n",
       "      height: 32px;\n",
       "      padding: 0 0 0 0;\n",
       "      width: 32px;\n",
       "    }\n",
       "\n",
       "    .colab-df-convert:hover {\n",
       "      background-color: #E2EBFA;\n",
       "      box-shadow: 0px 1px 2px rgba(60, 64, 67, 0.3), 0px 1px 3px 1px rgba(60, 64, 67, 0.15);\n",
       "      fill: #174EA6;\n",
       "    }\n",
       "\n",
       "    .colab-df-buttons div {\n",
       "      margin-bottom: 4px;\n",
       "    }\n",
       "\n",
       "    [theme=dark] .colab-df-convert {\n",
       "      background-color: #3B4455;\n",
       "      fill: #D2E3FC;\n",
       "    }\n",
       "\n",
       "    [theme=dark] .colab-df-convert:hover {\n",
       "      background-color: #434B5C;\n",
       "      box-shadow: 0px 1px 3px 1px rgba(0, 0, 0, 0.15);\n",
       "      filter: drop-shadow(0px 1px 2px rgba(0, 0, 0, 0.3));\n",
       "      fill: #FFFFFF;\n",
       "    }\n",
       "  </style>\n",
       "\n",
       "    <script>\n",
       "      const buttonEl =\n",
       "        document.querySelector('#df-4d8adbf1-3648-49eb-8108-258daed0afda button.colab-df-convert');\n",
       "      buttonEl.style.display =\n",
       "        google.colab.kernel.accessAllowed ? 'block' : 'none';\n",
       "\n",
       "      async function convertToInteractive(key) {\n",
       "        const element = document.querySelector('#df-4d8adbf1-3648-49eb-8108-258daed0afda');\n",
       "        const dataTable =\n",
       "          await google.colab.kernel.invokeFunction('convertToInteractive',\n",
       "                                                    [key], {});\n",
       "        if (!dataTable) return;\n",
       "\n",
       "        const docLinkHtml = 'Like what you see? Visit the ' +\n",
       "          '<a target=\"_blank\" href=https://colab.research.google.com/notebooks/data_table.ipynb>data table notebook</a>'\n",
       "          + ' to learn more about interactive tables.';\n",
       "        element.innerHTML = '';\n",
       "        dataTable['output_type'] = 'display_data';\n",
       "        await google.colab.output.renderOutput(dataTable, element);\n",
       "        const docLink = document.createElement('div');\n",
       "        docLink.innerHTML = docLinkHtml;\n",
       "        element.appendChild(docLink);\n",
       "      }\n",
       "    </script>\n",
       "  </div>\n",
       "\n",
       "    </div>\n",
       "  </div>\n"
      ],
      "application/vnd.google.colaboratory.intrinsic+json": {
       "type": "dataframe",
       "summary": "{\n  \"name\": \"evaluate_retrieval\",\n  \"rows\": 1,\n  \"fields\": [\n    {\n      \"column\": \"precision\",\n      \"properties\": {\n        \"dtype\": \"number\",\n        \"std\": null,\n        \"min\": 0.67,\n        \"max\": 0.67,\n        \"num_unique_values\": 1,\n        \"samples\": [\n          0.67\n        ],\n        \"semantic_type\": \"\",\n        \"description\": \"\"\n      }\n    },\n    {\n      \"column\": \"recall\",\n      \"properties\": {\n        \"dtype\": \"number\",\n        \"std\": null,\n        \"min\": 0.5,\n        \"max\": 0.5,\n        \"num_unique_values\": 1,\n        \"samples\": [\n          0.5\n        ],\n        \"semantic_type\": \"\",\n        \"description\": \"\"\n      }\n    },\n    {\n      \"column\": \"map\",\n      \"properties\": {\n        \"dtype\": \"number\",\n        \"std\": null,\n        \"min\": 0.83,\n        \"max\": 0.83,\n        \"num_unique_values\": 1,\n        \"samples\": [\n          0.83\n        ],\n        \"semantic_type\": \"\",\n        \"description\": \"\"\n      }\n    }\n  ]\n}"
      }
     },
     "metadata": {},
     "execution_count": 9
    }
   ]
  },
  {
   "cell_type": "markdown",
   "source": [
    "What are the figures above telling us?\n",
    "\n",
    "*   Precision (0.67) tells us that 2 out of 3 of the retrieved docs are correct\n",
    "*   Recall (0.5) means that 2 out of 4 relevant docs have been retrieved\n",
    "*   MAP (0.83) is computed as the average of 1/1 (the highest ranked doc is correct) and 2/3 (the 2nd ranked doc is wrong, the 3rd is correct).\n",
    "\n",
     "While the example here focuses on a single datapoint, you can easily apply the same metrics across your whole dataset and get the overall performance of your Retrieval phase."
   ],
   "metadata": {
    "id": "sL-sIiSl2vw9"
   }
  },
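   {
    "cell_type": "markdown",
    "source": [
     "As a minimal sketch of dataset-level aggregation (using hypothetical per-query numbers for illustration): collect one row of metrics per datapoint, then average the columns with pandas."
    ],
    "metadata": {}
   },
   {
    "cell_type": "code",
    "source": [
     "import pandas as pd\n",
     "\n",
     "# hypothetical per-query results; in practice, collect one row per datapoint\n",
     "# from RetrievalEvaluator.run_evals\n",
     "per_query_results = [\n",
     "    {'precision': 0.67, 'recall': 0.5, 'map': 0.83},\n",
     "    {'precision': 1.0, 'recall': 0.75, 'map': 1.0},\n",
     "]\n",
     "pd.DataFrame(per_query_results).mean()\n"
    ],
    "metadata": {},
    "execution_count": null,
    "outputs": []
   },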
  {
   "cell_type": "markdown",
   "source": [
    "<a name=\"generation-evaluation\"></a>\n",
    "## Generation Evaluation\n",
    "\n",
     "Evaluating grounded generation (the second step of RAG) is notoriously difficult, because generations are usually complex and rich in information, and simply labelling an answer as \"good\" or \"bad\" is not enough.\n",
    "To overcome this issue, we first decompose complex answers into a set of basic *claims*, where a claim is any sentence or part of a sentence in the answer that expresses a verifiable fact. Subsequently, we check the validity of each claim independently, defining the overall quality of the answer based on the correctness of the claims it includes.\n",
    "\n",
    "We use claims to compute three metrics:\n",
    "\n",
    "*   **Faithfulness**, which measures how many of the claims in the generated response are supported by the retrieved documents. This is a fundamental metric, as it tells us how *grounded* in the documents the response is, and, contextually, it allows us to spot hallucinations\n",
    "\n",
    "*   **Correctness**, which checks which claims in the response also occur in the gold answer\n",
    "\n",
     "*   **Coverage**, which assesses how many of the claims in the gold answer are included in the generated response.\n",
    "\n",
     "Note that Faithfulness and Correctness share the exact same approach, the difference being that the former checks the claims against the supporting docs, while the latter checks them against the golden answer.\n",
     "Also, while Correctness measures the precision of the claims in the response, Coverage can be seen as complementary, as it measures recall."
   ],
   "metadata": {
    "id": "mwEyE9IaklPL"
   }
  },
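   {
    "cell_type": "markdown",
    "source": [
     "To make the precision/recall analogy concrete, here is a toy illustration (plain set operations on made-up claim strings, not the LLM-based assessment used in this notebook): treating claims as set elements, Correctness is the share of response claims found in the gold claims, and Coverage is the share of gold claims found in the response."
    ],
    "metadata": {}
   },
   {
    "cell_type": "code",
    "source": [
     "# toy example with made-up claims, for illustration only\n",
     "response_claims = {'net sales declined', 'Q3 2023 net sales were $81,797 million'}\n",
     "gold_claims = {'net sales declined', 'Q3 2023 net sales were $81,797 million', 'services net sales grew'}\n",
     "\n",
     "overlap = response_claims & gold_claims\n",
     "correctness = len(overlap) / len(response_claims)  # precision of the response claims\n",
     "coverage = len(overlap) / len(gold_claims)         # recall of the gold claims\n",
     "print(round(correctness, 2), round(coverage, 2))  # 1.0 0.67\n"
    ],
    "metadata": {},
    "execution_count": null,
    "outputs": []
   },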
  {
   "cell_type": "markdown",
   "source": [
    "### Claim Extraction\n",
    "\n",
    "Let's now see how we implement the evaluation described above using LLMs. Let's start with **claim extraction**."
   ],
   "metadata": {
    "id": "ySgduJrTTjvK"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# first, let's define a function which extracts the claims from a response\n",
    "def extract_claims(query, response, model, client):\n",
    "\n",
    "  # define the instructions on how to extract the claims\n",
    "  preamble = \"You are shown a prompt and a completion. You have to identify the main claims stated in the completion. A claim is any sentence or part of a sentence that expresses a verifiable fact. Please return a bullet list, in which every line includes one of the claims you identified. Do not add any further explanation to the bullet points.\"\n",
    "\n",
    "  # build the prompt\n",
    "  prompt = f\"{preamble}\\n\\nPROMPT: {query}\\n\\nCOMPLETION: {response}\"\n",
    "\n",
    "  # get the claims\n",
    "  claims = get_response(model, client, prompt)\n",
    "\n",
    "  return claims\n"
   ],
   "metadata": {
    "id": "X2fQa0Vbuy3W"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [
    "# now, let's consider this answer, which we previously generated with command-r\n",
    "response = \"Apple's total net sales experienced a decline over the last year. The three-month period ended July 1, 2023, saw a total net sale of $81,797 million, which was a 1% decrease from the same period in 2022. The nine-month period ended July 1, 2023, fared slightly better, with a 3% decrease in net sales compared to the first nine months of 2022.\\nThis downward trend continued into the three and six-month periods ending April 1, 2023. Apple's total net sales decreased by 3% and 4% respectively, compared to the same periods in 2022.\"\n",
    "\n",
    "# let's extract the claims\n",
    "claims = extract_claims(query, response, model, client)\n",
    "\n",
    "# and see what the model returns\n",
    "print(f\"List of claims extracted from the model's response:\\n\\n{claims}\")"
   ],
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "y_0NXZEfu0AJ",
    "outputId": "3fa32ab2-6898-449f-f062-187df7aea41e"
   },
   "execution_count": null,
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "List of claims extracted from the model's response:\n",
      "\n",
      "- Apple's total net sales experienced a decline over the last year.\n",
      "- The three-month period ended July 1, 2023, saw a total net sale of $81,797 million.\n",
      "- This was a 1% decrease from the same period in 2022.\n",
      "- The nine-month period ended July 1, 2023, had a 3% decrease in net sales compared to the first nine months of 2022.\n",
      "- The downward trend continued into the three and six-month periods ending April 1, 2023.\n",
      "- Apple's total net sales decreased by 3% and 4% respectively, compared to the same periods in 2022.\n"
     ]
    }
   ]
  },
  {
   "cell_type": "markdown",
   "source": [
    "### Claim Assessment\n",
    "\n",
     "Nice! Now that we have the list of claims, we can go ahead and **assess the validity** of each claim."
   ],
   "metadata": {
    "id": "50QRlCVe7dZf"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# Let's create a function that checks each claim against a reference text,\n",
    "# which here we will call \"context\". As you will see, we will use different contexts,\n",
    "# depending on the metric we want to compute.\n",
    "\n",
    "def assess_claims(query, claims, context, model, client):\n",
    "\n",
    "  # define the instructions on how to perform the assessment.\n",
    "  # the model has to append to each row a binary SUPPORTED tag\n",
     "  preamble = \"You are shown a prompt, a context and a list of claims. You have to check which of the claims in the list are supported by the context. Please return the list of claims exactly as it is, just append to each row “SUPPORTED=1” if the claim is supported by the context, or “SUPPORTED=0” if the claim is not supported by the context. Do not add any further explanation to the bullet points.\"\n",
    "\n",
    "  # turn list into string\n",
    "  context = '\\n'.join(context)\n",
    "\n",
    "  # build the prompt\n",
    "  prompt = f\"{preamble}\\n\\nPROMPT: {query}\\n\\nCONTEXT:\\n{context}\\n\\nCLAIMS:\\n{claims}\"\n",
    "\n",
    "  # get the response\n",
    "  assessment = get_response(model, client, prompt)\n",
    "\n",
    "  return assessment"
   ],
   "metadata": {
    "id": "jrg9-dlMTRpB"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "markdown",
   "source": [
    "### Faithfulness"
   ],
   "metadata": {
    "id": "1OmD9pKxArk7"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# Let's start with Faithfulness: in this case, we want to assess the claims\n",
    "# in the response against the retrieved documents (i.e., context = retrieved documents)\n",
    "\n",
    "# for the sake of clarity, we report the actual text of the retrieved documents\n",
     "retrieved_documents = ['Products and Services Performance\\nThe following table shows net sales by category for the three- and six-month periods ended April 1, 2023 and March 26, 2022 (dollars in millions):\\nThree Months Ended Six Months Ended\\nApril 1,\\n2023March 26,\\n2022 ChangeApril 1,\\n2023March 26,\\n2022 Change\\nNet sales by category:\\niPhone $ 51,334 $ 50,570 2 %$ 117,109 $ 122,198 (4)%\\nMac 7,168 10,435 (31)% 14,903 21,287 (30)%\\niPad 6,670 7,646 (13)% 16,066 14,894 8 %\\nWearables, Home and Accessories 8,757 8,806 (1)% 22,239 23,507 (5)%\\nServices 20,907 19,821 5 % 41,673 39,337 6 %\\nTotal net sales $ 94,836 $ 97,278 (3)%$ 211,990 $ 221,223 (4)%\\niPhone\\niPhone net sales were relatively flat during the second quarter of 2023 compared to the secon d quarter of 2022. Year-over-year iPhone net sales decreased\\nduring the first six months of 2023 due primarily to lower net sales from the Company’ s new iPhone models launched in the fourth quarter of 2022.\\nMac\\nMac net sales decreased during the second quarter and first six months of 2023 compared to the same periods in 2022 due primarily to lower net sales of\\nMacBook Pro.\\niPad\\niPad net sales decreased during the second quarter of 2023 compared to the second quarter of 2022 due primarily to lower net sales of iPad Pro  and iPad Air.\\nYear-over-year iPad net sales increased during the first six months of 2023 due primarily to higher net sales of iPad, partially offset by lower net sales of iPad\\nmini .\\nWearables, Home and Accessories\\nWearables, Home and Accessories net sales were relatively flat during the second quarter of 2023 compared to the second quarter of 2022. Year-over-year\\nWearables, Home and Accessories net sales decreased during the first six months of 2023 due primarily to lower net sales of AirPods .\\nServices\\nServices net sales increased during the second quarter and first six months of 2023 compared to the same periods in 2022 due primarily to higher net sales from\\ncloud services, music and advertising.® ®\\n®\\n®\\nApple Inc. | Q2 2023 Form 10-Q | 16', 'Products and Services Performance\\nThe following table shows net sales by category for the three- and nine-month periods ended July 1, 2023 and June 25, 2022 (dollars in millions):\\nThree Months Ended Nine Months Ended\\nJuly 1,\\n2023June 25,\\n2022 ChangeJuly 1,\\n2023June 25,\\n2022 Change\\nNet sales by category:\\niPhone $ 39,669 $ 40,665 (2)%$ 156,778 $ 162,863 (4)%\\nMac 6,840 7,382 (7)% 21,743 28,669 (24)%\\niPad 5,791 7,224 (20)% 21,857 22,118 (1)%\\nWearables, Home and Accessories 8,284 8,084 2 % 30,523 31,591 (3)%\\nServices 21,213 19,604 8 % 62,886 58,941 7 %\\nTotal net sales $ 81,797 $ 82,959 (1)%$ 293,787 $ 304,182 (3)%\\niPhone\\niPhone net sales decreased during the third quarter and first nine months of 2023 compared to the same periods in 2022 due primarily to lower net sales from\\ncertain iPhone models, partially of fset by higher net sales of iPhone 14 Pro models.\\nMac\\nMac net sales decreased during the third quarter and first nine months of 2023 compared to the same periods in 2022 due primarily to lower net sales of laptops.\\niPad\\niPad net sales decreased during the third quarter of 2023 compared to the third quarter of 2022 due primarily to lower net sales across most iPad models. Year-\\nover-year iPad net sales were relatively flat during the first nine months of 2023.\\nWearables, Home and Accessories\\nWearables, Home and Accessories net sales increased during the third quarter of 2023 compare d to the third quarter of 2022 due primarily to higher net sales of\\nWearables, which includes AirPods , Apple Watch  and Beats  products, partially offset by lower net sales of accessories. Year-over-year Wearables, Home\\nand Accessories net sales decreased during the first nine months of 2023 due primarily to lower net sales of W earables and accessories.\\nServices\\nServices net sales increased during the third quarter of 2023 compared to the third quarter of 2022 due primarily to higher net sales from advertising, cloud\\nservices and the App Store . Year-over-year Services net sales increased during the first nine months of 2023 due primarily to higher net sales from cloud\\nservices, advertising and music.® ® ®\\n®\\nApple Inc. | Q3 2023 Form 10-Q | 16']\n",
    "\n",
    "# get the Faithfulness assessment for each claim\n",
    "assessed_claims_faithfulness = assess_claims(query=query,\n",
    "                                             claims=claims,\n",
    "                                             context=retrieved_documents,\n",
    "                                             model=model,\n",
    "                                             client=client)\n",
    "\n",
    "print(f\"Assessment of the claims extracted from the model's response:\\n\\n{assessed_claims_faithfulness}\")"
   ],
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "5VmLE4yCwEMe",
    "outputId": "10874986-1f91-45a3-c081-dc2aae3fd618"
   },
   "execution_count": null,
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "Assessment of the claims extracted from the model's response:\n",
      "\n",
      "- Apple's total net sales experienced a decline over the last year. SUPPORTED=1\n",
      "- The three-month period ended July 1, 2023, saw a total net sale of $81,797 million. SUPPORTED=1\n",
      "- This was a 1% decrease from the same period in 2022. SUPPORTED=1\n",
      "- The nine-month period ended July 1, 2023, had a 3% decrease in net sales compared to the first nine months of 2022. SUPPORTED=1\n",
      "- The downward trend continued into the three and six-month periods ending April 1, 2023. SUPPORTED=1\n",
      "- Apple's total net sales decreased by 3% and 4% respectively, compared to the same periods in 2022. SUPPORTED=1\n"
     ]
    }
   ]
  },
  {
   "cell_type": "markdown",
   "source": [
    "Great, we now have an assessment for each of the claims: in the last step, we just need to use these assessments to define the final score."
   ],
   "metadata": {
    "id": "vBbLuSkO-iN7"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# given the list of claims and their label, compute the final score\n",
    "# as the proportion of correct claims over the full list of claims\n",
     "def get_final_score(claims_list):\n",
     "  supported = len(re.findall(\"SUPPORTED=1\", claims_list))\n",
     "  non_supported = len(re.findall(\"SUPPORTED=0\", claims_list))\n",
     "  # guard against a malformed assessment with no SUPPORTED tags at all\n",
     "  if supported + non_supported == 0:\n",
     "    return 0.0\n",
     "  return round(supported / (supported + non_supported), 2)"
   ],
   "metadata": {
    "id": "tFk-IxK_wDVT"
   },
   "execution_count": null,
   "outputs": []
  },
  {
   "cell_type": "code",
   "source": [
    "score_faithfulness = get_final_score(assessed_claims_faithfulness)\n",
    "print(f'Faithfulness: {score_faithfulness}')"
   ],
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "Z680DvFOW2Ea",
    "outputId": "1666057d-7981-43d3-f4bb-c10fdc861cae"
   },
   "execution_count": null,
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "Faithfulness: 1.0\n"
     ]
    }
   ]
  },
  {
   "cell_type": "markdown",
   "source": [
     "The final Faithfulness score is 1, which means that the model's response is fully grounded in the retrieved documents: that's very good news :)\n",
    "\n",
    "Before moving on, let's modify the model's response by adding a piece of information which is **not** grounded in any document, and re-compute Faithfulness."
   ],
   "metadata": {
    "id": "3_Ks8ELQrdQO"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# let's mess up the century, changing 2022 to 1922\n",
    "modified_response = response.replace('2022', '1922')\n",
    "\n",
    "# extract the claims from the modified response\n",
    "modified_claims = extract_claims(query, modified_response, model, client)\n",
    "\n",
     "# and assess the modified claims\n",
    "assessed_modified_claims = assess_claims(query=query,\n",
    "                                         claims=modified_claims,\n",
    "                                         context=retrieved_documents,\n",
    "                                         model=model,\n",
    "                                         client=client)\n",
    "\n",
    "print(f\"Assessment of the modified claims:\\n\\n{assessed_modified_claims}\\n\")\n",
    "\n",
    "score_faithfulness_modified_claims = get_final_score(assessed_modified_claims)\n",
    "print(f'Faithfulness: {score_faithfulness_modified_claims}')"
   ],
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "YEZUA8XMAxLE",
    "outputId": "9b6e59b1-a4f9-46bf-cdf8-a4f520462d12"
   },
   "execution_count": null,
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "Assessment of the modified claims:\n",
      "\n",
      "- Apple's total net sales experienced a decline over the last year. SUPPORTED=1\n",
      "- The three-month period ended July 1, 2023, saw a total net sale of $81,797 million. SUPPORTED=1\n",
      "- This was a 1% decrease from the same period in 1922. SUPPORTED=0\n",
      "- The nine-month period ended July 1, 2023, had a 3% decrease in net sales compared to the first nine months of 1922. SUPPORTED=0\n",
      "- The downward trend continued into the three and six-month periods ending April 1, 2023. SUPPORTED=1\n",
      "- Apple's total net sales decreased by 3% and 4% respectively, compared to the same periods in 1922. SUPPORTED=0\n",
      "\n",
      "Faithfulness: 0.5\n"
     ]
    }
   ]
  },
  {
   "cell_type": "markdown",
   "source": [
     "As you can see, by assessing claims one by one, we are able to spot **hallucinations**, that is, cases in which the information provided by the model is not grounded in any of the retrieved documents."
   ],
   "metadata": {
    "id": "TccmzE9TCQsN"
   }
  },
  {
   "cell_type": "markdown",
   "source": [
    "### Correctness\n",
    "\n",
     "As mentioned, Faithfulness and Correctness share the same logic; the only difference is that we now check the claims against the gold answer. We can therefore repeat the process above, simply substituting the `context`."
   ],
   "metadata": {
    "id": "bPK-MND1riQD"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# let's get the gold answer from the dataset\n",
    "golden_answer = rag_dataset[idx].reference_answer\n",
    "\n",
    "# and check the claims in the response against the gold.\n",
    "# note that assess_claims takes exactly the same args as with Faithfulness\n",
    "# except for the context, that now is the golden_answer\n",
    "assessed_claims_correctness = assess_claims(query=query,\n",
    "                                            claims=claims,\n",
    "                                            context=golden_answer, # note the different context\n",
    "                                            model=model,\n",
    "                                            client=client)\n",
    "\n",
    "\n",
    "print(f\"Assess the claims extracted from the model's response against the golden answer:\\n\\n{assessed_claims_correctness}\")"
   ],
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "qVeQDVDYEr45",
    "outputId": "a14a5b09-b8f7-4563-ade5-fe2cd1384cfb"
   },
   "execution_count": null,
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "Assess the claims extracted from the model's response against the golden answer:\n",
      "\n",
      "- Apple's total net sales experienced a decline over the last year. SUPPORTED=1\n",
      "- The three-month period ended July 1, 2023, saw a total net sale of $81,797 million. SUPPORTED=1\n",
      "- This was a 1% decrease from the same period in 2022. SUPPORTED=0\n",
      "- The nine-month period ended July 1, 2023, had a 3% decrease in net sales compared to the first nine months of 2022. SUPPORTED=0\n",
      "- The downward trend continued into the three and six-month periods ending April 1, 2023. SUPPORTED=1\n",
      "- Apple's total net sales decreased by 3% and 4% respectively, compared to the same periods in 2022. SUPPORTED=0\n"
     ]
    }
   ]
  },
  {
   "cell_type": "markdown",
   "source": [
     "As mentioned above, automatic evaluation is a hard task, and even with powerful models, claim assessment can present problems: for example, the third claim is labelled as 0, even though it might be inferred from the information in the gold answer."
   ],
   "metadata": {
    "id": "V4pCRJ0qPQWT"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# we can now compute the final Correctness score\n",
    "score_correctness = get_final_score(assessed_claims_correctness)\n",
    "print(f'Correctness: {score_correctness}')"
   ],
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "nObA30PzxQha",
    "outputId": "2e354428-ea91-4c7e-94b6-31433018058c"
   },
   "execution_count": null,
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "Correctness: 0.5\n"
     ]
    }
   ]
  },
  {
   "cell_type": "markdown",
   "source": [
     "For Correctness, we found that only half of the claims in the generated response are found in the gold answer. Note that this is not necessarily an issue: reference answers are often non-exhaustive, especially in datasets that include open-ended questions, like the one we are considering in this notebook, and *both* the generated and the golden answer can include relevant information.\n"
   ],
   "metadata": {
    "id": "4Nd0bQ3rR1c3"
   }
  },
  {
   "cell_type": "markdown",
   "source": [
    "### Coverage\n",
    "\n",
    "We finally move to Coverage. Remember that, in this case, we want to check how many of the claims *in the gold answer* are included in the generated response. To do it, we first need to extract the claims from the gold answer."
   ],
   "metadata": {
    "id": "YhsEJDNXR8OY"
   }
  },
  {
   "cell_type": "code",
   "source": [
    "# let's extract the golden claims\n",
    "gold_claims = extract_claims(query, golden_answer, model, client)\n",
    "\n",
    "print(f\"List of claims extracted from the gold answer:\\n\\n{gold_claims}\")"
   ],
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "wJeHJhiAxQjy",
    "outputId": "8da8edf0-5605-4548-aab4-002d0202ee88"
   },
   "execution_count": null,
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "List of claims extracted from the gold answer:\n",
      "\n",
      "- For the quarterly period ended June 25, 2022, the total net sales were $82,959 million.\n",
      "- For the quarterly period ended December 31, 2022, the total net sales were $117,154 million.\n",
      "- For the quarterly period ended April 1, 2023, the total net sales were $94,836 million.\n",
      "- For the quarterly period ended July 1, 2023, the total net sales were $81,797 million.\n",
      "- There was an increase in total net sales from the quarter ended June 25, 2022, to the quarter ended December 31, 2022.\n",
      "- There was a decrease in total net sales in the quarters ended April 1, 2023, and July 1, 2023.\n"
     ]
    }
   ]
  },
  {
   "cell_type": "markdown",
   "source": [
     "Then, we check which of these claims are present in the response generated by the model."
   ],
   "metadata": {
    "id": "6VXxxrL1SuFv"
   }
  },
  {
   "cell_type": "code",
   "source": [
     "# note that, in this case, the context is the model's response\n",
    "assessed_claims_coverage = assess_claims(query=query,\n",
    "                                         claims=gold_claims,\n",
    "                                         context=response,\n",
    "                                         model=model,\n",
    "                                         client=client)\n",
    "\n",
    "\n",
    "print(f\"Assess which of the gold claims is in the model's response:\\n\\n{assessed_claims_coverage}\")"
   ],
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "Ro-sYsp5SnOo",
    "outputId": "d334a72e-a87a-4900-b9e6-9a6c2140dffb"
   },
   "execution_count": null,
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "Assess which of the gold claims is in the model's response:\n",
      "\n",
      "- For the quarterly period ended June 25, 2022, the total net sales were $82,959 million. SUPPORTED=0\n",
      "- For the quarterly period ended December 31, 2022, the total net sales were $117,154 million. SUPPORTED=0\n",
      "- For the quarterly period ended April 1, 2023, the total net sales were $94,836 million. SUPPORTED=0\n",
      "- For the quarterly period ended July 1, 2023, the total net sales were $81,797 million. SUPPORTED=1\n",
      "- There was an increase in total net sales from the quarter ended June 25, 2022, to the quarter ended December 31, 2022. SUPPORTED=0\n",
      "- There was a decrease in total net sales in the quarters ended April 1, 2023, and July 1, 2023. SUPPORTED=1\n"
     ]
    }
   ]
  },
  {
   "cell_type": "code",
   "source": [
    "# we compute the final Coverage score\n",
    "score_coverage = get_final_score(assessed_claims_coverage)\n",
    "print(f'Coverage: {score_coverage}')"
   ],
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "FUvVUN8qSnG2",
    "outputId": "e854432a-f13b-4596-fb7c-f9b51ee870ec"
   },
   "execution_count": null,
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "Coverage: 0.33\n"
     ]
    }
   ]
  },
  {
   "cell_type": "markdown",
   "source": [
     "The Coverage score tells us that one third of the information in the gold answer is present in the generated answer. This is useful information that, similarly to what we said above about Correctness, can raise further questions, such as: is it acceptable to have diverging information in the generated answer? Is any crucial piece of information missing from the generated answer?\n",
     "\n",
     "The answer to these questions is use case-specific, and has to be given by the end user: the claim-based approach implemented here supports the user by providing a clear and detailed view of what the model is assessing and how."
   ],
   "metadata": {
    "id": "pduSBr2-U_R7"
   }
  },
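  {
   "cell_type": "markdown",
   "source": [
    "As a wrap-up of the Generation metrics, it can be handy to collect the three scores into a single per-question summary. The sketch below assumes the `score_faithfulness`, `score_correctness` and `score_coverage` variables computed in the previous cells; with multiple questions, you would build one such summary per question and average the scores across the dataset."
   ],
   "metadata": {}
  },
  {
   "cell_type": "code",
   "source": [
    "# collect the Generation metrics for this question into one summary dict\n",
    "generation_scores = {\n",
    "    'faithfulness': score_faithfulness,\n",
    "    'correctness': score_correctness,\n",
    "    'coverage': score_coverage,\n",
    "}\n",
    "\n",
    "for metric, score in generation_scores.items():\n",
    "    print(f'{metric}: {score}')"
   ],
   "metadata": {},
   "execution_count": null,
   "outputs": []
  },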
  {
   "cell_type": "markdown",
   "source": [
    "<a name=\"final-comments\"></a>\n",
    "## Final Comments\n",
    "\n",
     "RAG evaluation is a hard task, especially the evaluation of the generated response. In this notebook we presented a clear, robust and replicable approach to evaluation, on which you can build your own evaluation pipeline."
   ],
   "metadata": {
    "id": "0-1FsN2dHzMS"
   }
  }
 ]
}
