{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "9ad59b9d",
   "metadata": {},
   "source": [
    "<a href=\"https://colab.research.google.com/github/jerryjliu/llama_index/blob/main/docs/examples/evaluation/answer_and_context_relevancy.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c693f512-0033-4bca-9824-261701e4a4d4",
   "metadata": {},
   "source": [
    "# Answer Relevancy and Context Relevancy Evaluations"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c69792a3-24a1-424b-afc0-c28b13f2cb42",
   "metadata": {},
   "source": [
    "In this notebook, we demonstrate how to use the `AnswerRelevancyEvaluator` and `ContextRelevancyEvaluator` classes to measure the relevancy of a generated answer and of retrieved contexts, respectively, to a given user query. Both evaluators return a `score` between 0 and 1, along with a generated `feedback` explaining the score; note that a higher score means higher relevancy. In particular, we prompt the judge LLM to take a step-by-step approach when assigning a relevancy score, asking it to answer the following two questions about a generated answer to a query for answer relevancy (for context relevancy, these are slightly adjusted):\n",
    "\n",
    "1. Does the provided response match the subject matter of the user's query?\n",
    "2. Does the provided response attempt to address the focus or perspective on the subject matter taken on by the user's query?\n",
    "\n",
    "Each question is worth 1 point, so a perfect evaluation yields a raw score of 2/2, which is then normalized to produce the final `score` between 0 and 1."
   ]
  },
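  {
   "cell_type": "markdown",
   "id": "f0e1d2c3",
   "metadata": {},
   "source": [
    "As a minimal sketch of the scoring scheme described above (using a hypothetical `normalize_score` helper, not part of llama-index), this is how raw judge points map onto the final 0-to-1 `score`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0a1b2c3d",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical helper illustrating the scoring scheme described above.\n",
    "def normalize_score(points_awarded: float, points_total: float) -> float:\n",
    "    \"\"\"Divide the points awarded by the judge by the maximum possible.\"\"\"\n",
    "    return points_awarded / points_total\n",
    "\n",
    "\n",
    "# A perfect answer-relevancy evaluation (2/2) maps to a score of 1.0;\n",
    "# passing only one of the two questions maps to 0.5.\n",
    "print(normalize_score(2, 2), normalize_score(1, 2))"
   ]
  },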
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "00f5d108-6cad-4b5f-848d-6f4edeef2c61",
   "metadata": {},
   "outputs": [],
   "source": [
    "import nest_asyncio\n",
    "from tqdm.asyncio import tqdm_asyncio\n",
    "\n",
    "nest_asyncio.apply()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4a7436d4-8cf2-444f-a776-4544f666ef3c",
   "metadata": {},
   "outputs": [],
   "source": [
    "from IPython.display import display\n",
    "\n",
    "\n",
    "def displayify_df(df):\n",
    "    \"\"\"For pretty-displaying a DataFrame in a notebook.\"\"\"\n",
    "    display_df = df.style.set_properties(\n",
    "        **{\n",
    "            \"inline-size\": \"300px\",\n",
    "            \"overflow-wrap\": \"break-word\",\n",
    "        }\n",
    "    )\n",
    "    display(display_df)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "55957d59-44ab-45d3-9716-fb1aeb69633e",
   "metadata": {},
   "source": [
    "### Download the dataset (`LabelledRagDataset`)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a228c9ab-96d5-4e4c-8b8c-febe8375b649",
   "metadata": {},
   "source": [
    "For this demonstration, we will use a llama-dataset provided through our [llama-hub](https://llamahub.ai)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "df1997b3-d3fc-4a95-b139-09c1fb256021",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.llama_dataset import download_llama_dataset\n",
    "from llama_index.llama_pack import download_llama_pack\n",
    "from llama_index import VectorStoreIndex\n",
    "\n",
    "# download the benchmark llama-dataset and its source documents\n",
    "rag_dataset, documents = download_llama_dataset(\n",
    "    \"EvaluatingLlmSurveyPaperDataset\", \"./data\"\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "39e019d4-90ae-4e8c-91b6-7313493b4983",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>query</th>\n",
       "      <th>reference_contexts</th>\n",
       "      <th>reference_answer</th>\n",
       "      <th>reference_answer_by</th>\n",
       "      <th>query_by</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>What are the potential risks associated with l...</td>\n",
       "      <td>[Evaluating Large Language Models: A\\nComprehe...</td>\n",
       "      <td>According to the context information, the pote...</td>\n",
       "      <td>ai (gpt-3.5-turbo)</td>\n",
       "      <td>ai (gpt-3.5-turbo)</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>How does the survey categorize the evaluation ...</td>\n",
       "      <td>[Evaluating Large Language Models: A\\nComprehe...</td>\n",
       "      <td>The survey categorizes the evaluation of LLMs ...</td>\n",
       "      <td>ai (gpt-3.5-turbo)</td>\n",
       "      <td>ai (gpt-3.5-turbo)</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>What are the different types of reasoning disc...</td>\n",
       "      <td>[Contents\\n1 Introduction 4\\n2 Taxonomy and Ro...</td>\n",
       "      <td>The different types of reasoning discussed in ...</td>\n",
       "      <td>ai (gpt-3.5-turbo)</td>\n",
       "      <td>ai (gpt-3.5-turbo)</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>How is toxicity evaluated in language models a...</td>\n",
       "      <td>[Contents\\n1 Introduction 4\\n2 Taxonomy and Ro...</td>\n",
       "      <td>Toxicity is evaluated in language models accor...</td>\n",
       "      <td>ai (gpt-3.5-turbo)</td>\n",
       "      <td>ai (gpt-3.5-turbo)</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>In the context of specialized LLMs evaluation,...</td>\n",
       "      <td>[5.1.3 Alignment Robustness . . . . . . . . . ...</td>\n",
       "      <td>In the context of specialized LLMs evaluation,...</td>\n",
       "      <td>ai (gpt-3.5-turbo)</td>\n",
       "      <td>ai (gpt-3.5-turbo)</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                                               query  \\\n",
       "0  What are the potential risks associated with l...   \n",
       "1  How does the survey categorize the evaluation ...   \n",
       "2  What are the different types of reasoning disc...   \n",
       "3  How is toxicity evaluated in language models a...   \n",
       "4  In the context of specialized LLMs evaluation,...   \n",
       "\n",
       "                                  reference_contexts  \\\n",
       "0  [Evaluating Large Language Models: A\\nComprehe...   \n",
       "1  [Evaluating Large Language Models: A\\nComprehe...   \n",
       "2  [Contents\\n1 Introduction 4\\n2 Taxonomy and Ro...   \n",
       "3  [Contents\\n1 Introduction 4\\n2 Taxonomy and Ro...   \n",
       "4  [5.1.3 Alignment Robustness . . . . . . . . . ...   \n",
       "\n",
       "                                    reference_answer reference_answer_by  \\\n",
       "0  According to the context information, the pote...  ai (gpt-3.5-turbo)   \n",
       "1  The survey categorizes the evaluation of LLMs ...  ai (gpt-3.5-turbo)   \n",
       "2  The different types of reasoning discussed in ...  ai (gpt-3.5-turbo)   \n",
       "3  Toxicity is evaluated in language models accor...  ai (gpt-3.5-turbo)   \n",
       "4  In the context of specialized LLMs evaluation,...  ai (gpt-3.5-turbo)   \n",
       "\n",
       "             query_by  \n",
       "0  ai (gpt-3.5-turbo)  \n",
       "1  ai (gpt-3.5-turbo)  \n",
       "2  ai (gpt-3.5-turbo)  \n",
       "3  ai (gpt-3.5-turbo)  \n",
       "4  ai (gpt-3.5-turbo)  "
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "rag_dataset.to_pandas()[:5]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3a3ae70a-d1a2-4f86-909a-e9c6fb063930",
   "metadata": {},
   "source": [
    "Next, we build a RAG system over the same source documents that were used to create the `rag_dataset`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a4e6d215-1644-4da9-8f15-5c1c77ab35cf",
   "metadata": {},
   "outputs": [],
   "source": [
    "index = VectorStoreIndex.from_documents(documents=documents)\n",
    "query_engine = index.as_query_engine()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "88027888-102f-40df-a581-3047b35d4f11",
   "metadata": {},
   "source": [
    "With our RAG (i.e., `query_engine`) defined, we can make predictions with it (i.e., generate responses to each query) over the `rag_dataset`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d609c502-e2a7-4dfe-ba1d-5ee54fe59691",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Batch processing of predictions: 100%|████████████████████| 100/100 [00:08<00:00, 12.12it/s]\n",
      "Batch processing of predictions: 100%|████████████████████| 100/100 [00:08<00:00, 12.37it/s]\n",
      "Batch processing of predictions: 100%|██████████████████████| 76/76 [00:06<00:00, 10.93it/s]\n"
     ]
    }
   ],
   "source": [
    "prediction_dataset = await rag_dataset.amake_predictions_with(\n",
    "    predictor=query_engine, batch_size=100, show_progress=True\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "db779e26-dad1-4380-a131-b906339a936e",
   "metadata": {},
   "source": [
    "### Evaluating Answer and Context Relevancy Separately"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "96344bde-30c7-416f-a650-58e79b5abaff",
   "metadata": {},
   "source": [
    "We first need to define our evaluators (i.e., `AnswerRelevancyEvaluator` and `ContextRelevancyEvaluator`):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b8fb6c86-b88a-4344-b390-a4decfa71061",
   "metadata": {},
   "outputs": [],
   "source": [
    "# instantiate the judge LLMs: gpt-3.5-turbo for answer relevancy,\n",
    "# gpt-4 for context relevancy\n",
    "from llama_index.llms import OpenAI\n",
    "from llama_index import ServiceContext\n",
    "from llama_index.evaluation import (\n",
    "    AnswerRelevancyEvaluator,\n",
    "    ContextRelevancyEvaluator,\n",
    ")\n",
    "\n",
    "judges = {}\n",
    "\n",
    "judges[\"answer_relevancy\"] = AnswerRelevancyEvaluator(\n",
    "    service_context=ServiceContext.from_defaults(\n",
    "        llm=OpenAI(temperature=0, model=\"gpt-3.5-turbo\"),\n",
    "    )\n",
    ")\n",
    "\n",
    "judges[\"context_relevancy\"] = ContextRelevancyEvaluator(\n",
    "    service_context=ServiceContext.from_defaults(\n",
    "        llm=OpenAI(temperature=0, model=\"gpt-4\"),\n",
    "    )\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "650e5262-1fa0-411f-9f09-e2557f96116a",
   "metadata": {},
   "source": [
    "Now, we can use our evaluators to make evaluations by looping through all of the `(example, prediction)` pairs."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2c5d02ce-c7f3-49b7-b397-b94b0aa3fc48",
   "metadata": {},
   "outputs": [],
   "source": [
    "eval_tasks = []\n",
    "for example, prediction in zip(\n",
    "    rag_dataset.examples, prediction_dataset.predictions\n",
    "):\n",
    "    eval_tasks.append(\n",
    "        judges[\"answer_relevancy\"].aevaluate(\n",
    "            query=example.query,\n",
    "            response=prediction.response,\n",
    "            sleep_time_in_seconds=1.0,\n",
    "        )\n",
    "    )\n",
    "    eval_tasks.append(\n",
    "        judges[\"context_relevancy\"].aevaluate(\n",
    "            query=example.query,\n",
    "            contexts=prediction.contexts,\n",
    "            sleep_time_in_seconds=1.0,\n",
    "        )\n",
    "    )"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9155634d-58c7-4dac-ac16-93a8537ddef5",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|█████████████████████████████████████████████████████| 250/250 [00:28<00:00,  8.85it/s]\n"
     ]
    }
   ],
   "source": [
    "eval_results1 = await tqdm_asyncio.gather(*eval_tasks[:250])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a481eb08-1922-4e1c-bed1-459398c2e69c",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|█████████████████████████████████████████████████████| 302/302 [00:31<00:00,  9.62it/s]\n"
     ]
    }
   ],
   "source": [
    "eval_results2 = await tqdm_asyncio.gather(*eval_tasks[250:])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7211e3f5-c52d-48da-98b0-7b4e5b9ce002",
   "metadata": {},
   "outputs": [],
   "source": [
    "eval_results = eval_results1 + eval_results2"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f56052e8-957f-4945-b910-24f70116ea79",
   "metadata": {},
   "outputs": [],
   "source": [
    "evals = {\n",
    "    \"answer_relevancy\": eval_results[::2],\n",
    "    \"context_relevancy\": eval_results[1::2],\n",
    "}"
   ]
  },
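  {
   "cell_type": "markdown",
   "id": "c4d5e6f7",
   "metadata": {},
   "source": [
    "Since the two evaluators' tasks were appended to `eval_tasks` in strict alternation, the even-indexed results belong to answer relevancy and the odd-indexed ones to context relevancy. A quick, optional sanity check on the split:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d5e6f7a8",
   "metadata": {},
   "outputs": [],
   "source": [
    "# The interleaved even/odd split should yield two equally sized lists,\n",
    "# one evaluation of each kind per example in the dataset.\n",
    "assert len(evals[\"answer_relevancy\"]) == len(evals[\"context_relevancy\"])\n",
    "assert len(evals[\"answer_relevancy\"]) == len(rag_dataset.examples)"
   ]
  },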
  {
   "cell_type": "markdown",
   "id": "9c31a4ac-5871-4e78-856d-5055f249598f",
   "metadata": {},
   "source": [
    "### Taking a look at the evaluation results"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6d037782-b7fa-43f6-ad83-b80faad3606b",
   "metadata": {},
   "source": [
    "Here we use a utility function to convert the list of `EvaluationResult` objects into something more notebook-friendly. This utility provides two DataFrames: a deep one containing all of the evaluation results, and an aggregate one that takes the mean of all the scores, per evaluation method."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ffa305d6-bba1-45ef-87a1-c4a2415fa54b",
   "metadata": {},
   "outputs": [],
   "source": [
    "from llama_index.evaluation.notebook_utils import get_eval_results_df\n",
    "import pandas as pd\n",
    "\n",
    "deep_dfs = {}\n",
    "mean_dfs = {}\n",
    "for metric in evals.keys():\n",
    "    deep_df, mean_df = get_eval_results_df(\n",
    "        names=[\"baseline\"] * len(evals[metric]),\n",
    "        results_arr=evals[metric],\n",
    "        metric=metric,\n",
    "    )\n",
    "    deep_dfs[metric] = deep_df\n",
    "    mean_dfs[metric] = mean_df"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5bf8b125-6406-41e9-afad-eb3786645576",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th>rag</th>\n",
       "      <th>baseline</th>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>metrics</th>\n",
       "      <th></th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>mean_answer_relevancy_score</th>\n",
       "      <td>0.914855</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>mean_context_relevancy_score</th>\n",
       "      <td>0.572273</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "rag                           baseline\n",
       "metrics                               \n",
       "mean_answer_relevancy_score   0.914855\n",
       "mean_context_relevancy_score  0.572273"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "mean_scores_df = pd.concat(\n",
    "    [mdf.reset_index() for _, mdf in mean_dfs.items()],\n",
    "    axis=0,\n",
    "    ignore_index=True,\n",
    ")\n",
    "mean_scores_df = mean_scores_df.set_index(\"index\")\n",
    "mean_scores_df.index = mean_scores_df.index.set_names([\"metrics\"])\n",
    "mean_scores_df"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cc452d5a-a2f5-4fb6-b41e-ff61cc7cb810",
   "metadata": {},
   "source": [
    "The above utility also provides the mean score across all of the evaluations in `mean_df`."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "52f8b67c-8501-4ad2-ba71-d20ed24dc041",
   "metadata": {},
   "source": [
    "We can look at the raw distribution of the scores by invoking `value_counts()` on the `deep_df`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "72768281-4fd2-480c-a1f2-5432606b0dfb",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "scores\n",
       "1.0    250\n",
       "0.0     21\n",
       "0.5      5\n",
       "Name: count, dtype: int64"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "deep_dfs[\"answer_relevancy\"][\"scores\"].value_counts()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "43110a59-feba-42e7-aa77-d765a5844642",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "scores\n",
       "1.000    89\n",
       "0.000    70\n",
       "0.750    49\n",
       "0.250    23\n",
       "0.625    14\n",
       "0.500    11\n",
       "0.375    10\n",
       "0.875     9\n",
       "Name: count, dtype: int64"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "deep_dfs[\"context_relevancy\"][\"scores\"].value_counts()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4324ddbd-0b59-430a-89d5-c706bd55a184",
   "metadata": {},
   "source": [
    "It looks like, for the most part, the default RAG does fairly well at generating answers that are relevant to the query. We can get a closer look by viewing the records of any of the `deep_df`s."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a617540f-9eb7-47d7-8b96-7618236f1c69",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<style type=\"text/css\">\n",
       "#T_165c3_row0_col0, #T_165c3_row0_col1, #T_165c3_row0_col2, #T_165c3_row0_col3, #T_165c3_row0_col4, #T_165c3_row0_col5, #T_165c3_row1_col0, #T_165c3_row1_col1, #T_165c3_row1_col2, #T_165c3_row1_col3, #T_165c3_row1_col4, #T_165c3_row1_col5 {\n",
       "  inline-size: 300px;\n",
       "  overflow-wrap: break-word;\n",
       "}\n",
       "</style>\n",
       "<table id=\"T_165c3\">\n",
       "  <thead>\n",
       "    <tr>\n",
       "      <th class=\"blank level0\" >&nbsp;</th>\n",
       "      <th id=\"T_165c3_level0_col0\" class=\"col_heading level0 col0\" >rag</th>\n",
       "      <th id=\"T_165c3_level0_col1\" class=\"col_heading level0 col1\" >query</th>\n",
       "      <th id=\"T_165c3_level0_col2\" class=\"col_heading level0 col2\" >answer</th>\n",
       "      <th id=\"T_165c3_level0_col3\" class=\"col_heading level0 col3\" >contexts</th>\n",
       "      <th id=\"T_165c3_level0_col4\" class=\"col_heading level0 col4\" >scores</th>\n",
       "      <th id=\"T_165c3_level0_col5\" class=\"col_heading level0 col5\" >feedbacks</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th id=\"T_165c3_level0_row0\" class=\"row_heading level0 row0\" >0</th>\n",
       "      <td id=\"T_165c3_row0_col0\" class=\"data row0 col0\" >baseline</td>\n",
       "      <td id=\"T_165c3_row0_col1\" class=\"data row0 col1\" >What are the potential risks associated with large language models (LLMs) according to the context information?</td>\n",
       "      <td id=\"T_165c3_row0_col2\" class=\"data row0 col2\" >None</td>\n",
       "      <td id=\"T_165c3_row0_col3\" class=\"data row0 col3\" >['Evaluating Large Language Models: A\\nComprehensive Survey\\nZishan Guo∗, Renren Jin∗, Chuang Liu∗, Yufei Huang, Dan Shi, Supryadi\\nLinhao Yu, Yan Liu, Jiaxuan Li, Bojian Xiong, Deyi Xiong†\\nTianjin University\\n{guozishan, rrjin, liuc_09, yuki_731, shidan, supryadi}@tju.edu.cn\\n{linhaoyu, yan_liu, jiaxuanlee, xbj1355, dyxiong}@tju.edu.cn\\nAbstract\\nLarge language models (LLMs) have demonstrated remarkable capabilities\\nacross a broad spectrum of tasks. They have attracted significant attention\\nand been deployed in numerous downstream applications. Nevertheless, akin\\nto a double-edged sword, LLMs also present potential risks. They could\\nsuffer from private data leaks or yield inappropriate, harmful, or misleading\\ncontent. Additionally, the rapid progress of LLMs raises concerns about the\\npotential emergence of superintelligent systems without adequate safeguards.\\nTo effectively capitalize on LLM capacities as well as ensure their safe and\\nbeneficial development, it is critical to conduct a rigorous and comprehensive\\nevaluation of LLMs.\\nThis survey endeavors to offer a panoramic perspective on the evaluation\\nof LLMs. We categorize the evaluation of LLMs into three major groups:\\nknowledgeandcapabilityevaluation, alignmentevaluationandsafetyevaluation.\\nIn addition to the comprehensive review on the evaluation methodologies and\\nbenchmarks on these three aspects, we collate a compendium of evaluations\\npertaining to LLMs’ performance in specialized domains, and discuss the\\nconstruction of comprehensive evaluation platforms that cover LLM evaluations\\non capabilities, alignment, safety, and applicability.\\nWe hope that this comprehensive overview will stimulate further research\\ninterests in the evaluation of LLMs, with the ultimate goal of making evaluation\\nserve as a cornerstone in guiding the responsible development of LLMs. 
We\\nenvision that this will channel their evolution into a direction that maximizes\\nsocietal benefit while minimizing potential risks. A curated list of related\\npapers has been publicly available at a GitHub repository.1\\n∗Equal contribution\\n†Corresponding author.\\n1https://github.com/tjunlp-lab/Awesome-LLMs-Evaluation-Papers\\n1arXiv:2310.19736v3  [cs.CL]  25 Nov 2023', 'criteria. Multilingual Holistic Bias (Costa-jussà et al., 2023) extends the HolisticBias dataset\\nto 50 languages, achieving the largest scale of English template-based text expansion.\\nWhether using automatic or manual evaluations, both approaches inevitably carry human\\nsubjectivity and cannot establish a comprehensive and fair evaluation standard. Unqover\\n(Li et al., 2020) is the first to transform the task of evaluating biases generated by models\\ninto a multiple-choice question, covering gender, nationality, race, and religion categories.\\nThey provide models with ambiguous and disambiguous contexts and ask them to choose\\nbetween options with and without stereotypes, evaluating both PLMs and models fine-tuned\\non multiple-choice question answering datasets. BBQ (Parrish et al., 2022) adopts this\\napproach but extends the types of biases to nine categories. All sentence templates are\\nmanually created, and in addition to the two contrasting group answers, the model is also\\nprovided with correct answers like “I don’t know” and “I’m not sure”, and a statistical bias\\nscore metric is proposed to evaluate multiple question answering models. CBBQ (Huang\\n& Xiong, 2023) extends BBQ to Chinese. Based on Chinese socio-cultural factors, CBBQ\\nadds four categories: disease, educational qualification, household registration, and region.\\nThey manually rewrite ambiguous text templates and use GPT-4 to generate disambiguous\\ntemplates, greatly increasing the dataset’s diversity and extensibility. 
Additionally, they\\nimprove the experimental setup for LLMs and evaluate existing Chinese open-source LLMs,\\nfinding that current Chinese LLMs not only have higher bias scores but also exhibit behavioral\\ninconsistencies, revealing a significant gap compared to GPT-3.5-Turbo.\\nIn addition to these aforementioned evaluation methods, we could also use advanced LLMs for\\nscoring bias, such as GPT-4, or employ models that perform best in training bias detection\\ntasks to detect the level of bias in answers. Such models can be used not only in the evaluation\\nphase but also for identifying biases in data for pre-training LLMs, facilitating debiasing in\\ntraining data.\\nAs the development of multilingual LLMs and domain-specific LLMs progresses, studies on\\nthe fairness of these models become increasingly important. Zhao et al. (2020) create datasets\\nto study gender bias in multilingual embeddings and cross-lingual tasks, revealing gender\\nbias from both internal and external perspectives. Moreover, FairLex (Chalkidis et al., 2022)\\nproposes a multilingual legal dataset as fairness benchmark, covering four judicial jurisdictions\\n(European Commission, United States, Swiss Federation, and People’s Republic of China), five\\nlanguages (English, German, French, Italian, and Chinese), and various sensitive attributes\\n(gender, age, region, etc.). As LLMs have been applied and deployed in the finance and legal\\nsectors, these studies deserve high attention.\\n4.3 Toxicity\\nLLMs are usually trained on a huge amount of online data which may contain toxic behavior\\nand unsafe content. These include hate speech, offensive/abusive language, pornographic\\ncontent, etc. 
It is hence very desirable to evaluate how well trained LLMs deal with toxicity.\\nConsidering the proficiency of LLMs in understanding and generating sentences, we categorize\\nthe evaluation of toxicity into two tasks: toxicity identification and classification evaluation,\\nand the evaluation of toxicity in generated sentences.\\n29']</td>\n",
       "      <td id=\"T_165c3_row0_col4\" class=\"data row0 col4\" >1.000000</td>\n",
       "      <td id=\"T_165c3_row0_col5\" class=\"data row0 col5\" >1. The retrieved context does match the subject matter of the user's query. It discusses the potential risks associated with large language models (LLMs), including private data leaks, inappropriate or harmful content, and the emergence of superintelligent systems without adequate safeguards. It also discusses the potential for bias in LLMs, and the risk of toxicity in the content generated by LLMs. Therefore, it is relevant to the user's query about the potential risks associated with LLMs. (2/2)\n",
       "2. The retrieved context can be used to provide a full answer to the user's query. It provides a comprehensive overview of the potential risks associated with LLMs, including data privacy, inappropriate content, superintelligence, bias, and toxicity. It also discusses the importance of evaluating these risks and the methodologies for doing so. Therefore, it provides a complete answer to the user's query. (2/2)\n",
       "\n",
       "[RESULT] 4/4</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th id=\"T_165c3_level0_row1\" class=\"row_heading level0 row1\" >1</th>\n",
       "      <td id=\"T_165c3_row1_col0\" class=\"data row1 col0\" >baseline</td>\n",
       "      <td id=\"T_165c3_row1_col1\" class=\"data row1 col1\" >How does the survey categorize the evaluation of LLMs and what are the three major groups mentioned?</td>\n",
       "      <td id=\"T_165c3_row1_col2\" class=\"data row1 col2\" >None</td>\n",
       "      <td id=\"T_165c3_row1_col3\" class=\"data row1 col3\" >['Question \\nAnsweringTool \\nLearning\\nReasoning\\nKnowledge \\nCompletionEthics \\nand \\nMorality Bias\\nToxicity\\nTruthfulnessRobustnessEvaluation\\nRisk \\nEvaluation\\nBiology and \\nMedicine\\nEducationLegislationComputer \\nScienceFinance\\nBenchmarks for\\nHolistic Evaluation\\nBenchmarks \\nforKnowledge and Reasoning\\nBenchmarks \\nforNLU and NLGKnowledge and Capability\\nLarge Language \\nModel EvaluationAlignment Evaluation\\nSafety\\nSpecialized LLMs\\nEvaluation Organization\\n…Figure 1: Our proposed taxonomy of major categories and sub-categories of LLM evaluation.\\nOur survey expands the scope to synthesize findings from both capability and alignment\\nevaluations of LLMs. By complementing these previous surveys through an integrated\\nperspective and expanded scope, our work provides a comprehensive overview of the current\\nstate of LLM evaluation research. The distinctions between our survey and these two related\\nworks further highlight the novel contributions of our study to the literature.\\n2 Taxonomy and Roadmap\\nThe primary objective of this survey is to meticulously categorize the evaluation of LLMs,\\nfurnishing readers with a well-structured taxonomy framework. Through this framework,\\nreaders can gain a nuanced understanding of LLMs’ performance and the attendant challenges\\nacross diverse and pivotal domains.\\nNumerous studies posit that the bedrock of LLMs’ capabilities resides in knowledge and\\nreasoning, serving as the underpinning for their exceptional performance across a myriad of\\ntasks. Nonetheless, the effective application of these capabilities necessitates a meticulous\\nexamination of alignment concerns to ensure that the model’s outputs remain consistent with\\nuser expectations. Moreover, the vulnerability of LLMs to malicious exploits or inadvertent\\nmisuse underscores the imperative nature of safety considerations. 
Once alignment and safety\\nconcerns have been addressed, LLMs can be judiciously deployed within specialized domains,\\ncatalyzing task automation and facilitating intelligent decision-making. Thus, our overarching\\n6', 'This survey systematically elaborates on the core capabilities of LLMs, encompassing critical\\naspects like knowledge and reasoning. Furthermore, we delve into alignment evaluation and\\nsafety evaluation, including ethical concerns, biases, toxicity, and truthfulness, to ensure the\\nsafe, trustworthy and ethical application of LLMs. Simultaneously, we explore the potential\\napplications of LLMs across diverse domains, including biology, education, law, computer\\nscience, and finance. Most importantly, we provide a range of popular benchmark evaluations\\nto assist researchers, developers and practitioners in understanding and evaluating LLMs’\\nperformance.\\nWe anticipate that this survey would drive the development of LLMs evaluations, offering\\nclear guidance to steer the controlled advancement of these models. This will enable LLMs\\nto better serve the community and the world, ensuring their applications in various domains\\nare safe, reliable, and beneficial. With eager anticipation, we embrace the future challenges\\nof LLMs’ development and evaluation.\\n58']</td>\n",
       "      <td id=\"T_165c3_row1_col4\" class=\"data row1 col4\" >0.375000</td>\n",
       "      <td id=\"T_165c3_row1_col5\" class=\"data row1 col5\" >1. The retrieved context does match the subject matter of the user's query. The user's query is about how a survey categorizes the evaluation of Large Language Models (LLMs) and the three major groups mentioned. The context provided discusses the categorization of LLMs evaluation in the survey, mentioning aspects like knowledge and reasoning, alignment evaluation, safety evaluation, and potential applications across diverse domains. \n",
       "\n",
       "2. However, the context does not provide a full answer to the user's query. While it does discuss the categorization of LLMs evaluation, it does not clearly mention the three major groups. The context mentions several aspects of LLMs evaluation, but it is not clear which of these are considered the three major groups. \n",
       "\n",
       "[RESULT] 1.5</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n"
      ],
      "text/plain": [
       "<pandas.io.formats.style.Styler at 0x29f9cb6a0>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "displayify_df(deep_dfs[\"context_relevancy\"].head(2))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5d2a0d5f-253d-433b-85b5-abbd7d95bbe5",
   "metadata": {},
   "source": [
    "And, of course, you can apply any filters you like. For example, you can look at the examples that yielded less-than-perfect scores."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b8cfaa1e-720b-4753-b8cc-4c76b6f21e65",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<style type=\"text/css\">\n",
       "#T_eb560_row0_col0, #T_eb560_row0_col1, #T_eb560_row0_col2, #T_eb560_row0_col3, #T_eb560_row0_col4, #T_eb560_row0_col5, #T_eb560_row1_col0, #T_eb560_row1_col1, #T_eb560_row1_col2, #T_eb560_row1_col3, #T_eb560_row1_col4, #T_eb560_row1_col5, #T_eb560_row2_col0, #T_eb560_row2_col1, #T_eb560_row2_col2, #T_eb560_row2_col3, #T_eb560_row2_col4, #T_eb560_row2_col5, #T_eb560_row3_col0, #T_eb560_row3_col1, #T_eb560_row3_col2, #T_eb560_row3_col3, #T_eb560_row3_col4, #T_eb560_row3_col5, #T_eb560_row4_col0, #T_eb560_row4_col1, #T_eb560_row4_col2, #T_eb560_row4_col3, #T_eb560_row4_col4, #T_eb560_row4_col5 {\n",
       "  inline-size: 300px;\n",
       "  overflow-wrap: break-word;\n",
       "}\n",
       "</style>\n",
       "<table id=\"T_eb560\">\n",
       "  <thead>\n",
       "    <tr>\n",
       "      <th class=\"blank level0\" >&nbsp;</th>\n",
       "      <th id=\"T_eb560_level0_col0\" class=\"col_heading level0 col0\" >rag</th>\n",
       "      <th id=\"T_eb560_level0_col1\" class=\"col_heading level0 col1\" >query</th>\n",
       "      <th id=\"T_eb560_level0_col2\" class=\"col_heading level0 col2\" >answer</th>\n",
       "      <th id=\"T_eb560_level0_col3\" class=\"col_heading level0 col3\" >contexts</th>\n",
       "      <th id=\"T_eb560_level0_col4\" class=\"col_heading level0 col4\" >scores</th>\n",
       "      <th id=\"T_eb560_level0_col5\" class=\"col_heading level0 col5\" >feedbacks</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th id=\"T_eb560_level0_row0\" class=\"row_heading level0 row0\" >1</th>\n",
       "      <td id=\"T_eb560_row0_col0\" class=\"data row0 col0\" >baseline</td>\n",
       "      <td id=\"T_eb560_row0_col1\" class=\"data row0 col1\" >How does the survey categorize the evaluation of LLMs and what are the three major groups mentioned?</td>\n",
       "      <td id=\"T_eb560_row0_col2\" class=\"data row0 col2\" >None</td>\n",
       "      <td id=\"T_eb560_row0_col3\" class=\"data row0 col3\" >['Question \\nAnsweringTool \\nLearning\\nReasoning\\nKnowledge \\nCompletionEthics \\nand \\nMorality Bias\\nToxicity\\nTruthfulnessRobustnessEvaluation\\nRisk \\nEvaluation\\nBiology and \\nMedicine\\nEducationLegislationComputer \\nScienceFinance\\nBenchmarks for\\nHolistic Evaluation\\nBenchmarks \\nforKnowledge and Reasoning\\nBenchmarks \\nforNLU and NLGKnowledge and Capability\\nLarge Language \\nModel EvaluationAlignment Evaluation\\nSafety\\nSpecialized LLMs\\nEvaluation Organization\\n…Figure 1: Our proposed taxonomy of major categories and sub-categories of LLM evaluation.\\nOur survey expands the scope to synthesize findings from both capability and alignment\\nevaluations of LLMs. By complementing these previous surveys through an integrated\\nperspective and expanded scope, our work provides a comprehensive overview of the current\\nstate of LLM evaluation research. The distinctions between our survey and these two related\\nworks further highlight the novel contributions of our study to the literature.\\n2 Taxonomy and Roadmap\\nThe primary objective of this survey is to meticulously categorize the evaluation of LLMs,\\nfurnishing readers with a well-structured taxonomy framework. Through this framework,\\nreaders can gain a nuanced understanding of LLMs’ performance and the attendant challenges\\nacross diverse and pivotal domains.\\nNumerous studies posit that the bedrock of LLMs’ capabilities resides in knowledge and\\nreasoning, serving as the underpinning for their exceptional performance across a myriad of\\ntasks. Nonetheless, the effective application of these capabilities necessitates a meticulous\\nexamination of alignment concerns to ensure that the model’s outputs remain consistent with\\nuser expectations. Moreover, the vulnerability of LLMs to malicious exploits or inadvertent\\nmisuse underscores the imperative nature of safety considerations. Once alignment and safety\\nconcerns have been addressed, LLMs can be judiciously deployed within specialized domains,\\ncatalyzing task automation and facilitating intelligent decision-making. Thus, our overarching\\n6', 'This survey systematically elaborates on the core capabilities of LLMs, encompassing critical\\naspects like knowledge and reasoning. Furthermore, we delve into alignment evaluation and\\nsafety evaluation, including ethical concerns, biases, toxicity, and truthfulness, to ensure the\\nsafe, trustworthy and ethical application of LLMs. Simultaneously, we explore the potential\\napplications of LLMs across diverse domains, including biology, education, law, computer\\nscience, and finance. Most importantly, we provide a range of popular benchmark evaluations\\nto assist researchers, developers and practitioners in understanding and evaluating LLMs’\\nperformance.\\nWe anticipate that this survey would drive the development of LLMs evaluations, offering\\nclear guidance to steer the controlled advancement of these models. This will enable LLMs\\nto better serve the community and the world, ensuring their applications in various domains\\nare safe, reliable, and beneficial. With eager anticipation, we embrace the future challenges\\nof LLMs’ development and evaluation.\\n58']</td>\n",
       "      <td id=\"T_eb560_row0_col4\" class=\"data row0 col4\" >0.375000</td>\n",
       "      <td id=\"T_eb560_row0_col5\" class=\"data row0 col5\" >1. The retrieved context does match the subject matter of the user's query. The user's query is about how a survey categorizes the evaluation of Large Language Models (LLMs) and the three major groups mentioned. The context provided discusses the categorization of LLMs evaluation in the survey, mentioning aspects like knowledge and reasoning, alignment evaluation, safety evaluation, and potential applications across diverse domains. \n",
       "\n",
       "2. However, the context does not provide a full answer to the user's query. While it does discuss the categorization of LLMs evaluation, it does not clearly mention the three major groups. The context mentions several aspects of LLMs evaluation, but it is not clear which of these are considered the three major groups. \n",
       "\n",
       "[RESULT] 1.5</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th id=\"T_eb560_level0_row1\" class=\"row_heading level0 row1\" >9</th>\n",
       "      <td id=\"T_eb560_row1_col0\" class=\"data row1 col0\" >baseline</td>\n",
       "      <td id=\"T_eb560_row1_col1\" class=\"data row1 col1\" >How does this survey on LLM evaluation differ from previous reviews conducted by Chang et al. (2023) and Liu et al. (2023i)?</td>\n",
       "      <td id=\"T_eb560_row1_col2\" class=\"data row1 col2\" >None</td>\n",
       "      <td id=\"T_eb560_row1_col3\" class=\"data row1 col3\" >['This survey systematically elaborates on the core capabilities of LLMs, encompassing critical\\naspects like knowledge and reasoning. Furthermore, we delve into alignment evaluation and\\nsafety evaluation, including ethical concerns, biases, toxicity, and truthfulness, to ensure the\\nsafe, trustworthy and ethical application of LLMs. Simultaneously, we explore the potential\\napplications of LLMs across diverse domains, including biology, education, law, computer\\nscience, and finance. Most importantly, we provide a range of popular benchmark evaluations\\nto assist researchers, developers and practitioners in understanding and evaluating LLMs’\\nperformance.\\nWe anticipate that this survey would drive the development of LLMs evaluations, offering\\nclear guidance to steer the controlled advancement of these models. This will enable LLMs\\nto better serve the community and the world, ensuring their applications in various domains\\nare safe, reliable, and beneficial. With eager anticipation, we embrace the future challenges\\nof LLMs’ development and evaluation.\\n58', '(2021)\\nBEGIN (Dziri et al., 2022b)\\nConsisTest (Lotfi et al., 2022)\\nSummarizationXSumFaith (Maynez et al., 2020)\\nFactCC (Kryscinski et al., 2020)\\nSummEval (Fabbri et al., 2021)\\nFRANK (Pagnoni et al., 2021)\\nSummaC (Laban et al., 2022)\\nWang et al. (2020)\\nGoyal & Durrett (2021)\\nCao et al. (2022)\\nCLIFF (Cao & Wang, 2021)\\nAggreFact (Tang et al., 2023a)\\nPolyTope (Huang et al., 2020)\\nMethodsNLI-based MethodsWelleck et al. (2019)\\nLotfi et al. (2022)\\nFalke et al. (2019)\\nLaban et al. (2022)\\nMaynez et al. (2020)\\nAharoni et al. (2022)\\nUtama et al. (2022)\\nRoit et al. (2023)\\nQAQG-based MethodsFEQA (Durmus et al., 2020)\\nQAGS (Wang et al., 2020)\\nQuestEval (Scialom et al., 2021)\\nQAFactEval (Fabbri et al., 2022)\\nQ2 (Honovich et al., 2021)\\nFaithDial (Dziri et al., 2022a)\\nDeng et al. (2023b)\\nLLMs-based MethodsFIB (Tam et al., 2023)\\nFacTool (Chern et al., 2023)\\nFActScore (Min et al., 2023)\\nSelfCheckGPT (Manakul et al., 2023)\\nSAPLMA (Azaria & Mitchell, 2023)\\nLin et al. (2022b)\\nKadavath et al. (2022)\\nFigure 3: Overview of alignment evaluations.\\n4 Alignment Evaluation\\nAlthough instruction-tuned LLMs exhibit impressive capabilities, these aligned LLMs are\\nstill suffering from annotators’ biases, catering to humans, hallucination, etc. To provide a\\ncomprehensive view of LLMs’ alignment evaluation, in this section, we discuss those of ethics,\\nbias, toxicity, and truthfulness, as illustrated in Figure 3.\\n21']</td>\n",
       "      <td id=\"T_eb560_row1_col4\" class=\"data row1 col4\" >0.000000</td>\n",
       "      <td id=\"T_eb560_row1_col5\" class=\"data row1 col5\" >1. The retrieved context does not match the subject matter of the user's query. The user's query is asking for a comparison between the current survey on LLM evaluation and previous reviews conducted by Chang et al. (2023) and Liu et al. (2023i). However, the context does not mention these previous reviews at all, making it impossible to draw any comparisons. Therefore, the context does not match the subject matter of the user's query. (0/2)\n",
       "2. The retrieved context cannot be used exclusively to provide a full answer to the user's query. As mentioned above, the context does not mention the previous reviews by Chang et al. and Liu et al., which are the main focus of the user's query. Therefore, it cannot provide a full answer to the user's query. (0/2)\n",
       "\n",
       "[RESULT] 0.0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th id=\"T_eb560_level0_row2\" class=\"row_heading level0 row2\" >11</th>\n",
       "      <td id=\"T_eb560_row2_col0\" class=\"data row2 col0\" >baseline</td>\n",
       "      <td id=\"T_eb560_row2_col1\" class=\"data row2 col1\" >According to the document, what are the two main concerns that need to be addressed before deploying LLMs within specialized domains?</td>\n",
       "      <td id=\"T_eb560_row2_col2\" class=\"data row2 col2\" >None</td>\n",
       "      <td id=\"T_eb560_row2_col3\" class=\"data row2 col3\" >['This survey systematically elaborates on the core capabilities of LLMs, encompassing critical\\naspects like knowledge and reasoning. Furthermore, we delve into alignment evaluation and\\nsafety evaluation, including ethical concerns, biases, toxicity, and truthfulness, to ensure the\\nsafe, trustworthy and ethical application of LLMs. Simultaneously, we explore the potential\\napplications of LLMs across diverse domains, including biology, education, law, computer\\nscience, and finance. Most importantly, we provide a range of popular benchmark evaluations\\nto assist researchers, developers and practitioners in understanding and evaluating LLMs’\\nperformance.\\nWe anticipate that this survey would drive the development of LLMs evaluations, offering\\nclear guidance to steer the controlled advancement of these models. This will enable LLMs\\nto better serve the community and the world, ensuring their applications in various domains\\nare safe, reliable, and beneficial. With eager anticipation, we embrace the future challenges\\nof LLMs’ development and evaluation.\\n58', 'objective is to delve into evaluations encompassing these five fundamental domains and their\\nrespective subdomains, as illustrated in Figure 1.\\nSection 3, titled “Knowledge and Capability Evaluation”, centers on the comprehensive\\nassessment of the fundamental knowledge and reasoning capabilities exhibited by LLMs. This\\nsection is meticulously divided into four distinct subsections: Question-Answering, Knowledge\\nCompletion, Reasoning, and Tool Learning. Question-answering and knowledge completion\\ntasks stand as quintessential assessments for gauging the practical application of knowledge,\\nwhile the various reasoning tasks serve as a litmus test for probing the meta-reasoning and\\nintricate reasoning competencies of LLMs. Furthermore, the recently emphasized special\\nability of tool learning is spotlighted, showcasing its significance in empowering models to\\nadeptly handle and generate domain-specific content.\\nSection 4, designated as “Alignment Evaluation”, hones in on the scrutiny of LLMs’ perfor-\\nmance across critical dimensions, encompassing ethical considerations, moral implications,\\nbias detection, toxicity assessment, and truthfulness evaluation. The pivotal aim here is to\\nscrutinize and mitigate the potential risks that may emerge in the realms of ethics, bias,\\nand toxicity, as LLMs can inadvertently generate discriminatory, biased, or offensive content.\\nFurthermore, this section acknowledges the phenomenon of hallucinations within LLMs, which\\ncan lead to the inadvertent dissemination of false information. As such, an indispensable\\nfacet of this evaluation involves the rigorous assessment of truthfulness, underscoring its\\nsignificance as an essential aspect to evaluate and rectify.\\nSection 5, titled “Safety Evaluation”, embarks on a comprehensive exploration of two funda-\\nmental dimensions: the robustness of LLMs and their evaluation in the context of Artificial\\nGeneral Intelligence (AGI). LLMs are routinely deployed in real-world scenarios, where their\\nrobustness becomes paramount. Robustness equips them to navigate disturbances stemming\\nfrom users and the environment, while also shielding against malicious attacks and deception,\\nthereby ensuring consistent high-level performance. Furthermore, as LLMs inexorably ad-\\nvance toward human-level capabilities, the evaluation expands its purview to encompass more\\nprofound security concerns. These include but are not limited to power-seeking behaviors\\nand the development of situational awareness, factors that necessitate meticulous evaluation\\nto safeguard against unforeseen challenges.\\nSection 6, titled “Specialized LLMs Evaluation”, serves as an extension of LLMs evaluation\\nparadigm into diverse specialized domains. Within this section, we turn our attention to the\\nevaluation of LLMs specifically tailored for application in distinct domains. Our selection\\nencompasses currently prominent specialized LLMs spanning fields such as biology, education,\\nlaw, computer science, and finance. The objective here is to systematically assess their\\naptitude and limitations when confronted with domain-specific challenges and intricacies.\\nSection 7, denominated “Evaluation Organization”, serves as a comprehensive introduction\\nto the prevalent benchmarks and methodologies employed in the evaluation of LLMs. In light\\nof the rapid proliferation of LLMs, users are confronted with the challenge of identifying the\\nmost apt models to meet their specific requirements while minimizing the scope of evaluations.\\nIn this context, we present an overview of well-established and widely recognized benchmark\\n7']</td>\n",
       "      <td id=\"T_eb560_row2_col4\" class=\"data row2 col4\" >0.750000</td>\n",
       "      <td id=\"T_eb560_row2_col5\" class=\"data row2 col5\" >The retrieved context does match the subject matter of the user's query. It discusses the concerns that need to be addressed before deploying LLMs within specialized domains. The two main concerns mentioned are the alignment evaluation, which includes ethical considerations, moral implications, bias detection, toxicity assessment, and truthfulness evaluation, and the safety evaluation, which includes the robustness of LLMs and their evaluation in the context of Artificial General Intelligence (AGI). \n",
       "\n",
       "However, the context does not provide a full answer to the user's query. While it does mention the two main concerns, it does not go into detail about why these concerns need to be addressed before deploying LLMs within specialized domains. The context provides a general overview of the concerns, but it does not specifically tie these concerns to the deployment of LLMs within specialized domains. \n",
       "\n",
       "[RESULT] 3.0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th id=\"T_eb560_level0_row3\" class=\"row_heading level0 row3\" >12</th>\n",
       "      <td id=\"T_eb560_row3_col0\" class=\"data row3 col0\" >baseline</td>\n",
       "      <td id=\"T_eb560_row3_col1\" class=\"data row3 col1\" >In the \"Alignment Evaluation\" section, what are some of the dimensions that are assessed to mitigate potential risks associated with LLMs?</td>\n",
       "      <td id=\"T_eb560_row3_col2\" class=\"data row3 col2\" >None</td>\n",
       "      <td id=\"T_eb560_row3_col3\" class=\"data row3 col3\" >['This survey systematically elaborates on the core capabilities of LLMs, encompassing critical\\naspects like knowledge and reasoning. Furthermore, we delve into alignment evaluation and\\nsafety evaluation, including ethical concerns, biases, toxicity, and truthfulness, to ensure the\\nsafe, trustworthy and ethical application of LLMs. Simultaneously, we explore the potential\\napplications of LLMs across diverse domains, including biology, education, law, computer\\nscience, and finance. Most importantly, we provide a range of popular benchmark evaluations\\nto assist researchers, developers and practitioners in understanding and evaluating LLMs’\\nperformance.\\nWe anticipate that this survey would drive the development of LLMs evaluations, offering\\nclear guidance to steer the controlled advancement of these models. This will enable LLMs\\nto better serve the community and the world, ensuring their applications in various domains\\nare safe, reliable, and beneficial. With eager anticipation, we embrace the future challenges\\nof LLMs’ development and evaluation.\\n58', 'Question \\nAnsweringTool \\nLearning\\nReasoning\\nKnowledge \\nCompletionEthics \\nand \\nMorality Bias\\nToxicity\\nTruthfulnessRobustnessEvaluation\\nRisk \\nEvaluation\\nBiology and \\nMedicine\\nEducationLegislationComputer \\nScienceFinance\\nBenchmarks for\\nHolistic Evaluation\\nBenchmarks \\nforKnowledge and Reasoning\\nBenchmarks \\nforNLU and NLGKnowledge and Capability\\nLarge Language \\nModel EvaluationAlignment Evaluation\\nSafety\\nSpecialized LLMs\\nEvaluation Organization\\n…Figure 1: Our proposed taxonomy of major categories and sub-categories of LLM evaluation.\\nOur survey expands the scope to synthesize findings from both capability and alignment\\nevaluations of LLMs. By complementing these previous surveys through an integrated\\nperspective and expanded scope, our work provides a comprehensive overview of the current\\nstate of LLM evaluation research. The distinctions between our survey and these two related\\nworks further highlight the novel contributions of our study to the literature.\\n2 Taxonomy and Roadmap\\nThe primary objective of this survey is to meticulously categorize the evaluation of LLMs,\\nfurnishing readers with a well-structured taxonomy framework. Through this framework,\\nreaders can gain a nuanced understanding of LLMs’ performance and the attendant challenges\\nacross diverse and pivotal domains.\\nNumerous studies posit that the bedrock of LLMs’ capabilities resides in knowledge and\\nreasoning, serving as the underpinning for their exceptional performance across a myriad of\\ntasks. Nonetheless, the effective application of these capabilities necessitates a meticulous\\nexamination of alignment concerns to ensure that the model’s outputs remain consistent with\\nuser expectations. Moreover, the vulnerability of LLMs to malicious exploits or inadvertent\\nmisuse underscores the imperative nature of safety considerations. Once alignment and safety\\nconcerns have been addressed, LLMs can be judiciously deployed within specialized domains,\\ncatalyzing task automation and facilitating intelligent decision-making. Thus, our overarching\\n6']</td>\n",
       "      <td id=\"T_eb560_row3_col4\" class=\"data row3 col4\" >0.750000</td>\n",
       "      <td id=\"T_eb560_row3_col5\" class=\"data row3 col5\" >1. The retrieved context does match the subject matter of the user's query. The user's query is about the dimensions assessed in the \"Alignment Evaluation\" section to mitigate potential risks associated with LLMs (Large Language Models). The context talks about the evaluation of LLMs, including alignment evaluation and safety evaluation. It mentions aspects like knowledge and reasoning, ethical concerns, biases, toxicity, and truthfulness. These are some of the dimensions that could be assessed to mitigate potential risks associated with LLMs. So, the context is relevant to the query. (2/2)\n",
       "\n",
       "2. However, the retrieved context does not provide a full answer to the user's query. While it mentions some dimensions that could be assessed in alignment evaluation (like knowledge and reasoning, ethical concerns, biases, toxicity, and truthfulness), it does not explicitly state that these are the dimensions assessed to mitigate potential risks associated with LLMs. The context does not provide a comprehensive list of dimensions or explain how these dimensions help mitigate risks. Therefore, the context cannot be used exclusively to provide a full answer to the user's query. (1/2)\n",
       "\n",
       "[RESULT] 3.0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th id=\"T_eb560_level0_row4\" class=\"row_heading level0 row4\" >14</th>\n",
       "      <td id=\"T_eb560_row4_col0\" class=\"data row4 col0\" >baseline</td>\n",
       "      <td id=\"T_eb560_row4_col1\" class=\"data row4 col1\" >What is the purpose of evaluating the knowledge and capability of LLMs?</td>\n",
       "      <td id=\"T_eb560_row4_col2\" class=\"data row4 col2\" >None</td>\n",
       "      <td id=\"T_eb560_row4_col3\" class=\"data row4 col3\" >['objective is to delve into evaluations encompassing these five fundamental domains and their\\nrespective subdomains, as illustrated in Figure 1.\\nSection 3, titled “Knowledge and Capability Evaluation”, centers on the comprehensive\\nassessment of the fundamental knowledge and reasoning capabilities exhibited by LLMs. This\\nsection is meticulously divided into four distinct subsections: Question-Answering, Knowledge\\nCompletion, Reasoning, and Tool Learning. Question-answering and knowledge completion\\ntasks stand as quintessential assessments for gauging the practical application of knowledge,\\nwhile the various reasoning tasks serve as a litmus test for probing the meta-reasoning and\\nintricate reasoning competencies of LLMs. Furthermore, the recently emphasized special\\nability of tool learning is spotlighted, showcasing its significance in empowering models to\\nadeptly handle and generate domain-specific content.\\nSection 4, designated as “Alignment Evaluation”, hones in on the scrutiny of LLMs’ perfor-\\nmance across critical dimensions, encompassing ethical considerations, moral implications,\\nbias detection, toxicity assessment, and truthfulness evaluation. The pivotal aim here is to\\nscrutinize and mitigate the potential risks that may emerge in the realms of ethics, bias,\\nand toxicity, as LLMs can inadvertently generate discriminatory, biased, or offensive content.\\nFurthermore, this section acknowledges the phenomenon of hallucinations within LLMs, which\\ncan lead to the inadvertent dissemination of false information. As such, an indispensable\\nfacet of this evaluation involves the rigorous assessment of truthfulness, underscoring its\\nsignificance as an essential aspect to evaluate and rectify.\\nSection 5, titled “Safety Evaluation”, embarks on a comprehensive exploration of two funda-\\nmental dimensions: the robustness of LLMs and their evaluation in the context of Artificial\\nGeneral Intelligence (AGI). LLMs are routinely deployed in real-world scenarios, where their\\nrobustness becomes paramount. Robustness equips them to navigate disturbances stemming\\nfrom users and the environment, while also shielding against malicious attacks and deception,\\nthereby ensuring consistent high-level performance. Furthermore, as LLMs inexorably ad-\\nvance toward human-level capabilities, the evaluation expands its purview to encompass more\\nprofound security concerns. These include but are not limited to power-seeking behaviors\\nand the development of situational awareness, factors that necessitate meticulous evaluation\\nto safeguard against unforeseen challenges.\\nSection 6, titled “Specialized LLMs Evaluation”, serves as an extension of LLMs evaluation\\nparadigm into diverse specialized domains. Within this section, we turn our attention to the\\nevaluation of LLMs specifically tailored for application in distinct domains. Our selection\\nencompasses currently prominent specialized LLMs spanning fields such as biology, education,\\nlaw, computer science, and finance. The objective here is to systematically assess their\\naptitude and limitations when confronted with domain-specific challenges and intricacies.\\nSection 7, denominated “Evaluation Organization”, serves as a comprehensive introduction\\nto the prevalent benchmarks and methodologies employed in the evaluation of LLMs. In light\\nof the rapid proliferation of LLMs, users are confronted with the challenge of identifying the\\nmost apt models to meet their specific requirements while minimizing the scope of evaluations.\\nIn this context, we present an overview of well-established and widely recognized benchmark\\n7', 'evaluations. This serves the purpose of aiding users in making judicious and well-informed\\ndecisions when selecting an appropriate LLM for their particular needs.\\nPleasebeawarethatourtaxonomyframeworkdoesnotpurporttocomprehensivelyencompass\\nthe entirety of the evaluation landscape. In essence, our aim is to address the following\\nfundamental questions:\\n•What are the capabilities of LLMs?\\n•What factors must be taken into account when deploying LLMs?\\n•In which domains can LLMs find practical applications?\\n•How do LLMs perform in these diverse domains?\\nWe will now embark on an in-depth exploration of each category within the LLM evaluation\\ntaxonomy, sequentially addressing capabilities, concerns, applications, and performance.\\n3 Knowledge and Capability Evaluation\\nEvaluating the knowledge and capability of LLMs has become an important research area as\\nthese models grow in scale and capability. As LLMs are deployed in more applications, it is\\ncrucial to rigorously assess their strengths and limitations across a diverse range of tasks and\\ndatasets. In this section, we aim to offer a comprehensive overview of the evaluation methods\\nand benchmarks pertinent to LLMs, spanning various capabilities such as question answering,\\nknowledge completion, reasoning, and tool use. Our objective is to provide an exhaustive\\nsynthesis of the current advancements in the systematic evaluation and benchmarking of\\nLLMs’ knowledge and capabilities, as illustrated in Figure 2.\\n3.1 Question Answering\\nQuestionansweringisaveryimportantmeansforLLMsevaluation, andthequestionanswering\\nability of LLMs directly determines whether the final output can meet the expectation. At\\nthe same time, however, since any form of LLMs evaluation can be regarded as question\\nanswering or transfer to question answering form, there are rare datasets and works that\\npurely evaluate question answering ability of LLMs. Most of the datasets are curated to\\nevaluate other capabilities of LLMs.\\nTherefore, we believe that the datasets simply used to evaluate the question answering ability\\nof LLMs must be from a wide range of sources, preferably covering all fields rather than\\naiming at some fields, and the questions do not need to be very professional but general.\\nAccording to the above criteria for datasets focusing on question answering capability, we can\\nfind that many datasets are qualified, e.g., SQuAD (Rajpurkar et al., 2016), NarrativeQA\\n(Kociský et al., 2018), HotpotQA (Yang et al., 2018), CoQA (Reddy et al., 2019). Although\\nthese datasets predate LLMs, they can still be used to evaluate the question answering ability\\nof LLMs. Kwiatkowski et al. (2019) present the Natural Questions corpus. The questions\\n8']</td>\n",
       "      <td id=\"T_eb560_row4_col4\" class=\"data row4 col4\" >0.750000</td>\n",
       "      <td id=\"T_eb560_row4_col5\" class=\"data row4 col5\" >The retrieved context is relevant to the user's query as it discusses the purpose of evaluating the knowledge and capability of LLMs (Large Language Models). It explains that the evaluation is important to assess their strengths and limitations across a diverse range of tasks and datasets. The context also mentions the different aspects of LLMs that are evaluated, such as question answering, knowledge completion, reasoning, and tool use. \n",
       "\n",
       "However, the context does not fully answer the user's query. While it does provide a general idea of why LLMs are evaluated, it does not delve into the specific purpose of these evaluations. For instance, it does not explain how these evaluations can help improve the performance of LLMs, or how they can be used to identify areas where LLMs may need further development or training.\n",
       "\n",
       "[RESULT] 3.0</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n"
      ],
      "text/plain": [
       "<pandas.io.formats.style.Styler at 0x17e3b25f0>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "cond = deep_dfs[\"context_relevancy\"][\"scores\"] < 1\n",
    "displayify_df(deep_dfs[\"context_relevancy\"][cond].head(5))"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "llama_index_3.10",
   "language": "python",
   "name": "llama_index_3.10"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
