{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "f361b775-0fb3-4f09-89b9-6672775382bb",
   "metadata": {},
   "source": [
    "# Improving ML Performance via Data Curation with Train vs Test Splits\n",
    "\n",
    "In typical Machine Learning projects, we split our dataset into **training** data for fitting models and **test** data to evaluate model performance. For noisy real-world datasets, detecting/correcting errors in the training data is important to train robust models, but it's less recognized that the test set can also be noisy.\n",
    "For accurate model evaluation, it is vital to **find and fix issues in the test data** as well. Some evaluation metrics are particularly sensitive to outliers and noisy labels.\n",
    "This tutorial demonstrates a way to use cleanlab (via `Datalab`) to curate both your training and test data, ensuring **robust model training** and **reliable performance evaluation**.\n",
    "We recommend first completing some Datalab tutorials before diving into this more complex subject.\n",
    "\n",
    "Here's how we recommend handling noisy training and test data (this tutorial walks through these steps):\n",
    "\n",
    "1. [Preprocess](https://towardsdatascience.com/introduction-to-data-preprocessing-in-machine-learning-a9fa83a5dc9d) your training and test data to be suitable for ML. Use cleanlab to check for fundamental train/test setup problems in the merged dataset like train/test leakage or drift.\n",
    "2. Fit your ML model to your noisy training data and get its predictions/embeddings for your test data. Use these model outputs with cleanlab to detect issues in your **test** data.\n",
    "3. Manually review/correct cleanlab-detected issues in your test data. **We caution against blindly automated correction of test data**. Changes to your test set should be carefully verified to ensure they will lead to more accurate model evaluation. We also caution against comparing the performance of different ML models across different versions of your test data; performance comparions between models should be based on the same test data.\n",
    "4. Cross-validate a new copy of your ML model on your training data, and then use it with cleanlab to detect issues in the **training** dataset. Do not include test data in any part of this step to avoid leaking test set information into the training data curation.\n",
    "5. You can try **automated techniques** to curate your training data based on cleanlab results, train models on the curated training data, and evaluate them on the cleaned test data.\n",
    "\n",
    "Consider this tutorial as a blueprint for using cleanlab in diverse ML projects spanning various data modalities. The same ideas apply if you substitute *test* data with *validation* data above. In a final advanced section of this tutorial, we show how training data edits can be parameterized in terms of cleanlab's detected issues, such that hyperparameter optimization can identify the optimal combination of data edits for training an effective ML model.\n",
    "\n",
    "**Note**: This tutorial trains an XGBoost model on a tabular dataset, but the same approach applies to *any* ML model and data modality.\n",
    "\n",
    "\n",
    "### Why did you make this tutorial?\n",
    "\n",
    "**TLDR:** Reliable ML requires both reliable training and reliable evaluation. This tutorial shows you how to achieve both using cleanlab.\n",
    "\n",
    "**Longer answer:** Many users wish to use cleanlab to improve their ML model by improving their data, but make subtle mistakes. This multi-step tutorial shows one way to do this properly.\n",
    "Some users curate (e.g. fix label issues in) their training data, train ML model, and evaluate it on test data. But they see no improvement in test-set accuracy, because they have introduced *distribution-shift* by altering their training data. If the test data also has issues, they must also be fixed for a faithful model evaluation.\n",
    "Other users therefore curate their test data too, but some blindly auto-fix their test data, which is dangerous! This cleanlab package is based on ML and thus inevitably imperfect. Issues that cleanlab detected in test data should **not** be blindly auto-fixed -- this risks making model evaluation wrong.\n",
    "Instead we recommend the multi-step workflow above, where less algorithmic/automated correction is applied to test data than to training data (focus your manual efforts on curating test rather than training data)."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "72907260",
   "metadata": {},
   "source": [
    "## 1. Install dependencies\n",
    "\n",
    "`Datalab` has additional dependencies that are not included in the standard installation of cleanlab.\n",
    "You can use `pip` to install all packages required for this tutorial as follows:\n",
    "\n",
    "```ipython3\n",
    "!pip install xgboost\n",
    "!pip install \"cleanlab[datalab]\"\n",
    "# Make sure to install the version corresponding to this tutorial\n",
    "# E.g. if viewing master branch documentation:\n",
    "#     !pip install git+https://github.com/cleanlab/cleanlab.git\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2d638465",
   "metadata": {
    "nbsphinx": "hidden"
   },
   "outputs": [],
   "source": [
    "# Package installation (hidden on docs website).\n",
    "dependencies = [\"cleanlab\", \"xgboost\", \"datasets\"]\n",
    "\n",
    "if \"google.colab\" in str(get_ipython()):  # Check if it's running in Google Colab\n",
    "    %pip install cleanlab  # for colab\n",
    "    cmd = ' '.join([dep for dep in dependencies if dep != \"cleanlab\"])\n",
    "    %pip install $cmd\n",
    "else:\n",
    "    dependencies_test = [dependency.split('>')[0] if '>' in dependency \n",
    "                         else dependency.split('<')[0] if '<' in dependency \n",
    "                         else dependency.split('=')[0] for dependency in dependencies]\n",
    "    missing_dependencies = []\n",
    "    for dependency in dependencies_test:\n",
    "        try:\n",
    "            __import__(dependency)\n",
    "        except ImportError:\n",
    "            missing_dependencies.append(dependency)\n",
    "\n",
    "    if len(missing_dependencies) > 0:\n",
    "        print(\"Missing required dependencies:\")\n",
    "        print(*missing_dependencies, sep=\", \")\n",
    "        print(\"\\nPlease install them before running the rest of this notebook.\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b0bbf715-47c6-44ea-b15e-89800e62ee04",
   "metadata": {},
   "outputs": [],
   "source": [
    "import random\n",
    "import os\n",
    "import math\n",
    "import numpy as np\n",
    "from xgboost import XGBClassifier\n",
    "from sklearn import preprocessing\n",
    "from sklearn.metrics import accuracy_score\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "import pandas as pd\n",
    "import cleanlab\n",
    "from cleanlab import Datalab\n",
    "\n",
    "SEED = 123456  # for reproducibility\n",
    "np.random.seed(SEED)\n",
    "random.seed(SEED)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "850bbadc-547a-4bb0-8bcf-033c6890ce5e",
   "metadata": {},
   "source": [
    "## 2. Preprocess the data \n",
    "\n",
    "This tutorial considers a classification task with structured/tabular data. The ML task is to predict each student's final grade in a course (class label) based on various numeric/categorical features about them (exam scores and notes)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c58f8015-d051-411c-9e03-5659cf3ad956",
   "metadata": {},
   "outputs": [],
   "source": [
    "df_train = pd.read_csv(\n",
    "    \"https://cleanlab-public.s3.amazonaws.com/Datasets/student-grades/clos_train_data.csv\"\n",
    ")\n",
    "\n",
    "df_test = pd.read_csv(\n",
    "    \"https://cleanlab-public.s3.amazonaws.com/Datasets/student-grades/clos_test_data.csv\"\n",
    ")\n",
    "\n",
    "df_train.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b39cd525-7a09-4d8e-811e-44ac1072f438",
   "metadata": {},
   "source": [
    "Before training a ML model, we [preprocess](https://towardsdatascience.com/introduction-to-data-preprocessing-in-machine-learning-a9fa83a5dc9d) our dataset. The type of preprocessing that is best will depend on what ML model you use. This tutorial will demonstrate an XGBoost model, so we'll process the **notes** and **noisy_letter_grade** columns into categorical columns for this model (each category encoded as an integer). You can alternatively use [Cleanlab Studio](https://cleanlab.ai/blog/data-centric-ai/), which will automatically produce a high-accuracy ML model for your raw data, without you having to worry about any ML modeling or data preprocessing work."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1b5f50e6-d125-4e61-b63e-4004f0c9099a",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create label encoders for the categorical columns\n",
    "grade_le = preprocessing.LabelEncoder()\n",
    "notes_le = preprocessing.LabelEncoder()\n",
    "\n",
    "# Process the feature columns\n",
    "train_features = df_train.drop([\"stud_ID\", \"noisy_letter_grade\"], axis=1).copy()\n",
    "train_features[\"notes\"] = notes_le.fit_transform(train_features[\"notes\"])\n",
    "train_features[\"notes\"] = train_features[\"notes\"].astype(\"category\")\n",
    "\n",
    "# Process the label column\n",
    "train_labels = pd.DataFrame(grade_le.fit_transform(df_train[\"noisy_letter_grade\"].copy()), columns=[\"noisy_letter_grade\"])\n",
    "\n",
    "# Keep separate copies of these training features and labels for later use\n",
    "train_features_v2 = train_features.copy()\n",
    "train_labels_v2 = train_labels.copy()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "750cd820-7565-4314-9dda-808cbe7c638f",
   "metadata": {},
   "source": [
    "We first solely preprocessed the training data to avoid information leakage (using test data information that would not be available at prediction time). Here's how the preprocessed training features look:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a36c21e9-1c32-4df9-bd87-fffeb8c2175f",
   "metadata": {},
   "outputs": [],
   "source": [
    "train_features.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ff6da7bf-07b6-4a49-b7be-994713688bda",
   "metadata": {},
   "source": [
    "We apply the same preprocessing to the test data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5f856a3a-8aae-4836-b146-9ab68d8d1c7a",
   "metadata": {},
   "outputs": [],
   "source": [
    "test_features = df_test.drop(\n",
    "    [\"stud_ID\", \"noisy_letter_grade\"], axis=1\n",
    ").copy()\n",
    "test_features[\"notes\"] = notes_le.transform(test_features[\"notes\"])\n",
    "test_features[\"notes\"] = test_features[\"notes\"].astype(\"category\")\n",
    "\n",
    "test_labels = pd.DataFrame(grade_le.transform(df_test[\"noisy_letter_grade\"].copy()), columns=[\"noisy_letter_grade\"])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ff6da7bf-07b6-4a49-b7be-994713688bdd",
   "metadata": {},
   "source": [
    "We then appropriately format the datasets for the ML model used in this tutorial."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "46275634-da56-4e58-9061-8108be2b585d",
   "metadata": {},
   "outputs": [],
   "source": [
    "train_labels = train_labels.astype('object')\n",
    "test_labels = test_labels.astype('object')\n",
    "\n",
    "train_features[\"notes\"] = train_features[\"notes\"].astype(int)\n",
    "test_features[\"notes\"] = test_features[\"notes\"].astype(int)\n",
    "\n",
    "preprocessed_train_data = pd.concat([train_features, train_labels], axis=1)\n",
    "preprocessed_train_data[\"stud_ID\"] = df_train[\"stud_ID\"]\n",
    "\n",
    "preprocessed_test_data = pd.concat([test_features, test_labels], axis=1)\n",
    "preprocessed_test_data[\"stud_ID\"] = df_test[\"stud_ID\"]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b7f2d8d0-c5ac-4e46-9ab0-aee52adaae0d",
   "metadata": {},
   "source": [
    "## 3. Check for fundamental problems in the train/test setup\n",
    "\n",
    "Before training any ML model, we can quickly check for fundamental issues in our setup with cleanlab. To audit all of our data at once, we merge the training and test sets into one dataset, from which we construct a `Datalab` object. Datalab automatically detects many types of common issues in a dataset, but requires a trained ML model for a comprehensive audit. We haven't trained any model yet, so here we instruct Datalab to only check for specific data issues: near duplicates, and whether the data appears non-IID (violations of the IID assumption include: data drift or lack of statistical independence between data points).\n",
    "\n",
    "Datalab can detect many additional types of data issues, depending on what inputs it is given. Below we provide `features = features_df` as the sole input to `Datalab.find_issues()`, which solely contains numerical values here. If you have heterogenoues/complex data types (eg. text or images), you could instead provide vector feature representations (eg. pretrained model embeddings) of your data as the `features`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "769c4c5e-a7ff-4e02-bee5-2b2e676aec14",
   "metadata": {},
   "outputs": [],
   "source": [
    "full_df = pd.concat([preprocessed_train_data, preprocessed_test_data], axis=0).reset_index(drop=True)\n",
    "features_df = full_df.drop([\"noisy_letter_grade\", \"stud_ID\"], axis=1)  # can instead use model embeddings"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7ac47c3d-9e87-45b7-9064-bfa45578872e",
   "metadata": {},
   "outputs": [],
   "source": [
    "lab = Datalab(data=full_df, label_name=\"noisy_letter_grade\", task=\"classification\")\n",
    "lab.find_issues(features=features_df.to_numpy(), issue_types={\"near_duplicate\": {}, \"non_iid\": {}})\n",
    "lab.report(show_summary_score=True, show_all_issues=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "08d46ce6-29e7-4aa5-9e27-37e9e3c7107a",
   "metadata": {},
   "source": [
    "cleanlab does not find significant evidence that our data is non-[IID](https://en.wikipedia.org/wiki/Independent_and_identically_distributed_random_variables), which is good. Otherwise, we'd need to further consider where our data came from and whether conclusions/predictions from this dataset can really generalize to our population of interest.\n",
    "\n",
    "But cleanlab did detect many near duplicates in the dataset. We see some exact duplicates between our training and test data, which may indicate data leakage!  Since we didn't expect these duplicates in our dataset, let's drop the extra duplicated copies of test data points found in our training set from this training set. This helps ensure that our model evaluations reflect generalization capabilities.\n",
    "Here's how we can review the near duplicates detected via Datalab."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6cef169e-d15b-4d18-9cb7-8ea589557e6b",
   "metadata": {},
   "outputs": [],
   "source": [
    "full_duplicate_results = lab.get_issues(\"near_duplicate\")\n",
    "full_duplicate_results.sort_values(\"near_duplicate_score\").head()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0900691f-ee72-43e5-b0c5-90fa0a703594",
   "metadata": {},
   "source": [
    "To distinguish between near vs. exact duplicates, we can filter where the `distance_to_nearest_neighbor` column has value = 0.\n",
    "We specifically filter for exact duplicates between our training and test set in order to drop the extra copies of such data points from our training set."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b68e0418-86cf-431f-9107-2dd0a310ca42",
   "metadata": {},
   "outputs": [],
   "source": [
    "train_idx_cutoff = len(preprocessed_train_data) - 1  # last index of training data in the merged dataset\n",
    "\n",
    "# Create column to list which duplicate sets include some test data:\n",
    "full_duplicate_results['nd_set_has_index_over_training_cutoff'] = full_duplicate_results['near_duplicate_sets'].apply(lambda x: any(i > train_idx_cutoff for i in x))\n",
    "\n",
    "exact_duplicates = full_duplicate_results.query('is_near_duplicate_issue == True and near_duplicate_score == 0.0 and nd_set_has_index_over_training_cutoff == True').sort_values(\"near_duplicate_score\")\n",
    "exact_duplicates"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0e9bd131-429f-48af-b4fc-ed8b907950b9",
   "metadata": {},
   "outputs": [],
   "source": [
    "exact_duplicates_indices = exact_duplicates.index\n",
    "exact_duplicates_indices"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a6b10842-0ad2-4441-a8a4-a424d9c14557",
   "metadata": {},
   "source": [
    "Below we remove the exact duplicates that occur between our training and test sets from the training data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e72320ec-7792-4347-b2fb-630f2519127c",
   "metadata": {},
   "outputs": [],
   "source": [
    "indices_of_duplicates_to_drop = [idx for idx in exact_duplicates_indices if idx <= train_idx_cutoff]\n",
    "indices_of_duplicates_to_drop"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b1f56824-c706-4448-9581-a07ea0cd9041",
   "metadata": {},
   "source": [
    "Here are the examples we'll drop from our *training* data, since they are exact duplicates of *test* examples."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8520ba4a-3ad6-408a-b377-3f47c32d745a",
   "metadata": {},
   "outputs": [],
   "source": [
    "full_df.iloc[indices_of_duplicates_to_drop]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3c002665-c48b-4f04-91f7-ad112a49efc7",
   "metadata": {},
   "outputs": [],
   "source": [
    "df_train = df_train.drop(indices_of_duplicates_to_drop, axis=0).reset_index(drop=True)\n",
    "train_features = train_features.drop(indices_of_duplicates_to_drop, axis=0).reset_index(drop=True)\n",
    "train_labels = train_labels.drop(indices_of_duplicates_to_drop, axis=0).reset_index(drop=True).astype(int)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7553d5b2-1ba9-4dca-8110-eda6a8e11281",
   "metadata": {},
   "source": [
    "## 4. Train model with original (noisy) training data\n",
    "\n",
    "After handling fundamental issues in our training/test setup, let's fit our ML model to the training data. Here we use XGBoost as an example, but the same ideas of this tutorial apply to any other ML model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "36319f39-f563-4f63-913f-821373180350",
   "metadata": {},
   "outputs": [],
   "source": [
    "train_labels = train_labels[\"noisy_letter_grade\"]\n",
    "clf = XGBClassifier(tree_method=\"hist\", enable_categorical=True, random_state=SEED)\n",
    "clf.fit(train_features, train_labels)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c214e30b-4c82-4295-a3b0-68493904836b",
   "metadata": {},
   "source": [
    "### Compute out-of-sample predicted probabilities for the test data from this baseline model\n",
    "\n",
    "Make sure that the columns of your predicted class probabilities are properly ordered with respect to the ordering of classes, which for Datalab is: lexicographically sorted by class name."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "044c0eb1-299a-4851-b1bf-268d5bce56c1",
   "metadata": {},
   "outputs": [],
   "source": [
    "test_pred_probs = clf.predict_proba(test_features)"
   ]
  },
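  {
   "cell_type": "markdown",
   "id": "c1a2b3d4-5e6f-4a7b-8c9d-0e1f2a3b4c5d",
   "metadata": {},
   "source": [
    "If your classifier's `classes_` attribute is not already in lexicographic order, you can reorder the columns of `pred_probs` yourself before passing them to Datalab. Here is a minimal, self-contained sketch using fabricated class names and probabilities (not outputs of the model above):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Fabricated example: a classifier whose classes_ happen to be unsorted\n",
    "classes = np.array([\"B\", \"A\", \"C\"])\n",
    "pred_probs = np.array([[0.2, 0.7, 0.1],\n",
    "                       [0.5, 0.3, 0.2]])  # columns follow `classes`\n",
    "\n",
    "# Reorder columns to follow lexicographically sorted class names, as Datalab expects\n",
    "order = np.argsort(classes)\n",
    "pred_probs_sorted = pred_probs[:, order]\n",
    "print(classes[order])        # ['A' 'B' 'C']\n",
    "print(pred_probs_sorted[0])  # [0.7 0.2 0.1]\n",
    "```"
   ]
  },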
  {
   "cell_type": "markdown",
   "id": "5a7e3406-845a-42ff-87d5-a104837234c4",
   "metadata": {},
   "source": [
    "## 5. Check for issues in test data and manually address them\n",
    "\n",
    "While we could evaluate our model's accuracy using the predictions above, this will be unreliable if the test data have issues. Based on the given labels, model predictions, and feature representations, Datalab can automatically detect issues lurking in our test data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c43df278-abfe-40e5-9d48-2df3efea9379",
   "metadata": {},
   "outputs": [],
   "source": [
    "test_lab = Datalab(data=df_test, label_name=\"noisy_letter_grade\", task=\"classification\")\n",
    "test_features_array = test_features.to_numpy()  # could alternatively be model embeddings\n",
    "test_lab.find_issues(features=test_features_array, pred_probs=test_pred_probs)\n",
    "test_lab.report(show_summary_score=True, show_all_issues=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ed0c82c6-43df-4b4a-b8ad-0d5884ab068a",
   "metadata": {},
   "source": [
    "Datalab automatically audits our dataset for various common issues. The report above indicates many label issues in our data.\n",
    "\n",
    "We can see which examples are estimated to be mislabeled (as well as a numeric quality score quantifying how likely their label is correct) via the `get_issues()` method. To review the most likely label errors, we sort our data by the `label_score` (a lower score represents that the label is less likely to be correct)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "77c7f776-54b3-45b5-9207-715d6d2e90c0",
   "metadata": {},
   "outputs": [],
   "source": [
    "test_label_issue_results = test_lab.get_issues(\"label\")\n",
    "test_label_issues_ordered = df_test.join(test_label_issue_results)\n",
    "test_label_issues_ordered = test_label_issues_ordered[test_label_issue_results[\"is_label_issue\"] == True].sort_values(\"label_score\")\n",
    "\n",
    "print(test_label_issues_ordered)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "940c4c6c-ce5b-4863-9675-471ff7596229",
   "metadata": {},
   "source": [
    "The dataframe above shows the original label (`given_label`) for examples that cleanlab finds most likely to be mislabeled, as well as an alternative `predicted_label` for each example. These examples have likely been labeled incorrectly and should be carefully re-examined. After manually inspecting our label issues above, we can add the indices for the label issues we want to remove from our data to our previously defined list. \n",
    "\n",
    "Remember to inspect and **manually** handle issues detected in your test data and to **avoid** handling them automatically. Otherwise you risk misleading model evaluations!\n",
    "\n",
    "In this case, we manually found that the first 11 label issues with lowest `label_score` correspond to real label errors. We'll drop those data points from our test set, in order to curate a cleaner test set. Here we solely address mislabeled data for brevity, but you can similarly address other issues detected in your test data to ensure the most reliable model evaluation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7e218d04-0729-4f42-b264-51c73601ebe6",
   "metadata": {},
   "outputs": [],
   "source": [
    "indices_to_drop_from_test_data = test_label_issues_ordered.index[:11]  # found by manually inspecting test_label_issues_ordered"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7e2bdb41-321e-4929-aa01-1f60948b9e8b",
   "metadata": {},
   "outputs": [],
   "source": [
    "df_test_cleaned = df_test.drop(indices_to_drop_from_test_data, axis=0).reset_index(drop=True)\n",
    "test_features = test_features.drop(indices_to_drop_from_test_data, axis=0).reset_index(drop=True)\n",
    "test_labels = test_labels.drop(indices_to_drop_from_test_data, axis=0).reset_index(drop=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cc9aba3f-1413-4a04-a74d-eb2febaf6763",
   "metadata": {},
   "source": [
    "### Use clean test data to evaluate the performance of model trained on noisy training data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5ce2d89f-e832-448d-bfac-9941da15c895",
   "metadata": {},
   "outputs": [],
   "source": [
    "preds = clf.predict(test_features)\n",
    "acc_original = accuracy_score(test_labels.astype(int), preds.astype(int))\n",
    "print(\n",
    "    f\"Accuracy of model fit to noisy training data, measured on clean test data: {round(acc_original*100,1)}%\"\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "045f5e46-8985-4a7c-bc6f-9f7be509b787",
   "metadata": {},
   "source": [
    "Although curating clean test data does not directly help train a better ML model, more reliable model evaluation can improve your overall ML project. For instance, clean test data enables better informed decisions regarding when to deploy a model and better model/hyperparameter selection. \n",
    "While manually curating data can be tedious, [Cleanlab Studio](https://cleanlab.ai/blog/data-centric-ai/) offers data correction interfaces to streamline this work.\n",
    "\n",
    "\n",
    "## 6. Check for issues in training data and algorithmically correct them\n",
    "\n",
    "To run Datalab on our training set, we first compute out-of-sample predicted probabilities for our training data (via cross-validation)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9f437756-112e-4531-84fc-6ceadd0c9ef5",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.model_selection import cross_val_predict\n",
    "\n",
    "num_crossval_folds = 5\n",
    "pred_probs = cross_val_predict(\n",
    "    clf,\n",
    "    train_features,\n",
    "    train_labels,\n",
    "    cv=num_crossval_folds,\n",
    "    method=\"predict_proba\",\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "323134e9-8339-4847-9a1d-455ca0a6f449",
   "metadata": {},
   "source": [
    "Based on these ML model outputs, we similarly run Datalab to detect issues in our training data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "707625f6",
   "metadata": {},
   "outputs": [],
   "source": [
    "train_features_array = train_features.to_numpy()  # could alternatively be model embeddings\n",
    "\n",
    "train_lab = Datalab(data=df_train, label_name=\"noisy_letter_grade\", task=\"classification\")\n",
    "train_lab.find_issues(features=train_features_array, pred_probs=pred_probs)\n",
    "train_lab.report(show_summary_score=True, show_all_issues=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d3e46f20",
   "metadata": {},
   "source": [
    "Now instead of manually inspecting the detected issues in our training data, we will **automatically filter** all data points out of the training set that cleanlab has flagged as being likely mislabeled, outliers, or near duplicates. Unlike the test data which cannot be blindly auto-curated because we must ensure reliable model evaluation, the training data can be more aggressively modified as long as we're able to faithfully evaluate the resulting fitted model. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "25afe46c-a521-483c-b168-728c76d970dc",
   "metadata": {},
   "outputs": [],
   "source": [
    "label_issue_results = train_lab.get_issues(\"label\")\n",
    "label_issues_idx = label_issue_results[label_issue_results[\"is_label_issue\"] == True].index\n",
    "label_issues_idx"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6efcf06f-cc40-4964-87df-5204d3b1b9d4",
   "metadata": {},
   "outputs": [],
   "source": [
    "near_duplicates = train_lab.get_issues(\"near_duplicate\")\n",
    "near_duplicates_idx = near_duplicates[near_duplicates[\"is_near_duplicate_issue\"] == True].index\n",
    "near_duplicates_idx"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7bc87d72-bbd5-4ed2-bc38-2218862ddfbd",
   "metadata": {},
   "outputs": [],
   "source": [
    "outliers = train_lab.get_issues(\"outlier\")\n",
    "outliers_idx = outliers[outliers[\"is_outlier_issue\"] == True].index\n",
    "outliers_idx"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9c70be3e-0ba2-4e3e-8c50-359d402ca1fe",
   "metadata": {},
   "outputs": [],
   "source": [
    "idx_to_drop = list(set(list(label_issues_idx) + list(near_duplicates_idx) + list(outliers_idx)))\n",
    "len(idx_to_drop)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "08080458-0cd7-447d-80e6-384cb8d31eaf",
   "metadata": {},
   "outputs": [],
   "source": [
    "df_train_curated = df_train.drop(idx_to_drop, axis=0).reset_index(drop=True)\n",
    "train_features = train_features.drop(idx_to_drop, axis=0).reset_index(drop=True)\n",
    "train_labels = train_labels.drop(idx_to_drop, axis=0).reset_index(drop=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "af8560d6-70e3-4cee-944e-49f047b9fff4",
   "metadata": {},
   "source": [
    "## 7. Train model on cleaned training data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "009bb215-4d26-47da-a230-d0ccf4122629",
   "metadata": {},
   "outputs": [],
   "source": [
    "clean_clf = XGBClassifier(tree_method=\"hist\", enable_categorical=True, random_state=SEED)\n",
    "clean_clf.fit(train_features, train_labels)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1b0f83f8-07e6-4702-a39e-94336268bfef",
   "metadata": {},
   "source": [
    "### Use clean test data to evaluate the performance of model trained on cleaned training data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "dcaeda51-9b24-4c04-889d-7e63563594fc",
   "metadata": {},
   "outputs": [],
   "source": [
    "clean_preds = clean_clf.predict(test_features)\n",
    "acc_clean = accuracy_score(test_labels.astype(int), clean_preds.astype(int))\n",
    "print(\n",
    "    f\"Accuracy of model fit to clean training data, measured on clean test data: {round(acc_clean*100,1)}%\"\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b367cd66",
   "metadata": {},
   "source": [
    "Although this simple data filtering may not be the maximally effective training set curation (particularly if the initial ML model was poor-quality and hence the detected issues are inaccurate), we can at least faithfully assess its effect using our clean test data. In this case, we do see the resulting ML model has improved, even with this simple training data filtering."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d2624388-ad39-4c88-88c3-51a224ad549a",
   "metadata": {},
   "source": [
    "## 8. Identifying better training data curation strategies via hyperparameter optimization techniques"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "96e8e3fe-b15f-41e0-87dd-0efb786f2920",
   "metadata": {},
   "source": [
    "Thus far, we've seen how to detect issues in the training and test data to improve model training and evaluation.\n",
    "While we should manually curate the test data to ensure faithful evaluation, we are free to algorithmically curate the training data. Since the simple filtering strategy above is not necessarily optimal, here we consider how to identify a better algorithmic curation strategy. Note however that the **best strategy** will be a hybrid of automated and manual data corrections, as you can efficiently do via the data correction interface in [Cleanlab Studio](https://cleanlab.ai/blog/data-centric-ai/).\n",
    "\n",
    "\n",
    "Above we made basic training data edits to improve test performance, where each of these data edits can be quantitatively parameterized (e.g. what fraction of each issue type to filter from the dataset). We can use (hyper)parameter-tuning techniques to automatically search for combinations of training data edits that result in particularly accurate models. For brevity, we apply this hyperparameter optimization to maximize test set performance, but in practice you should use a separate *validation* set (which you can curate similarly to the test data in this tutorial, to ensure reliable model evaluations).\n",
    "\n",
    "We define a dict to parameterize our dataset changes:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1d92d78d-e4a8-4322-bf38-f5a5dae3bf17",
   "metadata": {},
   "outputs": [],
   "source": [
    "default_edit_params = {\n",
    "    \"drop_label_issue\": 0.5,\n",
    "    \"drop_outlier\": 0.5,\n",
    "    \"drop_near_duplicate\": 0.2,\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3adcc027-7e51-4583-ab6d-9fc73f847c90",
   "metadata": {},
   "source": [
    "These example values translate into the following training data edits:\n",
    "\n",
    "- `drop_label_issue`: We filter the 50% of data points flagged with label issues that have the most severe (lowest) label scores.\n",
    "- `drop_outlier`: We filter the 50% most severe outliers, based on outlier score, amongst the set of flagged outliers.\n",
    "- `drop_near_duplicate`: We drop **extra copies** from each set of near duplicates (roughly 20% of each set), always retaining at least one data point per set.\n",
    "\n",
    "We will search over various values of these parameters, fit a model to each correspondingly edited training dataset, and see which combination of values yields the best model.\n",
    "\n",
    "**Note:** Datalab detects other issue types that could also be considered in this algorithmic data curation.\n",
    "\n",
    "To more easily apply candidate training data edits, we first sort our data points flagged with each issue type based on the corresponding severity score:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "941ab2a6",
   "metadata": {},
   "outputs": [],
   "source": [
    "label_issues = train_lab.get_issues(\"label\").query(\"is_label_issue\").sort_values(\"label_score\")\n",
    "near_duplicates = train_lab.get_issues(\"near_duplicate\").query(\"is_near_duplicate_issue\").sort_values(\"near_duplicate_score\")\n",
    "outliers = train_lab.get_issues(\"outlier\").query(\"is_outlier_issue\").sort_values(\"outlier_score\")"
   ]
  },
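  {
   "cell_type": "markdown",
   "id": "3b9f1a2c-toy-topk-filter",
   "metadata": {},
   "source": [
    "As a toy sketch of the top-k% filtering described above (hypothetical indices and a made-up drop fraction; after the ascending sort, the most severe issues come first):\n",
    "\n",
    "```python\n",
    "sorted_idx = [4, 1, 7, 2, 9, 0]  # hypothetical indices, already sorted by severity score\n",
    "drop_frac = 0.5                  # drop the most severe 50%\n",
    "num_to_drop = int(len(sorted_idx) * drop_frac)\n",
    "print(sorted_idx[:num_to_drop])  # [4, 1, 7]\n",
    "```"
   ]
  },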
  {
   "cell_type": "markdown",
   "id": "7edc7e66",
   "metadata": {},
   "source": [
    "We introduce an `edit_data` function to implement candidate training data edits, fit a model to the edited training set, and evaluate it on our cleaned test data (you can skip these details)."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0cc581de-71db-4622-8cfd-ce8a10609bf9",
   "metadata": {},
   "source": [
    "<details><summary>See the implementation of `edit_data` **(click to expand)**</summary>\n",
    "    \n",
    "```python\n",
    "# Note: This pulldown content is for docs.cleanlab.ai, if running on local Jupyter or Colab, please ignore it.\n",
    "\n",
    "def edit_data(train_features, train_labels, label_issues, near_duplicates, outliers, \n",
    "              drop_label_issue, drop_near_duplicate, drop_outlier):\n",
    "    \"\"\"\n",
    "    Edits the training data by dropping a specified fraction of data points identified as label issues,\n",
    "    near duplicates, and outliers based on the full datasets provided for each issue type.\n",
    "\n",
    "    Args:\n",
    "        train_features (pd.DataFrame): DataFrame containing the training features.\n",
    "        train_labels (pd.Series): Series containing the training labels.\n",
    "        label_issues (pd.DataFrame): DataFrame containing data points with label issues.\n",
    "        near_duplicates (pd.DataFrame): DataFrame containing data points identified as near duplicates.\n",
    "        outliers (pd.DataFrame): DataFrame containing data points identified as outliers.\n",
    "        drop_label_issue (float): Fraction (between 0 and 1) of flagged label-issue data points to drop.\n",
    "        drop_near_duplicate (float): Fraction (between 0 and 1) of data points to drop from each near-duplicate set.\n",
    "        drop_outlier (float): Fraction (between 0 and 1) of flagged outlier data points to drop.\n",
    "\n",
    "    Returns:\n",
    "        pd.DataFrame: The cleaned training features.\n",
    "        pd.Series: The cleaned training labels.\n",
    "    \"\"\"\n",
    "    # Extract indices for each type of issue\n",
    "    label_issues_idx = label_issues.index.tolist()\n",
    "    near_duplicates_idx = near_duplicates.index.tolist()\n",
    "    outliers_idx = outliers.index.tolist()\n",
    "\n",
    "    # Calculate the number of each type of data point to drop except near duplicates, which requires separate logic\n",
    "    num_label_issues_to_drop = int(len(label_issues_idx) * drop_label_issue)\n",
    "    num_outliers_to_drop = int(len(outliers_idx) * drop_outlier)\n",
    "\n",
    "    # Calculate number of near duplicates to drop\n",
    "    # Assuming the 'near_duplicate_sets' are lists of indices (integers) of near duplicates\n",
    "    clusters = []\n",
    "    for i in near_duplicates_idx:\n",
    "        # Create a set for each cluster, add the current index to its near duplicate set\n",
    "        cluster = set(near_duplicates.at[i, 'near_duplicate_sets'])\n",
    "        cluster.add(i)\n",
    "        clusters.append(cluster)\n",
    "\n",
    "    # Deduplicate clusters by converting the list of sets to a set of frozensets\n",
    "    unique_clusters = set(frozenset(cluster) for cluster in clusters)\n",
    "\n",
    "    # If you need the unique clusters back in list of lists format:\n",
    "    unique_clusters_list = [list(cluster) for cluster in unique_clusters]\n",
    "\n",
    "    near_duplicates_idx_to_drop = []\n",
    "\n",
    "    for cluster in unique_clusters_list:\n",
    "        # Calculate the number of rows to drop, ensuring at least one datapoint remains\n",
    "        n_drop = max(math.ceil(len(cluster) * drop_near_duplicate), 1)  # Drop at least k% or 1 row\n",
    "        if len(cluster) > n_drop:  # Ensure we keep at least one datapoint\n",
    "            # Randomly select datapoints to drop\n",
    "            drops = random.sample(cluster, n_drop)\n",
    "        else:\n",
    "            # If the cluster is too small, adjust the number to keep at least one datapoint\n",
    "            drops = random.sample(cluster, len(cluster) - 1)  # Keep at least one\n",
    "        near_duplicates_idx_to_drop.extend(drops)\n",
    "\n",
    "    # Determine the specific indices to drop\n",
    "    label_issues_idx_to_drop = label_issues_idx[:num_label_issues_to_drop]\n",
    "    outliers_idx_to_drop = outliers_idx[:num_outliers_to_drop]\n",
    "\n",
    "    # Combine the indices to drop\n",
    "    idx_to_drop = list(set(label_issues_idx_to_drop + near_duplicates_idx_to_drop + outliers_idx_to_drop))\n",
    "\n",
    "    # Drop the rows from the training data\n",
    "    train_features_cleaned = train_features.drop(idx_to_drop).reset_index(drop=True)\n",
    "    train_labels_cleaned = train_labels.drop(idx_to_drop).reset_index(drop=True)\n",
    "\n",
    "    return train_features_cleaned, train_labels_cleaned\n",
    "```\n",
    "\n",
    "</details>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "50666fb9",
   "metadata": {
    "nbsphinx": "hidden"
   },
   "outputs": [],
   "source": [
    "def edit_data(train_features, train_labels, label_issues, near_duplicates, outliers, drop_label_issue, drop_near_duplicate, drop_outlier):\n",
    "    \"\"\"\n",
    "    Edits the training data by dropping a specified fraction of data points identified as label issues,\n",
    "    near duplicates, and outliers based on the full datasets provided for each issue type.\n",
    "    \n",
    "    Args:\n",
    "        train_features (pd.DataFrame): DataFrame containing the training features.\n",
    "        train_labels (pd.Series): Series containing the training labels.\n",
    "        label_issues (pd.DataFrame): DataFrame containing data points with label issues.\n",
    "        near_duplicates (pd.DataFrame): DataFrame containing data points identified as near duplicates.\n",
    "        outliers (pd.DataFrame): DataFrame containing data points identified as outliers.\n",
    "        drop_label_issue (float): Fraction (between 0 and 1) of flagged label-issue data points to drop.\n",
    "        drop_near_duplicate (float): Fraction (between 0 and 1) of data points to drop from each near-duplicate set.\n",
    "        drop_outlier (float): Fraction (between 0 and 1) of flagged outlier data points to drop.\n",
    "    \n",
    "    Returns:\n",
    "        pd.DataFrame: The cleaned training features.\n",
    "        pd.Series: The cleaned training labels.\n",
    "    \"\"\"\n",
    "    # Extract indices for each type of issue\n",
    "    label_issues_idx = label_issues.index.tolist()\n",
    "    near_duplicates_idx = near_duplicates.index.tolist()\n",
    "    outliers_idx = outliers.index.tolist()\n",
    "    \n",
    "    # Calculate the number of each type of data point to drop except near duplicates, which requires separate logic\n",
    "    num_label_issues_to_drop = int(len(label_issues_idx) * drop_label_issue)\n",
    "    num_outliers_to_drop = int(len(outliers_idx) * drop_outlier)\n",
    "\n",
    "    # Calculate number of near duplicates to drop\n",
    "    # Assuming the 'near_duplicate_sets' are lists of indices (integers) of near duplicates\n",
    "    clusters = []\n",
    "    for i in near_duplicates_idx:\n",
    "        # Create a set for each cluster, add the current index to its near duplicate set\n",
    "        cluster = set(near_duplicates.at[i, 'near_duplicate_sets'])\n",
    "        cluster.add(i)\n",
    "        clusters.append(cluster)\n",
    "    \n",
    "    # Deduplicate clusters by converting the list of sets to a set of frozensets\n",
    "    unique_clusters = set(frozenset(cluster) for cluster in clusters)\n",
    "    \n",
    "    # If you need the unique clusters back in list of lists format:\n",
    "    unique_clusters_list = [list(cluster) for cluster in unique_clusters]\n",
    "    \n",
    "    near_duplicates_idx_to_drop = []\n",
    "    \n",
    "    for cluster in unique_clusters_list:\n",
    "        # Calculate the number of rows to drop, ensuring at least one datapoint remains\n",
    "        n_drop = max(math.ceil(len(cluster) * drop_near_duplicate), 1)  # Drop at least k% or 1 row\n",
    "        if len(cluster) > n_drop:  # Ensure we keep at least one datapoint\n",
    "            # Randomly select datapoints to drop\n",
    "            drops = random.sample(cluster, n_drop)\n",
    "        else:\n",
    "            # If the cluster is too small, adjust the number to keep at least one datapoint\n",
    "            drops = random.sample(cluster, len(cluster) - 1)  # Keep at least one\n",
    "        near_duplicates_idx_to_drop.extend(drops)\n",
    "    \n",
    "    # Determine the specific indices to drop\n",
    "    label_issues_idx_to_drop = label_issues_idx[:num_label_issues_to_drop]\n",
    "    outliers_idx_to_drop = outliers_idx[:num_outliers_to_drop]\n",
    "    \n",
    "    # Combine the indices to drop\n",
    "    idx_to_drop = list(set(label_issues_idx_to_drop + near_duplicates_idx_to_drop + outliers_idx_to_drop))\n",
    "    \n",
    "    # Drop the rows from the training data\n",
    "    train_features_cleaned = train_features.drop(idx_to_drop).reset_index(drop=True)\n",
    "    train_labels_cleaned = train_labels.drop(idx_to_drop).reset_index(drop=True)\n",
    "    \n",
    "    return train_features_cleaned, train_labels_cleaned"
   ]
  },
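  {
   "cell_type": "markdown",
   "id": "7c4d2e1f-near-dup-toy",
   "metadata": {},
   "source": [
    "To illustrate the near-duplicate handling inside `edit_data`, here is a minimal self-contained sketch on hypothetical indices (not from this dataset):\n",
    "\n",
    "```python\n",
    "import math\n",
    "import random\n",
    "\n",
    "random.seed(0)  # for reproducible random drops\n",
    "\n",
    "# Hypothetical near-duplicate sets: row index -> indices of its near duplicates\n",
    "near_duplicate_sets = {0: [1], 1: [0], 5: [7, 8], 7: [5, 8], 8: [5, 7]}\n",
    "\n",
    "# One cluster per flagged row; frozensets deduplicate overlapping clusters\n",
    "clusters = {frozenset([i, *dups]) for i, dups in near_duplicate_sets.items()}\n",
    "print(sorted(sorted(c) for c in clusters))  # [[0, 1], [5, 7, 8]]\n",
    "\n",
    "# Drop ~20% of each cluster (at least 1 row), never emptying a cluster\n",
    "idx_to_drop = []\n",
    "for cluster in clusters:\n",
    "    members = list(cluster)\n",
    "    n_drop = max(math.ceil(len(members) * 0.2), 1)\n",
    "    n_drop = min(n_drop, len(members) - 1)  # keep at least one copy\n",
    "    idx_to_drop.extend(random.sample(members, n_drop))\n",
    "print(len(idx_to_drop))  # 2 (one row dropped from each cluster)\n",
    "```"
   ]
  },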
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f5aa2883-d20d-481f-a012-fcc7ff8e3e7e",
   "metadata": {},
   "outputs": [],
   "source": [
    "from itertools import product\n",
    "\n",
    "# List of possible values for each data edit parameter to search over (finer grid will yield better results but longer runtimes)\n",
    "param_grid = {\n",
    "    'drop_label_issue': [0.2, 0.5, 0.7, 1.0],\n",
    "    'drop_near_duplicate': [0.0, 0.2, 0.5],\n",
    "    'drop_outlier': [0.2, 0.5, 0.7],\n",
    "}\n",
    "\n",
    "# Generate all combinations of parameters\n",
    "param_combinations = list(product(param_grid['drop_label_issue'], param_grid['drop_near_duplicate'], param_grid['drop_outlier']))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ce1c0ada-88b1-4654-b43f-3c0b59002979",
   "metadata": {},
   "outputs": [],
   "source": [
    "best_score = 0\n",
    "best_params = None\n",
    "\n",
    "for drop_label_issue, drop_near_duplicate, drop_outlier in param_combinations:\n",
    "    # Preprocess the data for the current combination of parameters\n",
    "    train_features_preprocessed, train_labels_preprocessed = edit_data(\n",
    "        train_features_v2, train_labels_v2, label_issues, near_duplicates, outliers,\n",
    "        drop_label_issue, drop_near_duplicate, drop_outlier)\n",
    "    \n",
    "    # Train and evaluate the model\n",
    "    model = XGBClassifier(tree_method=\"hist\", enable_categorical=True, random_state=SEED)\n",
    "    model.fit(train_features_preprocessed, train_labels_preprocessed)\n",
    "    predictions = model.predict(test_features)\n",
    "    accuracy = accuracy_score(test_labels.astype(int), predictions.astype(int))\n",
    "    \n",
    "    # Update the best score and parameters if the current model is better\n",
    "    if accuracy > best_score:\n",
    "        best_score = accuracy\n",
    "        best_params = {'drop_label_issue': drop_label_issue, 'drop_near_duplicate': drop_near_duplicate, 'drop_outlier': drop_outlier}\n",
    "\n",
    "# Print the best parameters and score\n",
    "print(f\"Best parameters found in search: {best_params}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3f572acf-31c3-4874-9100-451796e35b06",
   "metadata": {},
   "outputs": [],
   "source": [
    "print(\n",
    "    f\"Accuracy of model fit to optimally cleaned training data, measured on clean test data: {round(best_score*100,1)}%\"\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cc306eff-f3b7-4098-9f7e-3d17d1d0016a",
   "metadata": {},
   "source": [
    "## 9. Conclusion\n",
    "\n",
    "This tutorial demonstrated how you can properly use cleanlab to improve your own ML model. When dealing with noisy data, you should **first manually curate your test data to ensure reliable model evaluation**. After that, you can **algorithmically curate your training data**. We demonstrated a simple hyperparameter tuning technique to identify effective training data edits that produce an accurate model, and we saw how cleanlab can help catch fundamental problems in the overall train/test setup, like duplicates/leakage and data drift.\n",
    "\n",
    "Note that we never evaluated different models with different test set versions (which does **not** yield meaningful comparisons). We curated the test data to be as high-quality as possible and then based all model evaluations on this fixed version of the test data.\n",
    "\n",
    "For brevity, this tutorial focused mostly on label issues and data pruning strategies. For classification tasks where you already have high-quality test data and solely want to handle label errors in your training data, cleanlab's `CleanLearning` class offers an *alternative* convenience method to **train a robust ML model**. You can achieve **better results** by considering additional data issues beyond label errors, and curation strategies like fixing incorrect values -- this is all streamlined via the intelligent data correction interface of [Cleanlab Studio](https://cleanlab.ai/blog/data-centric-ai/)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6a025a88",
   "metadata": {
    "nbsphinx": "hidden"
   },
   "outputs": [],
   "source": [
    "# Note: This cell is only for docs.cleanlab.ai, if running on local Jupyter or Colab, please ignore it.\n",
    "\n",
    "assert(acc_clean*100 - acc_original*100 >= 0.8)\n",
    "assert(best_score*100 - acc_clean*100 >= 2)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
