{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Sfmml1VCqCHm"
   },
   "source": [
    "# The Workflows of Data-centric AI for Classification with Noisy Labels\n",
    "\n",
    "In this tutorial, you will learn how to easily incorporate [cleanlab](https://github.com/cleanlab/cleanlab) into your ML development workflows to:\n",
    "\n",
    "- Automatically find issues such as label errors, outliers and near duplicates lurking in your classification data.\n",
    "- Score the label quality of every example in your dataset.\n",
    "- Train robust models in the presence of label issues.\n",
    "- Identify overlapping classes that you can merge to make the learning task less ambiguous.\n",
    "- Generate an overall label health score to track improvements in your labels as you clean your datasets over time.\n",
    "\n",
    "This tutorial provides an in-depth survey of many possible different ways that cleanlab can be utilized for Data-Centric AI. If you have a different use-case in mind that is not supported, please [tell us about it](https://github.com/cleanlab/cleanlab/issues)!\n",
    "While this tutorial focuses on standard multi-class (and binary) classification datasets, cleanlab also supports other tasks including: [data labeled by multiple annotators](multiannotator.html), [multi-label classification](../cleanlab/filter.rst#cleanlab.filter.find_label_issues), and [token classification of text](token_classification.html).\n",
    "\n",
    "**cleanlab is grounded in theory and science**. Learn more:\n",
    "\n",
    "[Research Publications](https://cleanlab.ai/research)  |  [Label Errors found by cleanlab](https://labelerrors.com/)  |  [Examples using cleanlab](https://github.com/cleanlab/examples)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "XBK4cAOUyLgW"
   },
   "source": [
    "## Install dependencies and import them"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can use pip to install all packages required for this tutorial as follows:\n",
    "\n",
    "```\n",
    "!pip install matplotlib \n",
    "!pip install cleanlab[datalab]\n",
    "# Make sure to install the version corresponding to this tutorial\n",
    "# E.g. if viewing master branch documentation:\n",
    "#     !pip install git+https://github.com/cleanlab/cleanlab.git\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "nbsphinx": "hidden"
   },
   "outputs": [],
   "source": [
    "# Package installation (hidden on docs website).\n",
    "# Package versions used: matplotlib==3.5.1 \n",
    "\n",
    "dependencies = [\"cleanlab\", \"matplotlib\", \"datasets\"]\n",
    "\n",
    "if \"google.colab\" in str(get_ipython()):  # Check if it's running in Google Colab\n",
    "    %pip install cleanlab  # for colab\n",
    "    cmd = ' '.join([dep for dep in dependencies if dep != \"cleanlab\"])\n",
    "    %pip install $cmd\n",
    "else:\n",
    "    dependencies_test = [dependency.split('>')[0] if '>' in dependency \n",
    "                         else dependency.split('<')[0] if '<' in dependency \n",
    "                         else dependency.split('=')[0] for dependency in dependencies]\n",
    "    missing_dependencies = []\n",
    "    for dependency in dependencies_test:\n",
    "        try:\n",
    "            __import__(dependency)\n",
    "        except ImportError:\n",
    "            missing_dependencies.append(dependency)\n",
    "\n",
    "    if len(missing_dependencies) > 0:\n",
    "        print(\"Missing required dependencies:\")\n",
    "        print(*missing_dependencies, sep=\", \")\n",
    "        print(\"\\nPlease install them before running the rest of this notebook.\")\n",
    "\n",
    "%config InlineBackend.print_figure_kwargs={\"facecolor\": \"w\"}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "avXlHJcXjruP"
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import cleanlab\n",
    "from cleanlab import Datalab\n",
    "from cleanlab.classification import CleanLearning\n",
    "from cleanlab.benchmarking import noise_generation\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "from sklearn.model_selection import cross_val_predict\n",
    "from numpy.random import multivariate_normal\n",
    "from matplotlib import pyplot as plt"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "I6VuupksjruQ"
   },
   "source": [
    "## Create the data (can skip these details)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<details><summary>See the code for data generation **(click to expand)**</summary>\n",
    "\n",
    "\n",
    "```python\n",
    "# Note: This pulldown content is for docs.cleanlab.ai, if running on local Jupyter or Colab, please ignore it.\n",
    "\n",
    "SEED = 0\n",
    "\n",
    "def make_data(\n",
    "    means=[[3, 2], [7, 7], [0, 8], [0, 10]],\n",
    "    covs=[\n",
    "        [[5, -1.5], [-1.5, 1]],\n",
    "        [[1, 0.5], [0.5, 4]],\n",
    "        [[5, 1], [1, 5]],\n",
    "        [[3, 1], [1, 1]],\n",
    "    ],\n",
    "    sizes=[100, 50, 50, 50],\n",
    "    avg_trace=0.8,\n",
    "    seed=SEED,  # set to None for non-reproducible randomness\n",
    "):\n",
    "    np.random.seed(seed=SEED)\n",
    "\n",
    "    K = len(means)  # number of classes\n",
    "    data = []\n",
    "    labels = []\n",
    "    test_data = []\n",
    "    test_labels = []\n",
    "\n",
    "    for idx in range(K):\n",
    "        data.append(\n",
    "            np.random.multivariate_normal(\n",
    "                mean=means[idx], cov=covs[idx], size=sizes[idx]\n",
    "            )\n",
    "        )\n",
    "        test_data.append(\n",
    "            np.random.multivariate_normal(\n",
    "                mean=means[idx], cov=covs[idx], size=sizes[idx]\n",
    "            )\n",
    "        )\n",
    "        labels.append(np.array([idx for i in range(sizes[idx])]))\n",
    "        test_labels.append(np.array([idx for i in range(sizes[idx])]))\n",
    "    X_train = np.vstack(data)\n",
    "    y_train = np.hstack(labels)\n",
    "    X_test = np.vstack(test_data)\n",
    "    y_test = np.hstack(test_labels)\n",
    "\n",
    "    # Compute p(y=k) the prior distribution over true labels.\n",
    "    py_true = np.bincount(y_train) / float(len(y_train))\n",
    "\n",
    "    noise_matrix_true = noise_generation.generate_noise_matrix_from_trace(\n",
    "        K,\n",
    "        trace=avg_trace * K,\n",
    "        py=py_true,\n",
    "        valid_noise_matrix=True,\n",
    "        seed=SEED,\n",
    "    )\n",
    "\n",
    "    # Generate our noisy labels using the noise_marix.\n",
    "    s = noise_generation.generate_noisy_labels(y_train, noise_matrix_true)\n",
    "    s_test = noise_generation.generate_noisy_labels(y_test, noise_matrix_true)\n",
    "    ps = np.bincount(s) / float(len(s))  # Prior distribution over noisy labels\n",
    "\n",
    "    return {\n",
    "        \"data\": X_train,\n",
    "        \"true_labels\": y_train,  # You never get to see these perfect labels.\n",
    "        \"labels\": s,  # Instead, you have these labels, which have some errors.\n",
    "        \"test_data\": X_test,\n",
    "        \"test_labels\": y_test,  # Perfect labels used for \"true\" measure of model's performance during deployment.\n",
    "        \"noisy_test_labels\": s_test,  # With IID train/test split, you'd have these labels, which also have some errors.\n",
    "        \"ps\": ps,\n",
    "        \"py_true\": py_true,\n",
    "        \"noise_matrix_true\": noise_matrix_true,\n",
    "        \"class_names\": [\"purple\", \"blue\", \"seafoam green\", \"yellow\"],\n",
    "    }\n",
    "\n",
    "\n",
    "data_dict = make_data()\n",
    "for key, val in data_dict.items():  # Map data_dict to variables in namespace\n",
    "    exec(key + \"=val\")\n",
    "\n",
    "# Display dataset visually using matplotlib\n",
    "def plot_data(data, circles, title, alpha=1.0):\n",
    "    plt.figure(figsize=(14, 5))\n",
    "    plt.scatter(data[:, 0], data[:, 1], c=labels, s=60)\n",
    "    for i in circles:\n",
    "        plt.plot(\n",
    "            data[i][0],\n",
    "            data[i][1],\n",
    "            \"o\",\n",
    "            markerfacecolor=\"none\",\n",
    "            markeredgecolor=\"red\",\n",
    "            markersize=14,\n",
    "            markeredgewidth=2.5,\n",
    "            alpha=alpha\n",
    "        )\n",
    "    _ = plt.title(title, fontsize=25)\n",
    "```\n",
    "\n",
    "</details>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "nbsphinx": "hidden"
   },
   "outputs": [],
   "source": [
    "SEED = 0\n",
    "\n",
    "def make_data(\n",
    "    means=[[3, 2], [7, 7], [0, 8], [0, 10]],\n",
    "    covs=[\n",
    "        [[5, -1.5], [-1.5, 1]],\n",
    "        [[1, 0.5], [0.5, 4]],\n",
    "        [[5, 1], [1, 5]],\n",
    "        [[3, 1], [1, 1]],\n",
    "    ],\n",
    "    sizes=[100, 50, 50, 50],\n",
    "    avg_trace=0.8,\n",
    "    seed=SEED,  # set to None for non-reproducible randomness\n",
    "):\n",
    "    np.random.seed(seed=SEED)\n",
    "\n",
    "    K = len(means)  # number of classes\n",
    "    data = []\n",
    "    labels = []\n",
    "    test_data = []\n",
    "    test_labels = []\n",
    "\n",
    "    for idx in range(K):\n",
    "        data.append(\n",
    "            np.random.multivariate_normal(\n",
    "                mean=means[idx], cov=covs[idx], size=sizes[idx]\n",
    "            )\n",
    "        )\n",
    "        test_data.append(\n",
    "            np.random.multivariate_normal(\n",
    "                mean=means[idx], cov=covs[idx], size=sizes[idx]\n",
    "            )\n",
    "        )\n",
    "        labels.append(np.array([idx for i in range(sizes[idx])]))\n",
    "        test_labels.append(np.array([idx for i in range(sizes[idx])]))\n",
    "    X_train = np.vstack(data)\n",
    "    y_train = np.hstack(labels)\n",
    "    X_test = np.vstack(test_data)\n",
    "    y_test = np.hstack(test_labels)\n",
    "\n",
    "    # Compute p(y=k) the prior distribution over true labels.\n",
    "    py_true = np.bincount(y_train) / float(len(y_train))\n",
    "\n",
    "    noise_matrix_true = noise_generation.generate_noise_matrix_from_trace(\n",
    "        K,\n",
    "        trace=avg_trace * K,\n",
    "        py=py_true,\n",
    "        valid_noise_matrix=True,\n",
    "        seed=SEED,\n",
    "    )\n",
    "\n",
    "    # Generate our noisy labels using the noise_marix.\n",
    "    s = noise_generation.generate_noisy_labels(y_train, noise_matrix_true)\n",
    "    s_test = noise_generation.generate_noisy_labels(y_test, noise_matrix_true)\n",
    "    ps = np.bincount(s) / float(len(s))  # Prior distribution over noisy labels\n",
    "\n",
    "    return {\n",
    "        \"data\": X_train,\n",
    "        \"true_labels\": y_train,  # You never get to see these perfect labels.\n",
    "        \"labels\": s,  # Instead, you have these labels, which have some errors.\n",
    "        \"test_data\": X_test,\n",
    "        \"test_labels\": y_test,  # Perfect labels used for \"true\" measure of model's performance during deployment.\n",
    "        \"noisy_test_labels\": s_test,  # With IID train/test split, you'd have these labels, which also have some errors.\n",
    "        \"ps\": ps,\n",
    "        \"py_true\": py_true,\n",
    "        \"noise_matrix_true\": noise_matrix_true,\n",
    "        \"class_names\": [\"purple\", \"blue\", \"seafoam green\", \"yellow\"],\n",
    "    }\n",
    "\n",
    "\n",
    "data_dict = make_data()\n",
    "for key, val in data_dict.items():  # Map data_dict to variables in namespace\n",
    "    exec(key + \"=val\")\n",
    "\n",
    "# Display dataset visually using matplotlib\n",
    "def plot_data(data, circles, title, alpha=1.0):\n",
    "    plt.figure(figsize=(14, 5))\n",
    "    plt.scatter(data[:, 0], data[:, 1], c=labels, s=60)\n",
    "    for i in circles:\n",
    "        plt.plot(\n",
    "            data[i][0],\n",
    "            data[i][1],\n",
    "            \"o\",\n",
    "            markerfacecolor=\"none\",\n",
    "            markeredgecolor=\"red\",\n",
    "            markersize=14,\n",
    "            markeredgewidth=2.5,\n",
    "            alpha=alpha\n",
    "        )\n",
    "    _ = plt.title(title, fontsize=25)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "true_errors = np.where(true_labels != labels)[0]\n",
    "plot_data(data, circles=true_errors, title=\"A realistic, messy dataset with 4 classes\", alpha=0.3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "AM6E7tNS9pZn"
   },
   "source": [
    "The figure above represents a toy dataset we'll use to demonstrate various cleanlab functionality. In this data, the features *X* are 2-dimensional and examples are colored according to their *given* label above.\n",
    "\n",
    "Like [many real-world datasets](https://labelerrors.com/), the given label happens to be incorrect for some of the examples (**circled in red**) in this dataset!"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## **Workflow 1:** Use Datalab to detect many types of issues "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Datalab offers an easy interface to detect all sorts of common real-world issue in your dataset. Internally it uses many data quality algorithms, and these methods can also be directly invoked — as demonstrated in some of the subsequent workflows here."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Datalab offers several ways of loading the data\n",
    "# we’ll simply wrap the training features and noisy labels in a dictionary. \n",
    "data_dict = {\"X\": data, \"y\": labels}\n",
    "\n",
    "# get out of sample predicted probabilities via cross-validation.\n",
    "yourFavoriteModel = LogisticRegression(verbose=0, random_state=SEED)\n",
    "pred_probs = cross_val_predict(\n",
    "    estimator=yourFavoriteModel, X=data, y=labels, cv=3, method=\"predict_proba\"\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "All that is need to audit your data is initalize a Datalab object with your dataset and call `find_issues()`. \n",
    "\n",
    "Pass in the predicted probabilities and feature embeddings for your data and Datalab will do all the work!\n",
    "You do not necessarily need to provide all of this information depending on which types of issues you are interested in, but the more inputs you provide, the more types of issues `Datalab` can detect in your data. Using a better model to produce these inputs will ensure cleanlab more accurately estimates issues.\n",
    "Make sure that the columns of your `pred_probs` are properly ordered with respect to the ordering of classes, which for Datalab is: lexicographically sorted by class name."
   ]
  },
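  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional sanity check (a sketch, not required): with integer class labels, sklearn's\n",
    "# cross_val_predict orders the pred_probs columns by sorted class value (clf.classes_),\n",
    "# which matches the ordering Datalab expects.\n",
    "assert pred_probs.shape == (len(labels), len(np.unique(labels)))\n",
    "assert np.allclose(pred_probs.sum(axis=1), 1.0)  # each row is a probability distribution"
   ]
  },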
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "lab = Datalab(data_dict, label_name=\"y\")\n",
    "lab.find_issues(pred_probs=pred_probs, features=data)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "After the audit is complete, review the findings using the `report` method:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "lab.report()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "ZmUd-5tljruT"
   },
   "source": [
    "## **Workflow 2:** Use CleanLearning for more robust Machine Learning\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "AaHC5MRKjruT"
   },
   "outputs": [],
   "source": [
    "yourFavoriteModel = LogisticRegression(verbose=0, random_state=SEED)\n",
    "\n",
    "# CleanLearning: Machine Learning with cleaned data (given messy, real-world data)\n",
    "cl = cleanlab.classification.CleanLearning(yourFavoriteModel, seed=SEED)\n",
    "\n",
    "# Fit model to messy, real-world data, automatically training on cleaned data.\n",
    "_ = cl.fit(data, labels)\n",
    "\n",
    "# See the label quality for every example, which data has issues, and more.\n",
    "cl.get_label_issues().head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "78udGSU6jruT"
   },
   "source": [
    "### Clean Learning = Machine Learning with cleaned data\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "Wy27rvyhjruU"
   },
   "outputs": [],
   "source": [
    "# For comparison, this is how you would have trained your model normally (without Cleanlab)\n",
    "yourFavoriteModel = LogisticRegression(verbose=0, random_state=SEED)\n",
    "yourFavoriteModel.fit(data, labels)\n",
    "print(f\"Accuracy using yourFavoriteModel: {yourFavoriteModel.score(test_data, test_labels):.0%}\")\n",
    "\n",
    "# But CleanLearning can do anything yourFavoriteModel can do, but enhanced.\n",
    "# For example, CleanLearning gives you predictions (just like yourFavoriteModel)\n",
    "# but the magic is that CleanLearning was trained as if your data did not have label errors.\n",
    "print(f\"Accuracy using yourFavoriteModel (+ CleanLearning): {cl.score(test_data, test_labels):.0%}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "rtEh09G7764o"
   },
   "source": [
    "Note! *Accuracy* refers to the accuracy with respect to the *true* error-free labels of a test set., i.e. what we actually care about in practice because that's what real-world model performance is based on. If you don't have a clean test set, you can use cleanlab to make one :)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "_b8O6_J2jruU"
   },
   "source": [
    "## **Workflow 3:** Use CleanLearning to find_label_issues in one line of code\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "Db8YHnyVjruU"
   },
   "outputs": [],
   "source": [
    "# One line of code. Literally.\n",
    "issues = CleanLearning(yourFavoriteModel, seed=SEED).find_label_issues(data, labels)\n",
    "\n",
    "issues.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "8OOsvMoMjruU"
   },
   "source": [
    "### Visualize the twenty examples with lowest label quality to see if Cleanlab works.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "iJqAHuS2jruV"
   },
   "outputs": [],
   "source": [
    "lowest_quality_labels = issues[\"label_quality\"].argsort()[:20]\n",
    "plot_data(data, circles=lowest_quality_labels, title=\"The 20 lowest label quality examples\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "wdtPREswG2fe"
   },
   "source": [
    "Above, the top 20 label issues circled in red are found automatically using cleanlab (no true labels given).\n",
    "\n",
    "If you've already computed the label issues using ``CleanLearning``, you can pass them into `fit()` and it will train **much** faster (skips label-issue identification step)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "PcPTZ_JJG3Cx"
   },
   "outputs": [],
   "source": [
    "# CleanLearning can train faster if issues are provided at fitting time.\n",
    "cl.fit(data, labels, label_issues=issues)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "XYFkRMk-jruV"
   },
   "source": [
    "## **Workflow 4:** Use cleanlab to find dataset-level and class-level issues\n",
    "\n",
    "- Did you notice that the yellow and seafoam green class above are overlapping?\n",
    "- How can a model ever know (or learn) what's ground truth inside the yellow distribution?\n",
    "- If these two classes were merged, the model can learn more accurately from 3 classes (versus 4).\n",
    "\n",
    "cleanlab automatically finds data-set level issues like this, in one line of code. Check this out!\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "0lonvOYvjruV"
   },
   "outputs": [],
   "source": [
    "cleanlab.dataset.find_overlapping_classes(\n",
    "    labels=labels,\n",
    "    confident_joint=cl.confident_joint,  # cleanlab uses the confident_joint internally to quantify label noise (see cleanlab.count.compute_confident_joint)\n",
    "    class_names=class_names,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "ZXkMIKlGjruV"
   },
   "source": [
    "Do the results surprise you? Did you expect the purple and seafoam green to also have so much overlap?\n",
    "\n",
    "There are two things being happening here:\n",
    "\n",
    "1. **Distribution Overlap**: The green distribution has huge variance and overlaps with other distributions.\n",
    "   - Cleanlab handles this for you: read the theory behind cleanlab for overlapping classes here: https://arxiv.org/abs/1705.01936\n",
    "2. **Label Issues**: A ton of examples (which actually belong to the purple class) have been mislabeled as \"green\" in our dataset.\n",
    "\n",
    "### Now, let's see what happens if we merge classes \"seafoam green\" and \"yellow\"\n",
    "* The top two classes found automatically by ``cleanlab.dataset.find_overlapping_classes()``"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "MfqTCa3kjruV"
   },
   "outputs": [],
   "source": [
    "yourFavoriteModel1 = LogisticRegression(verbose=0, random_state=SEED)\n",
    "yourFavoriteModel1.fit(data, labels)\n",
    "print(f\"[Original classes] Accuracy of yourFavoriteModel: {yourFavoriteModel1.score(test_data, test_labels):.0%}\")\n",
    "\n",
    "merged_labels, merged_test_labels = np.array(labels), np.array(test_labels)\n",
    "\n",
    "# Merge classes: map all yellow-labeled examples to seafoam green\n",
    "merged_labels[merged_labels == 3] = 2\n",
    "merged_test_labels[merged_test_labels == 3] = 2\n",
    "\n",
    "# Re-run our comparison. Re-run your model on the newly labeled dataset.\n",
    "yourFavoriteModel2 = LogisticRegression(verbose=0, random_state=SEED)\n",
    "yourFavoriteModel2.fit(data, merged_labels)\n",
    "print(f\"[Modified classes] Accuracy of yourFavoriteModel: {yourFavoriteModel2.score(test_data, merged_test_labels):.0%}\")\n",
    "\n",
    "# Re-run CleanLearning as well.\n",
    "yourFavoriteModel3 = LogisticRegression(verbose=0, random_state=SEED)\n",
    "cl3 = cleanlab.classification.CleanLearning(yourFavoriteModel3, seed=SEED)\n",
    "cl3.fit(data, merged_labels)\n",
    "print(f\"[Modified classes] Accuracy of yourFavoriteModel (+ CleanLearning): {cl3.score(test_data, merged_test_labels):.0%}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Bi53hnRxjruW"
   },
   "source": [
    "While on one hand that's a huge improvement, it's important to remember that choosing among three classes is an easier task than choosing among four classes, so it's not fair to directly compare these numbers.\n",
    "\n",
    "Instead, the big takeaway is...\n",
    "if you get to choose your classes, combining overlapping classes can make the learning task easier for your model. But if you have lots of classes, how do you know which ones to merge?? That's when you use `cleanlab.dataset.find_overlapping_classes`.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "BxI7bgn8L_1K"
   },
   "source": [
    "## **Workflow 5:** Clean your test set too if you're doing ML with noisy labels!"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "iZ43QfbrNk0K"
   },
   "source": [
    "If your test and training data were randomly split (IID), then be aware that your test labels are likely noisy too! It is thus important to fix label issues in them before we can trust measures like test accuracy.\n",
    "\n",
    "* More about what can go wrong if you don't use a clean test set [in this paper](https://arxiv.org/abs/2103.14749)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "9ZtWAYXqMAPL"
   },
   "outputs": [],
   "source": [
    "from sklearn.metrics import accuracy_score\n",
    "\n",
    "# Fit your model on noisily labeled train data\n",
    "yourFavoriteModel = LogisticRegression(verbose=0, random_state=SEED)\n",
    "yourFavoriteModel.fit(data, labels)\n",
    "\n",
    "# Get predicted probabilities for test data (these are out-of-sample)\n",
    "my_test_pred_probs = yourFavoriteModel.predict_proba(test_data)\n",
    "my_test_preds = my_test_pred_probs.argmax(axis=1)  # predicted labels\n",
    "\n",
    "# Find label issues in the test data\n",
    "issues_test = CleanLearning(yourFavoriteModel, seed=SEED).find_label_issues(\n",
    "    labels=noisy_test_labels, pred_probs=my_test_pred_probs)\n",
    "\n",
    "# You should inspect issues_test and fix issues to ensure high-quality test data labels.\n",
    "corrected_test_labels = test_labels  # Here we'll pretend you have done this perfectly :)\n",
    "\n",
    "# Fit more robust version of model on noisily labeled training data\n",
    "cl = CleanLearning(yourFavoriteModel, seed=SEED).fit(data, labels)\n",
    "cl_test_preds = cl.predict(test_data)\n",
    "\n",
    "print(f\" Noisy Test Accuracy (on given test labels) using yourFavoriteModel: {accuracy_score(noisy_test_labels, my_test_preds):.0%}\")\n",
    "print(f\" Noisy Test Accuracy (on given test labels) using yourFavoriteModel (+ CleanLearning): {accuracy_score(noisy_test_labels, cl_test_preds):.0%}\")\n",
    "print(f\"Actual Test Accuracy (on corrected test labels) using yourFavoriteModel: {accuracy_score(corrected_test_labels, my_test_preds):.0%}\")\n",
    "print(f\"Actual Test Accuracy (on corrected test labels) using yourFavoriteModel (+ CleanLearning): {accuracy_score(corrected_test_labels, cl_test_preds):.0%}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "GluE5XAAjruW"
   },
   "source": [
    "## **Workflow 6:** One score to rule them all -- use cleanlab's overall dataset health score\n",
    "\n",
    "This score can be fairly compared across datasets or across versions of a dataset to track overall dataset quality (a.k.a. *dataset health*) over time.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "0rXP3ZPWjruW"
   },
   "outputs": [],
   "source": [
    "# One line of code.\n",
    "health = cleanlab.dataset.overall_label_health_score(\n",
    "    labels, confident_joint=cl.confident_joint\n",
    "    # cleanlab uses the confident_joint internally to quantify label noise (see cleanlab.count.compute_confident_joint)\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "M85Fta_bjruW"
   },
   "source": [
    "### How accurate is this dataset health score?\n",
    "\n",
    "Because we know the true labels (we created this toy dataset), we can compare with ground truth."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "-iRPe8KXjruW"
   },
   "outputs": [],
   "source": [
    "label_acc = sum(labels != true_labels) / len(labels)\n",
    "print(f\"Percentage of label issues guessed by cleanlab {1 - health:.0%}\")\n",
    "print(f\"Percentage of (ground truth) label errors): {label_acc:.0%}\")\n",
    "\n",
    "offset = (1 - label_acc) - health\n",
    "\n",
    "print(\n",
    "    f\"\\nQuestion: cleanlab seems to be overestimating.\"\n",
    "    f\" How do we account for this {offset:.0%} difference?\"\n",
    ")\n",
    "print(\n",
    "    \"Answer: Data points that fall in between two overlapping distributions are often \"\n",
    "    \"impossible to label and are counted as issues.\"\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "8hxY5lxJjruW"
   },
   "source": [
    "## **Workflow(s) 7:** Use count, rank, filter modules directly\n",
    "\n",
    "- Using these modules directly is intended for more experienced cleanlab users. But once you understand how they work, you can create numerous powerful workflows.\n",
    "- For these workflows, you **always** need two things:\n",
    "  1.  Out-of-sample predicted probabilities (e.g. computed via cross-validation)\n",
    "  2.  Labels (can contain label errors and various issues)\n",
    "\n",
    "#### cleanlab can compute out-of-sample  predicted probabilities for you:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "ZpipUliyjruW"
   },
   "outputs": [],
   "source": [
    "pred_probs = cleanlab.count.estimate_cv_predicted_probabilities(\n",
    "    data, labels, clf=yourFavoriteModel, seed=SEED\n",
    ")\n",
    "print(f\"pred_probs is a {pred_probs.shape} matrix of predicted probabilities\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "ftWk9CTrjruW"
   },
   "source": [
    "### **Workflow 7.1 (count)**: Fully characterize label noise (noise matrix, joint, prior of true labels, ...)\n",
    "\n",
    "Now that we have `pred_probs` and `labels`, advanced users can compute everything in `cleanlab.count`.\n",
    "\n",
    "- `py: prob(true_label=k)`\n",
    "  - For all classes K, this is the distribution over the actual true labels (which cleanlab can estimate for you even though you don't have the true labels).\n",
    "- `noise_matrix: p(noisy|true)`\n",
    "  - This describes how errors were introduced into your labels. It's a conditional probability matrix with the probability of flipping from the true class to every other class for the given label.\n",
    "- `inverse_noise_matrix: p(true|noisy)`\n",
    "  - This tells you the probability, for every class, that the true label is actually a different class.\n",
    "- `confident_joint`\n",
    "  - This is an unnormalized (count-based) estimate of the number of examples in our dataset with each possible (true label, given label) pairing.\n",
    "- `joint: p(true label, noisy label)`\n",
    "  - The joint distribution of noisy (given) and true labels is the most useful of all these statistics. From it, you can compute every other statistic listed above. One entry from this matrix can be interpreted as: \"The proportion of examples in our dataset whose true label is *i* and given label is *j*\".\n",
    "\n",
    "These five tools fully characterize class-conditional label noise in a dataset.\n",
    "\n",
    "#### Use cleanlab to estimate and visualize the joint distribution of label noise and noise matrix of label flipping rates:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "SLq-3q4xjruX"
   },
   "outputs": [],
   "source": [
    "(\n",
    "    py, noise_matrix, inverse_noise_matrix, confident_joint\n",
    ") = cleanlab.count.estimate_py_and_noise_matrices_from_probabilities(labels, pred_probs)\n",
    "\n",
    "# Note: you can also combine the above two lines of code into a single line of code like this\n",
    "(\n",
    "    py, noise_matrix, inverse_noise_matrix, confident_joint, pred_probs\n",
    ") = cleanlab.count.estimate_py_noise_matrices_and_cv_pred_proba(\n",
    "    data, labels, clf=yourFavoriteModel, seed=SEED\n",
    ")\n",
    "\n",
    "# Get the joint distribution of noisy and true labels from the confident joint\n",
    "# This is the most powerful statistic in machine learning with noisy labels.\n",
    "joint = cleanlab.count.estimate_joint(\n",
    "    labels, pred_probs, confident_joint=confident_joint\n",
    ")\n",
    "\n",
    "# Pretty print the joint distribution and noise matrix\n",
    "cleanlab.internal.util.print_joint_matrix(joint)\n",
    "cleanlab.internal.util.print_noise_matrix(noise_matrix)"
   ]
  },
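  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The statistics printed above are related by simple marginalization and normalization. As a sanity check, here is a small sketch (with a made-up 2-class `joint`; the index conventions are for illustration only and may differ from cleanlab's internals) showing how the class prior and both conditional matrices follow from the joint:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Made-up joint distribution: entry [i, j] = p(true label = i, given label = j)\n",
    "joint = np.array([[0.45, 0.05],\n",
    "                  [0.10, 0.40]])\n",
    "\n",
    "py = joint.sum(axis=1)                    # class prior p(true label = i)\n",
    "p_given = joint.sum(axis=0)               # marginal p(given label = j)\n",
    "flip_rates = joint / py[:, None]          # p(given label = j | true label = i)\n",
    "inverse_rates = joint / p_given[None, :]  # p(true label = i | given label = j)\n",
    "\n",
    "assert np.allclose(flip_rates.sum(axis=1), 1.0)     # rows are conditional distributions\n",
    "assert np.allclose(inverse_rates.sum(axis=0), 1.0)  # columns are conditional distributions\n",
    "```"
   ]
  },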
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "fKEsc-rBBbuW"
   },
   "source": [
    "In some applications, you may have a priori knowledge regarding some of these quantities. In this case, you can pass them directly into cleanlab which may be able to leverage this information to better identify label issues.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "g5LHhhuqFbXK"
   },
   "outputs": [],
   "source": [
    "cl3 = cleanlab.classification.CleanLearning(yourFavoriteModel, seed=SEED)\n",
    "_ = cl3.fit(data, labels, noise_matrix=noise_matrix_true)  # CleanLearning with a priori known noise_matrix"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "cfeJAGyxFFQN"
   },
   "source": [
    "### **Workflow 7.2 (filter):** Find label issues for any dataset and any model in one line of code\n",
    "\n",
    "Features of ``cleanlab.filter.find_label_issues``:\n",
    "\n",
    "* Versatility -- Choose from several [state-of-the-art](https://arxiv.org/abs/1911.00068) label-issue detection algorithms using ``filter_by=``.\n",
    "* Works with predicted probabilities from any model (the model itself is never needed).\n",
    "* One line of code :)\n",
    "\n",
    "Remember ``CleanLearning.find_label_issues``? It uses this method internally."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "p7w8F8ezBcet"
   },
   "outputs": [],
   "source": [
    "# Get out of sample predicted probabilities via cross-validation.\n",
    "# Here we demonstrate the use of sklearn cross_val_predict as another option to get cross-validated predicted probabilities\n",
    "pred_probs = cross_val_predict(\n",
    "    estimator=yourFavoriteModel, X=data, y=labels, cv=3, method=\"predict_proba\"\n",
    ")\n",
    "\n",
    "# Find label issues\n",
    "label_issues_indices = cleanlab.filter.find_label_issues(\n",
    "    labels=labels,\n",
    "    pred_probs=pred_probs,\n",
    "    filter_by=\"both\", # 5 available filter_by options\n",
    "    return_indices_ranked_by=\"self_confidence\",  # 3 available label quality scoring options for rank ordering\n",
    "    rank_by_kwargs={\n",
    "        \"adjust_pred_probs\": True  # adjust predicted probabilities (see docstring for more details)\n",
    "    },\n",
    ")\n",
    "\n",
    "# Return dataset indices of examples with label issues\n",
    "label_issues_indices"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "4-ANXupQJPH8"
   },
   "source": [
    "\n",
    "#### Again, we can visualize the twenty examples with the lowest label quality to check that cleanlab works."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "WETRL74tE_sU"
   },
   "outputs": [],
   "source": [
    "plot_data(data, circles=label_issues_indices[:20], title=\"Top 20 label issues found by cleanlab.filter.find_label_issues()\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "BcekDhvFLntB"
   },
   "source": [
    "### Workflow 7.2 supports several detection methods in ``find_label_issues()`` via the ``filter_by`` parameter.\n",
    "* Here, we evaluate the precision/recall/F1/accuracy of each method at detecting the true label issues."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "kCfdx2gOLmXS"
   },
   "outputs": [],
   "source": [
    "from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score\n",
    "import pandas as pd\n",
    "\n",
    "yourFavoriteModel = LogisticRegression(verbose=0, random_state=SEED)\n",
    "\n",
    "# Get cross-validated predicted probabilities via sklearn cross_val_predict\n",
    "pred_probs = cross_val_predict(\n",
    "    estimator=yourFavoriteModel, X=data, y=labels, cv=3, method=\"predict_proba\"\n",
    ")\n",
    "\n",
    "# Ground truth label issues to use for evaluating different filter_by options\n",
    "true_label_issues = (true_labels != labels)\n",
    "\n",
    "# Find label issues with different filter_by options\n",
    "filter_by_list = [\n",
    "    \"prune_by_noise_rate\",\n",
    "    \"prune_by_class\",\n",
    "    \"both\",\n",
    "    \"confident_learning\",\n",
    "    \"predicted_neq_given\",\n",
    "]\n",
    "\n",
    "results = []\n",
    "\n",
    "for filter_by in filter_by_list:\n",
    "\n",
    "    # Find label issues\n",
    "    label_issues = cleanlab.filter.find_label_issues(\n",
    "        labels=labels,\n",
    "        pred_probs=pred_probs,\n",
    "        filter_by=filter_by\n",
    "    )\n",
    "\n",
    "    precision = precision_score(true_label_issues, label_issues)\n",
    "    recall = recall_score(true_label_issues, label_issues)\n",
    "    f1 = f1_score(true_label_issues, label_issues)\n",
    "    acc = accuracy_score(true_label_issues, label_issues)\n",
    "\n",
    "    result = {\n",
    "        \"filter_by algorithm\": filter_by,\n",
    "        \"precision\": precision,\n",
    "        \"recall\": recall,\n",
    "        \"f1\": f1,\n",
    "        \"accuracy\": acc\n",
    "    }\n",
    "\n",
    "    results.append(result)\n",
    "\n",
    "# summary of results\n",
    "pd.DataFrame(results).sort_values(by='f1', ascending=False)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "vNkStbegYk7y"
   },
   "source": [
    "### **Workflow 7.3 (rank):** Automatically rank every example by a unique label quality score. Find errors using `cleanlab.count.num_label_issues` as a threshold.\n",
    "\n",
    "cleanlab can analyze every label in a dataset and provide a numerical score gauging its overall quality. Low-quality labels indicate examples that should be more closely inspected, perhaps because their given label is incorrect, or simply because they represent an ambiguous edge-case that's worth a second look."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "-uogYRWFYnuu"
   },
   "outputs": [],
   "source": [
    "# Estimate the number of label issues\n",
    "label_issues_count = cleanlab.count.num_label_issues(\n",
    "    labels=labels,\n",
    "    pred_probs=pred_probs\n",
    ")\n",
    "\n",
    "# Get label quality scores\n",
    "label_quality_scores = cleanlab.rank.get_label_quality_scores(\n",
    "    labels=labels,\n",
    "    pred_probs=pred_probs,\n",
    "    method=\"self_confidence\"\n",
    ")\n",
    "\n",
    "# Rank-order by label quality scores and get the top estimated number of label issues\n",
    "label_issues_indices = np.argsort(label_quality_scores)[:label_issues_count]\n",
    "\n",
    "label_issues_indices"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Qe-nGjdeYu3J"
   },
   "source": [
    "#### Again, we can visualize the label issues found to check that cleanlab works."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "pG-ljrmcYp9Q"
   },
   "outputs": [],
   "source": [
    "plot_data(data, circles=label_issues_indices[:20], title=\"Top 20 label issues using cleanlab.rank with cleanlab.count.num_label_issues()\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "ol57ouSTNAfZ"
   },
   "source": [
    "#### Not sure when to use Workflow 7.2 or 7.3 to find label issues?\n",
    "\n",
    "* Workflow 7.2 is the easiest to use as it's just one line of code.\n",
    "* Workflow 7.3 is modular and extensible. As we add more label and data quality scoring functions in ``cleanlab.rank``, Workflow 7.3 will always work.\n",
    "* Workflow 7.3 also suits users who already have a custom way to rank their data by label quality and just need a cut-off, which ``cleanlab.count.num_label_issues`` provides."
   ]
  },
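  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To illustrate the Workflow 7.3 pattern with a custom ranking, here is a minimal sketch (`my_custom_quality_score` is a hypothetical scoring function and the toy arrays are made up; in practice the cut-off `k` would come from ``cleanlab.count.num_label_issues(labels, pred_probs)``):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def my_custom_quality_score(labels, pred_probs):\n",
    "    # Hypothetical score: the model's probability assigned to each example's given label\n",
    "    return pred_probs[np.arange(len(labels)), labels]\n",
    "\n",
    "# Toy data: 5 examples, 2 classes; example 2 looks mislabeled\n",
    "pred_probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.95, 0.05], [0.3, 0.7], [0.6, 0.4]])\n",
    "labels = np.array([0, 1, 1, 1, 0])\n",
    "\n",
    "scores = my_custom_quality_score(labels, pred_probs)\n",
    "k = 1  # hardcoded here to keep the sketch self-contained\n",
    "issue_indices = np.argsort(scores)[:k]  # lowest-quality examples -> array([2])\n",
    "```"
   ]
  },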
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "gRfHlDlEKyRD"
   },
   "source": [
    "## **Workflow 8:** Ensembling label quality scores from multiple predictors"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "wL3ngCnuLEWd"
   },
   "outputs": [],
   "source": [
    "from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier\n",
    "\n",
    "# 3 models in ensemble\n",
    "model1 = LogisticRegression(penalty=\"l2\", verbose=0, random_state=SEED)\n",
    "model2 = RandomForestClassifier(max_depth=5, random_state=SEED)\n",
    "model3 = GradientBoostingClassifier(\n",
    "    n_estimators=100, learning_rate=1.0, max_depth=3, random_state=SEED\n",
    ")\n",
    "\n",
    "# Get cross-validated predicted probabilities from each model\n",
    "cv_pred_probs_1 = cross_val_predict(\n",
    "    estimator=model1, X=data, y=labels, cv=3, method=\"predict_proba\"\n",
    ")\n",
    "cv_pred_probs_2 = cross_val_predict(\n",
    "    estimator=model2, X=data, y=labels, cv=3, method=\"predict_proba\"\n",
    ")\n",
    "cv_pred_probs_3 = cross_val_predict(\n",
    "    estimator=model3, X=data, y=labels, cv=3, method=\"predict_proba\"\n",
    ")\n",
    "\n",
    "# List of predicted probabilities from each model\n",
    "pred_probs_list = [cv_pred_probs_1, cv_pred_probs_2, cv_pred_probs_3]\n",
    "\n",
    "# Get ensemble label quality scores\n",
    "label_quality_scores_best = cleanlab.rank.get_label_quality_ensemble_scores(\n",
    "    labels=labels, pred_probs_list=pred_probs_list, verbose=False\n",
    ")\n",
    "\n",
    "# Alternative approach: create single ensemble predictor and get its pred_probs\n",
    "cv_pred_probs_ensemble = (cv_pred_probs_1 + cv_pred_probs_2 + cv_pred_probs_3)/3  # uniform aggregation of predictions\n",
    "\n",
    "# Use this single set of pred_probs to find label issues\n",
    "label_quality_scores_better = cleanlab.rank.get_label_quality_scores(\n",
    "    labels=labels, pred_probs=cv_pred_probs_ensemble\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Z-ghgvqVcOJa"
   },
   "source": [
    "While ensembling different models' label quality scores (`label_quality_scores_best`) will often be superior to getting label quality scores from a single ensemble predictor (`label_quality_scores_better`), both approaches produce significantly better label quality scores than just using the predictions from a single model."
   ]
  },
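  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Why does uniformly averaging `pred_probs` yield a valid single predictor? Any convex combination of probability vectors is again a probability vector. A tiny sketch with made-up predictions from two models:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Made-up pred_probs from two models (4 examples, 2 classes; rows sum to 1)\n",
    "p1 = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.5, 0.5]])\n",
    "p2 = np.array([[0.7, 0.3], [0.4, 0.6], [0.8, 0.2], [0.3, 0.7]])\n",
    "\n",
    "weights = np.array([0.5, 0.5])  # uniform; any nonnegative weights summing to 1 also work\n",
    "p_ens = weights[0] * p1 + weights[1] * p2\n",
    "\n",
    "assert np.allclose(p_ens.sum(axis=1), 1.0)  # still valid probabilities\n",
    "```"
   ]
  },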
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Spending too much time on data quality?\n",
    "\n",
    "Using this open-source package effectively can require significant ML expertise and experimentation, plus handling detected data issues can be cumbersome.\n",
    "\n",
    "That’s why we built [Cleanlab Studio](https://cleanlab.ai/blog/data-centric-ai/) -- an automated platform to find **and fix** issues in your dataset, 100x faster and more accurately.  Cleanlab Studio automatically runs optimized data quality algorithms from this package on top of cutting-edge AutoML & Foundation models fit to your data, and helps you fix detected issues via a smart data correction interface. [Try it](https://cleanlab.ai/) for free!\n",
    "\n",
    "<p align=\"center\">\n",
    "  <img src=\"https://raw.githubusercontent.com/cleanlab/assets/master/cleanlab/ml-with-cleanlab-studio.png\" alt=\"The modern AI pipeline automated with Cleanlab Studio\">\n",
    "</p>"
   ]
  }
 ],
 "metadata": {
  "colab": {
   "collapsed_sections": [],
   "name": "tutorial_cleanlab_2_0.ipynb",
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
