{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Miscellaneous workflows with Datalab\n",
    "\n",
    "This tutorial demonstrates various useful things you can do with `Datalab` that may not be covered in other tutorials. First get familiar with `Datalab` via the [quickstart](datalab_quickstart.html)/[advanced](datalab_advanced.html) tutorials before going through this one."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "## Accelerate Issue Checks with Pre-computed kNN Graphs\n",
    "\n",
    "By default, `Datalab` will detect certain types of issues by constructing a k-nearest neighbors graph of your dataset using the [scikit-learn](https://scikit-learn.org/stable/modules/neighbors.html) package. Here we demonstrate how to use your own pre-computed k-nearest neighbors (kNN) graphs with `Datalab`. This allows you to use more efficient approximate kNN graphs to scale to bigger datasets.\n",
    "\n",
    "Using pre-computed kNN graphs is optional and not required for `Datalab` to function. `Datalab` can automatically compute these graphs for you.\n",
    "\n",
    "While we use a toy dataset for demonstration, these steps can be applied to any dataset."
   ]
  },
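  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a point of reference, here is a rough sketch of how such a kNN graph can be computed with scikit-learn alone on toy features (`Datalab`'s internal construction may differ in its exact settings):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from sklearn.neighbors import NearestNeighbors\n",
    "\n",
    "X = np.random.default_rng(0).normal(size=(100, 5))  # toy feature matrix\n",
    "\n",
    "# Sparse CSR matrix holding the distances to each point's 10 nearest neighbors\n",
    "knn = NearestNeighbors(n_neighbors=10, metric=\"cosine\").fit(X)\n",
    "knn_graph = knn.kneighbors_graph(mode=\"distance\")\n",
    "print(knn_graph.shape)  # (100, 100)\n",
    "```\n",
    "\n",
    "A graph of this form (exact or approximate) is what the `knn_graph` argument used below expects."
   ]
  },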
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1. Load and Prepare Your Dataset\n",
    "\n",
    "Here we'll generate a synthetic dataset, but you should replace this with your own dataset loading process."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from sklearn.datasets import make_classification\n",
    "\n",
    "# Set seed for reproducibility\n",
    "np.random.seed(0)\n",
    "\n",
    "# Replace this section with your own dataset loading\n",
    "# For demonstration, we create a synthetic classification dataset\n",
    "X, y = make_classification(\n",
    "    n_samples=5000,\n",
    "    n_features=5,\n",
    "    n_informative=5,\n",
    "    n_redundant=0,\n",
    "    n_repeated=0,\n",
    "    n_classes=2,\n",
    "    n_clusters_per_class=2,\n",
    "    flip_y=0.02,\n",
    "    class_sep=2.0,\n",
    "    shuffle=False,\n",
    "    random_state=0,\n",
    ")\n",
    "\n",
    "\n",
    "# Example: Add a near-duplicate example to the dataset\n",
    "X[-1] = X[-2] + np.random.rand(5) * 0.001"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2. Compute kNN Graph\n",
    "\n",
    "We will compute the kNN graph using [FAISS](https://github.com/facebookresearch/faiss), a library for efficient similarity search. This step involves creating a kNN graph that represents the nearest neighbors for each point in your dataset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import faiss\n",
    "import numpy as np\n",
    "\n",
    "# Faiss uses single precision, so we need to convert the data type\n",
    "X_faiss = np.float32(X)\n",
    "\n",
    "# Normalize the vectors for inner product similarity (effectively cosine similarity)\n",
    "faiss.normalize_L2(X_faiss)\n",
    "\n",
    "# Build the index using FAISS\n",
    "index = faiss.index_factory(X_faiss.shape[1], \"HNSW32,Flat\", faiss.METRIC_INNER_PRODUCT)\n",
    "\n",
    "# Add the dataset to the index\n",
    "index.add(X_faiss)\n",
    "\n",
    "# Perform the search to find k-nearest neighbors\n",
    "k = 10  # Number of neighbors to consider\n",
    "D, I = index.search(X_faiss, k + 1)  # Include the point itself during search\n",
    "\n",
    "# Remove the first column (self-distances)\n",
    "D, I = D[:, 1:], I[:, 1:]\n",
    "\n",
    "# Convert cosine similarity to cosine distance\n",
    "np.clip(1 - D, a_min=0, a_max=None, out=D)\n",
    "\n",
    "# Create the kNN graph\n",
    "from scipy.sparse import csr_matrix\n",
    "\n",
    "\n",
    "def create_knn_graph(distances: np.ndarray, indices: np.ndarray) -> csr_matrix:\n",
    "    \"\"\"\n",
    "    Create a K-nearest neighbors (KNN) graph in CSR format from provided distances and indices.\n",
    "\n",
    "    Parameters:\n",
    "    distances (np.ndarray): 2D array of shape (n_samples, n_neighbors) containing distances to nearest neighbors.\n",
    "    indices (np.ndarray): 2D array of shape (n_samples, n_neighbors) containing indices of nearest neighbors.\n",
    "\n",
    "    Returns:\n",
    "    scipy.sparse.csr_matrix: KNN graph in CSR format.\n",
    "    \"\"\"\n",
    "    assert distances.shape == indices.shape, \"distances and indices must have the same shape\"\n",
    "\n",
    "    n_samples, n_neighbors = distances.shape\n",
    "\n",
    "    # Convert to 1D arrays for CSR matrix creation\n",
    "    indices_1d = indices.ravel()\n",
    "    distances_1d = distances.ravel()\n",
    "    indptr = np.arange(0, n_samples * n_neighbors + 1, n_neighbors)\n",
    "\n",
    "    # Create the CSR matrix\n",
    "    return csr_matrix((distances_1d, indices_1d, indptr), shape=(n_samples, n_samples))\n",
    "\n",
    "\n",
    "knn_graph = create_knn_graph(D, I)\n",
    "\n",
    "# Ensure the kNN graph is sorted by row values\n",
    "from sklearn.neighbors import sort_graph_by_row_values\n",
    "sort_graph_by_row_values(knn_graph, copy=False, warn_when_not_sorted=False)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3. Train a Classifier and Obtain Predicted Probabilities\n",
    "\n",
    "Predicted class probabilities from a model trained on your dataset are used to identify label issues."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.linear_model import LogisticRegression\n",
    "from sklearn.model_selection import cross_val_predict\n",
    "\n",
    "# Obtain predicted probabilities using cross-validation\n",
    "clf = LogisticRegression()\n",
    "pred_probs = cross_val_predict(clf, X, y, cv=3, method=\"predict_proba\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4. Identify Data Issues Using Datalab\n",
    "Use the pre-computed kNN graph and predicted probabilities to find issues in the dataset using `Datalab`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from cleanlab import Datalab\n",
    "\n",
    "# Initialize Datalab with the dataset\n",
    "lab = Datalab(data={\"X\": X, \"y\": y}, label_name=\"y\", task=\"classification\")\n",
    "\n",
    "# Perform issue detection using the kNN graph and predicted probabilities, when possible\n",
    "lab.find_issues(knn_graph=knn_graph, pred_probs=pred_probs, features=X)\n",
    "\n",
    "# Collect the identified issues and a summary\n",
    "issues = lab.get_issues()\n",
    "issue_summary = lab.get_issue_summary()\n",
    "\n",
    "# Display the issues and summary\n",
    "display(issue_summary)\n",
    "display(issues)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Explanation:\n",
    "\n",
    "**Creating the kNN Graph:**\n",
    "\n",
    "- Compute the kNN graph using FAISS or another library, ensuring the self-points (points referring to themselves) are omitted from the neighbors.\n",
    "  - Some distance kernels or search algorithms (like those in FAISS) may return negative distances or suffer from numerical instability when comparing\n",
    "    points that are extremely close to each other. This can lead to incorrect results when constructing the kNN graph.\n",
    "  - **Note**: kNN graphs are generally poorly suited for detecting exact duplicates, especially when the number of exact duplicates exceeds the number of requested neighbors. The strengths of this data structure lie in the assumption that data points are similar but not identical, allowing efficient similarity searches and proximity-based analyses.\n",
    "  - If you are comfortable with exploring non-public API functions in the library, you can use the following helper function to ensure that exact duplicate sets are correctly represented in the kNN graph. Please note, this function is not officially supported and is not part of the public API:\n",
    "\n",
    "    ```python\n",
    "    from cleanlab.internal.neighbor.knn_graph import correct_knn_graph\n",
    "\n",
    "    knn_graph = correct_knn_graph(features=X_faiss, knn_graph=knn_graph)\n",
    "    ```\n",
    "- You may need to handle self-points yourself with third-party libraries.\n",
    "- Construct the CSR (Compressed Sparse Row) matrix from the distances and indices arrays.\n",
    "  - If you don't provide a kNN graph, `Datalab` can automatically construct one from a numerical `features` array in an accurate and reliable manner.\n",
    "- Sort the kNN graph by row values.\n",
    "\n",
    "When using approximate kNN graphs, it is important to understand their strengths and limitations to apply them effectively."
   ]
  },
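  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you build the kNN graph yourself, a quick sanity check of the properties above can catch problems early. This is a minimal sketch, using a scikit-learn graph as a stand-in for your own pre-computed one:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from sklearn.neighbors import NearestNeighbors, sort_graph_by_row_values\n",
    "\n",
    "X = np.random.default_rng(0).normal(size=(50, 3))\n",
    "knn_graph = NearestNeighbors(n_neighbors=5).fit(X).kneighbors_graph(mode=\"distance\")\n",
    "sort_graph_by_row_values(knn_graph, copy=False, warn_when_not_sorted=False)\n",
    "\n",
    "for i in range(knn_graph.shape[0]):\n",
    "    start, stop = knn_graph.indptr[i], knn_graph.indptr[i + 1]\n",
    "    assert i not in knn_graph.indices[start:stop]  # self-points omitted\n",
    "    assert (np.diff(knn_graph.data[start:stop]) >= 0).all()  # rows sorted by distance\n",
    "assert (knn_graph.data >= 0).all()  # no negative distances\n",
    "print(\"kNN graph passes the basic checks\")\n",
    "```"
   ]
  },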
  {
   "cell_type": "markdown",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "## Data Valuation\n",
    "\n",
    "In this section, we will show how to use `Datalab` to estimate how much each data point contributes to a trained classifier model. Data valuation helps you understand the importance of each data point, so you can identify more and less valuable data points for your machine learning models.\n",
    "\n",
    "We will use a text dataset for this example, but this approach can be applied to any dataset."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1. Load and Prepare the Dataset\n",
    "We will use a subset of the 20 Newsgroups dataset, which is a collection of newsgroup documents suitable for text classification tasks.\n",
    "For demonstration purposes, we'll classify documents from two categories: \"alt.atheism\" and \"sci.space\"."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.datasets import fetch_20newsgroups\n",
    "import pandas as pd\n",
    "\n",
    "# Load the 20 Newsgroups dataset\n",
    "newsgroups_train = fetch_20newsgroups(subset='train', categories=['alt.atheism', 'sci.space'], remove=('headers', 'footers', 'quotes'))\n",
    "\n",
    "# Create a DataFrame with the text data and labels\n",
    "df_text = pd.DataFrame({\"Text\": newsgroups_train.data, \"Label\": newsgroups_train.target})\n",
    "df_text[\"Label\"] = df_text[\"Label\"].map({i: category for (i, category) in enumerate(newsgroups_train.target_names)})\n",
    "\n",
    "# Display the first few samples\n",
    "df_text.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2. Vectorize the Text Data\n",
    "We will use a `TfidfVectorizer` to convert the text data into a numerical format suitable for machine learning models."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.feature_extraction.text import TfidfVectorizer\n",
    "\n",
    "# Initialize the TfidfVectorizer\n",
    "vectorizer = TfidfVectorizer()\n",
    "\n",
    "# Transform the text data into a feature matrix\n",
    "X_vectorized = vectorizer.fit_transform(df_text[\"Text\"])\n",
    "\n",
    "# Convert the sparse matrix to a dense matrix\n",
    "X = X_vectorized.toarray()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3. Perform Data Valuation with Datalab\n",
    "\n",
    "Next, we will initialize `Datalab` and perform data valuation to assess the value of each data point in the dataset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from cleanlab import Datalab\n",
    "\n",
    "# Initialize Datalab with the dataset\n",
    "lab = Datalab(data=df_text, label_name=\"Label\", task=\"classification\")\n",
    "\n",
    "# Perform data valuation\n",
    "lab.find_issues(features=X, issue_types={\"data_valuation\": {}})\n",
    "\n",
    "# Collect the identified issues\n",
    "data_valuation_issues = lab.get_issues(\"data_valuation\")\n",
    "\n",
    "# Display the data valuation issues\n",
    "display(data_valuation_issues)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4. (Optional) Visualize Data Valuation Scores\n",
    "Let's visualize the data valuation scores across our dataset.\n",
    "\n",
    "Cleanlab's Shapley scores are transformed to lie between 0 and 1: a score below 0.5 indicates a negative contribution to the model's training performance, while a score above 0.5 indicates a positive contribution.\n",
    "\n",
    "By examining the scores across different classes, we can identify whether positive or negative contributions are disproportionately concentrated in a single class. This can help detect biases in the training data."
   ]
  },
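  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For example, one way to quantify such concentration is the fraction of each class whose scores fall below 0.5 (illustrated here on hypothetical scores rather than the `Datalab` output above):\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "# Hypothetical data valuation scores and their class labels\n",
    "scores = pd.DataFrame({\n",
    "    \"data_valuation_score\": [0.48, 0.52, 0.55, 0.49, 0.51, 0.47],\n",
    "    \"given_label\": [\"alt.atheism\", \"sci.space\", \"sci.space\", \"alt.atheism\", \"sci.space\", \"alt.atheism\"],\n",
    "})\n",
    "\n",
    "# Fraction of each class with a negative contribution (score < 0.5)\n",
    "frac_negative = (scores[\"data_valuation_score\"] < 0.5).groupby(scores[\"given_label\"]).mean()\n",
    "print(frac_negative)  # alt.atheism: 1.0, sci.space: 0.0\n",
    "```"
   ]
  },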
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import seaborn as sns\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "# Prepare the data for plotting\n",
    "plot_data = (\n",
    "    data_valuation_issues\n",
    "    # Optionally, add a 'given_label' column to distinguish between labels in the histogram\n",
    "    .join(pd.DataFrame({\"given_label\": df_text[\"Label\"]}))\n",
    ")\n",
    "\n",
    "# Plot strip plots of data valuation scores for each label\n",
    "sns.stripplot(\n",
    "    data=plot_data,\n",
    "    x=\"data_valuation_score\",\n",
    "    hue=\"given_label\",  # Comment out if no labels should be used in the visualization\n",
    "    dodge=True,\n",
    "    jitter=0.3,\n",
    "    alpha=0.5,\n",
    ")\n",
    "\n",
    "plt.axvline(lab.info[\"data_valuation\"][\"threshold\"], color=\"red\", linestyle=\"--\", label=\"Issue Threshold\")\n",
    "\n",
    "plt.title(\"Strip plot of Data Valuation Scores by Label\")\n",
    "plt.xlabel(\"Data Valuation Score\")\n",
    "plt.legend()\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Learn more about the data valuation issue type in the [Issue Type Guide](../../cleanlab/datalab/guide/issue_type_description.html#data-valuation-issue)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "## Find Underperforming Groups in a Dataset\n",
    "\n",
    "Here we will demonstrate how to use `Datalab` to identify subgroups in a dataset over which the ML model is producing consistently worse predictions than for the overall dataset.\n",
    "\n",
    "`Datalab` will automatically find underperforming groups if you provide numerical embeddings and predicted probabilities from any model.\n",
    "For this section, we'll determine ourselves which data subgroups to consider, for example by using clustering.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1. Generate a Synthetic Dataset\n",
    "\n",
    "First, we will generate a synthetic dataset with blobs. This dataset will include some noisy labels in one of the blobs."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.datasets import make_blobs\n",
    "import numpy as np\n",
    "\n",
    "# Generate synthetic data with blobs\n",
    "X, y = make_blobs(n_samples=100, centers=3, n_features=2, random_state=42, cluster_std=1.0, shuffle=False)\n",
    "\n",
    "# Add noise to the labels\n",
    "n_noisy_labels = 30\n",
    "y[:n_noisy_labels] = np.random.randint(0, 2, n_noisy_labels)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2. Train a Classifier and Obtain Predicted Probabilities\n",
    "\n",
    "Next, we will train a basic classifier (you can use any type of model) and obtain predicted probabilities for the dataset using cross-validation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.linear_model import LogisticRegression\n",
    "from sklearn.model_selection import cross_val_predict\n",
    "\n",
    "# Obtain predicted probabilities using cross-validation\n",
    "clf = LogisticRegression(random_state=0)\n",
    "pred_probs = cross_val_predict(clf, X, y, cv=3, method=\"predict_proba\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3. (Optional) Cluster the Data\n",
    "\n",
    "Datalab identifies meaningful data subgroups by automatically clustering your dataset.\n",
    "You can optionally provide your own clusters to control this process. Here we show how to use KMeans clustering, but this manual clustering is entirely optional."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.cluster import KMeans\n",
    "from sklearn.metrics import silhouette_score\n",
    "from sklearn.model_selection import GridSearchCV\n",
    "\n",
    "\n",
    "# Function to use in GridSearchCV for silhouette score\n",
    "def silhouette_scorer(estimator, X):\n",
    "    cluster_labels = estimator.fit_predict(X)\n",
    "    return silhouette_score(X, cluster_labels)\n",
    "\n",
    "\n",
    "# Use GridSearchCV to determine the optimal number of clusters\n",
    "param_grid = {\"n_clusters\": range(2, 10)}\n",
    "grid_search = GridSearchCV(KMeans(random_state=0), param_grid, cv=3, scoring=silhouette_scorer)\n",
    "grid_search.fit(X)\n",
    "\n",
    "# Get the best estimator and predict clusters\n",
    "best_kmeans = grid_search.best_estimator_\n",
    "cluster_ids = best_kmeans.fit_predict(X)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4. Identify Underperforming Groups with Datalab\n",
    "\n",
    "We will use `Datalab` to find underperforming groups in the dataset based on the predicted probabilities and optionally the cluster assignments."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from cleanlab import Datalab\n",
    "import pandas as pd\n",
    "\n",
    "# Initialize Datalab with the dataset\n",
    "lab = Datalab(data={\"X\": X, \"y\": y}, label_name=\"y\", task=\"classification\")\n",
    "\n",
    "# Find issues related to underperforming groups, optionally using cluster_ids\n",
    "lab.find_issues(\n",
    "    # features=X,  # Uncomment this line if 'cluster_ids' is not provided to allow Datalab to run clustering automatically.\n",
    "    pred_probs=pred_probs,\n",
    "    issue_types={\n",
    "        \"underperforming_group\": {\n",
    "            \"threshold\": 0.75,          # Set a custom threshold for identifying underperforming groups.\n",
    "                                        # The default threshold is lower, optimized for higher precision (fewer false positives),\n",
    "                                        # but for this toy example, a higher threshold increases sensitivity to underperforming groups.\n",
    "            \"cluster_ids\": cluster_ids  # Optional: Provide cluster IDs if clustering is used.\n",
    "                                        # If not provided, Datalab will automatically run clustering under the hood.\n",
    "                                        # In that case, you need to provide the 'features' array as an additional argument.\n",
    "            },\n",
    "    },\n",
    ")\n",
    "\n",
    "# Collect the identified issues\n",
    "underperforming_group_issues = lab.get_issues(\"underperforming_group\").query(\"is_underperforming_group_issue\")\n",
    "\n",
    "# Display the issues along with given and predicted labels\n",
    "display(underperforming_group_issues.join(pd.DataFrame({\"given_label\": y, \"predicted_label\": pred_probs.argmax(axis=1)})))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 5. (Optional) Visualize the Results\n",
    "\n",
    "Finally, we will optionally visualize the dataset, highlighting the underperforming groups identified by `Datalab`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import matplotlib.pyplot as plt\n",
    "\n",
    "# Plot the original data points\n",
    "plt.scatter(X[:, 0], X[:, 1], c=y, cmap=\"tab10\")\n",
    "\n",
    "# Highlight the underperforming group (if any issues are detected)\n",
    "if not underperforming_group_issues.empty:\n",
    "    plt.scatter(\n",
    "        X[underperforming_group_issues.index, 0], X[underperforming_group_issues.index, 1],\n",
    "        s=100, facecolors='none', edgecolors='r', alpha=0.3, label=\"Underperforming Group\", linewidths=2.0\n",
    "    )\n",
    "else:\n",
    "    print(\"No underperforming group issues detected.\")\n",
    "\n",
    "# Add title and legend\n",
    "plt.title(\"Underperforming Groups in the Dataset\")\n",
    "plt.legend()\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Learn more about the underperforming group issue type in the [Issue Type Guide](../../cleanlab/datalab/guide/issue_type_description.html#underperforming-group-issue)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Predefining Data Slices for Detecting Underperforming Groups\n",
    "\n",
    "Instead of clustering the data to determine what data slices are considered when detecting underperforming groups, you can define these slices yourself.\n",
    "For a tabular dataset, say, you can use the values of a categorical column as cluster IDs to predefine the relevant data subgroups/slices to consider. This allows you to focus on meaningful slices of your data defined by domain knowledge or specific attributes."
   ]
  },
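  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For instance, `pandas.factorize` turns a categorical column into integer cluster IDs in one line (shown on a toy column here; the tutorial below uses `OrdinalEncoder` instead):\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "locations = pd.Series([\"Ohio\", \"Kansas\", \"Ohio\", \"Indiana\", \"Kansas\"])\n",
    "\n",
    "# Integer code per row, plus the slice each code refers to\n",
    "cluster_ids, slice_names = pd.factorize(locations)\n",
    "print(cluster_ids)        # [0 1 0 2 1]\n",
    "print(list(slice_names))  # ['Ohio', 'Kansas', 'Indiana']\n",
    "```"
   ]
  },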
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1. Load and Prepare the Dataset\n",
    "\n",
    "We'll work with a toy tabular dataset with several categorical and numerical columns, just to illustrate how to use predefined data slices for detecting underperforming groups."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Define the dataset as a multi-line string\n",
    "dataset_tsv = \"\"\"\n",
    "Age\tGender\tLocation\tEducation\tExperience\tHighSalary\n",
    "60\tOther\tIndiana\tPhD\t21\t0\n",
    "50\tMale\tIndiana\tBachelor's\t21\t0\n",
    "36\tFemale\tIndiana\tPhD\t21\t0\n",
    "64\tMale\tKansas\tHigh School\t37\t1\n",
    "29\tMale\tKansas\tPhD\t14\t0\n",
    "42\tMale\tOhio\tPhD\t7\t0\n",
    "60\tMale\tKansas\tHigh School\t26\t0\n",
    "40\tOther\tOhio\tBachelor's\t25\t0\n",
    "44\tMale\tIndiana\tHigh School\t29\t0\n",
    "32\tMale\tOhio\tPhD\t17\t0\n",
    "32\tMale\tKansas\tBachelor's\t17\t0\n",
    "45\tOther\tOhio\tPhD\t30\t0\n",
    "57\tMale\tCalifornia\tHigh School\t27\t1\n",
    "61\tMale\tKansas\tHigh School\t32\t0\n",
    "45\tOther\tIndiana\tPhD\t4\t0\n",
    "24\tOther\tKansas\tBachelor's\t9\t0\n",
    "43\tOther\tOhio\tMaster's\t3\t0\n",
    "23\tMale\tOhio\tHigh School\t8\t0\n",
    "45\tOther\tKansas\tHigh School\t16\t0\n",
    "51\tOther\tOhio\tMaster's\t27\t0\n",
    "59\tMale\tOhio\tMaster's\t29\t0\n",
    "23\tOther\tIndiana\tBachelor's\t8\t0\n",
    "42\tMale\tKansas\tPhD\t5\t0\n",
    "54\tFemale\tKansas\tMaster's\t34\t0\n",
    "33\tOther\tKansas\tPhD\t18\t0\n",
    "43\tFemale\tKansas\tPhD\t23\t0\n",
    "46\tMale\tOhio\tBachelor's\t28\t0\n",
    "48\tOther\tOhio\tPhD\t30\t0\n",
    "63\tMale\tKansas\tHigh School\t34\t0\n",
    "49\tFemale\tKansas\tPhD\t32\t1\n",
    "37\tMale\tKansas\tPhD\t20\t0\n",
    "36\tOther\tIndiana\tMaster's\t21\t1\n",
    "24\tOther\tIndiana\tHigh School\t9\t0\n",
    "58\tFemale\tKansas\tPhD\t32\t0\n",
    "28\tMale\tCalifornia\tMaster's\t2\t0\n",
    "42\tOther\tKansas\tBachelor's\t17\t0\n",
    "30\tFemale\tCalifornia\tPhD\t15\t1\n",
    "60\tOther\tOhio\tPhD\t30\t0\n",
    "39\tOther\tKansas\tBachelor's\t2\t0\n",
    "25\tMale\tOhio\tMaster's\t10\t0\n",
    "46\tOther\tIndiana\tPhD\t23\t0\n",
    "35\tMale\tIndiana\tBachelor's\t20\t0\n",
    "30\tOther\tOhio\tHigh School\t15\t0\n",
    "47\tFemale\tOhio\tMaster's\t22\t0\n",
    "23\tOther\tOhio\tHigh School\t1\t0\n",
    "41\tMale\tOhio\tHigh School\t26\t0\n",
    "49\tMale\tKansas\tBachelor's\t1\t0\n",
    "28\tFemale\tOhio\tMaster's\t13\t0\n",
    "29\tOther\tKansas\tBachelor's\t14\t0\n",
    "56\tOther\tIndiana\tBachelor's\t39\t1\n",
    "35\tFemale\tOhio\tBachelor's\t20\t0\n",
    "38\tOther\tCalifornia\tBachelor's\t8\t1\n",
    "57\tOther\tOhio\tMaster's\t38\t1\n",
    "61\tMale\tIndiana\tPhD\t28\t0\n",
    "25\tOther\tIndiana\tHigh School\t10\t0\n",
    "23\tOther\tKansas\tHigh School\t8\t0\n",
    "27\tFemale\tOhio\tMaster's\t12\t0\n",
    "63\tFemale\tIndiana\tHigh School\t23\t0\n",
    "25\tMale\tIndiana\tMaster's\t10\t0\n",
    "50\tOther\tOhio\tHigh School\t6\t0\n",
    "39\tOther\tKansas\tBachelor's\t24\t0\n",
    "47\tOther\tIndiana\tHigh School\t19\t0\n",
    "55\tMale\tIndiana\tPhD\t0\t0\n",
    "31\tMale\tOhio\tPhD\t7\t0\n",
    "57\tFemale\tKansas\tPhD\t15\t0\n",
    "35\tMale\tCalifornia\tPhD\t13\t0\n",
    "52\tOther\tOhio\tPhD\t11\t0\n",
    "36\tOther\tOhio\tMaster's\t21\t0\n",
    "29\tMale\tIndiana\tMaster's\t14\t0\n",
    "35\tOther\tIndiana\tHigh School\t20\t0\n",
    "44\tOther\tIndiana\tPhD\t29\t1\n",
    "61\tMale\tKansas\tHigh School\t1\t0\n",
    "42\tMale\tOhio\tPhD\t27\t0\n",
    "37\tOther\tIndiana\tPhD\t22\t0\n",
    "39\tOther\tKansas\tMaster's\t21\t0\n",
    "\"\"\"\n",
    "\n",
    "# Import necessary libraries\n",
    "from io import StringIO\n",
    "import pandas as pd\n",
    "\n",
    "# Load the dataset into a DataFrame\n",
    "df = pd.read_csv(\n",
    "    StringIO(dataset_tsv),\n",
    "    sep='\\t',\n",
    ")\n",
    "\n",
    "# Display the original DataFrame\n",
    "display(df)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Optional**: The categorical features of the dataset can be encoded as numerical values for easier processing. For simplicity, we will use `OrdinalEncoder` from [scikit-learn](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OrdinalEncoder.html)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.preprocessing import OrdinalEncoder\n",
    "\n",
    "# Encode the categorical columns\n",
    "columns_to_encode = [\"Gender\", \"Location\", \"Education\"]\n",
    "encoded_df = df.copy()\n",
    "encoder = OrdinalEncoder(dtype=int)\n",
    "encoded_df[columns_to_encode] = encoder.fit_transform(encoded_df[columns_to_encode])\n",
    "\n",
    "# Display the encoded DataFrame\n",
    "display(encoded_df)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2. Train a Classifier and Obtain Predicted Probabilities\n",
    "\n",
    "Next, we will train a basic classifier (you can use any type of model) and obtain predicted probabilities for the dataset using cross-validation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.linear_model import LogisticRegression\n",
    "from sklearn.model_selection import cross_val_predict\n",
    "\n",
    "# Split data\n",
    "X = encoded_df.drop(columns=[\"HighSalary\"])\n",
    "y = encoded_df[\"HighSalary\"]\n",
    "\n",
    "# Obtain predicted probabilities using cross-validation\n",
    "clf = LogisticRegression(random_state=0)\n",
    "pred_probs = cross_val_predict(clf, X, y, cv=3, method=\"predict_proba\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3. Define a Data Slice\n",
    "\n",
    "For a tabular dataset, you can use a categorical column’s values as pre-computed data slices, so that Datalab skips its default clustering step and directly uses the encoded values for each row in the\n",
    "dataset.\n",
    "\n",
    "For this example, we'll focus our attention on the `\"Location\"` column, which has 4 unique categorical values."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "cluster_ids = encoded_df[\"Location\"].to_numpy()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4. Identify Underperforming Groups with Datalab\n",
    "\n",
    "Now use `Datalab` to detect underperforming groups in the dataset based on the model's predicted probabilities and our predefined data slices."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from cleanlab import Datalab\n",
    "\n",
    "# Initialize Datalab with the dataset\n",
    "lab = Datalab(data=df, label_name=\"HighSalary\", task=\"classification\")\n",
    "\n",
    "# Find issues related to underperforming groups, optionally using cluster_ids\n",
    "lab.find_issues(\n",
    "    # features=X,  # Uncomment this line if 'cluster_ids' is not provided to allow Datalab to run clustering automatically.\n",
    "    pred_probs=pred_probs,\n",
    "    issue_types={\n",
    "        \"underperforming_group\": {\n",
    "            \"threshold\": 0.75,          # Set a custom threshold for identifying underperforming groups.\n",
    "                                        # The default threshold is lower, optimized for higher precision (fewer false positives),\n",
    "                                        # but for this toy example, a higher threshold increases sensitivity to underperforming groups.\n",
    "            \"cluster_ids\": cluster_ids  # Optional: Provide cluster IDs if manual data-slicing is used.\n",
    "                                        # If not provided, Datalab will automatically run clustering under the hood.\n",
    "                                        # In that case, you need to provide the 'features' array as an additional argument.\n",
    "            },\n",
    "    },\n",
    ")\n",
    "\n",
    "# Collect the identified issues\n",
    "underperforming_group_issues = lab.get_issues(\"underperforming_group\").query(\"is_underperforming_group_issue\")\n",
    "\n",
    "# Display the issues along with given and predicted labels\n",
    "display(underperforming_group_issues.join(pd.DataFrame({\"given_label\": y, \"predicted_label\": pred_probs.argmax(axis=1)})))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "## Detect if your dataset is non-IID\n",
    "\n",
    "Here we demonstrate how to discover when your data violates the foundational IID assumption that underpins most machine learning and analytics.\n",
    "Common violations (that can be caught with `Datalab`) include data drift and a lack of statistical independence, where different data points affect one another.\n",
    "This demonstration uses a toy 2D dataset."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1. Load Dataset\n",
    "\n",
    "For simplicity, we'll use a numerical dataset. If your data are not numerical, we recommend providing numeric representations of the data (neural network embeddings, or featurization like one-hot encoding, etc).\n",
    "\n",
    "By default, the non-IID issue check is automatically run by `Datalab` whenever you provide numerical data embeddings or predicted probabilities."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "np.random.seed(0)  # Set seed for reproducibility\n",
    "\n",
    "\n",
    "def generate_data_dependent(num_samples):\n",
    "    a1, a2, a3 = 0.6, 0.375, -0.975\n",
    "    X = [np.random.normal(1, 1, 2) for _ in range(3)]\n",
    "    X.extend(a1 * X[i-1] + a2 * X[i-2] + a3 * X[i-3] for i in range(3, num_samples))\n",
    "    return np.array(X)\n",
    "\n",
    "\n",
    "X = generate_data_dependent(50)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2. Run Datalab to test the IID assumption"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`Datalab` computes a p-value to test whether your data violates the IID assumption. A low p-value (close to 0) indicates strong evidence against the null hypothesis that the data was sampled IID, either because the data appear to be drifting in distribution or inter-dependent across samples."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from cleanlab import Datalab\n",
    "\n",
    "# Initialize Datalab with the dataset\n",
    "lab = Datalab(data={\"X\": X})\n",
    "\n",
    "# Check only for the non-IID issue, not other types of data issues\n",
    "lab.find_issues(features=X, issue_types={\"non_iid\": {}})\n",
    "\n",
    "print(\"p-value of the non-IID test:\", lab.get_issue_summary(\"non_iid\")[\"score\"].item())"
   ]
  },
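  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The p-value can also be turned into a yes/no decision by comparing it against a significance level. The sketch below uses the conventional 0.05 cutoff, which is an assumption for illustration rather than a `Datalab` recommendation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: converting the p-value into a decision at a chosen significance level\n",
    "p_value = lab.get_issue_summary(\"non_iid\")[\"score\"].item()\n",
    "alpha = 0.05  # conventional cutoff (an assumption for illustration, not a Datalab default)\n",
    "\n",
    "is_non_iid = p_value < alpha\n",
    "print(f\"Evidence against the IID assumption at alpha={alpha}: {is_non_iid}\")"
   ]
  },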
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Unlike certain issue types detected by `Datalab`, the non-IID issue is a property of the overall dataset as opposed to individual data points. As with other issue types, an overall issue score for the dataset is available via `get_issue_summary()`. For the non-IID issue type, this overall score is the p-value of a statistical test for violations of the IID assumption. The lower the p-value, the more evidence there is that your data are not IID.\n",
    "\n",
    "### 3. (Optional) Understand the nature of IID violations in your dataset"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To understand why our data appear non-IID, we can optionally investigate non-IID issues at the level of individual data points. But note that the IID assumption applies to the overall dataset, not to any individual data point. This individual data point analysis should only be used for further investigation, rather than to draw definitive conclusions about specific data points."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Per data point issues\n",
    "non_iid_issues = lab.get_issues(\"non_iid\")\n",
    "\n",
    "display(non_iid_issues.head(10))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's visualize the non-IID issues detected by `Datalab`. Remember: the individual per data point non-IID scores are not particularly meaningful, but their trends across the dataset may reveal how the dataset is non-IID.\n",
    "If your overall dataset is detected to be non-IID, then the data point with the lowest non-IID score is automatically assigned the `is_non_iid_issue` flag (but do not focus on this specific data point and instead try to understand your dataset as a whole)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import matplotlib.pyplot as plt\n",
    "\n",
    "non_iid_issues[\"non_iid_score\"].plot()\n",
    "\n",
    "# Highlight the point assigned as a non-iid issue\n",
    "idx = non_iid_issues.query(\"is_non_iid_issue\").index\n",
    "plt.scatter(idx, non_iid_issues.loc[idx, \"non_iid_score\"], color='red', label='Non-iid Issue', s=100)\n",
    "plt.title(\"Non-iid Scores\")\n",
    "plt.xlabel(\"Sample Index\")\n",
    "plt.ylabel(\"Non-iid Score\")\n",
    "plt.legend()\n",
    "plt.show()\n",
    "\n",
    "# Visualize dataset ordering\n",
    "plt.scatter(X[:, 0], X[:, 1], c=range(len(X)), cmap='coolwarm', s=100)\n",
    "plt.title(\"Dataset with data-dependent ordering\")\n",
    "plt.xlabel('Feature 1')\n",
    "plt.ylabel('Feature 2')\n",
    "\n",
    "plt.colorbar(label='Sample Index')\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Plotting the non-IID scores for each data point vs. the ordering of these data points in the dataset (index) may reveal: distribution drift, statistical dependence, or other concerns regarding how the dataset was collected.\n",
    "\n",
    "Learn more about the non-IID issue type in the [Issue Type Guide](../../cleanlab/datalab/guide/issue_type_description.html#non-iid-issue)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<div class=\"alert alert-warning\">\n",
    "Important\n",
    "<br/>\n",
    "    \n",
    "The non-IID issue is a property of the overall dataset rather than individual data points. Use `get_issues()` scores to glean additional insights about the dataset rather than conclusions about specific data points.\n",
    "    \n",
    "</div>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "## Catch Null Values in a Dataset\n",
    "\n",
    "Here we demonstrate how to use `Datalab` to catch null values in a dataset and visualize them. Models may learn incorrect patterns if null values are present, and may even error during model training. Dealing with null values can mitigate those risks. \n",
    "\n",
    "While `Datalab` automatically runs this check by default, this section dives deeper into how to detect the effect of null values in your dataset."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1. Load the Dataset\n",
    "\n",
    "First, we will load the dataset into a Pandas DataFrame. For simplicity, we will use a dataset in TSV (tab-separated values) format.\n",
    "Some care is needed when loading the dataset to ensure that the data is correctly parsed.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Define the dataset as a multi-line string\n",
    "dataset_tsv = \"\"\"\n",
    "Age\tGender\tLocation\tAnnual_Spending\tNumber_of_Transactions\tLast_Purchase_Date\n",
    "56.0\tOther\tRural\t4099.62\t3\t2024-01-03\n",
    "NaN\tFemale\tRural\t6421.16\t5\tNaT\n",
    "46.0\tMale\tSuburban\t5436.55\t3\t2024-02-26\n",
    "32.0\tFemale\tRural\t4046.66\t3\t2024-03-23\n",
    "60.0\tFemale\tSuburban\t3467.67\t6\t2024-03-01\n",
    "25.0\tFemale\tSuburban\t4757.37\t4\t2024-01-03\n",
    "38.0\tFemale\tRural\t4199.53\t6\t2024-01-03\n",
    "56.0\tMale\tSuburban\t4991.71\t6\t2024-04-03\n",
    "NaN\n",
    "NaN\tMale\tRural\t4655.82\t1\tNaT\n",
    "40.0\tFemale\tRural\t5584.02\t7\t2024-03-29\n",
    "28.0\tFemale\tUrban\t3102.32\t2\t2024-04-07\n",
    "28.0\tMale\tRural\t6637.99\t11\t2024-04-08\n",
    "NaN\tMale\tUrban\t9167.47\t4\t2024-01-02\n",
    "NaN\tMale\tRural\t6790.46\t3\tNaT\n",
    "NaN\tOther\tRural\t5327.96\t8\t2024-01-03\n",
    "\"\"\"\n",
    "\n",
    "# Import necessary libraries\n",
    "from io import StringIO\n",
    "import pandas as pd\n",
    "\n",
    "# Load the dataset into a DataFrame\n",
    "df = pd.read_csv(\n",
    "    StringIO(dataset_tsv),\n",
    "    sep='\\t',\n",
    "    parse_dates=[\"Last_Purchase_Date\"],\n",
    ")\n",
    "\n",
    "# Display the original DataFrame\n",
    "display(df)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2: Encode Categorical Values\n",
    "\n",
    "The `features` argument to `Datalab.find_issues()` generally requires a numerical array.\n",
    "Therefore, we need to numerically encode any categorical values. A common workflow is to encode categorical values in the dataset before passing it to the `find_issues` method (or provide model embeddings of the data instead of the data values themselves).\n",
    "However, some encoding strategies may lose the original null values.\n",
    "\n",
    "Here's a strategy to encode categorical columns while keeping the original DataFrame structure intact:"
   ]
  },
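  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal illustration of this pitfall (on a tiny standalone column, separate from the dataset above): `pd.factorize` encodes missing values as the ordinary integer `-1`, whereas mapping only the observed categories keeps them as `NaN`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "\n",
    "s = pd.Series([\"Male\", \"Female\", None, \"Other\"])\n",
    "\n",
    "# pd.factorize replaces the missing value with the sentinel integer -1,\n",
    "# so the fact that it was missing is lost in the encoded features\n",
    "codes, _ = pd.factorize(s)\n",
    "print(codes)  # [ 0  1 -1  2]\n",
    "\n",
    "# Mapping only the observed categories keeps the missing value as NaN\n",
    "mapping = {cat: i for i, cat in enumerate(s.dropna().unique())}\n",
    "print(s.map(mapping).tolist())  # [0.0, 1.0, nan, 2.0]"
   ]
  },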
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Define a function to encode categorical columns\n",
    "def encode_categorical_columns(df, columns, drop=True, inplace=False):\n",
    "    if not inplace:\n",
    "        df = df.copy()\n",
    "    for column in columns:\n",
    "        # Drop NaN values or replace them with a placeholder\n",
    "        categories = df[column].dropna().unique()\n",
    "\n",
    "        # Create a mapping from categories to numbers\n",
    "        category_to_number = {category: idx for idx, category in enumerate(categories)}\n",
    "\n",
    "        # Apply the mapping to the column\n",
    "        df[column + '_encoded'] = df[column].map(category_to_number)\n",
    "\n",
    "    if drop:\n",
    "        df = df.drop(columns=columns)\n",
    "\n",
    "    return df\n",
    "\n",
    "\n",
    "# Encode the categorical columns\n",
    "columns_to_encode = [\"Gender\", \"Location\"]\n",
    "encoded_df = encode_categorical_columns(df, columns=columns_to_encode)\n",
    "\n",
    "# Display the encoded DataFrame\n",
    "display(encoded_df)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3. Initialize Datalab\n",
    "\n",
    "Next, we initialize `Datalab` with the original DataFrame, which will help us discover all kinds of data issues."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Import the Datalab class from cleanlab\n",
    "from cleanlab import Datalab\n",
    "\n",
    "# Initialize Datalab with the original DataFrame\n",
    "lab = Datalab(data=df)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4. Detect Null Values\n",
    "We will use the find_issues method from `Datalab` to detect null values in our dataset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Detect issues in the dataset, focusing on null values\n",
    "lab.find_issues(features=encoded_df, issue_types={\"null\": {}})\n",
    "\n",
    "# Display the identified issues\n",
    "null_issues = lab.get_issues(\"null\")\n",
    "display(null_issues)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 5. Sort the Dataset by Null Issues\n",
    "\n",
    "To better understand the impact of null values, we will sort the original DataFrame by the `null_score` from the `null_issues` DataFrame.\n",
    "\n",
    "This score indicates the severity of null issues for each row."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sort the issues DataFrame by 'null_score' and get the sorted indices\n",
    "sorted_indices = (\n",
    "    null_issues\n",
    "    .sort_values(\"null_score\")\n",
    "    .index\n",
    ")\n",
    "\n",
    "# Sort the original DataFrame based on the sorted indices from the issues DataFrame\n",
    "sorted_df = df.loc[sorted_indices]"
   ]
  },
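  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To build intuition for this score, consider the fraction of non-null fields per row, computed below on a small hypothetical mini-frame (a sketch of the idea, not necessarily the exact formula `Datalab` uses): rows with more missing fields should rank lower."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import pandas as pd\n",
    "\n",
    "# Hypothetical mini-frame mirroring the structure of the dataset above\n",
    "mini_df = pd.DataFrame({\n",
    "    \"Age\": [56.0, np.nan, np.nan],\n",
    "    \"Gender\": [\"Other\", \"Female\", None],\n",
    "    \"Annual_Spending\": [4099.62, 6421.16, np.nan],\n",
    "})\n",
    "\n",
    "# Fraction of non-null fields per row: fully populated rows get 1.0,\n",
    "# rows with more missing fields get proportionally lower values\n",
    "non_null_fraction = mini_df.notna().mean(axis=1)\n",
    "print(non_null_fraction.tolist())"
   ]
  },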
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 6. (Optional) Visualize the Results\n",
    "\n",
    "Finally, we will create a nicely formatted DataFrame that highlights the null values and the issues detected by `Datalab`.\n",
    "\n",
    "We will use Pandas' styler to add custom styles for better visualization."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create a column of separators\n",
    "separator = pd.DataFrame([''] * len(sorted_df), columns=['|'])\n",
    "\n",
    "# Join the sorted DataFrame, separator, and issues DataFrame\n",
    "combined_df = pd.concat([sorted_df, separator, null_issues], axis=1)\n",
    "\n",
    "\n",
    "# Define functions to highlight null values and Datalab columns\n",
    "def highlight_null_values(val):\n",
    "    if pd.isnull(val):\n",
    "        return 'background-color: yellow'\n",
    "    return ''\n",
    "\n",
    "\n",
    "def highlight_datalab_columns(column):\n",
    "    return 'background-color: lightblue'\n",
    "\n",
    "\n",
    "def highlight_is_null_issue(val):\n",
    "    if val:\n",
    "        return 'background-color: orange'\n",
    "    return ''\n",
    "\n",
    "\n",
    "# Apply styles to the combined DataFrame\n",
    "styled_df = (\n",
    "    combined_df\n",
    "    .style.map(highlight_null_values)  # Highlight null and NaT values\n",
    "    .map(highlight_datalab_columns, subset=null_issues.columns)  # Highlight columns provided by Datalab\n",
    "    .map(highlight_is_null_issue, subset=['is_null_issue'])  # Highlight rows with null issues\n",
    ")\n",
    "\n",
    "# Display the styled DataFrame\n",
    "display(styled_df)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Learn more about the null issue type in the [Issue Type Guide](../../cleanlab/datalab/guide/issue_type_description.html#null-issue)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "## Detect class imbalance in your dataset\n",
    "\n",
    "Here we consider class imbalance, a common issue when working with datasets where one or more classes is significantly rarer than the others. Class imbalance can cause models to become biased towards frequent classes, but detecting this issue can help inform adjustments for fairer and more reliable predictions."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1. Prepare data\n",
    "\n",
    "Here work with a fixed toy dataset with randomly generated labels. For this issue type, it is enough to provide labels without any additional features of the dataset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "labels = np.array(\n",
    "   ['c', 'c', 'c', 'b', 'b', 'c', 'c', 'b', 'c', 'b', 'b', 'b', 'b',\n",
    "    'c', 'c', 'b', 'c', 'b', 'c', 'b', 'b', 'b', 'a', 'c', 'b', 'c',\n",
    "    'c', 'b', 'b', 'b', 'b', 'b', 'b', 'c', 'c', 'b', 'c', 'a', 'b',\n",
    "    'c', 'b', 'b', 'b', 'c', 'b', 'c', 'b', 'c', 'b', 'b', 'c', 'c',\n",
    "    'b', 'c', 'b', 'b', 'b', 'b', 'c', 'c', 'b', 'b', 'b', 'b', 'b',\n",
    "    'c', 'c', 'c', 'b', 'b', 'c', 'b', 'b', 'c', 'b', 'c', 'c', 'b',\n",
    "    'c', 'c', 'c', 'b', 'c', 'b', 'b', 'b', 'c', 'b', 'b', 'c', 'b',\n",
    "    'b', 'b', 'b', 'c', 'b', 'b', 'c', 'b', 'c', 'b', 'b', 'b', 'b',\n",
    "    'c', 'c', 'c', 'c', 'c', 'b', 'c', 'b', 'b', 'a', 'b', 'c', 'b',\n",
    "    'c', 'b', 'c', 'c', 'b', 'b', 'c', 'c', 'b', 'c', 'c', 'b', 'b',\n",
    "    'c', 'c', 'c', 'c', 'c', 'b', 'b', 'c', 'c', 'b', 'c', 'c', 'b',\n",
    "    'c', 'b', 'b', 'b', 'c', 'b', 'b', 'c', 'b', 'b', 'c', 'b', 'b',\n",
    "    'b', 'b', 'b', 'c', 'c', 'b', 'b', 'b', 'c', 'a', 'b', 'b', 'c',\n",
    "    'c', 'c', 'c', 'b', 'b', 'c', 'b', 'c', 'c', 'c', 'c', 'c', 'c',\n",
    "    'c', 'c', 'b', 'c', 'c', 'c', 'c', 'b', 'c', 'b', 'b', 'c', 'b',\n",
    "    'b', 'b', 'b', 'b', 'c'],\n",
    ")"
   ]
  },
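  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before invoking `Datalab`, you can also inspect the class frequencies directly with two lines of numpy (applied to the `labels` array defined above):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Per-class frequencies of the `labels` array defined above\n",
    "classes, counts = np.unique(labels, return_counts=True)\n",
    "print(dict(zip(classes, counts / counts.sum())))"
   ]
  },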
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2. Detect class imbalance with Datalab"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from cleanlab import Datalab\n",
    "\n",
    "lab = Datalab(data={\"label\": labels}, label_name=\"label\", task=\"classification\")\n",
    "\n",
    "lab.find_issues(issue_types={\"class_imbalance\": {}})\n",
    "\n",
    "class_imbalance_issues = lab.get_issues(\"class_imbalance\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3. (Optional) Visualize class imbalance issues"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import seaborn as sns\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "plt.figure(figsize=(8, 6))\n",
    "\n",
    "# Plot the distribution of labels in the dataset\n",
    "ax = sns.countplot(x=\"given_label\", data=class_imbalance_issues, order=[\"a\", \"b\", \"c\"], hue=\"is_class_imbalance_issue\")\n",
    "plt.title(\"Distribution of Labels\", fontsize=16)\n",
    "plt.ylabel(\"Count\", fontsize=14)\n",
    "plt.xlabel(\"Given Label\", fontsize=14)\n",
    "plt.xticks(fontsize=14, rotation=0)\n",
    "plt.yticks(fontsize=14, rotation=0)\n",
    "\n",
    "# Annotate plot with score of each issue class\n",
    "for i, given_label in enumerate([\"a\", \"b\", \"c\"]):\n",
    "    filtered_df = class_imbalance_issues.query(\"given_label == @given_label\")\n",
    "    score = filtered_df[\"class_imbalance_score\"].mean()\n",
    "    y = len(filtered_df)\n",
    "    plt.annotate(f\"{round(score, 5)}\", xy=(i, y), ha=\"center\", va=\"bottom\", fontsize=14, color=\"red\")\n",
    "\n",
    "# Add textual annotation to explain the scores\n",
    "plt.text(0.1, max(ax.get_yticks()) * 0.35, \"Numbers on top of\\nbars indicate class\\nimbalance scores\", ha='center', fontsize=12, color='red')\n",
    "\n",
    "# Adjust the legend\n",
    "handles, labels = ax.get_legend_handles_labels()\n",
    "ax.legend(handles, [\"No Class Imbalance\", \"Class Imbalance\"], title=\"Class Imbalance Issue\", fontsize=12, title_fontsize='14')\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "nbsphinx": "hidden"
   },
   "outputs": [],
   "source": [
    "# Note: This cell is only for docs.cleanlab.ai, if running on local Jupyter or Colab, please ignore it.\n",
    "\n",
    "\n",
    "# Only one example should suffer from null issues (other just have low scores)\n",
    "assert set(null_issues.query(\"is_null_issue\").index) == {8}, \"Null issues are not as expected.\"\n",
    "\n",
    "# Ensure that the tutorial dataset finds underperforming group based on clustering results\n",
    "assert underperforming_group_issues[\"is_underperforming_group_issue\"].sum() > 0, \"No underperforming group issues detected.\"\n",
    "\n",
    "# Top of non-iid issues show a flag\n",
    "assert non_iid_issues.head(10).is_non_iid_issue.sum() > 0, \"No non-iid issues detected at the top of the non-iid issues.\"\n",
    "\n",
    "# Pre-computed knn-graph section looks for the following issue types, except non-iid\n",
    "assert {\"null\", \"label\", \"outlier\", \"near_duplicate\", \"non_iid\", \"class_imbalance\", \"underperforming_group\"}.issuperset(issue_summary[\"issue_type\"]), \"Issue types are not as expected.\"\n",
    "\n",
    "# Ensure that class imbalance score is correct\n",
    "assert all(class_imbalance_issues.query(\"is_class_imbalance_issue\")[\"class_imbalance_score\"] == 0.02), \"Class imbalance issue scores are not as expected\"\n",
    "assert all(class_imbalance_issues.query(\"not is_class_imbalance_issue\")[\"class_imbalance_score\"] == 1.0), \"Class imbalance issue scores are not as expected\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Identify Spurious Correlations in Image Datasets\n",
    "\n",
    "This section demonstrates how to detect spurious correlations in image datasets by measuring how strongly individual image properties correlate with class labels.\n",
    "These correlations could lead to unreliable model predictions and poor generalization.\n",
    "\n",
    "`Datalab` automatically analyzes image-specific attributes such as:\n",
    "\n",
    "- Darkness\n",
    "- Blurriness\n",
    "- Aspect ratio anomalies\n",
    "- More image-specific features from [CleanVision](https://cleanvision.readthedocs.io/en/latest/tutorials/tutorial.html#What-is-CleanVision?)\n",
    "\n",
    "This analysis helps identify unintended biases in datasets and guides steps to enhance the robustness of machine learning models.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1. Load the Dataset\n",
    "\n",
    "For this tutorial, we'll use a subset of the CIFAR-10 dataset with artificially introduced biases to illustrate how Datalab detects spurious correlations. We'll assume you have a directory of images organized into subdirectories by class."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To fetch the data for this tutorial, make sure you have `wget` and `zip` installed."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Download the dataset\n",
    "!wget -nc https://s.cleanlab.ai/CIFAR-10-subset.zip\n",
    "!unzip -q CIFAR-10-subset.zip"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from datasets import Dataset\n",
    "from torchvision.datasets import ImageFolder\n",
    "\n",
    "def load_image_dataset(data_dir: str):\n",
    "    \"\"\"\n",
    "    Load images from a directory structure and create a datasets.Dataset object.\n",
    "    \n",
    "    Parameters\n",
    "    ----------\n",
    "    data_dir : str\n",
    "        Path to the root directory containing class subdirectories.\n",
    "    \n",
    "    Returns\n",
    "    -------\n",
    "    datasets.Dataset\n",
    "        A Dataset object containing 'image' and 'label' columns.\n",
    "    \"\"\"\n",
    "    image_dataset = ImageFolder(data_dir)\n",
    "    images = [img for img, _ in image_dataset]\n",
    "    labels = [label for _, label in image_dataset]\n",
    "    return Dataset.from_dict({\"image\": images, \"label\": labels})\n",
    "\n",
    "# Load the dataset\n",
    "data_dir = \"CIFAR-10-subset/darkened_images\"\n",
    "dataset = load_image_dataset(data_dir)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2. Run Datalab Analysis\n",
    "\n",
    "Now that we have loaded our dataset, let's use `Datalab` to analyze it for potential spurious correlations."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from cleanlab import Datalab\n",
    "\n",
    "# Initialize Datalab with the dataset\n",
    "lab = Datalab(data=dataset, label_name=\"label\", image_key=\"image\")\n",
    "\n",
    "# Run the analysis\n",
    "lab.find_issues()\n",
    "\n",
    "# Generate and display the report\n",
    "lab.report()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3. Interpret the Results\n",
    "\n",
    "While the `lab.report()` output is comprehensive, we can use more targeted methods to examine the results:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from IPython.display import display\n",
    "\n",
    "# Get scores for label uncorrelatedness with image properties\n",
    "label_uncorrelatedness_scores = lab.get_info(\"spurious_correlations\")[\"correlations_df\"]\n",
    "print(\"Label uncorrelatedness scores for image properties:\")\n",
    "display(label_uncorrelatedness_scores)\n",
    "\n",
    "# Get image-specific issues\n",
    "issue_name = \"dark\"\n",
    "image_issues = lab.get_issues(issue_name)\n",
    "print(\"\\nImage-specific issues:\")\n",
    "display(image_issues)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Interpreting the results:\n",
    "\n",
    "1. **Label Uncorrelatedness Scores**: The `label_uncorrelatedness_scores` DataFrame shows scores for various image properties. Lower scores (closer to 0) indicate stronger correlations with class labels, suggesting potential spurious correlations.\n",
    "2. **Image-Specific Issues**: The `image_issues` DataFrame provides details on detected image-specific problems, including the issue type and affected samples.\n",
    "\n",
    "In our CIFAR-10 subset example, you should see that the 'dark' property has a low score in the label_uncorrelatedness_scores, indicating a strong correlation with one of the classes (likely the 'frog' class). This is due to our artificial darkening of these images to demonstrate the concept.\n",
    "\n",
    "For real-world datasets, pay attention to:\n",
    "\n",
    "- Properties with notably low scores in the label_uncorrelatedness_scores DataFrame\n",
    "- Prevalent issues in the image_issues DataFrame\n",
    "\n",
    "These may represent unintended biases in your data collection or preprocessing steps and warrant further investigation.\n",
    "\n",
    "> **Note**: Using these methods provides a more programmatic and focused way to analyze the results compared to the verbose output of `lab.report()`."
   ]
  },
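  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To go from eyeballing the displayed scores to a programmatic flag, you can filter the scores table for low values. The sketch below runs on a hypothetical stand-in table, since the exact column names of `correlations_df` may vary across cleanlab versions; inspect `correlations_df.columns` and adapt accordingly."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "\n",
    "# Hypothetical stand-in for lab.get_info(\"spurious_correlations\")[\"correlations_df\"];\n",
    "# check the real column names before adapting this sketch\n",
    "scores = pd.DataFrame({\n",
    "    \"property\": [\"dark_score\", \"blurry_score\", \"odd_aspect_ratio_score\"],\n",
    "    \"score\": [0.04, 0.52, 0.61],\n",
    "})\n",
    "\n",
    "threshold = 0.2  # assumed cutoff for \"suspiciously low\"; tune for your dataset\n",
    "suspicious = scores.loc[scores[\"score\"] < threshold, \"property\"].tolist()\n",
    "print(suspicious)  # ['dark_score']"
   ]
  },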
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def plot_scores_labels(lab, property=\"dark_score\"):\n",
    "    \"\"\"\n",
    "    Plots the scores of image-specific properties like 'dark_score', 'blurry_score', etc. \n",
    "    against labels for each instance in the dataset using 'Datalab' object.\n",
    "\n",
    "    Parameters:\n",
    "    -----------\n",
    "    lab : 'Datalab' object\n",
    "    \n",
    "    property : str, optional\n",
    "        The name of the property to be plotted against the labels.\n",
    "    \n",
    "    Returns:\n",
    "    --------\n",
    "    None\n",
    "        This function does not return any value. It generates a plot of the specified \n",
    "        property against the labels.\n",
    "    \"\"\"\n",
    "    issues_copy = lab.issues.copy()\n",
    "    issues_copy[\"label\"] = lab.labels\n",
    "    issues_copy.boxplot(column=[property], by=\"label\")\n",
    "\n",
    "# Plotting 'dark_score' value of each instance in the dataset against class label\n",
    "plot_scores_labels(lab, \"dark_score\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The above plot illustrates the distribution of dark scores across class labels. In this dataset, 100 images from the `Frog` class (Class 0 in the plot) have been darkened, while 100 images from the `Truck` class (Class 1 in the plot) remain unchanged, as in the CIFAR-10 dataset. This creates a clear spurious correlation between the 'darkness' feature and the class labels: `Frog` images are dark, whereas `Truck` images are not. We can see that the `dark_score` values between the two classes are non-overlapping. This characteristic of the dataset is identified by `Datalab`."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "nbsphinx": "hidden"
   },
   "source": [
    "### 4. (Optional) Compare with a Dataset Without Spurious Correlations\n",
    "\n",
    "To understand the impact of spurious correlations, it can be helpful to compare our results with a dataset that doesn't have artificially introduced biases. In this case, we'll use the original CIFAR-10 subset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "nbsphinx": "hidden"
   },
   "outputs": [],
   "source": [
    "# Load the original dataset\n",
    "original_data_dir = \"CIFAR-10-subset/original_images\"\n",
    "original_dataset = load_image_dataset(original_data_dir)\n",
    "\n",
    "# Create a new Datalab instance and run analysis\n",
    "original_lab = Datalab(data=original_dataset, label_name=\"label\", image_key=\"image\")\n",
    "original_lab.find_issues()\n",
    "\n",
    "# Compare correlation scores\n",
    "original_scores = original_lab.get_info(\"spurious_correlations\")[\"correlations_df\"]\n",
    "print(\"Label uncorrelatedness scores for original dataset:\")\n",
    "display(original_scores)\n",
    "\n",
    "# Compare image-specific issues\n",
    "original_issues = original_lab.get_issues(\"dark\")\n",
    "print(\"\\nImage-specific issues in original dataset:\")\n",
    "display(original_issues)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "nbsphinx": "hidden"
   },
   "source": [
    "When comparing the results:\n",
    "\n",
    "1. Look for differences in the label uncorrelatedness scores, especially for the 'dark' property.\n",
    "2. Compare the number and types of image-specific issues detected.\n",
    "\n",
    "You should notice that the original dataset has more balanced correlation scores and fewer (or no) issues related to darkness. This comparison highlights how spurious correlations can be detected by `Datalab`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "nbsphinx": "hidden"
   },
   "outputs": [],
   "source": [
    "# Plotting 'dark_score' value of each instance in the original dataset against class label\n",
    "plot_scores_labels(original_lab, \"dark_score\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "nbsphinx": "hidden"
   },
   "source": [
    "The above plot illustrates the distribution of dark scores across class labels. In this dataset, 100 images each from the classes `Frog` (Class 0 in the plot) and `Truck` (Class 1 in the plot) remain unchanged, as in the CIFAR-10 dataset. There is no apparent spurious correlation with respect to the 'darkness' feature and class labels. We can see that the `dark_score` values between the two classes are highly overlapping."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
