{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "%matplotlib inline"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "# Bagging classifiers using sampler\n",
    "\n",
     "In this example, we show how\n",
     ":class:`~imblearn.ensemble.BalancedBaggingClassifier` can be used to create a\n",
     "large variety of classifiers by passing different samplers.\n",
     "\n",
     "We will give several examples that have been published over the past years.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Authors: Guillaume Lemaitre <g.lemaitre58@gmail.com>\n",
    "# License: MIT"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Automatically created module for IPython interactive environment\n"
     ]
    }
   ],
   "source": [
    "print(__doc__)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Generate an imbalanced dataset\n",
    "\n",
    "For this example, we will create a synthetic dataset using the function\n",
    ":func:`~sklearn.datasets.make_classification`. The problem will be a toy\n",
    "classification problem with a ratio of 1:9 between the two classes.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.datasets import make_classification\n",
    "\n",
    "X, y = make_classification(\n",
    "    n_samples=10_000,\n",
    "    n_features=10,\n",
    "    weights=[0.1, 0.9],\n",
    "    class_sep=0.5,\n",
    "    random_state=0,\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "1    0.8977\n",
       "0    0.1023\n",
       "dtype: float64"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import pandas as pd\n",
    "\n",
    "pd.Series(y).value_counts(normalize=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "In the following sections, we will show a couple of algorithms that have\n",
     "been proposed over the years. We intend to illustrate how one can reuse the\n",
     ":class:`~imblearn.ensemble.BalancedBaggingClassifier` by passing different\n",
     "samplers. As a baseline, we first evaluate a vanilla\n",
     ":class:`~sklearn.ensemble.BaggingClassifier` that does not account for the\n",
     "class imbalance.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.709 +/- 0.012\n"
     ]
    }
   ],
   "source": [
    "from sklearn.model_selection import cross_validate\n",
    "from sklearn.ensemble import BaggingClassifier\n",
    "\n",
    "ebb = BaggingClassifier()\n",
    "cv_results = cross_validate(ebb, X, y, scoring=\"balanced_accuracy\")\n",
    "\n",
    "print(f\"{cv_results['test_score'].mean():.3f} +/- {cv_results['test_score'].std():.3f}\")"
   ]
  },
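  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a point of reference (a quick check, not part of the original example), a\n",
    ":class:`~sklearn.dummy.DummyClassifier` that always predicts the majority\n",
    "class reaches a balanced accuracy of exactly 0.5, so the vanilla bagging\n",
    "above only modestly improves over chance:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.dummy import DummyClassifier\n",
    "\n",
    "# a majority-class predictor as a chance-level baseline\n",
    "dummy = DummyClassifier(strategy=\"most_frequent\")\n",
    "cv_results = cross_validate(dummy, X, y, scoring=\"balanced_accuracy\")\n",
    "\n",
    "print(f\"{cv_results['test_score'].mean():.3f} +/- {cv_results['test_score'].std():.3f}\")"
   ]
  },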
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Exactly Balanced Bagging and Over-Bagging\n",
    "\n",
     "The :class:`~imblearn.ensemble.BalancedBaggingClassifier` can be used in\n",
     "conjunction with a :class:`~imblearn.under_sampling.RandomUnderSampler` or a\n",
     ":class:`~imblearn.over_sampling.RandomOverSampler`. These methods are\n",
     "referred to as Exactly Balanced Bagging and Over-Bagging, respectively, and\n",
     "were first proposed in [1]_.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.756 +/- 0.014\n"
     ]
    }
   ],
   "source": [
    "from imblearn.ensemble import BalancedBaggingClassifier\n",
    "from imblearn.under_sampling import RandomUnderSampler\n",
    "\n",
    "# Exactly Balanced Bagging\n",
    "ebb = BalancedBaggingClassifier(sampler=RandomUnderSampler())\n",
    "cv_results = cross_validate(ebb, X, y, scoring=\"balanced_accuracy\")\n",
    "\n",
    "print(f\"{cv_results['test_score'].mean():.3f} +/- {cv_results['test_score'].std():.3f}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.706 +/- 0.013\n"
     ]
    }
   ],
   "source": [
    "from imblearn.over_sampling import RandomOverSampler\n",
    "\n",
    "# Over-bagging\n",
    "over_bagging = BalancedBaggingClassifier(sampler=RandomOverSampler())\n",
    "cv_results = cross_validate(over_bagging, X, y, scoring=\"balanced_accuracy\")\n",
    "\n",
    "print(f\"{cv_results['test_score'].mean():.3f} +/- {cv_results['test_score'].std():.3f}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## SMOTE-Bagging\n",
    "\n",
     "Instead of using a :class:`~imblearn.over_sampling.RandomOverSampler` that\n",
     "makes a bootstrap, an alternative is to use\n",
     ":class:`~imblearn.over_sampling.SMOTE` as an over-sampler. This is known as\n",
     "SMOTE-Bagging [2]_.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.744 +/- 0.014\n"
     ]
    }
   ],
   "source": [
    "from imblearn.over_sampling import SMOTE\n",
    "\n",
    "# SMOTE-Bagging\n",
    "smote_bagging = BalancedBaggingClassifier(sampler=SMOTE())\n",
    "cv_results = cross_validate(smote_bagging, X, y, scoring=\"balanced_accuracy\")\n",
    "\n",
    "print(f\"{cv_results['test_score'].mean():.3f} +/- {cv_results['test_score'].std():.3f}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Roughly Balanced Bagging\n",
     "\n",
     "While using a :class:`~imblearn.under_sampling.RandomUnderSampler` or a\n",
     ":class:`~imblearn.over_sampling.RandomOverSampler` will create exactly the\n",
     "desired number of samples, it does not follow the statistical spirit of the\n",
     "bagging framework. The authors in [3]_ propose to use a negative binomial\n",
     "distribution to compute the number of samples of the majority class to be\n",
     "selected and then perform a random under-sampling.\n",
    "\n",
    "Here, we illustrate this method by implementing a function in charge of\n",
    "resampling and use the :class:`~imblearn.FunctionSampler` to integrate it\n",
    "within a :class:`~imblearn.pipeline.Pipeline` and\n",
    ":class:`~sklearn.model_selection.cross_validate`.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.754 +/- 0.016\n"
     ]
    }
   ],
   "source": [
    "from collections import Counter\n",
    "import numpy as np\n",
    "from imblearn import FunctionSampler\n",
    "\n",
    "\n",
    "def roughly_balanced_bagging(X, y, replace=False):\n",
    "    \"\"\"Implementation of Roughly Balanced Bagging for binary problem.\"\"\"\n",
    "    # find the minority and majority classes\n",
    "    class_counts = Counter(y)\n",
    "    majority_class = max(class_counts, key=class_counts.get)\n",
    "    minority_class = min(class_counts, key=class_counts.get)\n",
    "\n",
     "    # compute the number of samples to draw from the majority class using\n",
    "    # a negative binomial distribution\n",
    "    n_minority_class = class_counts[minority_class]\n",
    "    n_majority_resampled = np.random.negative_binomial(n=n_minority_class, p=0.5)\n",
    "\n",
    "    # draw randomly with or without replacement\n",
    "    majority_indices = np.random.choice(\n",
    "        np.flatnonzero(y == majority_class),\n",
    "        size=n_majority_resampled,\n",
    "        replace=replace,\n",
    "    )\n",
    "    minority_indices = np.random.choice(\n",
    "        np.flatnonzero(y == minority_class),\n",
    "        size=n_minority_class,\n",
    "        replace=replace,\n",
    "    )\n",
    "    indices = np.hstack([majority_indices, minority_indices])\n",
    "\n",
    "    return X[indices], y[indices]\n",
    "\n",
    "\n",
    "# Roughly Balanced Bagging\n",
    "rbb = BalancedBaggingClassifier(\n",
    "    sampler=FunctionSampler(func=roughly_balanced_bagging, kw_args={\"replace\": True})\n",
    ")\n",
    "cv_results = cross_validate(rbb, X, y, scoring=\"balanced_accuracy\")\n",
    "\n",
    "print(f\"{cv_results['test_score'].mean():.3f} +/- {cv_results['test_score'].std():.3f}\")"
   ]
  },
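  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check (an addition, not part of the original example),\n",
    "the negative binomial draw used above with `p=0.5` has an expected value\n",
    "equal to the minority-class count, so the bootstrap samples are balanced\n",
    "on average while their size still fluctuates from draw to draw:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# the mean of a negative binomial with parameters (n, p) is n * (1 - p) / p,\n",
    "# which equals n when p = 0.5\n",
    "rng = np.random.default_rng(0)\n",
    "n_minority = 1_000\n",
    "draws = rng.negative_binomial(n=n_minority, p=0.5, size=10_000)\n",
    "print(f\"mean draw: {draws.mean():.1f} (expected {n_minority})\")"
   ]
  },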
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    ".. topic:: References:\n",
    "\n",
    "   .. [1] R. Maclin, and D. Opitz. \"An empirical evaluation of bagging and\n",
    "          boosting.\" AAAI/IAAI 1997 (1997): 546-551.\n",
    "\n",
    "   .. [2] S. Wang, and X. Yao. \"Diversity analysis on imbalanced data sets by\n",
    "          using ensemble models.\" 2009 IEEE symposium on computational\n",
    "          intelligence and data mining. IEEE, 2009.\n",
    "\n",
     "   .. [3] S. Hido, H. Kashima, and Y. Takahashi. \"Roughly balanced bagging\n",
     "          for imbalanced data.\" Statistical Analysis and Data Mining: The ASA\n",
     "          Data Science Journal 2.5-6 (2009): 412-426.\n",
    "\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
