{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "XjaM_R6LeV5S"
   },
   "source": [
    "<a href=\"https://colab.research.google.com/github/jeffheaton/app_deep_learning/blob/main/t81_558_class_08_2_keras_ensembles.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Fnpz-3gAeV5T"
   },
   "source": [
    "# T81-558: Applications of Deep Neural Networks\n",
    "**Module 8: Kaggle Data Sets**\n",
    "* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)\n",
    "* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "DpUhdx0FeV5T"
   },
   "source": [
    "# Module 8 Material\n",
    "\n",
    "* Part 8.1: Introduction to Kaggle [[Video]](https://www.youtube.com/watch?v=7Mk46fb0Ayg&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_08_1_kaggle_intro.ipynb)\n",
    "* **Part 8.2: Building Ensembles with Scikit-Learn and PyTorch** [[Video]](https://www.youtube.com/watch?v=przbLRCRL24&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_08_2_pytorch_ensembles.ipynb)\n",
    "* Part 8.3: How Should you Architect Your PyTorch Neural Network: Hyperparameters [[Video]](https://www.youtube.com/watch?v=YTL2BR4U2Ng&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_08_3_pytorch_hyperparameters.ipynb)\n",
    "* Part 8.4: Bayesian Hyperparameter Optimization for PyTorch [[Video]](https://www.youtube.com/watch?v=1f4psgAcefU&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_08_4_bayesian_hyperparameter_opt.ipynb)\n",
    "* Part 8.5: Current Semester's Kaggle [[Video]] [[Notebook]](t81_558_class_08_5_kaggle_project.ipynb)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "DaG7LEHQeV5U"
   },
   "source": [
    "# Google CoLab Instructions\n",
    "\n",
    "The following code ensures that Google CoLab is running the correct version of TensorFlow.\n",
    "  Running the following code will map your GDrive to ```/content/drive```."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "wmJ4sdveeV5U",
    "outputId": "ccade7c3-5c27-46e5-ea44-cb4ca00934ac"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Note: not using Google CoLab\n",
      "Using device: mps\n"
     ]
    }
   ],
   "source": [
    "try:\n",
    "    from google.colab import drive\n",
    "    drive.mount('/content/drive', force_remount=True)\n",
    "    COLAB = True\n",
    "    print(\"Note: using Google CoLab\")\n",
    "except:\n",
    "    print(\"Note: not using Google CoLab\")\n",
    "    COLAB = False\n",
    "\n",
    "# Nicely formatted time string\n",
    "def hms_string(sec_elapsed):\n",
    "    h = int(sec_elapsed / (60 * 60))\n",
    "    m = int((sec_elapsed % (60 * 60)) / 60)\n",
    "    s = sec_elapsed % 60\n",
    "    return \"{}:{:>02}:{:>05.2f}\".format(h, m, s)\n",
    "\n",
    "# Early stopping (see module 3.4)\n",
    "import copy\n",
    "class EarlyStopping:\n",
    "    def __init__(self, patience=5, min_delta=0, restore_best_weights=True):\n",
    "        self.patience = patience\n",
    "        self.min_delta = min_delta\n",
    "        self.restore_best_weights = restore_best_weights\n",
    "        self.best_model = None\n",
    "        self.best_loss = None\n",
    "        self.counter = 0\n",
    "        self.status = \"\"\n",
    "\n",
    "    def __call__(self, model, val_loss):\n",
    "        if self.best_loss is None:\n",
    "            self.best_loss = val_loss\n",
    "            self.best_model = copy.deepcopy(model.state_dict())\n",
    "        elif self.best_loss - val_loss >= self.min_delta:\n",
    "            self.best_model = copy.deepcopy(model.state_dict())\n",
    "            self.best_loss = val_loss\n",
    "            self.counter = 0\n",
    "            self.status = f\"Improvement found, counter reset to {self.counter}\"\n",
    "        else:\n",
    "            self.counter += 1\n",
    "            self.status = f\"No improvement in the last {self.counter} epochs\"\n",
    "            if self.counter >= self.patience:\n",
    "                self.status = f\"Early stopping triggered after {self.counter} epochs.\"\n",
    "                if self.restore_best_weights:\n",
    "                    model.load_state_dict(self.best_model)\n",
    "                return True\n",
    "        return False\n",
    "\n",
    "# Make use of a GPU or MPS (Apple) if one is available.  (see module 3.2)\n",
    "import torch\n",
    "\n",
    "device = (\n",
    "    \"mps\"\n",
    "    if getattr(torch, \"has_mps\", False)\n",
    "    else \"cuda\"\n",
    "    if torch.cuda.is_available()\n",
    "    else \"cpu\"\n",
    ")\n",
    "print(f\"Using device: {device}\")"
   ]
  },
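  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The ```hms_string``` helper above is easy to sanity-check on its own. A self-contained sketch (the elapsed-seconds value is just an illustrative example):\n",
    "\n",
    "```python\n",
    "def hms_string(sec_elapsed):\n",
    "    h = int(sec_elapsed / (60 * 60))\n",
    "    m = int((sec_elapsed % (60 * 60)) / 60)\n",
    "    s = sec_elapsed % 60\n",
    "    return \"{}:{:>02}:{:>05.2f}\".format(h, m, s)\n",
    "\n",
    "# 3725.5 seconds is 1 hour, 2 minutes, 5.5 seconds\n",
    "print(hms_string(3725.5))  # 1:02:05.50\n",
    "```"
   ]
  },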
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "FHx0cbrxeV5U"
   },
   "source": [
    "# Part 8.2: Building Ensembles with Scikit-Learn and PyTorch\n",
    "\n",
    "### Evaluating Feature Importance\n",
    "\n",
    "Feature importance tells us how important each feature (from the feature/import vector) is to predicting a neural network or another model. There are many different ways to evaluate the feature importance of neural networks. The following paper presents an excellent (and readable) overview of the various means of assessing the significance of neural network inputs/features.\n",
    "\n",
    "* An accurate comparison of methods for quantifying variable importance in artificial neural networks using simulated data [[Cite:olden2004accurate]](http://depts.washington.edu/oldenlab/wordpress/wp-content/uploads/2013/03/EcologicalModelling_2004.pdf). *Ecological Modelling*, 178(3), 389-397.\n",
    "\n",
    "In summary, the following methods are available to neural networks:\n",
    "\n",
    "* Connection Weights Algorithm\n",
    "* Partial Derivatives\n",
    "* Input Perturbation\n",
    "* Sensitivity Analysis\n",
    "* Forward Stepwise Addition \n",
    "* Improved Stepwise Selection 1\n",
    "* Backward Stepwise Elimination\n",
    "* Improved Stepwise Selection\n",
    "\n",
    "For this chapter, we will use the input Perturbation feature ranking algorithm. This algorithm will work with any regression or classification network. In the next section, I provide an implementation of the input perturbation algorithm for scikit-learn. This code implements a function below that will work with any scikit-learn model.\n",
    "\n",
    "[Leo Breiman](https://en.wikipedia.org/wiki/Leo_Breiman) provided this algorithm in his seminal paper on random forests. [[Citebreiman2001random:]](https://www.stat.berkeley.edu/~breiman/randomforest2001.pdf)  Although he presented this algorithm in conjunction with random forests, it is model-independent and appropriate for any supervised learning model.  This algorithm, known as the input perturbation algorithm, works by evaluating a trained model’s accuracy with each input individually shuffled from a data set. Shuffling an input causes it to become useless—effectively removing it from the model. More important inputs will produce a less accurate score when they are removed by shuffling them. This process makes sense because important features will contribute to the model's accuracy. I first presented the TensorFlow implementation of this algorithm in the following paper.\n",
    "\n",
    "* Early stabilizing feature importance for TensorFlow deep neural networks[[Cite:heaton2017early]](https://www.heatonresearch.com/dload/phd/IJCNN%202017-v2-final.pdf)\n",
    "\n",
    "This algorithm will use log loss to evaluate a classification problem and RMSE for regression."
   ]
  },
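  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The shuffle-and-score idea is easy to sketch for a fitted scikit-learn estimator as well. In the sketch below, the synthetic data, the ```LinearRegression``` model, and the RMSE scoring are illustrative assumptions; the notebook's own implementation (next cell) targets PyTorch models.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from sklearn.linear_model import LinearRegression\n",
    "from sklearn.metrics import mean_squared_error\n",
    "\n",
    "rng = np.random.default_rng(42)\n",
    "X = rng.normal(size=(200, 3))\n",
    "# Column 0 drives y strongly, column 1 weakly, column 2 not at all.\n",
    "y = 3.0 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(scale=0.1, size=200)\n",
    "\n",
    "model = LinearRegression().fit(X, y)\n",
    "\n",
    "errors = []\n",
    "for i in range(X.shape[1]):\n",
    "    X_perm = X.copy()\n",
    "    X_perm[:, i] = rng.permutation(X_perm[:, i])  # shuffle one input column\n",
    "    errors.append(mean_squared_error(y, model.predict(X_perm)) ** 0.5)  # RMSE\n",
    "\n",
    "importance = [e / max(errors) for e in errors]\n",
    "print(importance.index(1.0))  # column 0 ranks as most important\n",
    "```"
   ]
  },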
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "id": "EjMqkygLeV5a"
   },
   "outputs": [],
   "source": [
    "from sklearn import metrics\n",
    "import scipy as sp\n",
    "import numpy as np\n",
    "import math\n",
    "from sklearn import metrics\n",
    "\n",
    "import torch\n",
    "import torch.nn.functional as F\n",
    "import pandas as pd\n",
    "\n",
    "def perturbation_rank(device, model, x, y, names, regression):\n",
    "    model.to(device)\n",
    "    model.eval() # set the model to evaluation mode\n",
    "\n",
    "    #x = torch.tensor(x).float().to(device)\n",
    "    #y = torch.tensor(y).float().to(device)\n",
    "    \n",
    "    errors = []\n",
    "\n",
    "    for i in range(x.shape[1]):\n",
    "        hold = x[:, i].clone()\n",
    "        x[:, i] = torch.randperm(x.shape[0]).to(device)  # shuffling\n",
    "        \n",
    "        with torch.no_grad():\n",
    "            pred = model(x)\n",
    "\n",
    "        if regression:\n",
    "            loss_fn = torch.nn.MSELoss()\n",
    "            error = loss_fn(y, pred).item()\n",
    "        else:\n",
    "            # pred should be probabilities; apply softmax if not done in model's forward method\n",
    "            if len(pred.shape) == 2 and pred.shape[1] > 1:\n",
    "                pred = F.softmax(pred, dim=1)\n",
    "                loss_fn = torch.nn.CrossEntropyLoss()\n",
    "                error = loss_fn(pred, y.long()).item()\n",
    "            else:\n",
    "                loss_fn = nn.MSELoss()\n",
    "                error = loss_fn(y, pred).item()\n",
    "            \n",
    "            \n",
    "        errors.append(error)\n",
    "        x[:, i] = hold\n",
    "        \n",
    "    max_error = max(errors)\n",
    "    importance = [e/max_error for e in errors]\n",
    "\n",
    "    data = {'name':names, 'error':errors, 'importance':importance}\n",
    "    result = pd.DataFrame(data, columns=['name', 'error', 'importance'])\n",
    "    result.sort_values(by=['importance'], ascending=[0], inplace=True)\n",
    "    result.reset_index(inplace=True, drop=True)\n",
    "    return result"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "0dnNa09QeV5a"
   },
   "source": [
    "## Classification and Input Perturbation Ranking\n",
    "\n",
    "We now look at the code to perform perturbation ranking for a classification neural network.  The implementation technique is slightly different for classification vs. regression, so I must provide two different implementations.  The primary difference between classification and regression is how we evaluate the accuracy of the neural network in each of these two network types.  We will use the Root Mean Square (RMSE) error calculation, whereas we will use log loss for classification.\n",
    "\n",
    "The code presented below creates a classification neural network that will predict the classic iris dataset."
   ]
  },
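  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Log loss, used to evaluate the classification network, is the mean negative log of the probability the model assigns to the true class. A small hand-computed sketch (the class probabilities below are made up for illustration):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from sklearn.metrics import log_loss\n",
    "\n",
    "y_true = [0, 1, 1]\n",
    "probs = np.array([\n",
    "    [0.9, 0.1],  # confident and correct\n",
    "    [0.2, 0.8],  # correct\n",
    "    [0.7, 0.3],  # wrong: the true class only gets 0.3\n",
    "])\n",
    "\n",
    "# mean of -log(probability assigned to the true class)\n",
    "manual = -np.mean(np.log(probs[np.arange(len(y_true)), y_true]))\n",
    "print(np.isclose(manual, log_loss(y_true, probs)))  # True\n",
    "```"
   ]
  },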
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "lEY1hZigeV5a",
    "outputId": "80673c5c-3264-4d54-e19b-02742b563e12"
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch: 1, tloss: 0.6026307344436646, vloss: 0.536555, : 100%|██████████| 7/7 [00:00<00:00, 14.55it/s]\n",
      "Epoch: 2, tloss: 0.36586475372314453, vloss: 0.277725, Improvement found, counter reset to 0: 100%|██████████| 7/7 [00:00<00:00, 172.42it/s]\n",
      "Epoch: 3, tloss: 0.15603026747703552, vloss: 0.187535, Improvement found, counter reset to 0: 100%|██████████| 7/7 [00:00<00:00, 133.04it/s]\n",
      "Epoch: 4, tloss: 0.05794892832636833, vloss: 0.154333, Improvement found, counter reset to 0: 100%|██████████| 7/7 [00:00<00:00, 229.42it/s]\n",
      "Epoch: 5, tloss: 0.18528980016708374, vloss: 0.076723, Improvement found, counter reset to 0: 100%|██████████| 7/7 [00:00<00:00, 240.43it/s]\n",
      "Epoch: 6, tloss: 0.12420052289962769, vloss: 0.061499, Improvement found, counter reset to 0: 100%|██████████| 7/7 [00:00<00:00, 237.66it/s]\n",
      "Epoch: 7, tloss: 0.0334041602909565, vloss: 0.045322, Improvement found, counter reset to 0: 100%|██████████| 7/7 [00:00<00:00, 226.48it/s]\n",
      "Epoch: 8, tloss: 0.09452516585588455, vloss: 0.032975, Improvement found, counter reset to 0: 100%|██████████| 7/7 [00:00<00:00, 218.18it/s]\n",
      "Epoch: 9, tloss: 0.005208518821746111, vloss: 0.023963, Improvement found, counter reset to 0: 100%|██████████| 7/7 [00:00<00:00, 231.11it/s]\n",
      "Epoch: 10, tloss: 0.06230875477194786, vloss: 0.015515, Improvement found, counter reset to 0: 100%|██████████| 7/7 [00:00<00:00, 220.90it/s]\n",
      "Epoch: 11, tloss: 0.08908901363611221, vloss: 0.038119, No improvement in the last 1 epochs: 100%|██████████| 7/7 [00:00<00:00, 132.65it/s]\n",
      "Epoch: 12, tloss: 0.03496554493904114, vloss: 0.026789, No improvement in the last 2 epochs: 100%|██████████| 7/7 [00:00<00:00, 244.19it/s]\n",
      "Epoch: 13, tloss: 0.06976647675037384, vloss: 0.018425, No improvement in the last 3 epochs: 100%|██████████| 7/7 [00:00<00:00, 227.50it/s]\n",
      "Epoch: 14, tloss: 0.013938352465629578, vloss: 0.010584, Improvement found, counter reset to 0: 100%|██████████| 7/7 [00:00<00:00, 235.20it/s]\n",
      "Epoch: 15, tloss: 0.03223251923918724, vloss: 0.008987, Improvement found, counter reset to 0: 100%|██████████| 7/7 [00:00<00:00, 237.92it/s]\n",
      "Epoch: 16, tloss: 0.009036802686750889, vloss: 0.016889, No improvement in the last 1 epochs: 100%|██████████| 7/7 [00:00<00:00, 230.95it/s]\n",
      "Epoch: 17, tloss: 0.009504597634077072, vloss: 0.014087, No improvement in the last 2 epochs: 100%|██████████| 7/7 [00:00<00:00, 233.24it/s]\n",
      "Epoch: 18, tloss: 0.05779396370053291, vloss: 0.012317, No improvement in the last 3 epochs: 100%|██████████| 7/7 [00:00<00:00, 235.06it/s]\n",
      "Epoch: 19, tloss: 0.001863101962953806, vloss: 0.012097, No improvement in the last 4 epochs: 100%|██████████| 7/7 [00:00<00:00, 149.85it/s]\n",
      "Epoch: 20, tloss: 0.010492893867194653, vloss: 0.011086, Early stopping triggered after 5 epochs.: 100%|██████████| 7/7 [00:00<00:00, 222.97it/s]\n"
     ]
    }
   ],
   "source": [
    "# HIDE OUTPUT\n",
    "import time\n",
    "\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "import torch\n",
    "import tqdm\n",
    "from sklearn.metrics import accuracy_score\n",
    "from sklearn.model_selection import train_test_split\n",
    "from sklearn.preprocessing import LabelEncoder, StandardScaler\n",
    "from torch import nn\n",
    "from torch.autograd import Variable\n",
    "from torch.utils.data import DataLoader, TensorDataset\n",
    "\n",
    "# Set random seed for reproducibility\n",
    "np.random.seed(42)\n",
    "torch.manual_seed(42)\n",
    "\n",
    "def load_data():\n",
    "    df = pd.read_csv(\n",
    "        \"https://data.heatonresearch.com/data/t81-558/iris.csv\", na_values=[\"NA\", \"?\"]\n",
    "    )\n",
    "\n",
    "    le = LabelEncoder()\n",
    "\n",
    "    x = df[[\"sepal_l\", \"sepal_w\", \"petal_l\", \"petal_w\"]].values\n",
    "    y = le.fit_transform(df[\"species\"])\n",
    "    species = le.classes_\n",
    "\n",
    "    # Split into validation and training sets\n",
    "    x_train, x_test, y_train, y_test = train_test_split(\n",
    "        x, y, test_size=0.25, random_state=42\n",
    "    )\n",
    "\n",
    "    scaler = StandardScaler()\n",
    "    x_train = scaler.fit_transform(x_train)\n",
    "    x_test = scaler.transform(x_test)\n",
    "\n",
    "    # Numpy to Torch Tensor\n",
    "    x_train = torch.tensor(x_train, device=device, dtype=torch.float32)\n",
    "    y_train = torch.tensor(y_train, device=device, dtype=torch.long)\n",
    "\n",
    "    x_test = torch.tensor(x_test, device=device, dtype=torch.float32)\n",
    "    y_test = torch.tensor(y_test, device=device, dtype=torch.long)\n",
    "\n",
    "    return x_train, x_test, y_train, y_test, species, df.columns\n",
    "\n",
    "\n",
    "x_train, x_test, y_train, y_test, species, columns = load_data()\n",
    "columns = list(columns)\n",
    "columns.remove(\"species\") # remove the target(y)\n",
    "\n",
    "# Create datasets\n",
    "BATCH_SIZE = 16\n",
    "\n",
    "dataset_train = TensorDataset(x_train, y_train)\n",
    "dataloader_train = DataLoader(\n",
    "    dataset_train, batch_size=BATCH_SIZE, shuffle=True)\n",
    "\n",
    "dataset_test = TensorDataset(x_test, y_test)\n",
    "dataloader_test = DataLoader(dataset_test, batch_size=BATCH_SIZE, shuffle=True)\n",
    "\n",
    "# Create model using nn.Sequential\n",
    "model = nn.Sequential(\n",
    "    nn.Linear(x_train.shape[1], 50),\n",
    "    nn.ReLU(),\n",
    "    nn.Linear(50, 25),\n",
    "    nn.ReLU(),\n",
    "    nn.Linear(25, len(species)),\n",
    "    nn.LogSoftmax(dim=1),\n",
    ")\n",
    "\n",
    "model = torch.compile(model,backend=\"aot_eager\").to(device)\n",
    "\n",
    "loss_fn = nn.CrossEntropyLoss()  # cross entropy loss\n",
    "\n",
    "optimizer = torch.optim.Adam(model.parameters(), lr=0.01)\n",
    "es = EarlyStopping()\n",
    "\n",
    "epoch = 0\n",
    "done = False\n",
    "while epoch < 1000 and not done:\n",
    "    epoch += 1\n",
    "    steps = list(enumerate(dataloader_train))\n",
    "    pbar = tqdm.tqdm(steps)\n",
    "    model.train()\n",
    "    for i, (x_batch, y_batch) in pbar:\n",
    "        y_batch_pred = model(x_batch.to(device))\n",
    "        loss = loss_fn(y_batch_pred, y_batch.to(device))\n",
    "        optimizer.zero_grad()\n",
    "        loss.backward()\n",
    "        optimizer.step()\n",
    "\n",
    "        loss, current = loss.item(), (i + 1) * len(x_batch)\n",
    "        if i == len(steps) - 1:\n",
    "            model.eval()\n",
    "            pred = model(x_test)\n",
    "            vloss = loss_fn(pred, y_test)\n",
    "            if es(model, vloss):\n",
    "                done = True\n",
    "            pbar.set_description(\n",
    "                f\"Epoch: {epoch}, tloss: {loss}, vloss: {vloss:>7f}, {es.status}\"\n",
    "            )\n",
    "        else:\n",
    "            pbar.set_description(f\"Epoch: {epoch}, tloss {loss:}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "vd-Y5j9MeV5b"
   },
   "source": [
    "Next, we evaluate the accuracy of the trained model.  Here we see that the neural network performs great, with an accuracy of 1.0.  We might fear overfitting with such high accuracy for a more complex dataset.  However, for this example, we are more interested in determining the importance of each column."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "2-FOcsWieV5b",
    "outputId": "b5380db0-cf09-4973-c0e5-91a2041892d8"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Accuracy: 1.0\n"
     ]
    }
   ],
   "source": [
    "from sklearn.metrics import accuracy_score\n",
    "\n",
    "pred = model(x_test)\n",
    "_, predict_classes = torch.max(pred, 1)\n",
    "correct = accuracy_score(y_test.cpu(), predict_classes.cpu())\n",
    "print(f\"Accuracy: {correct}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "IhMMwAhzeV5b"
   },
   "source": [
    "We are now ready to call the input perturbation algorithm.  First, we extract the column names and remove the target column.  The target column is not important, as it is the objective, not one of the inputs.  In supervised learning, the target is of the utmost importance.\n",
    "\n",
    "We can see the importance displayed in the following table.  The most important column is always 1.0, and lessor columns will continue in a downward trend.  The least important column will have the lowest rank."
   ]
  },
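  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The importance value is simply each feature's shuffled-column error divided by the largest such error, so the worst-hit feature always scores exactly 1.0. A tiny sketch with hypothetical error values:\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "# Hypothetical shuffled-column errors (illustrative numbers only).\n",
    "errors = {\"petal_w\": 1.23, \"petal_l\": 1.22, \"sepal_w\": 1.15, \"sepal_l\": 0.98}\n",
    "\n",
    "rank = pd.DataFrame({\"name\": list(errors), \"error\": list(errors.values())})\n",
    "rank[\"importance\"] = rank[\"error\"] / rank[\"error\"].max()\n",
    "rank = rank.sort_values(\"importance\", ascending=False).reset_index(drop=True)\n",
    "print(rank.loc[0, \"name\"], rank.loc[0, \"importance\"])  # petal_w 1.0\n",
    "```"
   ]
  },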
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 175
    },
    "id": "OTUe2xOZeV5b",
    "outputId": "0d9610d9-1fa2-4438-ed3b-77e44029aa84"
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>name</th>\n",
       "      <th>error</th>\n",
       "      <th>importance</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>petal_w</td>\n",
       "      <td>1.229601</td>\n",
       "      <td>1.000000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>petal_l</td>\n",
       "      <td>1.228287</td>\n",
       "      <td>0.998932</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>sepal_w</td>\n",
       "      <td>1.155053</td>\n",
       "      <td>0.939373</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>sepal_l</td>\n",
       "      <td>0.976901</td>\n",
       "      <td>0.794486</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "      name     error  importance\n",
       "0  petal_w  1.229601    1.000000\n",
       "1  petal_l  1.228287    0.998932\n",
       "2  sepal_w  1.155053    0.939373\n",
       "3  sepal_l  0.976901    0.794486"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "# Rank the features\n",
    "from IPython.display import display, HTML\n",
    "\n",
    "rank = perturbation_rank(device, model, x_test, y_test, columns, False)\n",
    "display(rank)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "5YUQdraleV5b"
   },
   "source": [
    "## Regression and Input Perturbation Ranking\n",
    "\n",
    "We now see how to use input perturbation ranking for a regression neural network.  We will use the MPG dataset as a demonstration.  The code below loads the MPG dataset and creates a regression neural network for this dataset.  The code trains the neural network and calculates an RMSE evaluation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "kB0AMVAneV5b",
    "outputId": "94e06bc1-6028-4a0f-b76e-76df20d7925d"
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch: 1, tloss: 224.4937286376953, vloss: 275.311096, EStop:[]: 100%|██████████| 19/19 [00:00<00:00, 63.33it/s]\n",
      "Epoch: 2, tloss: 221.47691345214844, vloss: 186.099442, EStop:[Improvement found, counter reset to 0]: 100%|██████████| 19/19 [00:00<00:00, 248.06it/s]\n",
      "Epoch: 3, tloss: 238.82725524902344, vloss: 150.277847, EStop:[Improvement found, counter reset to 0]: 100%|██████████| 19/19 [00:00<00:00, 255.10it/s]\n",
      "Epoch: 4, tloss: 120.80052947998047, vloss: 131.800980, EStop:[Improvement found, counter reset to 0]: 100%|██████████| 19/19 [00:00<00:00, 196.25it/s]\n",
      "Epoch: 5, tloss: 134.80111694335938, vloss: 154.462509, EStop:[No improvement in the last 1 epochs]: 100%|██████████| 19/19 [00:00<00:00, 242.83it/s]\n",
      "Epoch: 6, tloss: 88.8158187866211, vloss: 101.267807, EStop:[Improvement found, counter reset to 0]: 100%|██████████| 19/19 [00:00<00:00, 234.08it/s]\n",
      "Epoch: 7, tloss: 54.8061408996582, vloss: 73.606964, EStop:[Improvement found, counter reset to 0]: 100%|██████████| 19/19 [00:00<00:00, 169.65it/s]\n",
      "Epoch: 8, tloss: 225.72427368164062, vloss: 58.412155, EStop:[Improvement found, counter reset to 0]: 100%|██████████| 19/19 [00:00<00:00, 233.00it/s]\n",
      "Epoch: 9, tloss: 54.85736083984375, vloss: 102.703369, EStop:[No improvement in the last 1 epochs]: 100%|██████████| 19/19 [00:00<00:00, 259.50it/s]\n",
      "Epoch: 10, tloss: 96.88406372070312, vloss: 153.170746, EStop:[No improvement in the last 2 epochs]: 100%|██████████| 19/19 [00:00<00:00, 284.69it/s]\n",
      "Epoch: 11, tloss: 132.5380859375, vloss: 174.671051, EStop:[No improvement in the last 3 epochs]: 100%|██████████| 19/19 [00:00<00:00, 217.43it/s]\n",
      "Epoch: 12, tloss: 26.772262573242188, vloss: 38.733757, EStop:[Improvement found, counter reset to 0]: 100%|██████████| 19/19 [00:00<00:00, 286.26it/s]\n",
      "Epoch: 13, tloss: 88.36762237548828, vloss: 53.886150, EStop:[No improvement in the last 1 epochs]: 100%|██████████| 19/19 [00:00<00:00, 289.16it/s]\n",
      "Epoch: 14, tloss: 5.331306457519531, vloss: 39.598259, EStop:[No improvement in the last 2 epochs]: 100%|██████████| 19/19 [00:00<00:00, 284.13it/s]\n",
      "Epoch: 15, tloss: 15.255352973937988, vloss: 31.032362, EStop:[Improvement found, counter reset to 0]: 100%|██████████| 19/19 [00:00<00:00, 247.71it/s]\n",
      "Epoch: 16, tloss: 69.40774536132812, vloss: 60.687420, EStop:[No improvement in the last 1 epochs]: 100%|██████████| 19/19 [00:00<00:00, 275.80it/s]\n",
      "Epoch: 17, tloss: 62.72461700439453, vloss: 29.359751, EStop:[Improvement found, counter reset to 0]: 100%|██████████| 19/19 [00:00<00:00, 280.52it/s]\n",
      "Epoch: 18, tloss: 40.34188461303711, vloss: 74.954048, EStop:[No improvement in the last 1 epochs]: 100%|██████████| 19/19 [00:00<00:00, 218.84it/s]\n",
      "Epoch: 19, tloss: 6.228485107421875, vloss: 29.418280, EStop:[No improvement in the last 2 epochs]: 100%|██████████| 19/19 [00:00<00:00, 262.39it/s]\n",
      "Epoch: 20, tloss: 31.204084396362305, vloss: 26.105883, EStop:[Improvement found, counter reset to 0]: 100%|██████████| 19/19 [00:00<00:00, 264.54it/s]\n",
      "Epoch: 21, tloss: 48.054866790771484, vloss: 34.739605, EStop:[No improvement in the last 1 epochs]: 100%|██████████| 19/19 [00:00<00:00, 267.47it/s]\n",
      "Epoch: 22, tloss: 9.83228588104248, vloss: 37.114239, EStop:[No improvement in the last 2 epochs]: 100%|██████████| 19/19 [00:00<00:00, 173.49it/s]\n",
      "Epoch: 23, tloss: 22.503273010253906, vloss: 20.216787, EStop:[Improvement found, counter reset to 0]: 100%|██████████| 19/19 [00:00<00:00, 257.30it/s]\n",
      "Epoch: 24, tloss: 117.02864837646484, vloss: 27.819130, EStop:[No improvement in the last 1 epochs]: 100%|██████████| 19/19 [00:00<00:00, 258.24it/s]\n",
      "Epoch: 25, tloss: 39.34961700439453, vloss: 15.984626, EStop:[Improvement found, counter reset to 0]: 100%|██████████| 19/19 [00:00<00:00, 215.05it/s]\n",
      "Epoch: 26, tloss: 47.71119689941406, vloss: 16.369045, EStop:[No improvement in the last 1 epochs]: 100%|██████████| 19/19 [00:00<00:00, 250.09it/s]\n",
      "Epoch: 27, tloss: 20.3905086517334, vloss: 19.024775, EStop:[No improvement in the last 2 epochs]: 100%|██████████| 19/19 [00:00<00:00, 256.58it/s]\n",
      "Epoch: 28, tloss: 33.29762649536133, vloss: 14.360082, EStop:[Improvement found, counter reset to 0]: 100%|██████████| 19/19 [00:00<00:00, 275.01it/s]\n",
      "Epoch: 29, tloss: 24.121702194213867, vloss: 25.037008, EStop:[No improvement in the last 1 epochs]: 100%|██████████| 19/19 [00:00<00:00, 211.48it/s]\n",
      "Epoch: 30, tloss: 8.614834785461426, vloss: 66.583679, EStop:[No improvement in the last 2 epochs]: 100%|██████████| 19/19 [00:00<00:00, 273.63it/s]\n",
      "Epoch: 31, tloss: 49.61734390258789, vloss: 55.757587, EStop:[No improvement in the last 3 epochs]: 100%|██████████| 19/19 [00:00<00:00, 258.27it/s]\n",
      "Epoch: 32, tloss: 35.82011413574219, vloss: 53.088127, EStop:[No improvement in the last 4 epochs]: 100%|██████████| 19/19 [00:00<00:00, 273.73it/s]\n",
      "Epoch: 33, tloss: 15.681909561157227, vloss: 15.318961, EStop:[Early stopping triggered after 5 epochs.]: 100%|██████████| 19/19 [00:00<00:00, 262.51it/s]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Final score (RMSE): 3.7894699573516846\n"
     ]
    }
   ],
   "source": [
    "# HIDE OUTPUT\n",
    "import time\n",
    "\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "import tqdm\n",
    "from sklearn import preprocessing\n",
    "from sklearn.metrics import accuracy_score\n",
    "from sklearn.model_selection import train_test_split\n",
    "from torch.autograd import Variable\n",
    "from torch.utils.data import DataLoader, TensorDataset\n",
    "\n",
    "# Read the MPG dataset.\n",
    "df = pd.read_csv(\n",
    "    \"https://data.heatonresearch.com/data/t81-558/auto-mpg.csv\", na_values=[\"NA\", \"?\"]\n",
    ")\n",
    "\n",
    "cars = df[\"name\"]\n",
    "\n",
    "# Handle missing value\n",
    "df[\"horsepower\"] = df[\"horsepower\"].fillna(df[\"horsepower\"].median())\n",
    "\n",
    "# Pandas to Numpy\n",
    "x = df[\n",
    "    [\n",
    "        \"cylinders\",\n",
    "        \"displacement\",\n",
    "        \"horsepower\",\n",
    "        \"weight\",\n",
    "        \"acceleration\",\n",
    "        \"year\",\n",
    "        \"origin\",\n",
    "    ]\n",
    "].values\n",
    "y = df[\"mpg\"].values  # regression\n",
    "\n",
    "# Split into validation and training sets\n",
    "x_train, x_test, y_train, y_test = train_test_split(\n",
    "    x, y, test_size=0.25, random_state=42\n",
    ")\n",
    "\n",
    "# Numpy to Torch Tensor\n",
    "x_train = torch.tensor(x_train, device=device, dtype=torch.float32)\n",
    "y_train = torch.tensor(y_train, device=device, dtype=torch.float32)\n",
    "\n",
    "x_test = torch.tensor(x_test, device=device, dtype=torch.float32)\n",
    "y_test = torch.tensor(y_test, device=device, dtype=torch.float32)\n",
    "\n",
    "\n",
    "# Create datasets\n",
    "BATCH_SIZE = 16\n",
    "\n",
    "dataset_train = TensorDataset(x_train, y_train)\n",
    "dataloader_train = DataLoader(dataset_train, batch_size=BATCH_SIZE, shuffle=True)\n",
    "\n",
    "dataset_test = TensorDataset(x_test, y_test)\n",
    "dataloader_test = DataLoader(dataset_test, batch_size=BATCH_SIZE, shuffle=True)\n",
    "\n",
    "\n",
    "# Create model\n",
    "\n",
    "model = nn.Sequential(\n",
    "    nn.Linear(x_train.shape[1], 50), \n",
    "    nn.ReLU(), \n",
    "    nn.Linear(50, 25), \n",
    "    nn.ReLU(), \n",
    "    nn.Linear(25, 1)\n",
    ")\n",
    "\n",
    "model = torch.compile(model, backend=\"aot_eager\").to(device)\n",
    "\n",
    "# Define the loss function for regression\n",
    "loss_fn = nn.MSELoss()\n",
    "\n",
    "# Define the optimizer\n",
    "optimizer = torch.optim.Adam(model.parameters(), lr=0.01)\n",
    "\n",
    "es = EarlyStopping()\n",
    "\n",
    "epoch = 0\n",
    "done = False\n",
    "while epoch < 1000 and not done:\n",
    "    epoch += 1\n",
    "    steps = list(enumerate(dataloader_train))\n",
    "    pbar = tqdm.tqdm(steps)\n",
    "    model.train()\n",
    "    for i, (x_batch, y_batch) in pbar:\n",
    "        y_batch_pred = model(x_batch).flatten()  #\n",
    "        loss = loss_fn(y_batch_pred, y_batch)\n",
    "        optimizer.zero_grad()\n",
    "        loss.backward()\n",
    "        optimizer.step()\n",
    "\n",
    "        loss, current = loss.item(), (i + 1) * len(x_batch)\n",
    "        if i == len(steps) - 1:\n",
    "            model.eval()\n",
    "            pred = model(x_test).flatten()\n",
    "            vloss = loss_fn(pred, y_test)\n",
    "            if es(model, vloss):\n",
    "                done = True\n",
    "            pbar.set_description(\n",
    "                f\"Epoch: {epoch}, tloss: {loss}, vloss: {vloss:>7f}, EStop:[{es.status}]\"\n",
    "            )\n",
    "        else:\n",
    "            pbar.set_description(f\"Epoch: {epoch}, tloss {loss:}\")\n",
    "\n",
    "from sklearn import metrics\n",
    "\n",
    "# Measure RMSE error.  RMSE is common for regression.\n",
    "pred = model(x_test)\n",
    "score = torch.sqrt(torch.nn.functional.mse_loss(pred.flatten(), y_test))\n",
    "print(f\"Final score (RMSE): {score}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "9ynzp9RTeV5c"
   },
   "source": [
    "Just as before, we extract the column names and discard the target.  We can now create a ranking of the importance of each of the input features.  The feature with a ranking of 1.0 is the most important."
   ]
  },
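  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a rough, self-contained sketch of the idea (using scikit-learn rather than the course's `perturbation_rank` helper), the ranking shuffles one input column at a time, measures the resulting error, and divides each error by the largest one, so the most important feature scores exactly 1.0:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from sklearn.linear_model import LinearRegression\n",
    "from sklearn.metrics import mean_squared_error\n",
    "\n",
    "rng = np.random.default_rng(42)\n",
    "x = rng.normal(size=(200, 3))\n",
    "y = 3.0 * x[:, 0] + 0.5 * x[:, 1] + rng.normal(scale=0.1, size=200)\n",
    "model = LinearRegression().fit(x, y)\n",
    "\n",
    "errors = []\n",
    "for col in range(x.shape[1]):\n",
    "    x_perturbed = x.copy()\n",
    "    rng.shuffle(x_perturbed[:, col])  # destroy this column's signal\n",
    "    pred = model.predict(x_perturbed)\n",
    "    errors.append(np.sqrt(mean_squared_error(y, pred)))\n",
    "\n",
    "importance = np.array(errors) / max(errors)  # top feature scores 1.0\n",
    "```"
   ]
  },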
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 269
    },
    "id": "nm3PeQckeV5c",
    "outputId": "47179baf-9747-4ef9-9174-706c346cfe07"
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/Users/jeff/miniconda3/envs/torch/lib/python3.9/site-packages/torch/nn/modules/loss.py:536: UserWarning: Using a target size (torch.Size([100, 1])) that is different to the input size (torch.Size([100])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.\n",
      "  return F.mse_loss(input, target, reduction=self.reduction)\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>name</th>\n",
       "      <th>error</th>\n",
       "      <th>importance</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>origin</td>\n",
       "      <td>718.869507</td>\n",
       "      <td>1.000000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>weight</td>\n",
       "      <td>376.961060</td>\n",
       "      <td>0.524380</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>year</td>\n",
       "      <td>278.980316</td>\n",
       "      <td>0.388082</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>acceleration</td>\n",
       "      <td>208.227646</td>\n",
       "      <td>0.289660</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>displacement</td>\n",
       "      <td>192.715042</td>\n",
       "      <td>0.268081</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>5</th>\n",
       "      <td>horsepower</td>\n",
       "      <td>128.381210</td>\n",
       "      <td>0.178588</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>6</th>\n",
       "      <td>cylinders</td>\n",
       "      <td>120.705498</td>\n",
       "      <td>0.167910</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "           name       error  importance\n",
       "0        origin  718.869507    1.000000\n",
       "1        weight  376.961060    0.524380\n",
       "2          year  278.980316    0.388082\n",
       "3  acceleration  208.227646    0.289660\n",
       "4  displacement  192.715042    0.268081\n",
       "5    horsepower  128.381210    0.178588\n",
       "6     cylinders  120.705498    0.167910"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "# Rank the features\n",
    "from IPython.display import display, HTML\n",
    "\n",
    "names = list(df.columns) # x+y column names\n",
    "names.remove(\"name\")\n",
    "names.remove(\"mpg\") # remove the target(y)\n",
    "rank = perturbation_rank(device, model, x_test, y_test, names, True)\n",
    "display(rank)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "o581oa2ceV5c"
   },
   "source": [
    "## Biological Response with Neural Network\n",
    "\n",
    "The following sections will demonstrate how to use feature importance ranking and ensembling with a more complex dataset. Ensembling is the process where you combine multiple models for greater accuracy. Kaggle competition winners frequently make use of ensembling for high-ranking solutions.\n",
    "\n",
    "We will use the biological response dataset, a Kaggle dataset, where there is an unusually high number of columns. Because of the large number of columns, it is essential to use feature ranking to determine the importance of these columns. We begin by loading the dataset and preprocessing. This Kaggle dataset is a binary classification problem. You must predict if certain conditions will cause a biological response.\n",
    "\n",
    "* [Predicting a Biological Response](https://www.kaggle.com/c/bioresponse)"
   ]
  },
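  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before tackling the Kaggle data, here is a minimal illustration of ensembling on synthetic data with scikit-learn's `VotingClassifier`; soft voting averages each model's predicted probabilities:\n",
    "\n",
    "```python\n",
    "from sklearn.datasets import make_classification\n",
    "from sklearn.ensemble import RandomForestClassifier, VotingClassifier\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "from sklearn.metrics import accuracy_score\n",
    "from sklearn.model_selection import train_test_split\n",
    "\n",
    "x, y = make_classification(n_samples=500, n_features=20, random_state=42)\n",
    "x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=42)\n",
    "\n",
    "ensemble = VotingClassifier(\n",
    "    estimators=[\n",
    "        (\"lr\", LogisticRegression(max_iter=1000)),\n",
    "        (\"rf\", RandomForestClassifier(n_estimators=100, random_state=42)),\n",
    "    ],\n",
    "    voting=\"soft\",  # average predicted probabilities\n",
    ")\n",
    "ensemble.fit(x_train, y_train)\n",
    "score = accuracy_score(y_test, ensemble.predict(x_test))\n",
    "print(f\"Ensemble accuracy: {score:.3f}\")\n",
    "```"
   ]
  },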
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "id": "O9qJ0tqueV5c"
   },
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import numpy as np\n",
    "from sklearn import metrics\n",
    "from scipy.stats import zscore\n",
    "from sklearn.model_selection import KFold\n",
    "from IPython.display import HTML, display\n",
    "\n",
    "URL = \"https://data.heatonresearch.com/data/t81-558/kaggle/\"\n",
    "\n",
    "df_train = pd.read_csv(\n",
    "    URL+\"bio_train.csv\", \n",
    "    na_values=['NA', '?'])\n",
    "\n",
    "df_test = pd.read_csv(\n",
    "    URL+\"bio_test.csv\", \n",
    "    na_values=['NA', '?'])\n",
    "\n",
    "activity_classes = df_train['Activity']"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "7cbGqaUIeV5c"
   },
   "source": [
    "A large number of columns is evident when we display the shape of the dataset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "xkVUrKiVeV5c",
    "outputId": "14cb3834-33f0-4393-dd69-c7dd3aeef9ba"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(3751, 1777)\n"
     ]
    }
   ],
   "source": [
    "print(df_train.shape)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "tk-HF1CieV5c"
   },
   "source": [
    "The following code constructs a classification neural network and trains it for the biological response dataset.  Once trained, the accuracy is measured."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "J_miQaHneV5c",
    "outputId": "6bebb475-b6da-4991-b077-ac2a76f96138"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Early stopping\n",
      "Validation logloss: 0.5384844082050355\n",
      "Validation accuracy score: 0.7750533049040512\n"
     ]
    }
   ],
   "source": [
    "import os\n",
    "import pandas as pd\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.optim as optim\n",
    "from sklearn.model_selection import train_test_split\n",
    "import numpy as np\n",
    "import sklearn\n",
    "from sklearn import metrics\n",
    "from torch.utils.data import DataLoader, TensorDataset\n",
    "\n",
    "# Assuming df_train and df_test are predefined\n",
    "x_columns = df_train.columns.drop('Activity')\n",
    "x = torch.tensor(df_train[x_columns].values, dtype=torch.float32)\n",
    "y = torch.tensor(df_train['Activity'].values, dtype=torch.float32).view(-1, 1) # For binary cross entropy\n",
    "x_submit = torch.tensor(df_test[x_columns].values, dtype=torch.float32)\n",
    "\n",
    "# Split into train/test\n",
    "x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25, random_state=42)\n",
    "\n",
    "# Move to GPU if available\n",
    "x_train, y_train, x_test, y_test = map(lambda t: t.clone().detach().to(device), (x_train, y_train, x_test, y_test))\n",
    "\n",
    "train_dataset = TensorDataset(x_train, y_train)\n",
    "test_dataset = TensorDataset(x_test, y_test)\n",
    "train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)\n",
    "test_loader = DataLoader(test_dataset, batch_size=32, shuffle=False)\n",
    "\n",
    "# Define model using Sequential\n",
    "model = nn.Sequential(\n",
    "    nn.Linear(x_train.shape[1], 25),\n",
    "    nn.ReLU(),\n",
    "    nn.Linear(25, 10),\n",
    "    nn.Linear(10, 1),\n",
    "    nn.Sigmoid()\n",
    ").to(device)\n",
    "\n",
    "# Loss and optimizer\n",
    "criterion = nn.BCELoss()\n",
    "optimizer = optim.Adam(model.parameters())\n",
    "\n",
    "# Training with early stopping\n",
    "best_loss = float('inf')\n",
    "patience = 5\n",
    "no_improvement = 0\n",
    "\n",
    "for epoch in range(1000):\n",
    "    model.train()\n",
    "    for batch in train_loader:\n",
    "        inputs, labels = batch\n",
    "\n",
    "        optimizer.zero_grad()\n",
    "        outputs = model(inputs)\n",
    "        loss = criterion(outputs, labels)\n",
    "        loss.backward()\n",
    "        optimizer.step()\n",
    "\n",
    "    model.eval()\n",
    "    with torch.no_grad():\n",
    "        val_loss = sum(criterion(model(inputs), labels) for inputs, labels in test_loader)\n",
    "        if val_loss < best_loss - 1e-3:\n",
    "            best_loss = val_loss\n",
    "            no_improvement = 0\n",
    "        else:\n",
    "            no_improvement += 1\n",
    "\n",
    "        if no_improvement >= patience:\n",
    "            print(\"Early stopping\")\n",
    "            break\n",
    "\n",
    "# Prediction\n",
    "with torch.no_grad():\n",
    "    pred = model(x_test).cpu().numpy().flatten()\n",
    "    pred = np.clip(pred, a_min=1e-6, a_max=1-1e-6)\n",
    "\n",
    "    print(\"Validation logloss: {}\".format(sklearn.metrics.log_loss(y_test.cpu(), pred)))\n",
    "    \n",
    "    pred_binary = (pred > 0.5).astype(int)\n",
    "    score = metrics.accuracy_score(y_test.cpu().numpy(), pred_binary)\n",
    "    print(\"Validation accuracy score: {}\".format(score))\n",
    "    \n",
    "    pred_submit = model(x_submit.to(device)).cpu().numpy().flatten()\n",
    "    pred_submit = np.clip(pred_submit, a_min=1e-6, a_max=1-1e-6)\n",
    "    \n",
    "    submit_df = pd.DataFrame({'MoleculeId': [x+1 for x in range(len(pred_submit))], 'PredictedProbability': pred_submit})\n",
    "    submit_df.to_csv(\"submit.csv\", index=False)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "wluUEn10eV5d"
   },
   "source": [
    "## What Features/Columns are Important\n",
    "The following uses perturbation ranking to evaluate the neural network."
   ]
  },
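  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For classification, the ranker compares log loss before and after shuffling each column. The following is an illustrative stand-in, not the course's actual `perturbation_rank` implementation; it returns a DataFrame sorted so the most important feature appears first with importance 1.0:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "from sklearn.datasets import load_iris\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "from sklearn.metrics import log_loss\n",
    "\n",
    "def perturbation_rank_sketch(model, x, y, names, rng):\n",
    "    # Hypothetical stand-in for the course's perturbation_rank helper\n",
    "    errors = []\n",
    "    for col in range(x.shape[1]):\n",
    "        x_perturbed = x.copy()\n",
    "        rng.shuffle(x_perturbed[:, col])  # break this feature's link to y\n",
    "        errors.append(log_loss(y, model.predict_proba(x_perturbed)))\n",
    "    df = pd.DataFrame({\"name\": names, \"error\": errors})\n",
    "    df[\"importance\"] = df[\"error\"] / df[\"error\"].max()\n",
    "    return df.sort_values(\"error\", ascending=False).reset_index(drop=True)\n",
    "\n",
    "data = load_iris()\n",
    "clf = LogisticRegression(max_iter=1000).fit(data.data, data.target)\n",
    "rank_df = perturbation_rank_sketch(\n",
    "    clf, data.data, data.target, data.feature_names, np.random.default_rng(42)\n",
    ")\n",
    "print(rank_df)\n",
    "```"
   ]
  },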
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 363
    },
    "id": "204BmljWeV5d",
    "outputId": "619dd1de-7b67-4cb8-82a7-ccb1dcf9deaa"
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>name</th>\n",
       "      <th>error</th>\n",
       "      <th>importance</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>D860</td>\n",
       "      <td>0.570163</td>\n",
       "      <td>1.000000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>D288</td>\n",
       "      <td>0.570031</td>\n",
       "      <td>0.999768</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>D663</td>\n",
       "      <td>0.569986</td>\n",
       "      <td>0.999690</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>D671</td>\n",
       "      <td>0.569706</td>\n",
       "      <td>0.999198</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>D101</td>\n",
       "      <td>0.569513</td>\n",
       "      <td>0.998860</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>5</th>\n",
       "      <td>D1576</td>\n",
       "      <td>0.569312</td>\n",
       "      <td>0.998508</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>6</th>\n",
       "      <td>D206</td>\n",
       "      <td>0.569083</td>\n",
       "      <td>0.998106</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>7</th>\n",
       "      <td>D655</td>\n",
       "      <td>0.568983</td>\n",
       "      <td>0.997929</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>8</th>\n",
       "      <td>D1480</td>\n",
       "      <td>0.568912</td>\n",
       "      <td>0.997806</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>9</th>\n",
       "      <td>D179</td>\n",
       "      <td>0.568880</td>\n",
       "      <td>0.997750</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "    name     error  importance\n",
       "0   D860  0.570163    1.000000\n",
       "1   D288  0.570031    0.999768\n",
       "2   D663  0.569986    0.999690\n",
       "3   D671  0.569706    0.999198\n",
       "4   D101  0.569513    0.998860\n",
       "5  D1576  0.569312    0.998508\n",
       "6   D206  0.569083    0.998106\n",
       "7   D655  0.568983    0.997929\n",
       "8  D1480  0.568912    0.997806\n",
       "9   D179  0.568880    0.997750"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "# Rank the features\n",
    "from IPython.display import display, HTML\n",
    "\n",
    "names = list(df_train.columns) # x+y column names\n",
    "names.remove(\"Activity\") # remove the target(y)\n",
    "rank = perturbation_rank(device, model, x_test, y_test, names, False)\n",
    "display(rank[0:10])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "_EejOUAleV5d"
   },
   "source": [
    "## Neural Network Ensemble\n",
    "\n",
    "A neural network ensemble combines neural network predictions with other models. The program determines the exact blend of these models by logistic regression. The following code performs this blend for a classification.  If you present the final predictions from the ensemble to Kaggle, you will see that the result is very accurate."
   ]
  },
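  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The core mechanic, out-of-fold predictions feeding a logistic-regression blender, can be sketched on synthetic data (a simplified stand-in for the full program below):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from sklearn.datasets import make_classification\n",
    "from sklearn.ensemble import RandomForestClassifier\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "from sklearn.model_selection import StratifiedKFold\n",
    "from sklearn.neighbors import KNeighborsClassifier\n",
    "\n",
    "x, y = make_classification(n_samples=400, n_features=10, random_state=42)\n",
    "models = [\n",
    "    KNeighborsClassifier(n_neighbors=3),\n",
    "    RandomForestClassifier(n_estimators=50, random_state=42),\n",
    "]\n",
    "\n",
    "# Each model's out-of-fold probabilities form one input column for the blender\n",
    "oof = np.zeros((len(x), len(models)))\n",
    "kf = StratifiedKFold(n_splits=5)\n",
    "for j, model in enumerate(models):\n",
    "    for train_idx, test_idx in kf.split(x, y):\n",
    "        model.fit(x[train_idx], y[train_idx])\n",
    "        oof[test_idx, j] = model.predict_proba(x[test_idx])[:, 1]\n",
    "\n",
    "blend = LogisticRegression().fit(oof, y)  # learns how much to trust each model\n",
    "print(\"blend weights:\", blend.coef_)\n",
    "```"
   ]
  },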
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "id": "dBfgUuateV5d"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Loading data...\n",
      "Model: 0 : Sequential(\n",
      "  (0): Linear(in_features=1776, out_features=20, bias=True)\n",
      "  (1): ReLU()\n",
      "  (2): Linear(in_features=20, out_features=1, bias=True)\n",
      "  (3): Linear(in_features=1, out_features=2, bias=True)\n",
      "  (4): Softmax(dim=1)\n",
      ")\n",
      "Fold #0: loss=0.7723120821060854\n",
      "Fold #1: loss=0.7621972254727901\n",
      "Fold #2: loss=0.7543882373575242\n",
      "Fold #3: loss=0.7438802241516417\n",
      "Fold #4: loss=0.7359048127781634\n",
      "Fold #5: loss=0.7294116013880749\n",
      "Fold #6: loss=0.7212477197135376\n",
      "Fold #7: loss=0.7070651846400796\n",
      "Fold #8: loss=0.7078284621313083\n",
      "Fold #9: loss=0.7054422140402271\n",
      "Sequential: Mean loss=0.7339677763779433\n",
      "Model: 1 : KNeighborsClassifier(n_neighbors=3)\n",
      "Fold #0: loss=3.606678388314123\n",
      "Fold #1: loss=2.2256421551487593\n",
      "Fold #2: loss=3.6815437059542186\n",
      "Fold #3: loss=2.416161292225968\n",
      "Fold #4: loss=4.442472310149748\n",
      "Fold #5: loss=4.321350530738247\n",
      "Fold #6: loss=3.400455469543658\n",
      "Fold #7: loss=3.1724147110842513\n",
      "Fold #8: loss=2.117356283193681\n",
      "Fold #9: loss=3.0532135963322586\n",
      "KNeighborsClassifier: Mean loss=3.243728844268491\n",
      "Model: 2 : RandomForestClassifier(n_jobs=-1)\n",
      "Fold #0: loss=0.4633095512408882\n",
      "Fold #1: loss=0.4365455229012823\n",
      "Fold #2: loss=0.45974059171583104\n",
      "Fold #3: loss=0.41637006822946454\n",
      "Fold #4: loss=0.48463027413657767\n",
      "Fold #5: loss=0.4847422192101618\n",
      "Fold #6: loss=0.41396931554951744\n",
      "Fold #7: loss=0.4740979089443885\n",
      "Fold #8: loss=0.44991063269870396\n",
      "Fold #9: loss=0.46259992355345486\n",
      "RandomForestClassifier: Mean loss=0.4545916008180271\n",
      "Model: 3 : RandomForestClassifier(criterion='entropy', n_jobs=-1)\n",
      "Fold #0: loss=0.4506839758585797\n",
      "Fold #1: loss=0.42564378954598314\n",
      "Fold #2: loss=0.5543664390948644\n",
      "Fold #3: loss=0.4224255905601883\n",
      "Fold #4: loss=0.4771466345724782\n",
      "Fold #5: loss=0.4761053058826825\n",
      "Fold #6: loss=0.4112294148940531\n",
      "Fold #7: loss=0.46653109786630986\n",
      "Fold #8: loss=0.45061982469359513\n",
      "Fold #9: loss=0.46644264760408993\n",
      "RandomForestClassifier: Mean loss=0.4601194720572825\n",
      "Model: 4 : ExtraTreesClassifier(n_jobs=-1)\n",
      "Fold #0: loss=0.45251829311996583\n",
      "Fold #1: loss=0.5006511176507963\n",
      "Fold #2: loss=0.5876801640711192\n",
      "Fold #3: loss=0.4123843597098163\n",
      "Fold #4: loss=0.49522553531080815\n",
      "Fold #5: loss=0.4816449326882984\n",
      "Fold #6: loss=0.42030070529540503\n",
      "Fold #7: loss=0.492285403345311\n",
      "Fold #8: loss=0.536331631546791\n",
      "Fold #9: loss=0.6223339873068282\n",
      "ExtraTreesClassifier: Mean loss=0.5001356130045139\n",
      "Model: 5 : ExtraTreesClassifier(criterion='entropy', n_jobs=-1)\n",
      "Fold #0: loss=0.4481291708338513\n",
      "Fold #1: loss=0.4125470512529128\n",
      "Fold #2: loss=0.6616568013505356\n",
      "Fold #3: loss=0.40990114174139536\n",
      "Fold #4: loss=0.4938975798612953\n",
      "Fold #5: loss=0.5777179442894952\n",
      "Fold #6: loss=0.42155732934440765\n",
      "Fold #7: loss=0.643161626428383\n",
      "Fold #8: loss=0.4554165363133463\n",
      "Fold #9: loss=0.6310855642110212\n",
      "ExtraTreesClassifier: Mean loss=0.5155070745626644\n",
      "Model: 6 : GradientBoostingClassifier(learning_rate=0.05, max_depth=6, n_estimators=50,\n",
      "                           subsample=0.5)\n",
      "Fold #0: loss=0.4866821743425205\n",
      "Fold #1: loss=0.45632687767603136\n",
      "Fold #2: loss=0.4732843460996861\n",
      "Fold #3: loss=0.441226765531682\n",
      "Fold #4: loss=0.4878683446505234\n",
      "Fold #5: loss=0.4862234098685488\n",
      "Fold #6: loss=0.4502639553820823\n",
      "Fold #7: loss=0.45853135345237456\n",
      "Fold #8: loss=0.4620960376091599\n",
      "Fold #9: loss=0.46945946167067926\n",
      "GradientBoostingClassifier: Mean loss=0.46719627262832886\n",
      "\n",
      "Blending models.\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "import os\n",
    "import pandas as pd\n",
    "import math\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "import torch.optim as optim\n",
    "from sklearn.neighbors import KNeighborsClassifier\n",
    "from sklearn.model_selection import StratifiedKFold\n",
    "from sklearn.ensemble import RandomForestClassifier \n",
    "from sklearn.ensemble import ExtraTreesClassifier\n",
    "from sklearn.ensemble import GradientBoostingClassifier\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "\n",
    "SHUFFLE = False\n",
    "FOLDS = 10\n",
    "\n",
    "# Using nn.Sequential to define the model\n",
    "def build_ann(input_size, classes, neurons):\n",
    "    model = nn.Sequential(\n",
    "        nn.Linear(input_size, neurons),\n",
    "        nn.ReLU(),\n",
    "        nn.Linear(neurons, 1),\n",
    "        nn.Linear(1, classes),\n",
    "        nn.Softmax(dim=1)\n",
    "    )\n",
    "    return model\n",
    "\n",
    "def mlogloss(y_test, preds):\n",
    "    epsilon = 1e-15\n",
    "    sum = 0\n",
    "    for row in zip(preds,y_test):\n",
    "        x = row[0][row[1]]\n",
    "        x = max(epsilon,x)\n",
    "        x = min(1-epsilon,x)\n",
    "        sum+=math.log(x)\n",
    "    return( (-1/len(preds))*sum)\n",
    "\n",
    "def stretch(y):\n",
    "    return (y - y.min()) / (y.max() - y.min())\n",
    "\n",
    "def blend_ensemble(x, y, x_submit):\n",
    "    kf = StratifiedKFold(FOLDS)\n",
    "    folds = list(kf.split(x,y))\n",
    "\n",
    "    models = [\n",
    "        build_ann(x.shape[1], 2, 20),\n",
    "        KNeighborsClassifier(n_neighbors=3),\n",
    "        RandomForestClassifier(n_estimators=100, n_jobs=-1, criterion='gini'),\n",
    "        RandomForestClassifier(n_estimators=100, n_jobs=-1, criterion='entropy'),\n",
    "        ExtraTreesClassifier(n_estimators=100, n_jobs=-1, criterion='gini'),\n",
    "        ExtraTreesClassifier(n_estimators=100, n_jobs=-1, criterion='entropy'),\n",
    "        GradientBoostingClassifier(learning_rate=0.05, subsample=0.5, max_depth=6, n_estimators=50)\n",
    "    ]\n",
    "\n",
    "    dataset_blend_train = np.zeros((x.shape[0], len(models)))\n",
    "    dataset_blend_test = np.zeros((x_submit.shape[0], len(models)))\n",
    "\n",
    "    for j, model in enumerate(models):\n",
    "        print(\"Model: {} : {}\".format(j, model))\n",
    "        fold_sums = np.zeros((x_submit.shape[0], len(folds)))\n",
    "        total_loss = 0\n",
    "        for i, (train, test) in enumerate(folds):\n",
    "            x_train = torch.tensor(x[train], dtype=torch.float32)\n",
    "            y_train = torch.tensor(y[train].values, dtype=torch.int64)\n",
    "            x_test = torch.tensor(x[test], dtype=torch.float32)\n",
    "            y_test = torch.tensor(y[test].values, dtype=torch.int64)\n",
    "            \n",
    "            if isinstance(model, nn.Module):  # Check if the model is a PyTorch model\n",
    "                optimizer = optim.Adam(model.parameters())\n",
    "                criterion = nn.CrossEntropyLoss()\n",
    "\n",
    "                # Training\n",
    "                optimizer.zero_grad()\n",
    "                outputs = model(x_train)\n",
    "                loss = criterion(outputs, y_train)\n",
    "                loss.backward()\n",
    "                optimizer.step()\n",
    "\n",
    "                # Prediction\n",
    "                with torch.no_grad():\n",
    "                    outputs_test = model(x_test)\n",
    "                    _, predicted = outputs_test.max(1)\n",
    "                    pred = F.softmax(outputs_test, dim=1).numpy()\n",
    "                    outputs_submit = model(torch.tensor(x_submit, dtype=torch.float32))\n",
    "                    pred2 = F.softmax(outputs_submit, dim=1).numpy()\n",
    "            else:\n",
    "                model.fit(x_train, y_train)\n",
    "                pred = np.array(model.predict_proba(x_test))\n",
    "                pred2 = np.array(model.predict_proba(x_submit))\n",
    "                \n",
    "            dataset_blend_train[test, j] = pred[:, 1]\n",
    "            fold_sums[:, i] = pred2[:, 1]\n",
    "            loss = mlogloss(y_test, pred)\n",
    "            total_loss+=loss\n",
    "            print(\"Fold #{}: loss={}\".format(i,loss))\n",
    "        print(\"{}: Mean loss={}\".format(model.__class__.__name__, total_loss/len(folds)))\n",
    "        dataset_blend_test[:, j] = fold_sums.mean(1)\n",
    "\n",
    "    print()\n",
    "    print(\"Blending models.\")\n",
    "    blend = LogisticRegression(solver='lbfgs')\n",
    "    blend.fit(dataset_blend_train, y)\n",
    "    return blend.predict_proba(dataset_blend_test)\n",
    "\n",
    "if __name__ == '__main__':\n",
    "    np.random.seed(42)  # seed to shuffle the train set\n",
    "\n",
    "    print(\"Loading data...\")\n",
    "    URL = \"https://data.heatonresearch.com/data/t81-558/kaggle/\"\n",
    "\n",
    "    df_train = pd.read_csv(URL+\"bio_train.csv\", na_values=['NA', '?'])\n",
    "    df_submit = pd.read_csv(URL+\"bio_test.csv\", na_values=['NA', '?'])\n",
    "\n",
    "    predictors = list(df_train.columns.values)\n",
    "    predictors.remove('Activity')\n",
    "    x = df_train[predictors].values\n",
    "    y = df_train['Activity']\n",
    "    x_submit = df_submit.values\n",
    "\n",
    "    if SHUFFLE:\n",
    "        idx = np.random.permutation(y.size)\n",
    "        x = x[idx]\n",
    "        y = y[idx]\n",
    "\n",
    "    submit_data = blend_ensemble(x, y, x_submit)\n",
    "    submit_data = stretch(submit_data)\n",
    "\n",
    "    # Build submit file\n",
    "    ids = [id+1 for id in range(submit_data.shape[0])]\n",
    "    submit_df = pd.DataFrame({'MoleculeId': ids, 'PredictedProbability': submit_data[:, 1]}, columns=['MoleculeId','PredictedProbability'])\n",
    "    submit_df.to_csv(\"submit.csv\", index=False)\n"
   ]
  }
 ],
 "metadata": {
  "anaconda-cloud": {},
  "colab": {
   "collapsed_sections": [],
   "name": "Copy of t81_558_class_08_2_keras_ensembles.ipynb",
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3.9 (torch)",
   "language": "python",
   "name": "pytorch"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.16"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
