{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<div style=\"background-color: #C8E6C9; padding: 10px; color: #1b7678\">\n",
    "<b>Pre-requisites</b>: Basic knowledge of Deep Learning and Tabular Problems like Regression and Classification. Also go through the <i>Approaching Any Tabular Problem with PyTorch Tabular</i> tutorial.  <br></br>\n",
    "<b>Level</b>: Intermediate\n",
    "</div>\n",
    "\n",
    "In this tutorial, we will look at an easy way to assess the performance different Deep Learning models in PyTorch Tabular on a dataset. Sort of a `pycaret` style sweep of models. In PyTorch Tabular, we call this `Model Sweep`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "from rich import print\n",
    "from rich.pretty import pprint"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Data\n",
    "\n",
    "We will use the Covertype dataset from UCI ML Repository and split it into train and test. We can split into val as well, but even if we don't PyTorch Tabular will automatically do it for us out of the train set."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">Train Shape: <span style=\"font-weight: bold\">(</span><span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">464809</span>, <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">13</span><span style=\"font-weight: bold\">)</span> | Test Shape: <span style=\"font-weight: bold\">(</span><span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">116203</span>, <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">13</span><span style=\"font-weight: bold\">)</span>\n",
       "</pre>\n"
      ],
      "text/plain": [
       "Train Shape: \u001b[1m(\u001b[0m\u001b[1;36m464809\u001b[0m, \u001b[1;36m13\u001b[0m\u001b[1m)\u001b[0m | Test Shape: \u001b[1m(\u001b[0m\u001b[1;36m116203\u001b[0m, \u001b[1;36m13\u001b[0m\u001b[1m)\u001b[0m\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "from pytorch_tabular.utils import load_covertype_dataset\n",
    "from sklearn.model_selection import train_test_split\n",
    "\n",
    "data, cat_col_names, num_col_names, target_col = load_covertype_dataset()\n",
    "train, test = train_test_split(data, random_state=42, test_size=0.2)\n",
    "print(f\"Train Shape: {train.shape} | Test Shape: {test.shape}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "Collapsed": "false"
   },
   "source": [
    "## Defining the Config\n",
    "\n",
    "As you saw in the basic tutorial, we need to define a set of configs. Even for model sweep, we need to define all configs except the `ModelConfig`. We will keep most of it defaults, but set some congis to control the training process:\n",
    "- Automatic Learning Rate Finding\n",
    "- Batch Size\n",
    "- Max Epochs\n",
    "- Turning off Progress Bar and Model Summary so taht it won't clutter the output."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "from pytorch_tabular.config import (\n",
    "    DataConfig,\n",
    "    OptimizerConfig,\n",
    "    TrainerConfig,\n",
    ")\n",
    "from pytorch_tabular.models.common.heads import LinearHeadConfig\n",
    "\n",
    "data_config = DataConfig(\n",
    "    target=[target_col],\n",
    "    continuous_cols=num_col_names,\n",
    "    categorical_cols=cat_col_names,\n",
    ")\n",
    "trainer_config = TrainerConfig(\n",
    "    batch_size=1024,\n",
    "    max_epochs=25,\n",
    "    auto_lr_find=True,\n",
    "    early_stopping=None,  # Monitor valid_loss for early stopping\n",
    "    # early_stopping_mode=\"min\",  # Set the mode as min because for val_loss, lower is better\n",
    "    # early_stopping_patience=5,  # No. of epochs of degradation training will wait before terminating\n",
    "    checkpoints=\"valid_loss\",  # Save best checkpoint monitoring val_loss\n",
    "    load_best=True,  # After training, load the best checkpoint\n",
    "    progress_bar=\"none\",  # Turning off Progress bar\n",
    "    trainer_kwargs=dict(enable_model_summary=False),  # Turning off model summary\n",
    "    accelerator=\"cpu\",\n",
    ")\n",
    "optimizer_config = OptimizerConfig()\n",
    "\n",
    "head_config = LinearHeadConfig(\n",
    "    layers=\"\",\n",
    "    dropout=0.1,\n",
    "    initialization=(  # No additional layer in head, just a mapping layer to output_dim\n",
    "        \"kaiming\"\n",
    "    ),\n",
    ").__dict__  # Convert to dict to pass to the model config (OmegaConf doesn't accept objects)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Model Sweep\n",
    "\n",
    "The model sweep enables you to quickly sweep thorugh different models and configurations. It takes in a list of model configs or one of the presets defined in ``pytorch_tabular.MODEL_PRESETS`` and trains them on the data. It then ranks the models based on the metric provided and returns the best model.\n",
    "\n",
    "These are the major arguments to the ``model_sweep`` function:\n",
    "- ``task``: The type of prediction task. Either 'classification' or 'regression'\n",
    "- ``train``: The training data\n",
    "- ``test``: The test data on which performance is evaluated\n",
    "- `Configs`: All the config objects can be passed as either the object or the path to the yaml file.\n",
    "- ``model_list``: The list of models to compare. This can be one of the presets defined in ``pytorch_tabular.MODEL_SWEEP_PRESETS`` or a list of ``ModelConfig`` objects.\n",
    "\n",
    "There are three presets defined in ``pytorch_tabular.MODEL_SWEEP_PRESETS``:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">[</span><span style=\"color: #008000; text-decoration-color: #008000\">'lite'</span>, <span style=\"color: #008000; text-decoration-color: #008000\">'standard'</span>, <span style=\"color: #008000; text-decoration-color: #008000\">'full'</span>, <span style=\"color: #008000; text-decoration-color: #008000\">'high_memory'</span><span style=\"font-weight: bold\">]</span>\n",
       "</pre>\n"
      ],
      "text/plain": [
       "\u001b[1m[\u001b[0m\u001b[32m'lite'\u001b[0m, \u001b[32m'standard'\u001b[0m, \u001b[32m'full'\u001b[0m, \u001b[32m'high_memory'\u001b[0m\u001b[1m]\u001b[0m\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "from pytorch_tabular import MODEL_SWEEP_PRESETS\n",
    "\n",
    "print(list(MODEL_SWEEP_PRESETS.keys()))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1. `lite` : This is a set of models that are fast to train. This is the default value for ``model_list``. The models and its hyperparameters parameters are carefully chosen such that they have comparable # of parameters, trains relatively faster, and gives good results. The models included are:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">(</span>\n",
       "<span style=\"color: #7fbf7f; text-decoration-color: #7fbf7f\">│   </span><span style=\"font-weight: bold\">(</span><span style=\"color: #008000; text-decoration-color: #008000\">'CategoryEmbeddingModelConfig'</span>, <span style=\"font-weight: bold\">{</span><span style=\"color: #008000; text-decoration-color: #008000\">'layers'</span>: <span style=\"color: #008000; text-decoration-color: #008000\">'256-128-64'</span><span style=\"font-weight: bold\">})</span>,\n",
       "<span style=\"color: #7fbf7f; text-decoration-color: #7fbf7f\">│   </span><span style=\"font-weight: bold\">(</span><span style=\"color: #008000; text-decoration-color: #008000\">'GANDALFConfig'</span>, <span style=\"font-weight: bold\">{</span><span style=\"color: #008000; text-decoration-color: #008000\">'gflu_stages'</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">6</span><span style=\"font-weight: bold\">})</span>,\n",
       "<span style=\"color: #7fbf7f; text-decoration-color: #7fbf7f\">│   </span><span style=\"font-weight: bold\">(</span><span style=\"color: #008000; text-decoration-color: #008000\">'TabNetModelConfig'</span>, <span style=\"font-weight: bold\">{</span><span style=\"color: #008000; text-decoration-color: #008000\">'n_d'</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">32</span>, <span style=\"color: #008000; text-decoration-color: #008000\">'n_a'</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">32</span>, <span style=\"color: #008000; text-decoration-color: #008000\">'n_steps'</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">3</span>, <span style=\"color: #008000; text-decoration-color: #008000\">'gamma'</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">1.5</span>, <span style=\"color: #008000; text-decoration-color: #008000\">'n_independent'</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">1</span>, <span style=\"color: #008000; text-decoration-color: #008000\">'n_shared'</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">2</span><span style=\"font-weight: bold\">})</span>\n",
       "<span style=\"font-weight: bold\">)</span>\n",
       "</pre>\n"
      ],
      "text/plain": [
       "\u001b[1m(\u001b[0m\n",
       "\u001b[2;32m│   \u001b[0m\u001b[1m(\u001b[0m\u001b[32m'CategoryEmbeddingModelConfig'\u001b[0m, \u001b[1m{\u001b[0m\u001b[32m'layers'\u001b[0m: \u001b[32m'256-128-64'\u001b[0m\u001b[1m}\u001b[0m\u001b[1m)\u001b[0m,\n",
       "\u001b[2;32m│   \u001b[0m\u001b[1m(\u001b[0m\u001b[32m'GANDALFConfig'\u001b[0m, \u001b[1m{\u001b[0m\u001b[32m'gflu_stages'\u001b[0m: \u001b[1;36m6\u001b[0m\u001b[1m}\u001b[0m\u001b[1m)\u001b[0m,\n",
       "\u001b[2;32m│   \u001b[0m\u001b[1m(\u001b[0m\u001b[32m'TabNetModelConfig'\u001b[0m, \u001b[1m{\u001b[0m\u001b[32m'n_d'\u001b[0m: \u001b[1;36m32\u001b[0m, \u001b[32m'n_a'\u001b[0m: \u001b[1;36m32\u001b[0m, \u001b[32m'n_steps'\u001b[0m: \u001b[1;36m3\u001b[0m, \u001b[32m'gamma'\u001b[0m: \u001b[1;36m1.5\u001b[0m, \u001b[32m'n_independent'\u001b[0m: \u001b[1;36m1\u001b[0m, \u001b[32m'n_shared'\u001b[0m: \u001b[1;36m2\u001b[0m\u001b[1m}\u001b[0m\u001b[1m)\u001b[0m\n",
       "\u001b[1m)\u001b[0m\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "pprint(MODEL_SWEEP_PRESETS[\"lite\"])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "2. `standard` : This is a set of models that have less than or around a 100 thousand learnable parameters so that it's still not high memory requirement. All the models from the `lite` presets are also included. The models and its hyperparameters parameters are carefully chosen such that they have comparable # of parameters, and gives good results. The models included are:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">(</span>\n",
       "<span style=\"color: #7fbf7f; text-decoration-color: #7fbf7f\">│   </span><span style=\"font-weight: bold\">(</span><span style=\"color: #008000; text-decoration-color: #008000\">'CategoryEmbeddingModelConfig'</span>, <span style=\"font-weight: bold\">{</span><span style=\"color: #008000; text-decoration-color: #008000\">'layers'</span>: <span style=\"color: #008000; text-decoration-color: #008000\">'256-128-64'</span><span style=\"font-weight: bold\">})</span>,\n",
       "<span style=\"color: #7fbf7f; text-decoration-color: #7fbf7f\">│   </span><span style=\"font-weight: bold\">(</span><span style=\"color: #008000; text-decoration-color: #008000\">'CategoryEmbeddingModelConfig'</span>, <span style=\"font-weight: bold\">{</span><span style=\"color: #008000; text-decoration-color: #008000\">'layers'</span>: <span style=\"color: #008000; text-decoration-color: #008000\">'512-128-64'</span><span style=\"font-weight: bold\">})</span>,\n",
       "<span style=\"color: #7fbf7f; text-decoration-color: #7fbf7f\">│   </span><span style=\"font-weight: bold\">(</span><span style=\"color: #008000; text-decoration-color: #008000\">'GANDALFConfig'</span>, <span style=\"font-weight: bold\">{</span><span style=\"color: #008000; text-decoration-color: #008000\">'gflu_stages'</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">6</span><span style=\"font-weight: bold\">})</span>,\n",
       "<span style=\"color: #7fbf7f; text-decoration-color: #7fbf7f\">│   </span><span style=\"font-weight: bold\">(</span><span style=\"color: #008000; text-decoration-color: #008000\">'GANDALFConfig'</span>, <span style=\"font-weight: bold\">{</span><span style=\"color: #008000; text-decoration-color: #008000\">'gflu_stages'</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">15</span><span style=\"font-weight: bold\">})</span>,\n",
       "<span style=\"color: #7fbf7f; text-decoration-color: #7fbf7f\">│   </span><span style=\"font-weight: bold\">(</span><span style=\"color: #008000; text-decoration-color: #008000\">'TabNetModelConfig'</span>, <span style=\"font-weight: bold\">{</span><span style=\"color: #008000; text-decoration-color: #008000\">'n_d'</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">32</span>, <span style=\"color: #008000; text-decoration-color: #008000\">'n_a'</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">32</span>, <span style=\"color: #008000; text-decoration-color: #008000\">'n_steps'</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">3</span>, <span style=\"color: #008000; text-decoration-color: #008000\">'gamma'</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">1.5</span>, <span style=\"color: #008000; text-decoration-color: #008000\">'n_independent'</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">1</span>, <span style=\"color: #008000; text-decoration-color: #008000\">'n_shared'</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">2</span><span style=\"font-weight: bold\">})</span>,\n",
       "<span style=\"color: #7fbf7f; text-decoration-color: #7fbf7f\">│   </span><span style=\"font-weight: bold\">(</span><span style=\"color: #008000; text-decoration-color: #008000\">'TabNetModelConfig'</span>, <span style=\"font-weight: bold\">{</span><span style=\"color: #008000; text-decoration-color: #008000\">'n_d'</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">32</span>, <span style=\"color: #008000; text-decoration-color: #008000\">'n_a'</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">32</span>, <span style=\"color: #008000; text-decoration-color: #008000\">'n_steps'</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">5</span>, <span style=\"color: #008000; text-decoration-color: #008000\">'gamma'</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">1.5</span>, <span style=\"color: #008000; text-decoration-color: #008000\">'n_independent'</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">2</span>, <span style=\"color: #008000; text-decoration-color: #008000\">'n_shared'</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">3</span><span style=\"font-weight: bold\">})</span>,\n",
       "<span style=\"color: #7fbf7f; text-decoration-color: #7fbf7f\">│   </span><span style=\"font-weight: bold\">(</span><span style=\"color: #008000; text-decoration-color: #008000\">'FTTransformerConfig'</span>, <span style=\"font-weight: bold\">{</span><span style=\"color: #008000; text-decoration-color: #008000\">'num_heads'</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">4</span>, <span style=\"color: #008000; text-decoration-color: #008000\">'num_attn_blocks'</span>: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">4</span><span style=\"font-weight: bold\">})</span>\n",
       "<span style=\"font-weight: bold\">)</span>\n",
       "</pre>\n"
      ],
      "text/plain": [
       "\u001b[1m(\u001b[0m\n",
       "\u001b[2;32m│   \u001b[0m\u001b[1m(\u001b[0m\u001b[32m'CategoryEmbeddingModelConfig'\u001b[0m, \u001b[1m{\u001b[0m\u001b[32m'layers'\u001b[0m: \u001b[32m'256-128-64'\u001b[0m\u001b[1m}\u001b[0m\u001b[1m)\u001b[0m,\n",
       "\u001b[2;32m│   \u001b[0m\u001b[1m(\u001b[0m\u001b[32m'CategoryEmbeddingModelConfig'\u001b[0m, \u001b[1m{\u001b[0m\u001b[32m'layers'\u001b[0m: \u001b[32m'512-128-64'\u001b[0m\u001b[1m}\u001b[0m\u001b[1m)\u001b[0m,\n",
       "\u001b[2;32m│   \u001b[0m\u001b[1m(\u001b[0m\u001b[32m'GANDALFConfig'\u001b[0m, \u001b[1m{\u001b[0m\u001b[32m'gflu_stages'\u001b[0m: \u001b[1;36m6\u001b[0m\u001b[1m}\u001b[0m\u001b[1m)\u001b[0m,\n",
       "\u001b[2;32m│   \u001b[0m\u001b[1m(\u001b[0m\u001b[32m'GANDALFConfig'\u001b[0m, \u001b[1m{\u001b[0m\u001b[32m'gflu_stages'\u001b[0m: \u001b[1;36m15\u001b[0m\u001b[1m}\u001b[0m\u001b[1m)\u001b[0m,\n",
       "\u001b[2;32m│   \u001b[0m\u001b[1m(\u001b[0m\u001b[32m'TabNetModelConfig'\u001b[0m, \u001b[1m{\u001b[0m\u001b[32m'n_d'\u001b[0m: \u001b[1;36m32\u001b[0m, \u001b[32m'n_a'\u001b[0m: \u001b[1;36m32\u001b[0m, \u001b[32m'n_steps'\u001b[0m: \u001b[1;36m3\u001b[0m, \u001b[32m'gamma'\u001b[0m: \u001b[1;36m1.5\u001b[0m, \u001b[32m'n_independent'\u001b[0m: \u001b[1;36m1\u001b[0m, \u001b[32m'n_shared'\u001b[0m: \u001b[1;36m2\u001b[0m\u001b[1m}\u001b[0m\u001b[1m)\u001b[0m,\n",
       "\u001b[2;32m│   \u001b[0m\u001b[1m(\u001b[0m\u001b[32m'TabNetModelConfig'\u001b[0m, \u001b[1m{\u001b[0m\u001b[32m'n_d'\u001b[0m: \u001b[1;36m32\u001b[0m, \u001b[32m'n_a'\u001b[0m: \u001b[1;36m32\u001b[0m, \u001b[32m'n_steps'\u001b[0m: \u001b[1;36m5\u001b[0m, \u001b[32m'gamma'\u001b[0m: \u001b[1;36m1.5\u001b[0m, \u001b[32m'n_independent'\u001b[0m: \u001b[1;36m2\u001b[0m, \u001b[32m'n_shared'\u001b[0m: \u001b[1;36m3\u001b[0m\u001b[1m}\u001b[0m\u001b[1m)\u001b[0m,\n",
       "\u001b[2;32m│   \u001b[0m\u001b[1m(\u001b[0m\u001b[32m'FTTransformerConfig'\u001b[0m, \u001b[1m{\u001b[0m\u001b[32m'num_heads'\u001b[0m: \u001b[1;36m4\u001b[0m, \u001b[32m'num_attn_blocks'\u001b[0m: \u001b[1;36m4\u001b[0m\u001b[1m}\u001b[0m\u001b[1m)\u001b[0m\n",
       "\u001b[1m)\u001b[0m\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "pprint(MODEL_SWEEP_PRESETS[\"standard\"])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "3. `full`: This is a full sweep of the models, with default hyperparameters, implemented in PyTorch Tabular, except for Mixed Density Networks (which is a specialized model for probabilistic regression) and NODE (which is a model which require high compute and memory). The models included are: "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">[</span>\n",
       "<span style=\"color: #7fbf7f; text-decoration-color: #7fbf7f\">│   </span><span style=\"color: #008000; text-decoration-color: #008000\">'AutoIntConfig'</span>,\n",
       "<span style=\"color: #7fbf7f; text-decoration-color: #7fbf7f\">│   </span><span style=\"color: #008000; text-decoration-color: #008000\">'CategoryEmbeddingModelConfig'</span>,\n",
       "<span style=\"color: #7fbf7f; text-decoration-color: #7fbf7f\">│   </span><span style=\"color: #008000; text-decoration-color: #008000\">'DANetConfig'</span>,\n",
       "<span style=\"color: #7fbf7f; text-decoration-color: #7fbf7f\">│   </span><span style=\"color: #008000; text-decoration-color: #008000\">'FTTransformerConfig'</span>,\n",
       "<span style=\"color: #7fbf7f; text-decoration-color: #7fbf7f\">│   </span><span style=\"color: #008000; text-decoration-color: #008000\">'GANDALFConfig'</span>,\n",
       "<span style=\"color: #7fbf7f; text-decoration-color: #7fbf7f\">│   </span><span style=\"color: #008000; text-decoration-color: #008000\">'GatedAdditiveTreeEnsembleConfig'</span>,\n",
       "<span style=\"color: #7fbf7f; text-decoration-color: #7fbf7f\">│   </span><span style=\"color: #008000; text-decoration-color: #008000\">'TabNetModelConfig'</span>,\n",
       "<span style=\"color: #7fbf7f; text-decoration-color: #7fbf7f\">│   </span><span style=\"color: #008000; text-decoration-color: #008000\">'TabTransformerConfig'</span>\n",
       "<span style=\"font-weight: bold\">]</span>\n",
       "</pre>\n"
      ],
      "text/plain": [
       "\u001b[1m[\u001b[0m\n",
       "\u001b[2;32m│   \u001b[0m\u001b[32m'AutoIntConfig'\u001b[0m,\n",
       "\u001b[2;32m│   \u001b[0m\u001b[32m'CategoryEmbeddingModelConfig'\u001b[0m,\n",
       "\u001b[2;32m│   \u001b[0m\u001b[32m'DANetConfig'\u001b[0m,\n",
       "\u001b[2;32m│   \u001b[0m\u001b[32m'FTTransformerConfig'\u001b[0m,\n",
       "\u001b[2;32m│   \u001b[0m\u001b[32m'GANDALFConfig'\u001b[0m,\n",
       "\u001b[2;32m│   \u001b[0m\u001b[32m'GatedAdditiveTreeEnsembleConfig'\u001b[0m,\n",
       "\u001b[2;32m│   \u001b[0m\u001b[32m'TabNetModelConfig'\u001b[0m,\n",
       "\u001b[2;32m│   \u001b[0m\u001b[32m'TabTransformerConfig'\u001b[0m\n",
       "\u001b[1m]\u001b[0m\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "pprint(list(MODEL_SWEEP_PRESETS[\"full\"]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "4. `high_memory`: This is a full sweep of the models, with default hyperparameters, implemented in PyTorch Tabular, except for Mixed Density Networks (which is a specialized model for probabilistic regression). This option is only recommended if you have ample memory to hold the model and data in your CPU/GPU. The models included are: "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">[</span>\n",
       "<span style=\"color: #7fbf7f; text-decoration-color: #7fbf7f\">│   </span><span style=\"color: #008000; text-decoration-color: #008000\">'AutoIntConfig'</span>,\n",
       "<span style=\"color: #7fbf7f; text-decoration-color: #7fbf7f\">│   </span><span style=\"color: #008000; text-decoration-color: #008000\">'CategoryEmbeddingModelConfig'</span>,\n",
       "<span style=\"color: #7fbf7f; text-decoration-color: #7fbf7f\">│   </span><span style=\"color: #008000; text-decoration-color: #008000\">'DANetConfig'</span>,\n",
       "<span style=\"color: #7fbf7f; text-decoration-color: #7fbf7f\">│   </span><span style=\"color: #008000; text-decoration-color: #008000\">'FTTransformerConfig'</span>,\n",
       "<span style=\"color: #7fbf7f; text-decoration-color: #7fbf7f\">│   </span><span style=\"color: #008000; text-decoration-color: #008000\">'GANDALFConfig'</span>,\n",
       "<span style=\"color: #7fbf7f; text-decoration-color: #7fbf7f\">│   </span><span style=\"color: #008000; text-decoration-color: #008000\">'GatedAdditiveTreeEnsembleConfig'</span>,\n",
       "<span style=\"color: #7fbf7f; text-decoration-color: #7fbf7f\">│   </span><span style=\"color: #008000; text-decoration-color: #008000\">'NodeConfig'</span>,\n",
       "<span style=\"color: #7fbf7f; text-decoration-color: #7fbf7f\">│   </span><span style=\"color: #008000; text-decoration-color: #008000\">'TabNetModelConfig'</span>,\n",
       "<span style=\"color: #7fbf7f; text-decoration-color: #7fbf7f\">│   </span><span style=\"color: #008000; text-decoration-color: #008000\">'TabTransformerConfig'</span>\n",
       "<span style=\"font-weight: bold\">]</span>\n",
       "</pre>\n"
      ],
      "text/plain": [
       "\u001b[1m[\u001b[0m\n",
       "\u001b[2;32m│   \u001b[0m\u001b[32m'AutoIntConfig'\u001b[0m,\n",
       "\u001b[2;32m│   \u001b[0m\u001b[32m'CategoryEmbeddingModelConfig'\u001b[0m,\n",
       "\u001b[2;32m│   \u001b[0m\u001b[32m'DANetConfig'\u001b[0m,\n",
       "\u001b[2;32m│   \u001b[0m\u001b[32m'FTTransformerConfig'\u001b[0m,\n",
       "\u001b[2;32m│   \u001b[0m\u001b[32m'GANDALFConfig'\u001b[0m,\n",
       "\u001b[2;32m│   \u001b[0m\u001b[32m'GatedAdditiveTreeEnsembleConfig'\u001b[0m,\n",
       "\u001b[2;32m│   \u001b[0m\u001b[32m'NodeConfig'\u001b[0m,\n",
       "\u001b[2;32m│   \u001b[0m\u001b[32m'TabNetModelConfig'\u001b[0m,\n",
       "\u001b[2;32m│   \u001b[0m\u001b[32m'TabTransformerConfig'\u001b[0m\n",
       "\u001b[1m]\u001b[0m\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "pprint(list(MODEL_SWEEP_PRESETS[\"high_memory\"]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "- ``metrics, metrics_params, metrics_prob_input``: The metrics to use for evaluation. These parameters hold the same meaning as in the `ModelConfig`.\n",
    "- ``rank_metric``: This is the metric to use for ranking the models. This is a Tuple with the first element as the metric name and the second element is the direction (if it is `lower_the_better` or `hgher_the_better`). Defaults to ('loss', \"lower_is_better\").\n",
    "- ``return_best_model``: If True, will return the best model. Defaults to True.\n",
    "\n",
    "Now let's try and run the sweep on the Covertype dataset, using the `lite` preset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "fcec362e14dc4ded9df0c23dba90bbcb",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Output()"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "030f8b6520bb4d258983c598230fc666",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Finding best initial lr:   0%|          | 0/100 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "73a7308fb4a446e3a1f452047a2eae88",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Finding best initial lr:   0%|          | 0/100 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "b07e31394da245fc804186f1a1f71bdd",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Finding best initial lr:   0%|          | 0/100 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/html": [
       "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"></pre>\n"
      ],
      "text/plain": []
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/html": [
       "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n",
       "</pre>\n"
      ],
      "text/plain": [
       "\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "CPU times: user 2h 29min 42s, sys: 15.8 s, total: 2h 29min 58s\n",
      "Wall time: 16min 37s\n"
     ]
    }
   ],
   "source": [
    "%%time\n",
    "from pytorch_tabular import model_sweep\n",
    "import warnings\n",
    "\n",
    "# Filtering out the warnings\n",
    "with warnings.catch_warnings():\n",
    "    warnings.simplefilter(\"ignore\")\n",
    "    sweep_df, best_model = model_sweep(\n",
    "        task=\"classification\",  # One of \"classification\", \"regression\"\n",
    "        train=train,\n",
    "        test=test,\n",
    "        data_config=data_config,\n",
    "        optimizer_config=optimizer_config,\n",
    "        trainer_config=trainer_config,\n",
    "        model_list=\"lite\",\n",
    "        common_model_args=dict(head=\"LinearHead\", head_config=head_config),\n",
    "        metrics=[\"accuracy\", \"f1_score\"],\n",
    "        metrics_params=[{}, {\"average\": \"macro\"}],\n",
    "        metrics_prob_input=[False, True],\n",
    "        rank_metric=(\"accuracy\", \"higher_is_better\"),\n",
    "        progress_bar=True,\n",
    "        verbose=False,\n",
    "        suppress_lightning_logger=True,\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The output, `sweep_df` is a pandas dataframe with the following columns:\n",
    "- `model` : The name of the model\n",
    "- `# Params` : The number of trainable parameters in the model\n",
    "- `test_loss` : The loss on the test set\n",
    "- `test_<metric>` : The metric value on the test set\n",
    "- `time_taken` : The time taken to train the model\n",
    "- `epochs` : The number of epochs trained\n",
    "- `time_taken_per_epoch` : The time taken per epoch\n",
    "- `params` : The config used to train the model\n",
    "\n",
    "Let's check which model performed the best."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<style type=\"text/css\">\n",
       "#T_34b25_row0_col2, #T_34b25_row0_col3, #T_34b25_row0_col4, #T_34b25_row2_col5 {\n",
       "  background-color: #006837;\n",
       "  color: #f1f1f1;\n",
       "}\n",
       "#T_34b25_row0_col5 {\n",
       "  background-color: #96d268;\n",
       "  color: #000000;\n",
       "}\n",
       "#T_34b25_row1_col2, #T_34b25_row1_col4 {\n",
       "  background-color: #fed683;\n",
       "  color: #000000;\n",
       "}\n",
       "#T_34b25_row1_col3 {\n",
       "  background-color: #fed07e;\n",
       "  color: #000000;\n",
       "}\n",
       "#T_34b25_row1_col5, #T_34b25_row2_col2, #T_34b25_row2_col3, #T_34b25_row2_col4 {\n",
       "  background-color: #a50026;\n",
       "  color: #f1f1f1;\n",
       "}\n",
       "</style>\n",
       "<table id=\"T_34b25\">\n",
       "  <thead>\n",
       "    <tr>\n",
       "      <th class=\"blank level0\" >&nbsp;</th>\n",
       "      <th id=\"T_34b25_level0_col0\" class=\"col_heading level0 col0\" >model</th>\n",
       "      <th id=\"T_34b25_level0_col1\" class=\"col_heading level0 col1\" ># Params</th>\n",
       "      <th id=\"T_34b25_level0_col2\" class=\"col_heading level0 col2\" >test_loss</th>\n",
       "      <th id=\"T_34b25_level0_col3\" class=\"col_heading level0 col3\" >test_accuracy</th>\n",
       "      <th id=\"T_34b25_level0_col4\" class=\"col_heading level0 col4\" >test_f1_score</th>\n",
       "      <th id=\"T_34b25_level0_col5\" class=\"col_heading level0 col5\" >time_taken_per_epoch</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th id=\"T_34b25_level0_row0\" class=\"row_heading level0 row0\" >1</th>\n",
       "      <td id=\"T_34b25_row0_col0\" class=\"data row0 col0\" >GANDALFModel</td>\n",
       "      <td id=\"T_34b25_row0_col1\" class=\"data row0 col1\" >43 T</td>\n",
       "      <td id=\"T_34b25_row0_col2\" class=\"data row0 col2\" >0.189933</td>\n",
       "      <td id=\"T_34b25_row0_col3\" class=\"data row0 col3\" >0.924494</td>\n",
       "      <td id=\"T_34b25_row0_col4\" class=\"data row0 col4\" >0.924418</td>\n",
       "      <td id=\"T_34b25_row0_col5\" class=\"data row0 col5\" >10.985013</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th id=\"T_34b25_level0_row1\" class=\"row_heading level0 row1\" >2</th>\n",
       "      <td id=\"T_34b25_row1_col0\" class=\"data row1 col0\" >TabNetModel</td>\n",
       "      <td id=\"T_34b25_row1_col1\" class=\"data row1 col1\" >50 T</td>\n",
       "      <td id=\"T_34b25_row1_col2\" class=\"data row1 col2\" >0.259448</td>\n",
       "      <td id=\"T_34b25_row1_col3\" class=\"data row1 col3\" >0.895175</td>\n",
       "      <td id=\"T_34b25_row1_col4\" class=\"data row1 col4\" >0.894817</td>\n",
       "      <td id=\"T_34b25_row1_col5\" class=\"data row1 col5\" >19.809555</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th id=\"T_34b25_level0_row2\" class=\"row_heading level0 row2\" >0</th>\n",
       "      <td id=\"T_34b25_row2_col0\" class=\"data row2 col0\" >CategoryEmbeddingModel</td>\n",
       "      <td id=\"T_34b25_row2_col1\" class=\"data row2 col1\" >51 T</td>\n",
       "      <td id=\"T_34b25_row2_col2\" class=\"data row2 col2\" >0.302084</td>\n",
       "      <td id=\"T_34b25_row2_col3\" class=\"data row2 col3\" >0.878024</td>\n",
       "      <td id=\"T_34b25_row2_col4\" class=\"data row2 col4\" >0.876729</td>\n",
       "      <td id=\"T_34b25_row2_col5\" class=\"data row2 col5\" >7.634541</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n"
      ],
      "text/plain": [
       "<pandas.io.formats.style.Styler at 0x7fa52ef4f990>"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "sweep_df.drop(columns=[\"params\", \"time_taken\", \"epochs\"]).style.background_gradient(\n",
    "    subset=[\"test_accuracy\", \"test_f1_score\"], cmap=\"RdYlGn\"\n",
    ").background_gradient(subset=[\"time_taken_per_epoch\", \"test_loss\"], cmap=\"RdYlGn_r\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We have trained three fast models on the dataset in ~15 minutes on a CPU, which is pretty fast. The GANDALF model performed the best in terms of accuracy, loss, and f1 score, with a per-epoch training time comparable to that of a regular MLP. A natural next step would be to tune it further and find the best hyperparameters.\n",
    "\n",
    "Or, if you have more time, access to a decent-sized GPU, and want to try out more models, you can use the `standard` preset. Even on a CPU it should only run for a couple of hours, and it will give you a good idea of how the different models perform.\n",
    "\n",
    "Let's try running the `standard` preset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "d0b427e79d674397bf91e4d5037e800d",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Output()"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "f11cc6d5f7da4d87b8d4a58f201ba729",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Finding best initial lr:   0%|          | 0/100 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "16aa477e002e4fd1a2c52f5c4dc2ff6d",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Finding best initial lr:   0%|          | 0/100 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "6b23545e74e64a0fadac6c31dcf071b9",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Finding best initial lr:   0%|          | 0/100 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "e4ed0c75c536464fa14da1567483fe4c",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Finding best initial lr:   0%|          | 0/100 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "cf0f10fd89fc4c228771d2a9fa1270d4",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Finding best initial lr:   0%|          | 0/100 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "5d0351014ff043a7a12f62d84acdc340",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Finding best initial lr:   0%|          | 0/100 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "9f227054e32e47a19bd02834562e6ae5",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Finding best initial lr:   0%|          | 0/100 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/html": [
       "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"></pre>\n"
      ],
      "text/plain": []
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/html": [
       "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n",
       "</pre>\n"
      ],
      "text/plain": [
       "\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "CPU times: user 10h 11min 4s, sys: 2min 16s, total: 10h 13min 20s\n",
      "Wall time: 1h 6min 18s\n"
     ]
    }
   ],
   "source": [
    "%%time\n",
    "# Filtering out the warnings\n",
    "with warnings.catch_warnings():\n",
    "    warnings.simplefilter(\"ignore\")\n",
    "    sweep_df, best_model = model_sweep(\n",
    "        task=\"classification\",  # One of \"classification\", \"regression\"\n",
    "        train=train,\n",
    "        test=test,\n",
    "        data_config=data_config,\n",
    "        optimizer_config=optimizer_config,\n",
    "        trainer_config=trainer_config,\n",
    "        model_list=\"standard\",\n",
    "        common_model_args=dict(head=\"LinearHead\", head_config=head_config),\n",
    "        metrics=[\"accuracy\", \"f1_score\"],\n",
    "        metrics_params=[{}, {\"average\": \"macro\"}],\n",
    "        metrics_prob_input=[False, True],\n",
    "        rank_metric=(\"accuracy\", \"higher_is_better\"),\n",
    "        progress_bar=True,\n",
    "        verbose=False,\n",
    "        suppress_lightning_logger=True,\n",
    "    )"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<style type=\"text/css\">\n",
       "#T_541e0_row0_col2, #T_541e0_row0_col3, #T_541e0_row0_col4, #T_541e0_row5_col5 {\n",
       "  background-color: #006837;\n",
       "  color: #f1f1f1;\n",
       "}\n",
       "#T_541e0_row0_col5 {\n",
       "  background-color: #39a758;\n",
       "  color: #f1f1f1;\n",
       "}\n",
       "#T_541e0_row1_col2 {\n",
       "  background-color: #5db961;\n",
       "  color: #f1f1f1;\n",
       "}\n",
       "#T_541e0_row1_col3, #T_541e0_row1_col4 {\n",
       "  background-color: #4eb15d;\n",
       "  color: #f1f1f1;\n",
       "}\n",
       "#T_541e0_row1_col5 {\n",
       "  background-color: #05713c;\n",
       "  color: #f1f1f1;\n",
       "}\n",
       "#T_541e0_row2_col2 {\n",
       "  background-color: #70c164;\n",
       "  color: #000000;\n",
       "}\n",
       "#T_541e0_row2_col3 {\n",
       "  background-color: #69be63;\n",
       "  color: #f1f1f1;\n",
       "}\n",
       "#T_541e0_row2_col4 {\n",
       "  background-color: #66bd63;\n",
       "  color: #f1f1f1;\n",
       "}\n",
       "#T_541e0_row2_col5, #T_541e0_row6_col2, #T_541e0_row6_col3, #T_541e0_row6_col4 {\n",
       "  background-color: #a50026;\n",
       "  color: #f1f1f1;\n",
       "}\n",
       "#T_541e0_row3_col2 {\n",
       "  background-color: #87cb67;\n",
       "  color: #000000;\n",
       "}\n",
       "#T_541e0_row3_col3 {\n",
       "  background-color: #73c264;\n",
       "  color: #000000;\n",
       "}\n",
       "#T_541e0_row3_col4 {\n",
       "  background-color: #6ec064;\n",
       "  color: #000000;\n",
       "}\n",
       "#T_541e0_row3_col5 {\n",
       "  background-color: #0d8044;\n",
       "  color: #f1f1f1;\n",
       "}\n",
       "#T_541e0_row4_col2 {\n",
       "  background-color: #8ecf67;\n",
       "  color: #000000;\n",
       "}\n",
       "#T_541e0_row4_col3 {\n",
       "  background-color: #7fc866;\n",
       "  color: #000000;\n",
       "}\n",
       "#T_541e0_row4_col4 {\n",
       "  background-color: #7dc765;\n",
       "  color: #000000;\n",
       "}\n",
       "#T_541e0_row4_col5 {\n",
       "  background-color: #60ba62;\n",
       "  color: #f1f1f1;\n",
       "}\n",
       "#T_541e0_row5_col2 {\n",
       "  background-color: #93d168;\n",
       "  color: #000000;\n",
       "}\n",
       "#T_541e0_row5_col3 {\n",
       "  background-color: #82c966;\n",
       "  color: #000000;\n",
       "}\n",
       "#T_541e0_row5_col4 {\n",
       "  background-color: #7ac665;\n",
       "  color: #000000;\n",
       "}\n",
       "#T_541e0_row6_col5 {\n",
       "  background-color: #ebf7a3;\n",
       "  color: #000000;\n",
       "}\n",
       "</style>\n",
       "<table id=\"T_541e0\">\n",
       "  <thead>\n",
       "    <tr>\n",
       "      <th class=\"blank level0\" >&nbsp;</th>\n",
       "      <th id=\"T_541e0_level0_col0\" class=\"col_heading level0 col0\" >model</th>\n",
       "      <th id=\"T_541e0_level0_col1\" class=\"col_heading level0 col1\" ># Params</th>\n",
       "      <th id=\"T_541e0_level0_col2\" class=\"col_heading level0 col2\" >test_loss</th>\n",
       "      <th id=\"T_541e0_level0_col3\" class=\"col_heading level0 col3\" >test_accuracy</th>\n",
       "      <th id=\"T_541e0_level0_col4\" class=\"col_heading level0 col4\" >test_f1_score</th>\n",
       "      <th id=\"T_541e0_level0_col5\" class=\"col_heading level0 col5\" >time_taken_per_epoch</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th id=\"T_541e0_level0_row0\" class=\"row_heading level0 row0\" >3</th>\n",
       "      <td id=\"T_541e0_row0_col0\" class=\"data row0 col0\" >GANDALFModel</td>\n",
       "      <td id=\"T_541e0_row0_col1\" class=\"data row0 col1\" >107 T</td>\n",
       "      <td id=\"T_541e0_row0_col2\" class=\"data row0 col2\" >0.163602</td>\n",
       "      <td id=\"T_541e0_row0_col3\" class=\"data row0 col3\" >0.935071</td>\n",
       "      <td id=\"T_541e0_row0_col4\" class=\"data row0 col4\" >0.935061</td>\n",
       "      <td id=\"T_541e0_row0_col5\" class=\"data row0 col5\" >15.870558</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th id=\"T_541e0_level0_row1\" class=\"row_heading level0 row1\" >1</th>\n",
       "      <td id=\"T_541e0_row1_col0\" class=\"data row1 col0\" >CategoryEmbeddingModel</td>\n",
       "      <td id=\"T_541e0_row1_col1\" class=\"data row1 col1\" >93 T</td>\n",
       "      <td id=\"T_541e0_row1_col2\" class=\"data row1 col2\" >0.233573</td>\n",
       "      <td id=\"T_541e0_row1_col3\" class=\"data row1 col3\" >0.906560</td>\n",
       "      <td id=\"T_541e0_row1_col4\" class=\"data row1 col4\" >0.905311</td>\n",
       "      <td id=\"T_541e0_row1_col5\" class=\"data row1 col5\" >9.128509</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th id=\"T_541e0_level0_row2\" class=\"row_heading level0 row2\" >6</th>\n",
       "      <td id=\"T_541e0_row2_col0\" class=\"data row2 col0\" >FTTransformerModel</td>\n",
       "      <td id=\"T_541e0_row2_col1\" class=\"data row2 col1\" >117 T</td>\n",
       "      <td id=\"T_541e0_row2_col2\" class=\"data row2 col2\" >0.243499</td>\n",
       "      <td id=\"T_541e0_row2_col3\" class=\"data row2 col3\" >0.900330</td>\n",
       "      <td id=\"T_541e0_row2_col4\" class=\"data row2 col4\" >0.900065</td>\n",
       "      <td id=\"T_541e0_row2_col5\" class=\"data row2 col5\" >63.771070</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th id=\"T_541e0_level0_row3\" class=\"row_heading level0 row3\" >2</th>\n",
       "      <td id=\"T_541e0_row3_col0\" class=\"data row3 col0\" >GANDALFModel</td>\n",
       "      <td id=\"T_541e0_row3_col1\" class=\"data row3 col1\" >43 T</td>\n",
       "      <td id=\"T_541e0_row3_col2\" class=\"data row3 col2\" >0.257583</td>\n",
       "      <td id=\"T_541e0_row3_col3\" class=\"data row3 col3\" >0.898075</td>\n",
       "      <td id=\"T_541e0_row3_col4\" class=\"data row3 col4\" >0.897640</td>\n",
       "      <td id=\"T_541e0_row3_col5\" class=\"data row3 col5\" >10.899241</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th id=\"T_541e0_level0_row4\" class=\"row_heading level0 row4\" >4</th>\n",
       "      <td id=\"T_541e0_row4_col0\" class=\"data row4 col0\" >TabNetModel</td>\n",
       "      <td id=\"T_541e0_row4_col1\" class=\"data row4 col1\" >50 T</td>\n",
       "      <td id=\"T_541e0_row4_col2\" class=\"data row4 col2\" >0.260693</td>\n",
       "      <td id=\"T_541e0_row4_col3\" class=\"data row4 col3\" >0.894461</td>\n",
       "      <td id=\"T_541e0_row4_col4\" class=\"data row4 col4\" >0.894012</td>\n",
       "      <td id=\"T_541e0_row4_col5\" class=\"data row4 col5\" >18.629878</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th id=\"T_541e0_level0_row5\" class=\"row_heading level0 row5\" >0</th>\n",
       "      <td id=\"T_541e0_row5_col0\" class=\"data row5 col0\" >CategoryEmbeddingModel</td>\n",
       "      <td id=\"T_541e0_row5_col1\" class=\"data row5 col1\" >51 T</td>\n",
       "      <td id=\"T_541e0_row5_col2\" class=\"data row5 col2\" >0.263826</td>\n",
       "      <td id=\"T_541e0_row5_col3\" class=\"data row5 col3\" >0.893875</td>\n",
       "      <td id=\"T_541e0_row5_col4\" class=\"data row5 col4\" >0.894207</td>\n",
       "      <td id=\"T_541e0_row5_col5\" class=\"data row5 col5\" >7.868230</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th id=\"T_541e0_level0_row6\" class=\"row_heading level0 row6\" >5</th>\n",
       "      <td id=\"T_541e0_row6_col0\" class=\"data row6 col0\" >TabNetModel</td>\n",
       "      <td id=\"T_541e0_row6_col1\" class=\"data row6 col1\" >129 T</td>\n",
       "      <td id=\"T_541e0_row6_col2\" class=\"data row6 col2\" >0.534261</td>\n",
       "      <td id=\"T_541e0_row6_col3\" class=\"data row6 col3\" >0.766813</td>\n",
       "      <td id=\"T_541e0_row6_col4\" class=\"data row6 col4\" >0.760403</td>\n",
       "      <td id=\"T_541e0_row6_col5\" class=\"data row6 col5\" >32.926586</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n"
      ],
      "text/plain": [
       "<pandas.io.formats.style.Styler at 0x7fdae4703290>"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "sweep_df.drop(columns=[\"params\", \"time_taken\", \"epochs\"]).style.background_gradient(\n",
    "    subset=[\"test_accuracy\", \"test_f1_score\"], cmap=\"RdYlGn\"\n",
    ").background_gradient(subset=[\"time_taken_per_epoch\", \"test_loss\"], cmap=\"RdYlGn_r\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The larger GANDALF model performed the best in terms of accuracy, loss, and f1 score. Although its training time is slightly higher than that of the comparable MLP, it is still pretty fast. \n",
    "\n",
    "\n",
    "Now, apart from using the presets, you can also pass a list of `ModelConfig` objects. Let's run a sweep with a custom list of `ModelConfig` objects."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "bd94b8b6250f4cfa9e11b26934118517",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Output()"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "ed1cbf48841b4fb091c36b1d27c989b4",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Finding best initial lr:   0%|          | 0/100 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "7ca80356e1274986a88af460782fba71",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Finding best initial lr:   0%|          | 0/100 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/html": [
       "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"></pre>\n"
      ],
      "text/plain": []
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/html": [
       "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">\n",
       "</pre>\n"
      ],
      "text/plain": [
       "\n"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "from pytorch_tabular.models import CategoryEmbeddingModelConfig, GANDALFConfig\n",
    "\n",
    "# Arguments shared by every model config in the sweep\n",
    "common_params = {\n",
    "    \"task\": \"classification\",\n",
    "    \"head\": \"LinearHead\",\n",
    "    \"head_config\": head_config,\n",
    "}\n",
    "model_list = [\n",
    "    CategoryEmbeddingModelConfig(layers=\"1024-512-256\", **common_params),\n",
    "    GANDALFConfig(gflu_stages=2, **common_params),\n",
    "    GANDALFConfig(gflu_stages=6, learnable_sparsity=False, **common_params),\n",
    "]\n",
    "\n",
    "# Filtering out the warnings\n",
    "with warnings.catch_warnings():\n",
    "    warnings.simplefilter(\"ignore\")\n",
    "    sweep_df, best_model = model_sweep(\n",
    "        task=\"classification\",  # One of \"classification\", \"regression\"\n",
    "        train=train,\n",
    "        test=test,\n",
    "        data_config=data_config,\n",
    "        optimizer_config=optimizer_config,\n",
    "        trainer_config=trainer_config,\n",
    "        model_list=model_list,\n",
    "        metrics=[\"accuracy\", \"f1_score\"],\n",
    "        metrics_params=[{}, {\"average\": \"macro\"}],\n",
    "        metrics_prob_input=[False, True],\n",
    "        rank_metric=(\"accuracy\", \"higher_is_better\"),\n",
    "        progress_bar=True,\n",
    "        verbose=False,\n",
    "        suppress_lightning_logger=True,\n",
    "    )"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<style type=\"text/css\">\n",
       "#T_3fffc_row0_col2, #T_3fffc_row0_col3, #T_3fffc_row1_col4, #T_3fffc_row1_col5 {\n",
       "  background-color: #006837;\n",
       "  color: #f1f1f1;\n",
       "}\n",
       "#T_3fffc_row0_col4 {\n",
       "  background-color: #f2faae;\n",
       "  color: #000000;\n",
       "}\n",
       "#T_3fffc_row0_col5, #T_3fffc_row2_col2, #T_3fffc_row2_col3, #T_3fffc_row2_col4 {\n",
       "  background-color: #a50026;\n",
       "  color: #f1f1f1;\n",
       "}\n",
       "#T_3fffc_row1_col2 {\n",
       "  background-color: #f88c51;\n",
       "  color: #f1f1f1;\n",
       "}\n",
       "#T_3fffc_row1_col3 {\n",
       "  background-color: #fff3ac;\n",
       "  color: #000000;\n",
       "}\n",
       "#T_3fffc_row2_col5 {\n",
       "  background-color: #daf08d;\n",
       "  color: #000000;\n",
       "}\n",
       "</style>\n",
       "<table id=\"T_3fffc\">\n",
       "  <thead>\n",
       "    <tr>\n",
       "      <th class=\"blank level0\" >&nbsp;</th>\n",
       "      <th id=\"T_3fffc_level0_col0\" class=\"col_heading level0 col0\" >model</th>\n",
       "      <th id=\"T_3fffc_level0_col1\" class=\"col_heading level0 col1\" ># Params</th>\n",
       "      <th id=\"T_3fffc_level0_col2\" class=\"col_heading level0 col2\" >test_loss</th>\n",
       "      <th id=\"T_3fffc_level0_col3\" class=\"col_heading level0 col3\" >test_accuracy</th>\n",
       "      <th id=\"T_3fffc_level0_col4\" class=\"col_heading level0 col4\" >test_f1_score</th>\n",
       "      <th id=\"T_3fffc_level0_col5\" class=\"col_heading level0 col5\" >time_taken_per_epoch</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th id=\"T_3fffc_level0_row0\" class=\"row_heading level0 row0\" >0</th>\n",
       "      <td id=\"T_3fffc_row0_col0\" class=\"data row0 col0\" >CategoryEmbeddingModel</td>\n",
       "      <td id=\"T_3fffc_row0_col1\" class=\"data row0 col1\" >694 T</td>\n",
       "      <td id=\"T_3fffc_row0_col2\" class=\"data row0 col2\" >0.276405</td>\n",
       "      <td id=\"T_3fffc_row0_col3\" class=\"data row0 col3\" >0.888075</td>\n",
       "      <td id=\"T_3fffc_row0_col4\" class=\"data row0 col4\" >0.795560</td>\n",
       "      <td id=\"T_3fffc_row0_col5\" class=\"data row0 col5\" >14.553613</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th id=\"T_3fffc_level0_row1\" class=\"row_heading level0 row1\" >1</th>\n",
       "      <td id=\"T_3fffc_row1_col0\" class=\"data row1 col0\" >GANDALFModel</td>\n",
       "      <td id=\"T_3fffc_row1_col1\" class=\"data row1 col1\" >15 T</td>\n",
       "      <td id=\"T_3fffc_row1_col2\" class=\"data row1 col2\" >0.284878</td>\n",
       "      <td id=\"T_3fffc_row1_col3\" class=\"data row1 col3\" >0.885967</td>\n",
       "      <td id=\"T_3fffc_row1_col4\" class=\"data row1 col4\" >0.797202</td>\n",
       "      <td id=\"T_3fffc_row1_col5\" class=\"data row1 col5\" >8.369561</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th id=\"T_3fffc_level0_row2\" class=\"row_heading level0 row2\" >2</th>\n",
       "      <td id=\"T_3fffc_row2_col0\" class=\"data row2 col0\" >GANDALFModel</td>\n",
       "      <td id=\"T_3fffc_row2_col1\" class=\"data row2 col1\" >43 T</td>\n",
       "      <td id=\"T_3fffc_row2_col2\" class=\"data row2 col2\" >0.287677</td>\n",
       "      <td id=\"T_3fffc_row2_col3\" class=\"data row2 col3\" >0.884142</td>\n",
       "      <td id=\"T_3fffc_row2_col4\" class=\"data row2 col4\" >0.793678</td>\n",
       "      <td id=\"T_3fffc_row2_col5\" class=\"data row2 col5\" >10.864214</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n"
      ],
      "text/plain": [
       "<pandas.io.formats.style.Styler at 0x7fdae457a390>"
      ]
     },
     "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "sweep_df.drop(columns=[\"params\", \"time_taken\", \"epochs\"]).style.background_gradient(\n",
    "    subset=[\"test_accuracy\", \"test_f1_score\"], cmap=\"RdYlGn\"\n",
    ").background_gradient(subset=[\"time_taken_per_epoch\", \"test_loss\"], cmap=\"RdYlGn_r\")"
   ]
  },
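  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "However you specify the sweep, `model_sweep` also returns the best model (per `rank_metric`) as `best_model`, already trained and ready to use. A sketch of what you might do next (the directory name below is just an example):\n",
    "\n",
    "```python\n",
    "pred_df = best_model.predict(test)  # predictions on the test DataFrame\n",
    "best_model.save_model(\"best_model_dir\")  # persist for later reuse\n",
    "```"
   ]
  },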
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Although we chose some random hyperparameters, we can see that the GANDALF model performed very close to the MLP, with a fraction of the parameters and a lower training time. \n",
    "\n",
    "<div style=\"background-color: #C8E6C9; padding: 10px; color: #1b7678\">\n",
    "<b>Congrats!</b> You have learned how to use Model Sweep in PyTorch Tabular to compare multiple models on a single dataset. This is a very useful first step when deciding which models to use for your problem.<br></br>\n",
    "\n",
    "\n",
    "Now try this on your own dataset. You can also try the `full` preset and see how it performs. <br></br>\n",
    "\n",
    "</div>"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.5"
  },
  "vscode": {
   "interpreter": {
    "hash": "ad8d5d2789703c7b1c2f7bfaada1cbd3aa0ac53e2e4e1cae5da195f5520da229"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
