{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "a079f613",
   "metadata": {},
   "source": [
    "# Tutorial 9: Neural Networks"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "6bb2c0aa",
   "metadata": {
    "papermill": {
     "duration": 0.032379,
     "end_time": "2021-06-22T20:10:29.835505",
     "exception": false,
     "start_time": "2021-06-22T20:10:29.803126",
     "status": "completed"
    },
    "tags": []
   },
   "source": [
    "<img src=\"../../imgs/lightautoml_logo_color.png\" alt=\"LightAutoML logo\" style=\"width:100%;\"/>"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "4ab03e9a",
   "metadata": {},
   "source": [
    "The official LightAutoML GitHub repository is [here](https://github.com/sb-ai-lab/LightAutoML)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "72cea480",
   "metadata": {},
   "source": [
    "\n",
    "In this tutorial you will learn how to:\n",
    "* train neural networks (nn) with LightAutoML on tabular data\n",
    "* customize model architecture and pipelines"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "26bb2f44",
   "metadata": {},
   "source": [
    "## 0. Prerequisites"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "bc8496d8",
   "metadata": {},
   "source": [
    "### 0.0 Install LightAutoML"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "58201c72",
   "metadata": {
    "_kg_hide-output": true,
    "execution": {
     "iopub.execute_input": "2021-06-22T20:10:29.980264Z",
     "iopub.status.busy": "2021-06-22T20:10:29.979511Z",
     "iopub.status.idle": "2021-06-22T20:10:52.955439Z",
     "shell.execute_reply": "2021-06-22T20:10:52.953955Z",
     "shell.execute_reply.started": "2021-06-22T19:06:24.534180Z"
    },
    "papermill": {
     "duration": 23.023261,
     "end_time": "2021-06-22T20:10:52.955691",
     "exception": false,
     "start_time": "2021-06-22T20:10:29.932430",
     "status": "completed"
    },
    "scrolled": true,
    "tags": []
   },
   "outputs": [],
   "source": [
    "# !pip install -U lightautoml[all]"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "6e1606cb",
   "metadata": {
    "papermill": {
     "duration": 0.066681,
     "end_time": "2021-06-22T20:10:53.090975",
     "exception": false,
     "start_time": "2021-06-22T20:10:53.024294",
     "status": "completed"
    },
    "tags": []
   },
   "source": [
    "### 0.1 Import libraries\n",
    "\n",
    "Here we will import the libraries we use in this kernel:\n",
    "- Standard python libraries for timing, working with the OS, etc.\n",
    "- Essential python DS libraries like numpy, pandas, scikit-learn and torch (the last one is used in the next cell)\n",
    "- LightAutoML modules: presets for AutoML, task and report generation module"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "2bea2ba9",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-06-22T20:10:53.233356Z",
     "iopub.status.busy": "2021-06-22T20:10:53.232675Z",
     "iopub.status.idle": "2021-06-22T20:11:01.486841Z",
     "shell.execute_reply": "2021-06-22T20:11:01.487566Z",
     "shell.execute_reply.started": "2021-06-22T19:06:43.597648Z"
    },
    "papermill": {
     "duration": 8.32949,
     "end_time": "2021-06-22T20:11:01.487788",
     "exception": false,
     "start_time": "2021-06-22T20:10:53.158298",
     "status": "completed"
    },
    "tags": []
   },
   "outputs": [],
   "source": [
    "# Standard python libraries\n",
    "import os\n",
    "\n",
    "# Essential DS libraries\n",
    "import optuna\n",
    "import requests\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "from sklearn.metrics import roc_auc_score\n",
    "from sklearn.model_selection import train_test_split\n",
    "import torch\n",
    "from copy import deepcopy as copy\n",
    "import torch.nn as nn\n",
    "from collections import OrderedDict\n",
    "\n",
    "# LightAutoML presets, task and report generation\n",
    "from lightautoml.automl.presets.tabular_presets import TabularAutoML\n",
    "from lightautoml.tasks import Task"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "486dff3d",
   "metadata": {
    "papermill": {
     "duration": 0.064234,
     "end_time": "2021-06-22T20:11:01.619010",
     "exception": false,
     "start_time": "2021-06-22T20:11:01.554776",
     "status": "completed"
    },
    "tags": []
   },
   "source": [
    "### 0.2 Constants\n",
    "\n",
    "Here we set up the constants used in this kernel:\n",
    "- `N_THREADS` - number of vCPUs for LightAutoML model creation\n",
    "- `N_FOLDS` - number of folds in LightAutoML inner CV\n",
    "- `RANDOM_STATE` - random seed for better reproducibility\n",
    "- `TEST_SIZE` - holdout data part size\n",
    "- `TIMEOUT` - time limit in seconds for the model to train\n",
    "- `TARGET_NAME` - target column name in the dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "64dfd5d0",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-06-22T20:11:01.758476Z",
     "iopub.status.busy": "2021-06-22T20:11:01.757403Z",
     "iopub.status.idle": "2021-06-22T20:11:01.760870Z",
     "shell.execute_reply": "2021-06-22T20:11:01.760168Z",
     "shell.execute_reply.started": "2021-06-22T19:06:51.523697Z"
    },
    "papermill": {
     "duration": 0.077787,
     "end_time": "2021-06-22T20:11:01.761030",
     "exception": false,
     "start_time": "2021-06-22T20:11:01.683243",
     "status": "completed"
    },
    "tags": []
   },
   "outputs": [],
   "source": [
    "N_THREADS = 4\n",
    "N_FOLDS = 5\n",
    "RANDOM_STATE = 42\n",
    "TEST_SIZE = 0.2\n",
    "TIMEOUT = 300\n",
    "TARGET_NAME = 'TARGET'\n",
    "\n",
    "np.random.seed(RANDOM_STATE)\n",
    "torch.set_num_threads(N_THREADS)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "62e42740",
   "metadata": {},
   "source": [
    "### 0.3 Data loading"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "b8c3218d",
   "metadata": {},
   "outputs": [],
   "source": [
    "DATASET_DIR = '../data/'\n",
    "DATASET_NAME = 'sampled_app_train.csv'\n",
    "DATASET_FULLNAME = os.path.join(DATASET_DIR, DATASET_NAME)\n",
    "DATASET_URL = 'https://raw.githubusercontent.com/sb-ai-lab/LightAutoML/master/examples/data/sampled_app_train.csv'\n",
    "\n",
    "if not os.path.exists(DATASET_FULLNAME):\n",
    "    os.makedirs(DATASET_DIR, exist_ok=True)\n",
    "\n",
    "    dataset = requests.get(DATASET_URL).text\n",
    "    with open(DATASET_FULLNAME, 'w') as output:\n",
    "        output.write(dataset)\n",
    "\n",
    "data = pd.read_csv(DATASET_FULLNAME)\n",
    "data.head()\n",
    "\n",
    "tr_data, te_data = train_test_split(\n",
    "    data, \n",
    "    test_size=TEST_SIZE, \n",
    "    stratify=data[TARGET_NAME], \n",
    "    random_state=RANDOM_STATE\n",
    ")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "541535c0",
   "metadata": {},
   "source": [
    "## 1. Available built-in models"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8e78b11e",
   "metadata": {},
   "source": [
    "To use a different model, pass its name in the `\"use_algos\"` list. Custom models inherited from the `torch.nn.Module` class are also supported. The parameters of each model are listed below."
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "eb1da404",
   "metadata": {},
   "source": [
    "### 1.1 MLP (`\"mlp\"`)\n",
    "- `hidden_size` - define hidden layer dimensions\n",
    "\n",
    "### 1.2 Dense Light (`\"denselight\"`)\n",
    "<img src=\"../../imgs/denselight.png\" style=\"width:25%;\"/>\n",
    "\n",
    "- `hidden_size` - define hidden layer dimensions\n",
    "\n",
    "### 1.3 Dense (`\"dense\"`)\n",
    "<img src=\"../../imgs/densenet.png\" style=\"width:60%;\"/>\n",
    "\n",
    "- `block_config` - set number of blocks and layers within each block\n",
    "- `compression` - portion of neurons to drop after `DenseBlock`\n",
    "- `growth_size` - output dim of every `DenseLayer`\n",
    "- `bn_factor` - size of intermediate fc is increased times this factor in layer\n",
    "\n",
    "### 1.4 Resnet (`\"resnet\"`)\n",
    "<img src=\"../../imgs/resnet.png\" style=\"width:50%;\"/>\n",
    "\n",
    "- `hid_factor` - size of intermediate fc is increased times this factor in layer\n",
    "\n",
    "### 1.5 SNN (`\"snn\"`)\n",
    "- `hidden_size` - define hidden layer dimensions\n",
    "\n",
    "### 1.6 NODE (`\"node\"`)\n",
    "<img src=\"../../imgs/node.png\" style=\"width:80%;\"/>\n",
    "\n",
    "### 1.7 AutoInt (`\"autoint\"`)\n",
    "<img src=\"../../imgs/autoint.png\" style=\"width:80%;\"/>\n",
    "\n",
    "### 1.8 FTTransformer (`\"fttransformer\"`)\n",
    "<img src=\"../../imgs/fttransformer.png\" style=\"width:80%;\"/>\n",
    "\n",
    "- `pooling` - Pooling used for the last step.\n",
    "- `n_out` - Output dimension, 1 for binary prediction.\n",
    "- `embedding_size` - Embeddings size.\n",
    "- `depth` - Number of Attention Blocks inside Transformer.\n",
    "- `heads` - Number of heads in Attention.\n",
    "- `attn_dropout` - Post-Attention dropout.\n",
    "- `ff_dropout` - Feed-Forward Dropout.\n",
    "- `dim_head` - Attention head dimension.\n",
    "- `return_attn` - Return attention scores or not.\n",
    "- `num_enc_layers` - Number of Transformer layers.\n",
    "- `device` - Device to compute on.\n"
   ]
  },
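  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "3a7e1c20",
   "metadata": {},
   "source": [
    "For illustration, a minimal sketch of selecting another built-in model and passing one of its model-specific params through `nn_params` (the param values here are illustrative, not tuned defaults; see `tabular_config.yml` for the exact formats):\n",
    "\n",
    "```python\n",
    "automl = TabularAutoML(\n",
    "    task=Task('binary'),\n",
    "    timeout=TIMEOUT,\n",
    "    cpu_limit=N_THREADS,\n",
    "    general_params={\"use_algos\": [[\"resnet\"]]},  # pick any name from the list above\n",
    "    nn_params={\"n_epochs\": 10, \"bs\": 512, \"hid_factor\": [2, 2]},  # hid_factor is ResNet-specific\n",
    "    reader_params={'n_jobs': N_THREADS, 'cv': N_FOLDS, 'random_state': RANDOM_STATE}\n",
    ")\n",
    "```"
   ]
  },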
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "7266e9d9",
   "metadata": {},
   "source": [
    "## 2. Example of usage\n",
    "### 2.1 Task definition"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "fc3bd7a7",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2021-06-22T20:11:23.005952Z",
     "iopub.status.busy": "2021-06-22T20:11:23.002234Z",
     "iopub.status.idle": "2021-06-22T20:11:23.009732Z",
     "shell.execute_reply": "2021-06-22T20:11:23.010398Z",
     "shell.execute_reply.started": "2021-06-22T19:07:08.656347Z"
    },
    "papermill": {
     "duration": 0.086442,
     "end_time": "2021-06-22T20:11:23.010643",
     "exception": false,
     "start_time": "2021-06-22T20:11:22.924201",
     "status": "completed"
    },
    "tags": []
   },
   "outputs": [],
   "source": [
    "task = Task('binary')\n",
    "roles = {\n",
    "    'target': TARGET_NAME,\n",
    "    'drop': ['SK_ID_CURR']\n",
    "}"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "6f8b1439",
   "metadata": {
    "papermill": {
     "duration": 0.074284,
     "end_time": "2021-06-22T20:11:23.582462",
     "exception": false,
     "start_time": "2021-06-22T20:11:23.508178",
     "status": "completed"
    },
    "tags": []
   },
   "source": [
    "### 2.2 LightAutoML model creation - TabularAutoML preset with neural network"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "56030975",
   "metadata": {
    "papermill": {
     "duration": 0.072649,
     "end_time": "2021-06-22T20:11:23.726154",
     "exception": false,
     "start_time": "2021-06-22T20:11:23.653505",
     "status": "completed"
    },
    "tags": []
   },
   "source": [
    "In the next cell we are going to create a LightAutoML model with the `TabularAutoML` class in just several lines. Let's discuss the params we can set up:\n",
    "- `task` - the type of the ML task (the only **must have** parameter)\n",
    "- `timeout` - time limit in seconds for the model to train\n",
    "- `cpu_limit` - vCPU count for the model to use\n",
    "- `nn_params` - network and training params, for example, `\"hidden_size\"`, `\"batch_size\"`, `\"lr\"`, etc.\n",
    "- `nn_pipeline_params` - data preprocessing params, which affect how data is fed to the model: use embeddings or target encoding for categorical columns, standard scaler or quantile transformer for numerical columns\n",
    "- `reader_params` - params of the Reader object inside the preset, which works on the first step of data preparation: automatic feature type inference, preliminary dropping of almost constant features, correct CV setup, etc."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "cf5d0510",
   "metadata": {},
   "outputs": [],
   "source": [
    "automl = TabularAutoML(\n",
    "    task = task, \n",
    "    timeout = TIMEOUT,\n",
    "    cpu_limit = N_THREADS,\n",
    "    general_params = {\"use_algos\": [[\"mlp\"]]}, # ['nn', 'mlp', 'dense', 'denselight', 'resnet', 'snn', 'node', 'autoint', 'fttransformer'] or custom torch model\n",
    "    nn_params = {\"n_epochs\": 10, \"bs\": 512, \"num_workers\": 0, \"path_to_save\": None, \"freeze_defaults\": True},\n",
    "    nn_pipeline_params = {\"use_qnt\": True, \"use_te\": False},\n",
    "    reader_params = {'n_jobs': N_THREADS, 'cv': N_FOLDS, 'random_state': RANDOM_STATE}\n",
    ")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "910ee822",
   "metadata": {},
   "source": [
    "### 2.3 AutoML training"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "45da4245",
   "metadata": {},
   "source": [
    "To run AutoML training, use the `fit_predict` method:\n",
    "\n",
    "- `train_data` - Dataset to train.\n",
    "- `roles` - Roles dict.\n",
    "- `verbose` - Controls the verbosity: the higher, the more messages.\n",
    "        <1  : messages are not displayed;\n",
    "        >=1 : the computation process for layers is displayed;\n",
    "        >=2 : the information about folds processing is also displayed;\n",
    "        >=3 : the hyperparameters optimization process is also displayed;\n",
    "        >=4 : the training process for every algorithm is displayed;\n",
    "\n",
    "Note: the out-of-fold prediction is calculated during training and returned by the `fit_predict` method."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "6ddc26e9",
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[14:56:04] Stdout logging level is INFO.\n",
      "[14:56:04] Copying TaskTimer may affect the parent PipelineTimer, so copy will create new unlimited TaskTimer\n",
      "[14:56:04] Task: binary\n",
      "\n",
      "[14:56:04] Start automl preset with listed constraints:\n",
      "[14:56:04] - time: 300.00 seconds\n",
      "[14:56:04] - CPU: 4 cores\n",
      "[14:56:04] - memory: 16 GB\n",
      "\n",
      "[14:56:04] \u001b[1mTrain data shape: (8000, 122)\u001b[0m\n",
      "\n",
      "[14:56:08] Layer \u001b[1m1\u001b[0m train process start. Time left 296.45 secs\n",
      "[14:56:08] Start fitting \u001b[1mLvl_0_Pipe_0_Mod_0_TorchNN_mlp_0\u001b[0m ...\n",
      "[14:56:15] Fitting \u001b[1mLvl_0_Pipe_0_Mod_0_TorchNN_mlp_0\u001b[0m finished. score = \u001b[1m0.6035621265821923\u001b[0m\n",
      "[14:56:15] \u001b[1mLvl_0_Pipe_0_Mod_0_TorchNN_mlp_0\u001b[0m fitting and predicting completed\n",
      "[14:56:15] Time left 289.10 secs\n",
      "\n",
      "[14:56:15] \u001b[1mLayer 1 training completed.\u001b[0m\n",
      "\n",
      "[14:56:15] \u001b[1mAutoml preset training completed in 10.90 seconds\u001b[0m\n",
      "\n",
      "[14:56:15] Model description:\n",
      "Final prediction for new objects (level 0) = \n",
      "\t 1.00000 * (5 averaged models Lvl_0_Pipe_0_Mod_0_TorchNN_mlp_0) \n",
      "\n",
      "CPU times: user 10.9 s, sys: 822 ms, total: 11.8 s\n",
      "Wall time: 10.9 s\n"
     ]
    }
   ],
   "source": [
    "%%time \n",
    "oof_pred = automl.fit_predict(tr_data, roles = roles, verbose = 1)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "b3b0a58a",
   "metadata": {
    "papermill": {
     "duration": 0.145098,
     "end_time": "2021-06-22T20:34:32.530768",
     "exception": false,
     "start_time": "2021-06-22T20:34:32.385670",
     "status": "completed"
    },
    "tags": []
   },
   "source": [
    "### 2.4 Prediction on holdout and model evaluation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "35e2b6e1",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Prediction for te_data:\n",
      "array([[0.09815434],\n",
      "       [0.08660936],\n",
      "       [0.060364  ],\n",
      "       ...,\n",
      "       [0.09103375],\n",
      "       [0.05593849],\n",
      "       [0.09817966]], dtype=float32)\n",
      "Shape = (2000, 1)\n",
      "CPU times: user 1.39 s, sys: 59.4 ms, total: 1.45 s\n",
      "Wall time: 1.35 s\n"
     ]
    }
   ],
   "source": [
    "%%time\n",
    "\n",
    "te_pred = automl.predict(te_data)\n",
    "print(f'Prediction for te_data:\\n{te_pred}\\nShape = {te_pred.shape}')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "5f93f9d0",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "OOF score: 0.6035621265821923\n",
      "HOLDOUT score: 0.5970482336956522\n"
     ]
    }
   ],
   "source": [
    "print(f'OOF score: {roc_auc_score(tr_data[TARGET_NAME].values, oof_pred.data[:, 0])}')\n",
    "print(f'HOLDOUT score: {roc_auc_score(te_data[TARGET_NAME].values, te_pred.data[:, 0])}')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ba9846b4",
   "metadata": {},
   "source": [
    "You can obtain the description of the resulting pipeline:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "5ee6caca",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Final prediction for new objects (level 0) = \n",
      "\t 1.00000 * (5 averaged models Lvl_0_Pipe_0_Mod_0_TorchNN_mlp_0) \n"
     ]
    }
   ],
   "source": [
    "print(automl.create_model_str_desc())"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "41d07887",
   "metadata": {},
   "source": [
    "## 3. Main training loop and pipeline params"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "dc28b650",
   "metadata": {},
   "source": [
    "### 3.1 Training loop params"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "1b0633e5",
   "metadata": {},
   "source": [
    "<img src=\"../../imgs/swa.png\" style=\"width:70%;\"/>"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "8c7cd871",
   "metadata": {},
   "source": [
    "- `bs` - batch size\n",
    "- `snap_params` - early stopping and checkpoint averaging params, stochastic weight averaging (swa)\n",
    "- `opt` - optimizer\n",
    "- `opt_params` - optimizer params, for example, the learning rate\n",
    "- `clip_grad` - use gradient clipping for regularization\n",
    "- `clip_grad_params` - gradient clipping params\n",
    "- `emb_dropout` - embedding dropout for categorical columns"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "401c164c",
   "metadata": {},
   "source": [
    "This set of params is also passed via `nn_params`."
   ]
  },
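  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "5b9d2e61",
   "metadata": {},
   "source": [
    "For example, a hedged sketch of an `nn_params` dict with these training-loop params (keys from the list above; the values are illustrative, not the defaults):\n",
    "\n",
    "```python\n",
    "nn_params = {\n",
    "    \"bs\": 256,                              # batch size\n",
    "    \"opt\": torch.optim.Adam,                # optimizer\n",
    "    \"opt_params\": {\"lr\": 3e-4},             # optimizer params\n",
    "    \"clip_grad\": True,                      # enable gradient clipping\n",
    "    \"clip_grad_params\": {\"max_norm\": 1.0},  # gradient clipping params\n",
    "    \"emb_dropout\": 0.1,                     # embedding dropout for categorical columns\n",
    "}\n",
    "```"
   ]
  },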
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "dc28b651",
   "metadata": {},
   "source": [
    "### 3.2 Pipeline params"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "449bd024",
   "metadata": {},
   "source": [
    "Transformation for numerical columns"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "09b79443",
   "metadata": {},
   "source": [
    "- `use_qnt` - use quantile transformation for numerical columns\n",
    "- `output_distribution` - type of distribution of a feature after the quantile transformer\n",
    "- `n_quantiles` - number of quantiles used to build the feature distribution\n",
    "- `qnt_factor` - decreases `n_quantiles` depending on the train data shape"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "449bd025",
   "metadata": {},
   "source": [
    "Transformation for categorical columns"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "1a910e28",
   "metadata": {},
   "source": [
    "- `use_te` - use target encoding\n",
    "- `top_intersections` - number of intersections of cat columns to use"
   ]
  },
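  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "7c4f8a92",
   "metadata": {},
   "source": [
    "Putting both groups together, a hedged sketch of an `nn_pipeline_params` dict (the values are illustrative, not the defaults):\n",
    "\n",
    "```python\n",
    "nn_pipeline_params = {\n",
    "    \"use_qnt\": True,                  # quantile-transform numerical columns\n",
    "    \"output_distribution\": \"normal\",  # feature distribution after the transform\n",
    "    \"use_te\": False,                  # no target encoding for categorical columns\n",
    "}\n",
    "```"
   ]
  },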
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "79551da5",
   "metadata": {},
   "source": [
    "The full list of default parameters can be found here:\n",
    "- [nn_params](../../lightautoml/automl/presets/tabular_config.yml)\n",
    "- [nn_pipeline_params](../../lightautoml/automl/presets/tabular_config.yml)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "6c176647",
   "metadata": {},
   "source": [
    "## 4. More use cases"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "6c6863d3",
   "metadata": {},
   "source": [
    "Let's store the default LightAutoML params in variables to keep the following examples more compact."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "343d7bac",
   "metadata": {},
   "outputs": [],
   "source": [
    "default_lama_params = {\n",
    "    \"task\": task, \n",
    "    \"timeout\": TIMEOUT,\n",
    "    \"cpu_limit\": N_THREADS,\n",
    "    \"reader_params\": {'n_jobs': N_THREADS, 'cv': N_FOLDS, 'random_state': RANDOM_STATE}\n",
    "}\n",
    "\n",
    "default_nn_params = {\n",
    "    \"bs\": 512, \"num_workers\": 0, \"path_to_save\": None, \"n_epochs\": 10, \"freeze_defaults\": True\n",
    "}"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "1f82ac87",
   "metadata": {},
   "source": [
    "### 4.1 Custom model"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7f41519f",
   "metadata": {},
   "source": [
    "Consider a simple neural network that we want to train."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "32247a28",
   "metadata": {},
   "outputs": [],
   "source": [
    "class SimpleNet(nn.Module):\n",
    "    def __init__(\n",
    "        self,\n",
    "        n_in,\n",
    "        n_out,\n",
    "        hidden_size,\n",
    "        drop_rate,\n",
    "        **kwargs, # kwargs is a must-have to absorb unused parameters\n",
    "    ):\n",
    "        super(SimpleNet, self).__init__()\n",
    "        self.features = nn.Sequential(OrderedDict([]))\n",
    "\n",
    "        self.features.add_module(\"norm\", nn.BatchNorm1d(n_in))\n",
    "        self.features.add_module(\"dense1\", nn.Linear(n_in, hidden_size))\n",
    "        self.features.add_module(\"act\", nn.SiLU())\n",
    "        self.features.add_module(\"dropout\", nn.Dropout(p=drop_rate))\n",
    "        self.features.add_module(\"dense2\", nn.Linear(hidden_size, n_out))\n",
    "\n",
    "    def forward(self, x):\n",
    "        \"\"\"\n",
    "        Args:\n",
    "            x: data after feature pipeline transformation\n",
    "            (by default concatenation of columns)\n",
    "        \"\"\"\n",
    "        for layer in self.features:\n",
    "            x = layer(x)\n",
    "        return x"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "c7e359df",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[14:56:17] Stdout logging level is INFO.\n",
      "[14:56:17] Task: binary\n",
      "\n",
      "[14:56:17] Start automl preset with listed constraints:\n",
      "[14:56:17] - time: 300.00 seconds\n",
      "[14:56:17] - CPU: 4 cores\n",
      "[14:56:17] - memory: 16 GB\n",
      "\n",
      "[14:56:17] \u001b[1mTrain data shape: (8000, 122)\u001b[0m\n",
      "\n",
      "[14:56:17] Layer \u001b[1m1\u001b[0m train process start. Time left 299.22 secs\n",
      "[14:56:18] Start fitting \u001b[1mLvl_0_Pipe_0_Mod_0_TorchNN_0\u001b[0m ...\n",
      "[14:56:23] Fitting \u001b[1mLvl_0_Pipe_0_Mod_0_TorchNN_0\u001b[0m finished. score = \u001b[1m0.70579837612218\u001b[0m\n",
      "[14:56:23] \u001b[1mLvl_0_Pipe_0_Mod_0_TorchNN_0\u001b[0m fitting and predicting completed\n",
      "[14:56:23] Time left 293.15 secs\n",
      "\n",
      "[14:56:23] \u001b[1mLayer 1 training completed.\u001b[0m\n",
      "\n",
      "[14:56:23] \u001b[1mAutoml preset training completed in 6.86 seconds\u001b[0m\n",
      "\n",
      "[14:56:23] Model description:\n",
      "Final prediction for new objects (level 0) = \n",
      "\t 1.00000 * (5 averaged models Lvl_0_Pipe_0_Mod_0_TorchNN_0) \n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "array([[0.04888836],\n",
       "       [0.02840128],\n",
       "       [0.04246276],\n",
       "       ...,\n",
       "       [0.05778075],\n",
       "       [0.17132443],\n",
       "       [0.20606528]], dtype=float32)"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "automl = TabularAutoML(\n",
    "    **default_lama_params,\n",
    "    general_params={\"use_algos\": [[SimpleNet]]},\n",
    "    nn_params={\n",
    "        **default_nn_params,\n",
    "        \"hidden_size\": 256,\n",
    "        \"drop_rate\": 0.1\n",
    "    },\n",
    ")\n",
    "automl.fit_predict(tr_data, roles=roles, verbose=1)\n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "bbf8a589",
   "metadata": {},
   "source": [
    "#### 4.1.1 Define the pipeline by yourself"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "c7015726",
   "metadata": {},
   "outputs": [],
   "source": [
    "from typing import Sequence\n",
    "from typing import Dict\n",
    "from typing import Optional\n",
    "from typing import Any\n",
    "from typing import Callable\n",
    "from typing import Union\n",
    "\n",
    "\n",
    "class CatEmbedder(nn.Module):\n",
    "    \"\"\"Category data model.\n",
    "\n",
    "    Args:\n",
    "        cat_dims: Sequence with number of unique categories\n",
    "            for category features\n",
    "    \"\"\"\n",
    "\n",
    "    def __init__(\n",
    "        self,\n",
    "        cat_dims: Sequence[int],\n",
    "        **kwargs\n",
    "    ):\n",
    "        super(CatEmbedder, self).__init__()\n",
    "        emb_dims = [\n",
    "            (int(x), 5)\n",
    "            for x in cat_dims\n",
    "        ]\n",
    "        self.no_of_embs = sum([y for x, y in emb_dims])\n",
    "        self.emb_layers = nn.ModuleList([nn.Embedding(x, y) for x, y in emb_dims])\n",
    "    \n",
    "    def get_out_shape(self) -> int:\n",
    "        \"\"\"Output shape.\n",
    "\n",
    "        Returns:\n",
    "            Int with module output shape.\n",
    "\n",
    "        \"\"\"\n",
    "        return self.no_of_embs\n",
    "\n",
    "    def forward(self, inp: Dict[str, torch.Tensor]) -> torch.Tensor:\n",
    "        \"\"\"Concat all categorical embeddings\n",
    "        \"\"\"\n",
    "        output = torch.cat(\n",
    "            [\n",
    "                emb_layer(inp[\"cat\"][:, i])\n",
    "                for i, emb_layer in enumerate(self.emb_layers)\n",
    "            ],\n",
    "            dim=1,\n",
    "        )\n",
    "        return output\n",
    "\n",
    "\n",
    "class ContEmbedder(nn.Module):\n",
    "    \"\"\"Numeric data model.\n",
    "\n",
    "    Class for working with numeric data.\n",
    "\n",
    "    Args:\n",
    "        num_dims: Number of numeric features.\n",
    "\n",
    "    \"\"\"\n",
    "\n",
    "    def __init__(self, num_dims: int,  **kwargs):\n",
    "        super(ContEmbedder, self).__init__()\n",
    "        self.n_out = num_dims\n",
    "    \n",
    "    def get_out_shape(self) -> int:\n",
    "        \"\"\"Output shape.\n",
    "\n",
    "        Returns:\n",
    "            int with module output shape.\n",
    "\n",
    "        \"\"\"\n",
    "        return self.n_out\n",
    "        \n",
    "    def forward(self, inp: Dict[str, torch.Tensor]) -> torch.Tensor:\n",
    "        \"\"\"Forward-pass.\"\"\"\n",
    "        return (inp[\"cont\"] - inp[\"cont\"].mean(axis=0)) / (inp[\"cont\"].std(axis=0) + 1e-6)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "998ea71b",
   "metadata": {},
   "outputs": [],
   "source": [
    "from lightautoml.text.nn_model import TorchUniversalModel\n",
    "\n",
    "class SimpleNet_plus(TorchUniversalModel):\n",
    "    \"\"\"Mixed data model.\n",
    "\n",
    "    Class for preparing input for DL model with mixed data.\n",
    "\n",
    "    Args:\n",
    "        n_out: Number of output dimensions.\n",
    "        cont_params: Dict with numeric model params.\n",
    "        cat_params: Dict with category model params.\n",
    "        **kwargs: Loss, task and other parameters.\n",
    "\n",
    "    \"\"\"\n",
    "\n",
    "    def __init__(\n",
    "            self,\n",
    "            n_out: int = 1,\n",
    "            cont_params: Optional[Dict] = None,\n",
    "            cat_params: Optional[Dict] = None,\n",
    "            **kwargs,\n",
    "    ):\n",
    "        # init parent class (need some helper functions to be used)\n",
    "        super(SimpleNet_plus, self).__init__(**{\n",
    "                **kwargs,\n",
    "                \"cont_params\": cont_params,\n",
    "                \"cat_params\": cat_params,\n",
    "                \"torch_model\": None, # don't need any model inside the parent class\n",
    "        })\n",
    "        \n",
    "        n_in = 0\n",
    "        \n",
    "        # add cont columns processing\n",
    "        self.cont_embedder = ContEmbedder(**cont_params)\n",
    "        n_in += self.cont_embedder.get_out_shape()\n",
    "        \n",
    "        # add cat columns processing\n",
    "        self.cat_embedder = CatEmbedder(**cat_params)\n",
    "        n_in += self.cat_embedder.get_out_shape()\n",
    "        \n",
    "        self.torch_model = SimpleNet(\n",
    "                **{\n",
    "                    **kwargs,\n",
    "                    **{\"n_in\": n_in, \"n_out\": n_out},\n",
    "                }\n",
    "        )\n",
    "    \n",
    "    def get_logits(self, inp: Dict[str, torch.Tensor]) -> torch.Tensor:\n",
    "        outputs = []\n",
    "        outputs.append(self.cont_embedder(inp))\n",
    "        outputs.append(self.cat_embedder(inp))\n",
    "        \n",
    "        if len(outputs) > 1:\n",
    "            output = torch.cat(outputs, dim=1)\n",
    "        else:\n",
    "            output = outputs[0]\n",
    "        \n",
    "        logits = self.torch_model(output)\n",
    "        return logits"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "0e0fd192",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[14:56:24] Stdout logging level is INFO.\n",
      "[14:56:24] Task: binary\n",
      "\n",
      "[14:56:24] Start automl preset with listed constraints:\n",
      "[14:56:24] - time: 300.00 seconds\n",
      "[14:56:24] - CPU: 4 cores\n",
      "[14:56:24] - memory: 16 GB\n",
      "\n",
      "[14:56:24] \u001b[1mTrain data shape: (8000, 122)\u001b[0m\n",
      "\n",
      "[14:56:24] Layer \u001b[1m1\u001b[0m train process start. Time left 299.21 secs\n",
      "[14:56:25] Start fitting \u001b[1mLvl_0_Pipe_0_Mod_0_TorchNN_0\u001b[0m ...\n",
      "[14:56:30] Fitting \u001b[1mLvl_0_Pipe_0_Mod_0_TorchNN_0\u001b[0m finished. score = \u001b[1m0.6600159152016962\u001b[0m\n",
      "[14:56:30] \u001b[1mLvl_0_Pipe_0_Mod_0_TorchNN_0\u001b[0m fitting and predicting completed\n",
      "[14:56:30] Time left 293.19 secs\n",
      "\n",
      "[14:56:30] \u001b[1mLayer 1 training completed.\u001b[0m\n",
      "\n",
      "[14:56:30] \u001b[1mAutoml preset training completed in 6.82 seconds\u001b[0m\n",
      "\n",
      "[14:56:30] Model description:\n",
      "Final prediction for new objects (level 0) = \n",
      "\t 1.00000 * (5 averaged models Lvl_0_Pipe_0_Mod_0_TorchNN_0) \n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "array([[0.07509199],\n",
       "       [0.06439159],\n",
       "       [0.04291169],\n",
       "       ...,\n",
       "       [0.11671165],\n",
       "       [0.2381251 ],\n",
       "       [0.04382631]], dtype=float32)"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "automl = TabularAutoML(\n",
    "    **default_lama_params,\n",
    "    general_params={\"use_algos\": [[SimpleNet_plus]]},\n",
    "    nn_params={\n",
    "        **default_nn_params,\n",
    "        \"hidden_size\": 256,\n",
    "        \"drop_rate\": 0.1,\n",
    "        \"model_with_emb\": True,\n",
    "    },\n",
    "    debug=True\n",
    ")\n",
    "automl.fit_predict(tr_data, roles = roles, verbose = 1)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "1f82ac87",
   "metadata": {},
   "source": [
    "### 4.2 Tuning network"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "e42c7ac0",
   "metadata": {},
   "source": [
    "One can try to optimize the metric with the help of Optuna. The available validation strategies are:\n",
    "- `fit_on_holdout = True` - holdout\n",
    "- `fit_on_holdout = False` - cross-validation."
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "1f82ac87",
   "metadata": {},
   "source": [
    "#### 4.2.1 Built-in models"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "78dfb657",
   "metadata": {},
   "source": [
    "Append `\"_tuned\"` to the model name to tune its hyperparameters."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "668a5dcf",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[14:56:31] Stdout logging level is INFO3.\n",
      "[14:56:31] Task: binary\n",
      "\n",
      "[14:56:31] Start automl preset with listed constraints:\n",
      "[14:56:31] - time: 300.00 seconds\n",
      "[14:56:31] - CPU: 4 cores\n",
      "[14:56:31] - memory: 16 GB\n",
      "\n",
      "[14:56:31] \u001b[1mTrain data shape: (8000, 122)\u001b[0m\n",
      "\n",
      "[14:56:31] Feats was rejected during automatic roles guess: []\n",
      "[14:56:31] Layer \u001b[1m1\u001b[0m train process start. Time left 299.23 secs\n",
      "[14:56:31] Start hyperparameters optimization for \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_denselight_tuned_0\u001b[0m ... Time budget is 100.00 secs\n",
      "[14:56:32] Epoch: 0, train loss: 0.307483434677124, val loss: 0.2785775661468506, val metric: 0.6090575236140289\n",
      "[14:56:32] Epoch: 1, train loss: 0.27614495158195496, val loss: 0.2799951434135437, val metric: 0.585088014710992\n",
      "[14:56:32] Epoch: 2, train loss: 0.27499067783355713, val loss: 0.28620871901512146, val metric: 0.626435952125129\n",
      "[14:56:32] Early stopping: val loss: 0.27636581659317017, val metric: 0.6215073421321315\n",
      "[14:56:33] \u001b[1mTrial 1\u001b[0m with hyperparameters {'bs': 128, 'weight_decay_bin': 0, 'lr': 0.029154431891537533} scored 0.6215073421321315 in 0:00:01.209556\n",
      "[14:56:33] Epoch: 0, train loss: 0.275651752948761, val loss: 0.30943918228149414, val metric: 0.5956214485409284\n",
      "[14:56:33] Epoch: 1, train loss: 0.28080031275749207, val loss: 0.3092973828315735, val metric: 0.590257175083257\n",
      "[14:56:33] Epoch: 2, train loss: 0.27882617712020874, val loss: 0.3090418577194214, val metric: 0.5908826060693533\n",
      "[14:56:33] Early stopping: val loss: 0.30920112133026123, val metric: 0.5908826060693533\n",
      "[14:56:33] \u001b[1mTrial 2\u001b[0m with hyperparameters {'bs': 512, 'weight_decay_bin': 0, 'lr': 5.415244119402538e-05} scored 0.5908826060693533 in 0:00:00.625572\n",
      "[14:56:33] Epoch: 0, train loss: 0.2746148705482483, val loss: 0.2774512767791748, val metric: 0.5961372954653581\n",
      "[14:56:33] Epoch: 1, train loss: 0.27803024649620056, val loss: 0.276823490858078, val metric: 0.5987940407652709\n",
      "[14:56:34] Epoch: 2, train loss: 0.2752102017402649, val loss: 0.2738468050956726, val metric: 0.6048131458109488\n",
      "[14:56:34] Early stopping: val loss: 0.2762503921985626, val metric: 0.6007344804913642\n",
      "[14:56:34] \u001b[1mTrial 3\u001b[0m with hyperparameters {'bs': 1024, 'weight_decay_bin': 1, 'weight_decay': 2.9204338471814107e-05, 'lr': 0.0006672367170464204} scored 0.6007344804913642 in 0:00:00.504866\n",
      "[14:56:34] Epoch: 0, train loss: 0.2786032557487488, val loss: 0.2767927944660187, val metric: 0.5910777191547594\n",
      "[14:56:35] Epoch: 1, train loss: 0.27806466817855835, val loss: 0.27634257078170776, val metric: 0.592954012113048\n",
      "[14:56:35] Epoch: 2, train loss: 0.2776066064834595, val loss: 0.2759130001068115, val metric: 0.5941113267155251\n",
      "[14:56:35] Early stopping: val loss: 0.27634990215301514, val metric: 0.593207926402275\n",
      "[14:56:36] \u001b[1mTrial 4\u001b[0m with hyperparameters {'bs': 64, 'weight_decay_bin': 0, 'lr': 1.8205657658407255e-05} scored 0.593207926402275 in 0:00:01.881862\n",
      "[14:56:36] Epoch: 0, train loss: 0.27860182523727417, val loss: 0.28250277042388916, val metric: 0.5923793639847972\n",
      "[14:56:36] Epoch: 1, train loss: 0.2779836356639862, val loss: 0.2820056080818176, val metric: 0.5916817678849207\n",
      "[14:56:37] Epoch: 2, train loss: 0.2774103283882141, val loss: 0.2815183401107788, val metric: 0.5948837607111739\n",
      "[14:56:37] Early stopping: val loss: 0.2820083796977997, val metric: 0.5934698590374777\n",
      "[14:56:37] \u001b[1mTrial 5\u001b[0m with hyperparameters {'bs': 128, 'weight_decay_bin': 0, 'lr': 3.077180271250682e-05} scored 0.5934698590374777 in 0:00:01.121957\n",
      "[14:56:37] Hyperparameters optimization for \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_denselight_tuned_0\u001b[0m completed\n",
      "[14:56:37] The set of hyperparameters \u001b[1m{'bs': 128, 'weight_decay_bin': 0, 'lr': 0.029154431891537533}\u001b[0m\n",
      " achieve 0.6215 auc\n",
      "[14:56:37] Start fitting \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_denselight_tuned_0\u001b[0m ...\n",
      "[14:56:37] ===== Start working with \u001b[1mfold 0\u001b[0m for \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_denselight_tuned_0\u001b[0m =====\n",
      "[14:56:37] Epoch: 0, train loss: 0.2774243652820587, val loss: 0.2794122099876404, val metric: 0.5975324876651111\n",
      "[14:56:37] Epoch: 1, train loss: 0.2743901014328003, val loss: 0.2776646912097931, val metric: 0.608838355490696\n",
      "[14:56:38] Epoch: 2, train loss: 0.27337446808815, val loss: 0.2764543294906616, val metric: 0.6192034040551448\n",
      "[14:56:38] Early stopping: val loss: 0.2777901291847229, val metric: 0.6107948319087405\n",
      "[14:56:38] ===== Start working with \u001b[1mfold 1\u001b[0m for \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_denselight_tuned_0\u001b[0m =====\n",
      "[14:56:38] Epoch: 0, train loss: 0.27729275822639465, val loss: 0.27042776346206665, val metric: 0.6243578040081522\n",
      "[14:56:39] Epoch: 1, train loss: 0.2746446132659912, val loss: 0.268259733915329, val metric: 0.6331256368885869\n",
      "[14:56:39] Epoch: 2, train loss: 0.2735038101673126, val loss: 0.26757094264030457, val metric: 0.6373237941576086\n",
      "[14:56:39] Early stopping: val loss: 0.2686738669872284, val metric: 0.6330725628396741\n",
      "[14:56:39] ===== Start working with \u001b[1mfold 2\u001b[0m for \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_denselight_tuned_0\u001b[0m =====\n",
      "[14:56:39] Epoch: 0, train loss: 0.2761422395706177, val loss: 0.27418816089630127, val metric: 0.5687123174252717\n",
      "[14:56:40] Epoch: 1, train loss: 0.2733106017112732, val loss: 0.2741503119468689, val metric: 0.5761931046195653\n",
      "[14:56:40] Epoch: 2, train loss: 0.27197468280792236, val loss: 0.2742484509944916, val metric: 0.5801471212635869\n",
      "[14:56:40] Early stopping: val loss: 0.2739076316356659, val metric: 0.5770820949388586\n",
      "[14:56:40] ===== Start working with \u001b[1mfold 3\u001b[0m for \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_denselight_tuned_0\u001b[0m =====\n",
      "[14:56:41] Epoch: 0, train loss: 0.2768873870372772, val loss: 0.27838170528411865, val metric: 0.6020985478940217\n",
      "[14:56:41] Epoch: 1, train loss: 0.2738708257675171, val loss: 0.2780988812446594, val metric: 0.5976827870244564\n",
      "[14:56:41] Epoch: 2, train loss: 0.2721073031425476, val loss: 0.2787047028541565, val metric: 0.5934050186820653\n",
      "[14:56:41] Early stopping: val loss: 0.2780289649963379, val metric: 0.5992405103600543\n",
      "[14:56:41] ===== Start working with \u001b[1mfold 4\u001b[0m for \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_denselight_tuned_0\u001b[0m =====\n",
      "[14:56:42] Epoch: 0, train loss: 0.27685776352882385, val loss: 0.27540671825408936, val metric: 0.5904859459918478\n",
      "[14:56:42] Epoch: 1, train loss: 0.27350443601608276, val loss: 0.2741386592388153, val metric: 0.5980489979619567\n",
      "[14:56:42] Epoch: 2, train loss: 0.27244144678115845, val loss: 0.27352553606033325, val metric: 0.6023214588994564\n",
      "[14:56:42] Early stopping: val loss: 0.2742116153240204, val metric: 0.5989326808763588\n",
      "[14:56:43] Fitting \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_denselight_tuned_0\u001b[0m finished. score = \u001b[1m0.6015718759719786\u001b[0m\n",
      "[14:56:43] \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_denselight_tuned_0\u001b[0m fitting and predicting completed\n",
      "[14:56:43] Time left 288.04 secs\n",
      "\n",
      "[14:56:43] \u001b[1mLayer 1 training completed.\u001b[0m\n",
      "\n",
      "[14:56:43] \u001b[1mAutoml preset training completed in 11.97 seconds\u001b[0m\n",
      "\n",
      "[14:56:43] Model description:\n",
      "Final prediction for new objects (level 0) = \n",
      "\t 1.00000 * (5 averaged models Lvl_0_Pipe_0_Mod_0_Tuned_TorchNN_denselight_tuned_0) \n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "array([[0.07784943],\n",
       "       [0.04554275],\n",
       "       [0.05328501],\n",
       "       ...,\n",
       "       [0.07100379],\n",
       "       [0.09577154],\n",
       "       [0.07620702]], dtype=float32)"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "automl = TabularAutoML(\n",
    "    **default_lama_params,\n",
    "    general_params={\"use_algos\": [[\"denselight_tuned\"]]},\n",
    "    nn_params={\n",
    "        **default_nn_params,\n",
    "        \"n_epochs\": 3,\n",
    "        \"tuning_params\": {\n",
    "            \"max_tuning_iter\": 5,\n",
    "            \"max_tuning_time\": 100,\n",
    "            \"fit_on_holdout\": True\n",
    "        }\n",
    "    },\n",
    ")\n",
    "automl.fit_predict(tr_data, roles = roles, verbose = 3)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "1f82ac87",
   "metadata": {},
   "source": [
    "#### 4.2.2 Custom model"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "78dfb657",
   "metadata": {},
   "source": [
    "For custom models, there is a special flag `tuned` that marks that the model's parameters should be optimized."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "668a5dcf",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[14:56:43] Stdout logging level is INFO2.\n",
      "[14:56:43] Task: binary\n",
      "\n",
      "[14:56:43] Start automl preset with listed constraints:\n",
      "[14:56:43] - time: 300.00 seconds\n",
      "[14:56:43] - CPU: 4 cores\n",
      "[14:56:43] - memory: 16 GB\n",
      "\n",
      "[14:56:43] \u001b[1mTrain data shape: (8000, 122)\u001b[0m\n",
      "\n",
      "[14:56:43] Layer \u001b[1m1\u001b[0m train process start. Time left 299.22 secs\n",
      "[14:56:43] Start hyperparameters optimization for \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_0\u001b[0m ... Time budget is 100.00 secs\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Optimization Progress: 100%|████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:12<00:00,  2.42s/it, best_trial=0, best_value=0.767]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[14:56:56] Hyperparameters optimization for \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_0\u001b[0m completed\n",
      "[14:56:56] The set of hyperparameters \u001b[1m{'bs': 128, 'weight_decay_bin': 0, 'lr': 0.029154431891537533}\u001b[0m\n",
      " achieve 0.7667 auc\n",
      "[14:56:56] Start fitting \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_0\u001b[0m ...\n",
      "[14:56:56] ===== Start working with \u001b[1mfold 0\u001b[0m for \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_0\u001b[0m =====\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[14:56:58] ===== Start working with \u001b[1mfold 1\u001b[0m for \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_0\u001b[0m =====\n",
      "[14:57:01] ===== Start working with \u001b[1mfold 2\u001b[0m for \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_0\u001b[0m =====\n",
      "[14:57:04] ===== Start working with \u001b[1mfold 3\u001b[0m for \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_0\u001b[0m =====\n",
      "[14:57:06] ===== Start working with \u001b[1mfold 4\u001b[0m for \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_0\u001b[0m =====\n",
      "[14:57:09] Fitting \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_0\u001b[0m finished. score = \u001b[1m0.7271980081974132\u001b[0m\n",
      "[14:57:09] \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_0\u001b[0m fitting and predicting completed\n",
      "[14:57:09] Time left 273.65 secs\n",
      "\n",
      "[14:57:09] \u001b[1mLayer 1 training completed.\u001b[0m\n",
      "\n",
      "[14:57:09] \u001b[1mAutoml preset training completed in 26.35 seconds\u001b[0m\n",
      "\n",
      "[14:57:09] Model description:\n",
      "Final prediction for new objects (level 0) = \n",
      "\t 1.00000 * (5 averaged models Lvl_0_Pipe_0_Mod_0_Tuned_TorchNN_0) \n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "array([[0.03879727],\n",
       "       [0.02351108],\n",
       "       [0.0386253 ],\n",
       "       ...,\n",
       "       [0.04145308],\n",
       "       [0.182652  ],\n",
       "       [0.28383675]], dtype=float32)"
      ]
     },
     "execution_count": 18,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "automl = TabularAutoML(\n",
    "    **default_lama_params,\n",
    "    general_params={\"use_algos\": [[SimpleNet]]},\n",
    "    nn_params={\n",
    "        **default_nn_params,\n",
    "        \"hidden_size\": 256,\n",
    "        \"drop_rate\": 0.1,\n",
    "        \n",
    "        \"tuned\": True,\n",
    "        \"tuning_params\": {\n",
    "            \"max_tuning_iter\": 5,\n",
    "            \"max_tuning_time\": 100,\n",
    "            \"fit_on_holdout\": True\n",
    "        }\n",
    "    },\n",
    ")\n",
    "automl.fit_predict(tr_data, roles = roles, verbose = 2)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "f52aa5e2",
   "metadata": {},
   "source": [
    "Sometimes we need to tune parameters that we define ourselves. For this purpose there is `optimization_search_space`, which describes the required parameter grid. See the example below.  \n",
    "Here is the grid:  \n",
    "- `bs` in `[64, 128, 256, 512, 1024]`\n",
    "- `hidden_size` in `[64, 128, 256, 512, 1024]`\n",
    "- `drop_rate` in `[0.0, 0.3]`\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "e905c542",
   "metadata": {},
   "outputs": [],
   "source": [
    "def my_opt_space(trial: optuna.trial.Trial, estimated_n_trials, suggested_params):\n",
    "    '''Describe the hyperparameter search space for Optuna tuning.'''\n",
    "    # start from the defaults suggested by LightAutoML\n",
    "    trial_values = copy(suggested_params)\n",
    "\n",
    "    trial_values[\"bs\"] = trial.suggest_categorical(\n",
    "        \"bs\", [2 ** i for i in range(6, 11)]\n",
    "    )\n",
    "    trial_values[\"hidden_size\"] = trial.suggest_categorical(\n",
    "        \"hidden_size\", [2 ** i for i in range(6, 11)]\n",
    "    )\n",
    "    trial_values[\"drop_rate\"] = trial.suggest_float(\n",
    "        \"drop_rate\", 0.0, 0.3\n",
    "    )\n",
    "    return trial_values"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "2398295c",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[14:57:09] Stdout logging level is INFO3.\n",
      "[14:57:09] Task: binary\n",
      "\n",
      "[14:57:09] Start automl preset with listed constraints:\n",
      "[14:57:09] - time: 300.00 seconds\n",
      "[14:57:09] - CPU: 4 cores\n",
      "[14:57:09] - memory: 16 GB\n",
      "\n",
      "[14:57:09] \u001b[1mTrain data shape: (8000, 122)\u001b[0m\n",
      "\n",
      "[14:57:10] Feats was rejected during automatic roles guess: []\n",
      "[14:57:10] Layer \u001b[1m1\u001b[0m train process start. Time left 299.19 secs\n",
      "[14:57:10] Start hyperparameters optimization for \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_0\u001b[0m ... Time budget is 156.97 secs\n",
      "[14:57:10] Epoch: 0, train loss: 0.27667880058288574, val loss: 0.2776942551136017, val metric: 0.6380251348418515\n",
      "[14:57:10] Epoch: 1, train loss: 0.2685483694076538, val loss: 0.2682102620601654, val metric: 0.6962383266246506\n",
      "[14:57:11] Epoch: 2, train loss: 0.2586376965045929, val loss: 0.259479820728302, val metric: 0.741352213865324\n",
      "[14:57:11] Early stopping: val loss: 0.2688170075416565, val metric: 0.7005254689396005\n",
      "[14:57:11] \u001b[1mTrial 1\u001b[0m with hyperparameters {'bs': 128, 'hidden_size': 256, 'drop_rate': 0.006175348288740734} scored 0.7005254689396005 in 0:00:01.061573\n",
      "[14:57:11] Epoch: 0, train loss: 0.2708968222141266, val loss: 0.2574772536754608, val metric: 0.73635678432253\n",
      "[14:57:12] Epoch: 1, train loss: 0.2547965347766876, val loss: 0.2453528791666031, val metric: 0.7694297886898558\n",
      "[14:57:12] Epoch: 2, train loss: 0.244949072599411, val loss: 0.24259500205516815, val metric: 0.7720972251177362\n",
      "[14:57:12] Early stopping: val loss: 0.2467520385980606, val metric: 0.7681415077697773\n",
      "[14:57:13] \u001b[1mTrial 2\u001b[0m with hyperparameters {'bs': 64, 'hidden_size': 1024, 'drop_rate': 0.04184815819561255} scored 0.7681415077697773 in 0:00:01.623261\n",
      "[14:57:13] Epoch: 0, train loss: 0.2758210599422455, val loss: 0.30738407373428345, val metric: 0.6267513404001689\n",
      "[14:57:13] Epoch: 1, train loss: 0.27457138895988464, val loss: 0.30406495928764343, val metric: 0.6331232526687729\n",
      "[14:57:13] Epoch: 2, train loss: 0.27020180225372314, val loss: 0.30057811737060547, val metric: 0.6501381828289794\n",
      "[14:57:13] Early stopping: val loss: 0.30382469296455383, val metric: 0.6392439234301416\n",
      "[14:57:13] \u001b[1mTrial 3\u001b[0m with hyperparameters {'bs': 512, 'hidden_size': 512, 'drop_rate': 0.019515477895583853} scored 0.6392439234301416 in 0:00:00.557979\n",
      "[14:57:13] Epoch: 0, train loss: 0.27815869450569153, val loss: 0.28139299154281616, val metric: 0.6253187292525297\n",
      "[14:57:14] Epoch: 1, train loss: 0.2752159833908081, val loss: 0.2780429720878601, val metric: 0.6364214656467331\n",
      "[14:57:14] Epoch: 2, train loss: 0.2704654037952423, val loss: 0.27340632677078247, val metric: 0.6653997680025231\n",
      "[14:57:14] Early stopping: val loss: 0.2779545187950134, val metric: 0.6449262579448445\n",
      "[14:57:14] \u001b[1mTrial 4\u001b[0m with hyperparameters {'bs': 128, 'hidden_size': 64, 'drop_rate': 0.2727961206236346} scored 0.6449262579448445 in 0:00:01.034966\n",
      "[14:57:15] Epoch: 0, train loss: 0.27770310640335083, val loss: 0.28032609820365906, val metric: 0.627515756049842\n",
      "[14:57:15] Epoch: 1, train loss: 0.27274399995803833, val loss: 0.27464091777801514, val metric: 0.6614868151664342\n",
      "[14:57:15] Epoch: 2, train loss: 0.2659642994403839, val loss: 0.26697367429733276, val metric: 0.7082605000240552\n",
      "[14:57:15] Early stopping: val loss: 0.27455854415893555, val metric: 0.6712371238727542\n",
      "[14:57:15] \u001b[1mTrial 5\u001b[0m with hyperparameters {'bs': 128, 'hidden_size': 128, 'drop_rate': 0.17936999364332554} scored 0.6712371238727542 in 0:00:01.005326\n",
      "[14:57:15] Hyperparameters optimization for \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_0\u001b[0m completed\n",
      "[14:57:15] The set of hyperparameters \u001b[1m{'bs': 64, 'hidden_size': 1024, 'drop_rate': 0.04184815819561255}\u001b[0m\n",
      " achieve 0.7681 auc\n",
      "[14:57:15] Start fitting \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_0\u001b[0m ...\n",
      "[14:57:15] ===== Start working with \u001b[1mfold 0\u001b[0m for \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_0\u001b[0m =====\n",
      "[14:57:16] Epoch: 0, train loss: 0.2708968222141266, val loss: 0.2574772536754608, val metric: 0.73635678432253\n",
      "[14:57:16] Epoch: 1, train loss: 0.2547965347766876, val loss: 0.2453528791666031, val metric: 0.7694297886898558\n",
      "[14:57:17] Epoch: 2, train loss: 0.244949072599411, val loss: 0.24259500205516815, val metric: 0.7720972251177362\n",
      "[14:57:17] Early stopping: val loss: 0.2467520385980606, val metric: 0.7681415077697773\n",
      "[14:57:17] ===== Start working with \u001b[1mfold 1\u001b[0m for \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_0\u001b[0m =====\n",
      "[14:57:17] Epoch: 0, train loss: 0.27157771587371826, val loss: 0.2592219412326813, val metric: 0.7320822010869567\n",
      "[14:57:18] Epoch: 1, train loss: 0.2547703981399536, val loss: 0.2508712708950043, val metric: 0.742293648097826\n",
      "[14:57:18] Epoch: 2, train loss: 0.24370187520980835, val loss: 0.2526074945926666, val metric: 0.7376868206521738\n",
      "[14:57:18] Early stopping: val loss: 0.25174227356910706, val metric: 0.7418584408967391\n",
      "[14:57:18] ===== Start working with \u001b[1mfold 2\u001b[0m for \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_0\u001b[0m =====\n",
      "[14:57:19] Epoch: 0, train loss: 0.2685704529285431, val loss: 0.27050256729125977, val metric: 0.6421482252038044\n",
      "[14:57:19] Epoch: 1, train loss: 0.2485620081424713, val loss: 0.2682766616344452, val metric: 0.673721976902174\n",
      "[14:57:20] Epoch: 2, train loss: 0.2378905713558197, val loss: 0.2720091640949249, val metric: 0.6822032099184783\n",
      "[14:57:20] Early stopping: val loss: 0.26766759157180786, val metric: 0.6720713739809782\n",
      "[14:57:20] ===== Start working with \u001b[1mfold 3\u001b[0m for \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_0\u001b[0m =====\n",
      "[14:57:21] Epoch: 0, train loss: 0.2702709436416626, val loss: 0.2615496516227722, val metric: 0.7016389266304347\n",
      "[14:57:21] Epoch: 1, train loss: 0.2521918714046478, val loss: 0.2561715841293335, val metric: 0.7150401239809783\n",
      "[14:57:22] Epoch: 2, train loss: 0.24233928322792053, val loss: 0.2551616132259369, val metric: 0.7239884086277175\n",
      "[14:57:22] Early stopping: val loss: 0.2555495500564575, val metric: 0.7176938264266304\n",
      "[14:57:22] ===== Start working with \u001b[1mfold 4\u001b[0m for \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_0\u001b[0m =====\n",
      "[14:57:22] Epoch: 0, train loss: 0.2711535096168518, val loss: 0.2612263262271881, val metric: 0.701416015625\n",
      "[14:57:23] Epoch: 1, train loss: 0.25223615765571594, val loss: 0.254193514585495, val metric: 0.7256443189538043\n",
      "[14:57:23] Epoch: 2, train loss: 0.24515873193740845, val loss: 0.25219428539276123, val metric: 0.7378141983695652\n",
      "[14:57:23] Early stopping: val loss: 0.25415900349617004, val metric: 0.7281971807065218\n",
      "[14:57:23] Fitting \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_0\u001b[0m finished. score = \u001b[1m0.7241582599492865\u001b[0m\n",
      "[14:57:23] \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_0\u001b[0m fitting and predicting completed\n",
      "[14:57:23] Time left 285.60 secs\n",
      "\n",
      "[14:57:23] \u001b[1mLayer 1 training completed.\u001b[0m\n",
      "\n",
      "[14:57:23] \u001b[1mAutoml preset training completed in 14.40 seconds\u001b[0m\n",
      "\n",
      "[14:57:23] Model description:\n",
      "Final prediction for new objects (level 0) = \n",
      "\t 1.00000 * (5 averaged models Lvl_0_Pipe_0_Mod_0_Tuned_TorchNN_0) \n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "array([[0.04496425],\n",
       "       [0.03032025],\n",
       "       [0.03665409],\n",
       "       ...,\n",
       "       [0.05365612],\n",
       "       [0.16432838],\n",
       "       [0.1691863 ]], dtype=float32)"
      ]
     },
     "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "automl = TabularAutoML(\n",
    "    **default_lama_params,\n",
    "    general_params={\"use_algos\": [[SimpleNet]]},\n",
    "    nn_params={\n",
    "        **default_nn_params,\n",
    "        \"n_epochs\": 3,\n",
    "        \"tuned\": True,\n",
    "        \"tuning_params\": {\n",
    "            \"max_tuning_iter\": 5,\n",
    "            \"max_tuning_time\": 3600,\n",
    "            \"fit_on_holdout\": True\n",
    "        },\n",
    "        \"optimization_search_space\": my_opt_space,\n",
    "    },\n",
    ")\n",
    "automl.fit_predict(tr_data, roles = roles, verbose = 3)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "1000351d",
   "metadata": {},
   "source": [
    "#### 4.2.3 One more example: tuning NODE params"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "fcbad7ce",
   "metadata": {},
   "outputs": [],
   "source": [
    "TIMEOUT = 3000"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "a3bba8dc",
   "metadata": {},
   "outputs": [],
   "source": [
    "default_lama_params = {\n",
    "    \"task\": task, \n",
    "    \"timeout\": TIMEOUT,\n",
    "    \"cpu_limit\": N_THREADS,\n",
    "    \"reader_params\": {'n_jobs': N_THREADS, 'cv': N_FOLDS, 'random_state': RANDOM_STATE}\n",
    "}\n",
    "\n",
    "default_nn_params = {\n",
    "    \"bs\": 512, \"num_workers\": 0, \"path_to_save\": None, \"n_epochs\": 10, \"freeze_defaults\": True\n",
    "}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "id": "ec77132c",
   "metadata": {},
   "outputs": [],
   "source": [
    "def my_opt_space_NODE(trial: optuna.trial.Trial, estimated_n_trials, suggested_params):\n",
    "    '''Describe the NODE hyperparameter search space for Optuna tuning.'''\n",
    "    # start from the defaults suggested by LightAutoML\n",
    "    trial_values = copy(suggested_params)\n",
    "\n",
    "    trial_values[\"layer_dim\"] = trial.suggest_categorical(\n",
    "        \"layer_dim\", [2 ** i for i in range(8, 10)]\n",
    "    )\n",
    "    trial_values[\"use_original_head\"] = trial.suggest_categorical(\n",
    "        \"use_original_head\", [True, False]\n",
    "    )\n",
    "    trial_values[\"num_layers\"] = trial.suggest_int(\n",
    "        \"num_layers\", 1, 3\n",
    "    )\n",
    "    trial_values[\"drop_rate\"] = trial.suggest_float(\n",
    "        \"drop_rate\", 0.0, 0.3\n",
    "    )\n",
    "    trial_values[\"tree_dim\"] = trial.suggest_int(\n",
    "        \"tree_dim\", 1, 3\n",
    "    )\n",
    "    return trial_values"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "ba312d42",
   "metadata": {},
   "outputs": [],
   "source": [
    "automl = TabularAutoML(\n",
    "    task = task, \n",
    "    timeout = TIMEOUT,\n",
    "    cpu_limit = N_THREADS,\n",
    "    general_params = {\"use_algos\": [[\"node_tuned\"]]}, # ['nn', 'mlp', 'dense', 'denselight', 'resnet', 'snn'] or custom torch model\n",
    "    nn_params = {\"n_epochs\": 10, \"bs\": 512, \"num_workers\": 0, \"path_to_save\": None, \"freeze_defaults\": True, \"optimization_search_space\": my_opt_space_NODE,},\n",
    "    nn_pipeline_params = {\"use_qnt\": True, \"use_te\": False},\n",
    "    reader_params = {'n_jobs': N_THREADS, 'cv': N_FOLDS, 'random_state': RANDOM_STATE}\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "id": "3df2104f",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[14:57:24] Stdout logging level is INFO2.\n",
      "[14:57:24] Task: binary\n",
      "\n",
      "[14:57:24] Start automl preset with listed constraints:\n",
      "[14:57:24] - time: 3000.00 seconds\n",
      "[14:57:24] - CPU: 4 cores\n",
      "[14:57:24] - memory: 16 GB\n",
      "\n",
      "[14:57:24] \u001b[1mTrain data shape: (8000, 122)\u001b[0m\n",
      "\n",
      "[14:57:25] Layer \u001b[1m1\u001b[0m train process start. Time left 2999.22 secs\n",
      "[14:57:25] Start hyperparameters optimization for \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_node_tuned_0\u001b[0m ... Time budget is 1574.34 secs\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Optimization Progress: 100%|█████████████████████████████████████████████████████████████████████████████████| 25/25 [03:48<00:00,  9.14s/it, best_trial=13, best_value=0.732]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[15:01:14] Hyperparameters optimization for \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_node_tuned_0\u001b[0m completed\n",
      "[15:01:14] The set of hyperparameters \u001b[1m{'layer_dim': 512, 'use_original_head': False, 'num_layers': 3, 'drop_rate': 0.1310638585198816, 'tree_dim': 3}\u001b[0m\n",
      " achieve 0.7315 auc\n",
      "[15:01:14] Start fitting \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_node_tuned_0\u001b[0m ...\n",
      "[15:01:14] ===== Start working with \u001b[1mfold 0\u001b[0m for \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_node_tuned_0\u001b[0m =====\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[15:01:27] ===== Start working with \u001b[1mfold 1\u001b[0m for \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_node_tuned_0\u001b[0m =====\n",
      "[15:01:41] ===== Start working with \u001b[1mfold 2\u001b[0m for \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_node_tuned_0\u001b[0m =====\n",
      "[15:01:54] ===== Start working with \u001b[1mfold 3\u001b[0m for \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_node_tuned_0\u001b[0m =====\n",
      "[15:02:07] ===== Start working with \u001b[1mfold 4\u001b[0m for \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_node_tuned_0\u001b[0m =====\n",
      "[15:02:20] Fitting \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_node_tuned_0\u001b[0m finished. score = \u001b[1m0.6942477367184283\u001b[0m\n",
      "[15:02:20] \u001b[1mLvl_0_Pipe_0_Mod_0_Tuned_TorchNN_node_tuned_0\u001b[0m fitting and predicting completed\n",
      "[15:02:20] Time left 2703.48 secs\n",
      "\n",
      "[15:02:20] \u001b[1mLayer 1 training completed.\u001b[0m\n",
      "\n",
      "[15:02:20] \u001b[1mAutoml preset training completed in 296.53 seconds\u001b[0m\n",
      "\n",
      "[15:02:20] Model description:\n",
      "Final prediction for new objects (level 0) = \n",
      "\t 1.00000 * (5 averaged models Lvl_0_Pipe_0_Mod_0_Tuned_TorchNN_node_tuned_0) \n",
      "\n"
     ]
    }
   ],
   "source": [
    "oof_pred = automl.fit_predict(tr_data, roles = roles, verbose = 2)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "1f82ac87",
   "metadata": {},
   "source": [
    "### 4.3 Several models"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "f70cafd6",
   "metadata": {},
   "source": [
    "If you have several neural networks, you can either define one set of parameters shared by all of them or specify a unique set for each one, as shown below.  \n",
    "**Note:** numbering starts at 0. Each id (a number given as a string) corresponds to the model's position in *the list of used neural networks*."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "id": "3d282b4a",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[15:02:20] Stdout logging level is INFO3.\n",
      "[15:02:20] Task: binary\n",
      "\n",
      "[15:02:20] Start automl preset with listed constraints:\n",
      "[15:02:20] - time: 3000.00 seconds\n",
      "[15:02:20] - CPU: 4 cores\n",
      "[15:02:20] - memory: 16 GB\n",
      "\n",
      "[15:02:20] \u001b[1mTrain data shape: (8000, 122)\u001b[0m\n",
      "\n",
      "[15:02:21] Feats was rejected during automatic roles guess: []\n",
      "[15:02:21] Layer \u001b[1m1\u001b[0m train process start. Time left 2999.21 secs\n",
      "[15:02:21] Training until validation scores don't improve for 200 rounds\n",
      "[15:02:24] \u001b[1mSelector_LightGBM\u001b[0m fitting and predicting completed\n",
      "[15:02:25] Start fitting \u001b[1mLvl_0_Pipe_0_Mod_0_LightGBM\u001b[0m ...\n",
      "[15:02:25] ===== Start working with \u001b[1mfold 0\u001b[0m for \u001b[1mLvl_0_Pipe_0_Mod_0_LightGBM\u001b[0m =====\n",
      "[15:02:25] Training until validation scores don't improve for 200 rounds\n",
      "[15:02:27] ===== Start working with \u001b[1mfold 1\u001b[0m for \u001b[1mLvl_0_Pipe_0_Mod_0_LightGBM\u001b[0m =====\n",
      "[15:02:27] Training until validation scores don't improve for 200 rounds\n",
      "[15:02:31] ===== Start working with \u001b[1mfold 2\u001b[0m for \u001b[1mLvl_0_Pipe_0_Mod_0_LightGBM\u001b[0m =====\n",
      "[15:02:31] Training until validation scores don't improve for 200 rounds\n",
      "[15:02:33] ===== Start working with \u001b[1mfold 3\u001b[0m for \u001b[1mLvl_0_Pipe_0_Mod_0_LightGBM\u001b[0m =====\n",
      "[15:02:33] Training until validation scores don't improve for 200 rounds\n",
      "[15:02:37] ===== Start working with \u001b[1mfold 4\u001b[0m for \u001b[1mLvl_0_Pipe_0_Mod_0_LightGBM\u001b[0m =====\n",
      "[15:02:37] Training until validation scores don't improve for 200 rounds\n",
      "[15:02:39] Fitting \u001b[1mLvl_0_Pipe_0_Mod_0_LightGBM\u001b[0m finished. score = \u001b[1m0.7324164765495265\u001b[0m\n",
      "[15:02:39] \u001b[1mLvl_0_Pipe_0_Mod_0_LightGBM\u001b[0m fitting and predicting completed\n",
      "[15:02:39] Time left 2981.34 secs\n",
      "\n",
      "[15:02:39] Start fitting \u001b[1mLvl_0_Pipe_1_Mod_0_TorchNN_mlp_0\u001b[0m ...\n",
      "[15:02:39] ===== Start working with \u001b[1mfold 0\u001b[0m for \u001b[1mLvl_0_Pipe_1_Mod_0_TorchNN_mlp_0\u001b[0m =====\n",
      "[15:02:39] Epoch: 0, train loss: 0.27922049164772034, val loss: 0.30915024876594543, val metric: 0.5770001764036115\n",
      "[15:02:39] Epoch: 1, train loss: 0.2804635167121887, val loss: 0.30738765001296997, val metric: 0.5919196454821967\n",
      "[15:02:40] Early stopping: val loss: 0.3083692789077759, val metric: 0.5864190601429403\n",
      "[15:02:40] ===== Start working with \u001b[1mfold 1\u001b[0m for \u001b[1mLvl_0_Pipe_1_Mod_0_TorchNN_mlp_0\u001b[0m =====\n",
      "[15:02:40] Epoch: 0, train loss: 0.2781796455383301, val loss: 0.2599707245826721, val metric: 0.6234980044157609\n",
      "[15:02:40] Epoch: 1, train loss: 0.27900466322898865, val loss: 0.25812670588493347, val metric: 0.6304188603940217\n",
      "[15:02:40] Early stopping: val loss: 0.259158730506897, val metric: 0.6298456606657609\n",
      "[15:02:40] ===== Start working with \u001b[1mfold 2\u001b[0m for \u001b[1mLvl_0_Pipe_1_Mod_0_TorchNN_mlp_0\u001b[0m =====\n",
      "[15:02:40] Epoch: 0, train loss: 0.27802005410194397, val loss: 0.26105043292045593, val metric: 0.54180908203125\n",
      "[15:02:41] Epoch: 1, train loss: 0.2754856050014496, val loss: 0.26127585768699646, val metric: 0.5531536599864131\n",
      "[15:02:41] Early stopping: val loss: 0.2610853612422943, val metric: 0.5479577105978262\n",
      "[15:02:41] ===== Start working with \u001b[1mfold 3\u001b[0m for \u001b[1mLvl_0_Pipe_1_Mod_0_TorchNN_mlp_0\u001b[0m =====\n",
      "[15:02:41] Epoch: 0, train loss: 0.2771044075489044, val loss: 0.2935408353805542, val metric: 0.5986620032269022\n",
      "[15:02:41] Epoch: 1, train loss: 0.2794908881187439, val loss: 0.292603075504303, val metric: 0.5987601902173911\n",
      "[15:02:41] Early stopping: val loss: 0.2930262088775635, val metric: 0.6013183593749999\n",
      "[15:02:41] ===== Start working with \u001b[1mfold 4\u001b[0m for \u001b[1mLvl_0_Pipe_1_Mod_0_TorchNN_mlp_0\u001b[0m =====\n",
      "[15:02:41] Epoch: 0, train loss: 0.27787843346595764, val loss: 0.2770363688468933, val metric: 0.5949680494225544\n",
      "[15:02:41] Epoch: 1, train loss: 0.27761200070381165, val loss: 0.2755982279777527, val metric: 0.5874899159307065\n",
      "[15:02:41] Early stopping: val loss: 0.27642762660980225, val metric: 0.5912050993546196\n",
      "[15:02:42] Fitting \u001b[1mLvl_0_Pipe_1_Mod_0_TorchNN_mlp_0\u001b[0m finished. score = \u001b[1m0.5890259518134635\u001b[0m\n",
      "[15:02:42] \u001b[1mLvl_0_Pipe_1_Mod_0_TorchNN_mlp_0\u001b[0m fitting and predicting completed\n",
      "[15:02:42] Start fitting \u001b[1mLvl_0_Pipe_1_Mod_1_TorchNN_dense_1\u001b[0m ...\n",
      "[15:02:42] ===== Start working with \u001b[1mfold 0\u001b[0m for \u001b[1mLvl_0_Pipe_1_Mod_1_TorchNN_dense_1\u001b[0m =====\n",
      "[15:02:42] Epoch: 0, train loss: 0.27331802248954773, val loss: 0.3054792881011963, val metric: 0.6767831465058721\n",
      "[15:02:42] Epoch: 1, train loss: 0.24652224779129028, val loss: 0.28541794419288635, val metric: 0.7536603749378579\n",
      "[15:02:42] Epoch: 2, train loss: 0.22167454659938812, val loss: 0.29559990763664246, val metric: 0.7280871968397026\n",
      "[15:02:42] Epoch: 3, train loss: 0.18925495445728302, val loss: 0.32088974118232727, val metric: 0.7067904699285298\n",
      "[15:02:43] Epoch: 4, train loss: 0.15982066094875336, val loss: 0.3512553572654724, val metric: 0.7054808067525163\n",
      "[15:02:43] Early stopping: val loss: 0.2940850555896759, val metric: 0.733935243837901\n",
      "[15:02:43] ===== Start working with \u001b[1mfold 1\u001b[0m for \u001b[1mLvl_0_Pipe_1_Mod_1_TorchNN_dense_1\u001b[0m =====\n",
      "[15:02:43] Epoch: 0, train loss: 0.2741259038448334, val loss: 0.2579110264778137, val metric: 0.714859672214674\n",
      "[15:02:43] Epoch: 1, train loss: 0.24377931654453278, val loss: 0.2441163808107376, val metric: 0.7128348972486412\n",
      "[15:02:43] Epoch: 2, train loss: 0.21424879133701324, val loss: 0.2476673424243927, val metric: 0.6905942170516304\n",
      "[15:02:44] Epoch: 3, train loss: 0.18909378349781036, val loss: 0.2624853551387787, val metric: 0.6963155995244565\n",
      "[15:02:44] Epoch: 4, train loss: 0.15743489563465118, val loss: 0.2790760099887848, val metric: 0.6850798233695653\n",
      "[15:02:44] Early stopping: val loss: 0.24882426857948303, val metric: 0.7349694293478262\n",
      "[15:02:44] ===== Start working with \u001b[1mfold 2\u001b[0m for \u001b[1mLvl_0_Pipe_1_Mod_1_TorchNN_dense_1\u001b[0m =====\n",
      "[15:02:44] Epoch: 0, train loss: 0.2707020044326782, val loss: 0.2598980665206909, val metric: 0.6082498301630435\n",
      "[15:02:44] Epoch: 1, train loss: 0.2435956597328186, val loss: 0.2571799159049988, val metric: 0.6500668733016304\n",
      "[15:02:45] Epoch: 2, train loss: 0.21632780134677887, val loss: 0.2744308412075043, val metric: 0.630631156589674\n",
      "[15:02:45] Epoch: 3, train loss: 0.18905353546142578, val loss: 0.2583690583705902, val metric: 0.6252043350883152\n",
      "[15:02:45] Epoch: 4, train loss: 0.1545938104391098, val loss: 0.31239771842956543, val metric: 0.6080746858016305\n",
      "[15:02:45] Early stopping: val loss: 0.25516635179519653, val metric: 0.6447541610054348\n",
      "[15:02:45] ===== Start working with \u001b[1mfold 3\u001b[0m for \u001b[1mLvl_0_Pipe_1_Mod_1_TorchNN_dense_1\u001b[0m =====\n",
      "[15:02:45] Epoch: 0, train loss: 0.27545467019081116, val loss: 0.28978264331817627, val metric: 0.6915867017663043\n",
      "[15:02:46] Epoch: 1, train loss: 0.24367769062519073, val loss: 0.2738344967365265, val metric: 0.7211224099864131\n",
      "[15:02:46] Epoch: 2, train loss: 0.21612469851970673, val loss: 0.2693524658679962, val metric: 0.7317690641983695\n",
      "[15:02:46] Epoch: 3, train loss: 0.18013358116149902, val loss: 0.31211358308792114, val metric: 0.7004659901494565\n",
      "[15:02:46] Epoch: 4, train loss: 0.16005361080169678, val loss: 0.31500178575515747, val metric: 0.7096371858016304\n",
      "[15:02:46] Early stopping: val loss: 0.27935606241226196, val metric: 0.7314346976902173\n",
      "[15:02:46] ===== Start working with \u001b[1mfold 4\u001b[0m for \u001b[1mLvl_0_Pipe_1_Mod_1_TorchNN_dense_1\u001b[0m =====\n",
      "[15:02:47] Epoch: 0, train loss: 0.2715047299861908, val loss: 0.27490749955177307, val metric: 0.672867484714674\n",
      "[15:02:47] Epoch: 1, train loss: 0.24714724719524384, val loss: 0.2625226378440857, val metric: 0.7219928243885869\n",
      "[15:02:47] Epoch: 2, train loss: 0.21466588973999023, val loss: 0.26622310280799866, val metric: 0.7148384425951086\n",
      "[15:02:47] Epoch: 3, train loss: 0.18601463735103607, val loss: 0.28521138429641724, val metric: 0.6925048828124999\n",
      "[15:02:47] Epoch: 4, train loss: 0.15070100128650665, val loss: 0.2958201467990875, val metric: 0.6602305536684783\n",
      "[15:02:47] Early stopping: val loss: 0.2677777111530304, val metric: 0.7166058084239131\n",
      "[15:02:47] Fitting \u001b[1mLvl_0_Pipe_1_Mod_1_TorchNN_dense_1\u001b[0m finished. score = \u001b[1m0.7086451902861568\u001b[0m\n",
      "[15:02:47] \u001b[1mLvl_0_Pipe_1_Mod_1_TorchNN_dense_1\u001b[0m fitting and predicting completed\n",
      "[15:02:47] Time left 2973.02 secs\n",
      "\n",
      "[15:02:47] \u001b[1mLayer 1 training completed.\u001b[0m\n",
      "\n",
      "[15:02:47] Blending: optimization starts with equal weights. Score = \u001b[1m0.7380476\u001b[0m\n",
      "[15:02:47] Blending: iteration \u001b[1m0\u001b[0m: score = \u001b[1m0.7394669\u001b[0m, weights = \u001b[1m[0.45736045 0.0553336  0.487306  ]\u001b[0m\n",
      "[15:02:48] Blending: iteration \u001b[1m1\u001b[0m: score = \u001b[1m0.7395753\u001b[0m, weights = \u001b[1m[0.4892029  0.         0.51079714]\u001b[0m\n",
      "[15:02:48] Blending: no improvements for score. Terminated.\n",
      "\n",
      "[15:02:48] Blending: best score = \u001b[1m0.7395753\u001b[0m, best weights = \u001b[1m[0.4892029  0.         0.51079714]\u001b[0m\n",
      "[15:02:48] \u001b[1mAutoml preset training completed in 27.27 seconds\u001b[0m\n",
      "\n",
      "[15:02:48] Model description:\n",
      "Final prediction for new objects (level 0) = \n",
      "\t 0.48920 * (5 averaged models Lvl_0_Pipe_0_Mod_0_LightGBM) +\n",
      "\t 0.51080 * (5 averaged models Lvl_0_Pipe_1_Mod_1_TorchNN_dense_1) \n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "array([[0.06395157],\n",
       "       [0.04285344],\n",
       "       [0.04808115],\n",
       "       ...,\n",
       "       [0.04276791],\n",
       "       [0.19339147],\n",
       "       [0.10395089]], dtype=float32)"
      ]
     },
     "execution_count": 26,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "automl = TabularAutoML(\n",
    "    **default_lama_params,\n",
    "    general_params = {\"use_algos\": [[\"lgb\", \"mlp\", \"dense\"]]},\n",
    "    nn_params = {\"0\": {**default_nn_params, \"n_epochs\": 2},\n",
    "                 \"1\": {**default_nn_params, \"n_epochs\": 5}},\n",
    ")\n",
    "automl.fit_predict(tr_data, roles = roles, verbose = 3)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "tmp-venv2",
   "language": "python",
   "name": "tmp-venv2"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.12"
  },
  "papermill": {
   "default_parameters": {},
   "duration": 1531.539656,
   "end_time": "2021-06-22T20:35:52.076563",
   "environment_variables": {},
   "exception": null,
   "input_path": "__notebook__.ipynb",
   "output_path": "__notebook__.ipynb",
   "parameters": {},
   "start_time": "2021-06-22T20:10:20.536907",
   "version": "2.3.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
