{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "name": "W5_Tutorial2.ipynb",
      "provenance": [],
      "collapsed_sections": []
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "accelerator": "GPU"
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "gvoa11a7eS91"
      },
      "source": [
        "# CIS 522 Week 5: Regularization\n",
        "\n",
        "\n",
        "__Instructor:__ Lyle Ungar\n",
        "\n",
        "__Content creators:__ Ravi Teja Konkimalla, Mohitrajhu Lingan Kumaraian"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "oI04YQ_2IVm8"
      },
      "source": [
        "#### Ensure you're running a GPU notebook.\n",
        "\n",
        "From \"Runtime\" in the drop-down menu above, click \"Change runtime type\". Ensure that \"Hardware Accelerator\" says \"GPU\".\n",
        "\n",
        "#### Ensure you can save!\n",
        "\n",
        "From \"File\", click \"Save a copy in Drive\""
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "QcARZs7tbHIi",
        "cellView": "form"
      },
      "source": [
        "#@title Import Libraries\n",
        "from __future__ import print_function\n",
        "import torch\n",
        "import pathlib\n",
        "import random\n",
        "import torch.nn as nn\n",
        "import torch.nn.functional as F\n",
        "import torch.optim as optim\n",
        "from torchvision import datasets, transforms\n",
        "from torchvision.datasets import ImageFolder\n",
        "from torch.utils.data import DataLoader, TensorDataset\n",
        "import torch.nn.utils.prune as prune\n",
        "from torch.optim.lr_scheduler import StepLR\n",
        "import time\n",
        "import numpy as np\n",
        "import matplotlib.pyplot as plt\n",
        "import matplotlib.animation as animation\n",
        "import copy\n",
        "from tqdm import tqdm"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "OLIyaJBeIDs1",
        "cellView": "form"
      },
      "source": [
        "#@markdown What is your Pennkey and pod? (text, not numbers, e.g. bfranklin)\n",
        "my_pennkey = '' #@param {type:\"string\"}\n",
        "my_pod = 'Select' #@param ['Select', 'euclidean-wombat', 'sublime-newt', 'buoyant-unicorn', 'lackadaisical-manatee','indelible-stingray','superfluous-lyrebird','discreet-reindeer','quizzical-goldfish','astute-jellyfish','ubiquitous-cheetah','nonchalant-crocodile','fashionable-lemur','spiffy-eagle','electric-emu','quotidian-lion']\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "LU2phW9PrXnP"
      },
      "source": [
        "#Question of the Week\n",
        "Why does it work better to regularize an overparameterized ANN than to start with a smaller one? [Think about the regularization methods you know.]\n",
        "Each group has a 10-minute discussion."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "6QSIMHQhr0T9",
        "cellView": "form"
      },
      "source": [
        "#@markdown Summarize your discussion\n",
        "question_of_the_week = '' #@param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "GvXxa4b2l4m4"
      },
      "source": [
        "# Setup\n",
        "Note that some of the code for today can take up to an hour to run. We have therefore \"hidden\" that code and shown the resulting outputs.\n",
        "\n",
        "[Here](https://docs.google.com/presentation/d/1n4eA5VGG8ab0mkW1kJK5egaldJR4cnpFAHDVbkVPnRI/edit#slide=id.gb88533964a_0_198) are the slides for today's videos (in case you want to take notes). **Do not read them now.**"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "wDBtaMET-fNA",
        "cellView": "form"
      },
      "source": [
        "# @title Figure Settings\n",
        "import ipywidgets as widgets\n",
        "%matplotlib inline \n",
        "fig_w, fig_h = (8, 6)\n",
        "plt.rcParams.update({'figure.figsize': (fig_w, fig_h)})\n",
        "%config InlineBackend.figure_format = 'retina'\n",
        "SMALL_SIZE = 12\n",
        "\n",
        "plt.rcParams.update(plt.rcParamsDefault)\n",
        "plt.rc('animation', html='jshtml')\n",
        "plt.rc('font', size=SMALL_SIZE)          # controls default text sizes\n",
        "plt.rc('axes', titlesize=SMALL_SIZE)     # fontsize of the axes title\n",
        "plt.rc('axes', labelsize=SMALL_SIZE)    # fontsize of the x and y labels\n",
        "plt.rc('xtick', labelsize=SMALL_SIZE)    # fontsize of the tick labels\n",
        "plt.rc('ytick', labelsize=SMALL_SIZE)    # fontsize of the tick labels\n",
        "plt.rc('legend', fontsize=SMALL_SIZE)    # legend fontsize\n",
        "plt.rc('figure', titlesize=SMALL_SIZE)  # fontsize of the figure title"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "mvhx88j7e6m2",
        "cellView": "form"
      },
      "source": [
        "# @title Loading Animal Faces data\n",
        "%%capture\n",
        "!rm -r AnimalFaces32x32/\n",
        "!git clone https://github.com/arashash/AnimalFaces32x32\n",
        "!rm -r afhq/\n",
        "!unzip ./AnimalFaces32x32/afhq_32x32.zip"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "RMuvvw3VHm1Z",
        "cellView": "form"
      },
      "source": [
        "#@title Seeding for Reproducibility\n",
        "seed = 90108\n",
        "random.seed(seed)\n",
        "np.random.seed(seed)\n",
        "torch.manual_seed(seed)\n",
        "torch.cuda.manual_seed(seed)\n",
        "torch.cuda.manual_seed_all(seed)\n",
        "torch.backends.cudnn.deterministic = True\n",
        "torch.backends.cudnn.benchmark = False  # benchmark=True selects algorithms non-deterministically\n",
        "torch.backends.cudnn.enabled = True\n",
        "torch.set_deterministic(True)\n",
        "def seed_worker(worker_id):\n",
        "    worker_seed = seed + worker_id\n",
        "    np.random.seed(worker_seed)\n",
        "    random.seed(worker_seed)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "RxCVV-2yt-GX",
        "cellView": "form"
      },
      "source": [
        "# @title Helper functions\r\n",
        "def imshow(img):\r\n",
        "    img = img / 2 + 0.5  # unnormalize from [-1, 1] back to [0, 1]\r\n",
        "    npimg = img.numpy()\r\n",
        "    plt.imshow(np.transpose(npimg, (1, 2, 0)))\r\n",
        "    plt.axis(False)\r\n",
        "    plt.show()\r\n",
        "\r\n",
        "def train(args, model, device, train_loader, optimizer, epoch,reg_function1=None,reg_function2=None,criterion=F.nll_loss):\r\n",
        "    model.train()\r\n",
        "    for batch_idx, (data, target) in enumerate(train_loader):\r\n",
        "        data, target = data.to(device), target.to(device)\r\n",
        "        optimizer.zero_grad()\r\n",
        "        output = model(data)\r\n",
        "        if reg_function1 is None:\r\n",
        "          loss = criterion(output, target)\r\n",
        "        elif reg_function2 is None:\r\n",
        "          loss = criterion(output, target)+args['lambda']*reg_function1(model)\r\n",
        "        else:\r\n",
        "          loss = criterion(output, target)+args['lambda1']*reg_function1(model)+args['lambda2']*reg_function2(model)\r\n",
        "        loss.backward()\r\n",
        "        optimizer.step()\r\n",
        "        # if (batch_idx % args['log_interval'] == 0 and batch_idx != 0):\r\n",
        "            # print('Train Epoch: {} [{}/{} ({:.0f}%)]\\tLoss: {:.6f}'.format(\r\n",
        "                # epoch, batch_idx * len(data), len(train_loader.dataset),\r\n",
        "                # 100. * batch_idx / len(train_loader), loss.item()))\r\n",
        "\r\n",
        "def test(model, device, test_loader, loader = 'Test',criterion=F.nll_loss):\r\n",
        "    model.eval()\r\n",
        "    test_loss = 0\r\n",
        "    correct = 0\r\n",
        "    with torch.no_grad():\r\n",
        "        for data, target in test_loader:\r\n",
        "            data, target = data.to(device), target.to(device)\r\n",
        "            output = model(data)\r\n",
        "            test_loss += criterion(output, target, reduction='sum').item()  # sum up batch loss\r\n",
        "            pred = output.argmax(dim=1, keepdim=True)  # get the index of the max log-probability\r\n",
        "            correct += pred.eq(target.view_as(pred)).sum().item()\r\n",
        "\r\n",
        "    test_loss /= len(test_loader.dataset)\r\n",
        "\r\n",
        "    # print('\\n{} set: Average loss: {:.4f}, Accuracy: {}/{} ({:.4f}%)\\n'.format(\r\n",
        "        # loader, test_loss, correct, len(test_loader.dataset),\r\n",
        "        # 100. * correct / len(test_loader.dataset)))\r\n",
        "    return 100. * correct / len(test_loader.dataset)\r\n",
        "\r\n",
        "def main(args, model,train_loader,val_loader,test_data,reg_function1=None,reg_function2=None,criterion=F.nll_loss):\r\n",
        "  \"\"\"\r\n",
        "  Trains the model with train_loader and tests the learned model using val_loader\r\n",
        "  \"\"\"\r\n",
        "\r\n",
        "  use_cuda = not args['no_cuda'] and torch.cuda.is_available()\r\n",
        "  device = torch.device('cuda' if use_cuda else 'cpu') \r\n",
        "\r\n",
        "  model = model.to(device)\r\n",
        "  optimizer = optim.SGD(model.parameters(), lr=args['lr'], momentum=args['momentum'])\r\n",
        "\r\n",
        "  best_acc  = 0.0\r\n",
        "  best_epoch = 0\r\n",
        "\r\n",
        "  val_acc_list, train_acc_list,param_norm_list = [], [], []\r\n",
        "  for epoch in tqdm(range(args['epochs'])):\r\n",
        "      train(args, model, device, train_loader, optimizer, epoch,reg_function1=reg_function1,reg_function2=reg_function2)\r\n",
        "      train_acc = test(model,device,train_loader, 'Train')\r\n",
        "      val_acc = test(model,device,val_loader, 'Val')\r\n",
        "      param_norm = calculate_frobenius_norm(model)\r\n",
        "      train_acc_list.append(train_acc)\r\n",
        "      val_acc_list.append(val_acc)\r\n",
        "      param_norm_list.append(param_norm)\r\n",
        "      if val_acc > best_acc:  # track the best validation accuracy and epoch\r\n",
        "        best_acc = val_acc\r\n",
        "        best_epoch = epoch\r\n",
        "\r\n",
        "  return val_acc_list, train_acc_list, param_norm_list, model, best_epoch\r\n",
        "\r\n",
        "def calculate_frobenius_norm(model):\r\n",
        "    norm = 0.0\r\n",
        "\r\n",
        "    for name,param in model.named_parameters():\r\n",
        "        norm += torch.norm(param).data**2\r\n",
        "    return norm**0.5\r\n",
        "\r\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "QSfEJun00dwZ",
        "cellView": "form"
      },
      "source": [
        "# @title Network Classes for Animal Faces\n",
        "class Animal_Net(nn.Module):\n",
        "    def __init__(self):\n",
        "        super(Animal_Net, self).__init__()\n",
        "        self.fc1 = nn.Linear(3*32*32, 128)\n",
        "        self.fc2 = nn.Linear(128, 32)\n",
        "        self.fc3 = nn.Linear(32, 3)\n",
        "\n",
        "    def forward(self, x):\n",
        "        x = x.view(x.shape[0],-1)\n",
        "        x = F.relu(self.fc1(x))\n",
        "        x = F.relu(self.fc2(x))\n",
        "        x = self.fc3(x)\n",
        "        output = F.log_softmax(x, dim=1)\n",
        "        return output\n",
        "\n",
        "\n",
        "class Big_Animal_Net(nn.Module):\n",
        "    def __init__(self):\n",
        "        torch.manual_seed(104)\n",
        "        super(Big_Animal_Net, self).__init__()\n",
        "        self.fc1 = nn.Linear(3*32*32, 124)\n",
        "        self.fc2 = nn.Linear(124, 64)\n",
        "        self.fc3 = nn.Linear(64, 3)\n",
        "\n",
        "    def forward(self, x):\n",
        "        x = x.view(x.shape[0],-1)\n",
        "        x = F.leaky_relu(self.fc1(x))\n",
        "        x = F.leaky_relu(self.fc2(x))\n",
        "        x = self.fc3(x)\n",
        "        output = F.log_softmax(x, dim=1)\n",
        "        return output"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "QGBBuMD3vSvT",
        "cellView": "form"
      },
      "source": [
        "# @title Dataloader\r\n",
        "batch_size = 128\r\n",
        "classes = ('cat', 'dog', 'wild')\r\n",
        "\r\n",
        "train_transform = transforms.Compose([\r\n",
        "     transforms.RandomRotation(10),\r\n",
        "     transforms.RandomHorizontalFlip(),\r\n",
        "     transforms.ToTensor(),\r\n",
        "     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))  \r\n",
        "     ])\r\n",
        "\r\n",
        "data_path = pathlib.Path('.')/'afhq' # using pathlib to be compatible with all OS's\r\n",
        "img_dataset = ImageFolder(data_path/'train', transform=train_transform)\r\n",
        "img_train_data, img_val_data,_ = torch.utils.data.random_split(img_dataset, [100,100,14430])\r\n",
        "\r\n",
        "train_loader = torch.utils.data.DataLoader(img_train_data,batch_size=batch_size,worker_init_fn=seed_worker)\r\n",
        "val_loader = torch.utils.data.DataLoader(img_val_data,batch_size=1000,worker_init_fn=seed_worker)\r\n",
        "\r\n",
        "test_transform = transforms.Compose([\r\n",
        "     transforms.ToTensor(),\r\n",
        "     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))\r\n",
        "     ])\r\n",
        "img_test_dataset = ImageFolder(data_path/'val', transform=test_transform)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "u4yoYTBYtQ76"
      },
      "source": [
        "#Section 0: Hyperparameter Tuning\n",
        "(15 min from start)"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "L6iiQE2zt77F"
      },
      "source": [
        "#@title Video : Tuning Methods\n",
        "try: t1;\n",
        "except NameError: t1=time.time()\n",
        "\n",
        "from IPython.display import YouTubeVideo\n",
        "video = YouTubeVideo(id=\"-ly5hwbpx-w\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "joL4AygZv96J"
      },
      "source": [
        "While completing the Kaggle competition as part of HW3, you will have noticed how difficult and time-consuming hyperparameter tuning can be. Although laborious, it is a major part of training any deep learning model and paramount for good generalization. There are a few techniques we can use to guide the search:\n",
        "\n",
        "*   Grid Search: Try all possible combinations of hyperparameters.\n",
        "*   Random Search: Randomly try different combinations of hyperparameters.\n",
        "*   Coordinate-wise Gradient Descent: Start at one set of hyperparameters and change one at a time, accepting any change that reduces your validation error.\n",
        "*   Bayesian Optimization / AutoML: Start from a set of hyperparameters that have worked well on a similar problem, then do some sort of local exploration (e.g. gradient descent) from there.\n",
        "\n",
        "There are lots of details, like what range to explore over, which parameter to optimize first, etc. Some hyperparameters don’t seem to matter much (people use a dropout of either 0.5 or 0, but not much else). Others can matter a lot more (e.g. the size and depth of the neural net). The key is to see what worked on similar problems.\n",
        "\n",
        "You can automate tuning of the network architecture itself using Neural Architecture Search (NAS), which designs new architectures from a few building blocks (linear layers, convolutional layers, etc.) and optimizes the design based on performance, using techniques such as grid search, reinforcement learning, gradient descent, and evolutionary algorithms. This obviously requires very high compute power. Read this [article](https://lilianweng.github.io/lil-log/2020/08/06/neural-architecture-search.html) to learn more about NAS.\n"
      ]
    },
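    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# A minimal sketch (not part of the tutorial pipeline) contrasting grid search\n",
        "# and random search over two hyperparameters; the candidate values below are\n",
        "# illustrative, not tuned for Animal_Net.\n",
        "import itertools\n",
        "import random\n",
        "\n",
        "lrs = [5e-4, 1e-3, 5e-3]\n",
        "batch_sizes = [32, 64, 128]\n",
        "\n",
        "# Grid search: every combination of the candidate values\n",
        "grid = list(itertools.product(lrs, batch_sizes))\n",
        "print(f'Grid search tries {len(grid)} configurations')\n",
        "\n",
        "# Random search: sample a fixed budget of combinations instead\n",
        "random.seed(0)\n",
        "budget = 4\n",
        "sampled = [(random.choice(lrs), random.choice(batch_sizes)) for _ in range(budget)]\n",
        "print(f'Random search tries {len(sampled)} configurations: {sampled}')"
      ],
      "execution_count": null,
      "outputs": []
    },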
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "3qaOATxHS8TR"
      },
      "source": [
        "#Section 1: Stochastic Gradient Descent\n",
        "\n",
        "(Time Estimate: 25 min from start)"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "DV1QFxoRpHm2",
        "cellView": "form"
      },
      "source": [
        "#@title Video : SGD\n",
        "try: t2;\n",
        "except NameError: t2=time.time()\n",
        "\n",
        "from IPython.display import YouTubeVideo\n",
        "video = YouTubeVideo(id=\"E3g2Z-ZqMZw\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "hxRoCUOaXAXz"
      },
      "source": [
        "## Learning Rate\n",
        "In the section below we will see how the learning rate can act as a regularizer while training a neural network. In summary:\n",
        "\n",
        "*   A smaller learning rate does not regularize well. Rather, it slowly converges into a deep local minimum.\n",
        "*   A larger learning rate regularizes well by skipping over narrow local minima and finding a broader, flatter minimum. Such minima may be more robust.\n",
        "\n",
        "But beware: taking a very large learning rate may result in overshooting or finding a really bad local minimum.\n",
        "\n",
        "In the block below we will train the Animal Net model with different learning rates and see how that affects regularization performance."
      ]
    },
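    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Toy illustration (not the Animal_Net experiment): plain gradient descent on\n",
        "# f(w) = w**2. Step sizes below 1.0 shrink w toward the minimum at 0, while a\n",
        "# step size above 1.0 overshoots further on every step and diverges.\n",
        "def toy_gd(lr, steps=50, w0=1.0):\n",
        "    w = w0\n",
        "    for _ in range(steps):\n",
        "        w = w - lr * 2 * w   # gradient of w**2 is 2*w\n",
        "    return w\n",
        "\n",
        "print(f'lr = 0.1: w = {toy_gd(0.1):.2e}')   # converges smoothly\n",
        "print(f'lr = 0.9: w = {toy_gd(0.9):.2e}')   # converges while oscillating\n",
        "print(f'lr = 1.1: w = {toy_gd(1.1):.2e}')   # overshoots and diverges"
      ],
      "execution_count": null,
      "outputs": []
    },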
    {
      "cell_type": "code",
      "metadata": {
        "id": "-sbY2JYF1maP",
        "cellView": "form"
      },
      "source": [
        "#@title Generating Data Loaders\r\n",
        "batch_size = 128\r\n",
        "train_transform = transforms.Compose([\r\n",
        "     transforms.ToTensor(),\r\n",
        "     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))    \r\n",
        "     ])\r\n",
        "\r\n",
        "data_path = pathlib.Path('.')/'afhq' # using pathlib to be compatible with all OS's\r\n",
        "img_dataset = ImageFolder(data_path/'train', transform=train_transform)\r\n",
        "img_train_data, img_val_data, = torch.utils.data.random_split(img_dataset, [11700,2930])\r\n",
        "\r\n",
        "full_train_loader = torch.utils.data.DataLoader(img_train_data,batch_size=batch_size,num_workers=4,worker_init_fn=seed_worker)\r\n",
        "full_val_loader = torch.utils.data.DataLoader(img_val_data,batch_size=1000,num_workers=4,worker_init_fn=seed_worker)\r\n",
        "\r\n",
        "test_transform = transforms.Compose([\r\n",
        "     transforms.ToTensor(),\r\n",
        "     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))\r\n",
        "     ])\r\n",
        "img_test_dataset = ImageFolder(data_path/'val', transform=test_transform)\r\n",
        "# img_test_loader = DataLoader(img_test_dataset, batch_size=batch_size,shuffle=False, num_workers=1)\r\n",
        "classes = ('cat', 'dog', 'wild')"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "FFWmkt9O-047"
      },
      "source": [
        "args = {'test_batch_size': 1000,\n",
        "        'epochs': 350,\n",
        "        'batch_size': 32,\n",
        "        'momentum': 0.99,\n",
        "        'no_cuda': False\n",
        "        }\n",
        "\n",
        "lr = [5e-4,1e-3, 5e-3]\n",
        "acc_dict = {}\n",
        "\n",
        "for i in range(len(lr)):\n",
        "    model = Animal_Net()\n",
        "    args['lr'] = lr[i]\n",
        "    val_acc, train_acc, param_norm,_,_ = main(args,model,train_loader,val_loader,img_test_dataset)\n",
        "    acc_dict['val_'+str(i)] = val_acc\n",
        "    acc_dict['train_'+str(i)] = train_acc\n",
        "    acc_dict['param_norm'+str(i)] = param_norm"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "80-gI9BJU1Dm",
        "cellView": "form"
      },
      "source": [
        "#@title Plot Train and Validation accuracy (Run me)\n",
        "plt.plot(acc_dict['val_0'], linestyle='dashed',label='lr = 5e-4 - validation', c = 'blue')\n",
        "plt.plot(acc_dict['train_0'],label = '5e-4 - train', c = 'blue')\n",
        "plt.plot(acc_dict['val_1'], linestyle='dashed',label='lr = 1e-3 - validation', c = 'green')\n",
        "plt.plot(acc_dict['train_1'],label='1e-3 - train', c = 'green')\n",
        "plt.plot(acc_dict['val_2'], linestyle='dashed',label='lr = 5e-3 - validation', c = 'purple')\n",
        "plt.plot(acc_dict['train_2'],label = '5e-3 - train', c = 'purple')\n",
        "plt.title('Optimal Learning Rate')\n",
        "plt.ylabel('Accuracy (%)')\n",
        "plt.xlabel('Epoch')\n",
        "print('Maximum Validation Accuracy obtained with lr = 5e-4: '+str(max(acc_dict['val_0'])))\n",
        "print('Maximum Validation Accuracy obtained with lr = 1e-3: '+str(max(acc_dict['val_1'])))\n",
        "print('Maximum Validation Accuracy obtained with lr = 5e-3: '+str(max(acc_dict['val_2'])))\n",
        "plt.legend()\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "qBYNBuov0rmU",
        "cellView": "form"
      },
      "source": [
        "#@title Plot parametric norms (Run me)\n",
        "plt.plot(acc_dict['param_norm0'],label='lr = 5e-4',c='blue')\n",
        "plt.plot(acc_dict['param_norm1'],label = 'lr = 1e-3',c='green')\n",
        "plt.plot(acc_dict['param_norm2'],label='lr = 5e-3', c='purple')\n",
        "plt.legend()\n",
        "plt.xlabel('epoch')\n",
        "plt.ylabel('parameter norms')\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ECKFAMOrN5Fy"
      },
      "source": [
        "In the model above, we observe something different from what we expected. Why do you think this is happening?"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "uBH50aWCOCnB",
        "cellView": "form"
      },
      "source": [
        "#@markdown Write down the answer\r\n",
        "learning_rate = '' #@param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "OySRfKnQGbhW"
      },
      "source": [
        "## Batch Size\n",
        "Batch size can, in some cases, also help regularize a model. A smaller batch size leads to noisier convergence and hence helps the optimizer settle into a broader local minimum, whereas a larger batch size leads to smoother convergence, making it easier to fall into a deep local minimum. This can be good or bad.\n",
        "\n",
        "In the block below we will train the Animal Net model with different batch sizes and see how that affects regularization performance."
      ]
    },
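    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# A small numpy sketch (a stand-in setup, not the Animal_Net pipeline) of why\n",
        "# smaller batches give noisier updates: the standard deviation of a minibatch\n",
        "# mean shrinks roughly as 1/sqrt(batch_size).\n",
        "import numpy as np\n",
        "\n",
        "rng = np.random.default_rng(0)\n",
        "per_example_grads = rng.normal(loc=1.0, scale=5.0, size=100_000)  # stand-in for per-example gradients\n",
        "\n",
        "for bs in (32, 128, 1024):\n",
        "    means = [rng.choice(per_example_grads, size=bs).mean() for _ in range(500)]\n",
        "    print(f'batch size {bs:5d}: std of minibatch mean = {np.std(means):.3f}')"
      ],
      "execution_count": null,
      "outputs": []
    },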
    {
      "cell_type": "code",
      "metadata": {
        "id": "2aIo8e8NQfWQ",
        "cellView": "form"
      },
      "source": [
        "#@title Dataset for Batch_size\r\n",
        "data_path = pathlib.Path('.')/'afhq' # using pathlib to be compatible with all OS's\r\n",
        "img_dataset = ImageFolder(data_path/'train', transform=train_transform)\r\n",
        "\r\n",
        "#Splitting dataset\r\n",
        "reg_train_data, reg_val_data,_ = torch.utils.data.random_split(img_dataset, [250,100,14280])\r\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "rQbO03j8GjUX",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "0bb0465c-0891-4e75-a41a-5dae10449f49"
      },
      "source": [
        "args = {'lr': 5e-3,\n",
        "        'epochs': 60,\n",
        "        'momentum': 0.99,\n",
        "        'no_cuda': False\n",
        "        }\n",
        "\n",
        "batch_sizes = [32,64,128]\n",
        "acc_dict = {}\n",
        "\n",
        "for i in range(len(batch_sizes)):\n",
        "    model = Animal_Net()\n",
        "    #Creating train_loader and Val_loader\n",
        "    reg_train_loader = torch.utils.data.DataLoader(reg_train_data,batch_size=batch_sizes[i],worker_init_fn=seed_worker)\n",
        "    reg_val_loader = torch.utils.data.DataLoader(reg_val_data,batch_size=1000,worker_init_fn=seed_worker)\n",
        "    val_acc, train_acc,param_norm,_,_ = main(args,model,reg_train_loader,reg_val_loader,img_test_dataset)\n",
        "    acc_dict['train_'+str(i)] = train_acc\n",
        "    acc_dict['val_'+str(i)] = val_acc\n",
        "    acc_dict['param_norm'+str(i)] = param_norm"
      ],
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "100%|██████████| 60/60 [00:14<00:00,  4.15it/s]\n",
            "100%|██████████| 60/60 [00:13<00:00,  4.40it/s]\n",
            "100%|██████████| 60/60 [00:13<00:00,  4.50it/s]\n"
          ],
          "name": "stderr"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ifx1rCTiG-tW",
        "cellView": "form"
      },
      "source": [
        "#@title Plot Train and Val curves\n",
        "plt.plot(acc_dict['train_0'], label='mb_size =' + str(batch_sizes[0]), c = 'blue')\n",
        "plt.plot(acc_dict['val_0'], linestyle='dashed', c = 'blue')\n",
        "\n",
        "plt.plot(acc_dict['train_1'], label='mb_size =' + str(batch_sizes[1]), c = 'orange')\n",
        "plt.plot(acc_dict['val_1'], linestyle='dashed', c = 'orange')\n",
        "plt.plot(acc_dict['train_2'], label='mb_size =' + str(batch_sizes[2]), c = 'green')\n",
        "plt.plot(acc_dict['val_2'], linestyle='dashed', c = 'green')\n",
        "print('Maximum validation accuracy for mini-batch size = 32: '+str(max(acc_dict['val_0'])))\n",
        "print('Maximum validation accuracy for mini-batch size = 64: '+str(max(acc_dict['val_1'])))\n",
        "print('Maximum validation accuracy for mini-batch size = 128: '+str(max(acc_dict['val_2'])))\n",
        "\n",
        "plt.title('Optimal Batch Size')\n",
        "plt.ylabel('Accuracy (%)')\n",
        "plt.xlabel('Epoch')\n",
        "plt.legend()\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "S6Kp_jf37nnS",
        "cellView": "form"
      },
      "source": [
        "#@title Plot Parametric Norms\r\n",
        "plt.plot(acc_dict['param_norm0'],c='blue',label='mb_size =' + str(batch_sizes[0]))\r\n",
        "plt.plot(acc_dict['param_norm1'],c='orange',label='mb_size =' + str(batch_sizes[1]))\r\n",
        "plt.plot(acc_dict['param_norm2'],c='green',label='mb_size =' + str(batch_sizes[2]))\r\n",
        "plt.xlabel('epoch')\r\n",
        "plt.ylabel('Parameter Norm')\r\n",
        "plt.legend()\r\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "l_Sbvul0O9AP"
      },
      "source": [
        "What observations can you make here for the different batch sizes? Why do you think this is happening?"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "tPsOclZ2PIvN",
        "cellView": "form"
      },
      "source": [
        "#@markdown Write down the answer\r\n",
        "batch_size = '' #@param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "i1dLTAENW0_q"
      },
      "source": [
        "#Section 2: Pruning\n",
        "(Time Estimate: 40 min from start)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ZWNq6aknCZQD"
      },
      "source": [
        "Before we dig deeper into pruning, let's take a small detour and calculate the inference time (the time taken for one forward pass at test time) and the number of parameters of the biggest model we trained this week.\n",
        "\n",
        "\n",
        "```\n",
        "class Animal_Net_Dropout(nn.Module):\n",
        "    def __init__(self):\n",
        "        torch.manual_seed(32)\n",
        "        super(Animal_Net_Dropout, self).__init__()\n",
        "        self.fc1 = nn.Linear(3*32*32, 124)\n",
        "        self.fc2 = nn.Linear(124, 64)\n",
        "        self.fc3 = nn.Linear(64, 3)\n",
        "```\n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "bFzHLFwIrUxP"
      },
      "source": [
        "##Exercise 1: Calculating Inference Time"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "oWuJ2IKxlUTU"
      },
      "source": [
        "Now calculate by hand the exact total number of parameters and fill in the code below to calculate the average inference time of the above model."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Mu27sYWFlfM8"
      },
      "source": [
        "def calculate_inference(N):\n",
        "\n",
        "    ####################################################################\n",
        "    # Fill in all missing code below (...),\n",
        "    # then remove or comment the line below to test your function\n",
        "    raise NotImplementedError(\"Define the calculate inference function\")\n",
        "    ####################################################################\n",
        "\n",
        "    total_time = 0.0\n",
        "    model = Big_Animal_Net()\n",
        "    model.eval()\n",
        "    \n",
        "    for i in range(N):\n",
        "        #generate random data with batch size 1 to pass into the model\n",
        "        X = ...\n",
        "        #make sure you don't calculate gradients to make the compute faster\n",
        "        with ...: \n",
        "            start_time = time.time()\n",
        "            y = ...\n",
        "            end_time = time.time()\n",
        "            total_time ...\n",
        "\n",
        "    print(f'Inference time of the above network is: {total_time/N}')\n",
        "\n",
        "##uncomment to run\n",
        "#calculate_inference(1000)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "76wEp1RzoTIr"
      },
      "source": [
        "[Click for Solution](https://github.com/CIS-522/course-content/blob/main/tutorials/W05_Regularization/solutions/W5_Tutorial2_Ex01.py)"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "bZgxj7RiGKsw",
        "cellView": "form"
      },
      "source": [
        "#@markdown Write down the answer\n",
        "number_of_parameters = '' #@param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "E_IDYTpEHO9c"
      },
      "source": [
        "This relatively small model, with approximately 400,000 parameters, takes about 2 minutes to train on 100 training samples and has an inference time of roughly 3 ms, fast enough for real-time applications. But this obviously isn't the best network we can build. We could use a bigger network and train on a bigger dataset; the full Animal Faces dataset contains around 15,000 training images. We should also keep in mind that the images we are using are of very low resolution (32x32), while the images in the original dataset are 512x512. While these changes would likely improve the performance of the model, they would also increase the resources needed to handle it.\n",
        "\n",
        "Google is known for training very big language models and recently trained a [trillion parameter](https://thenextweb.com/neural/2021/01/13/googles-new-trillion-parameter-ai-language-model-is-almost-6-times-bigger-than-gpt-3/) model, more than a million times bigger than the model we just trained. Suffice it to say that these big models need intense compute power to train, and they also become harder to deploy for real-time inference on small microprocessors. \n",
        "\n",
        "This is where regularization and pruning come in very handy. You should have noticed by now that the Frobenius norms of the regularized models we trained tend to be smaller than those of unregularized models. This indicates that the regularization is shrinking the weights while improving the test performance. \n",
        "\n",
        "While methods like L1 regularization promote implicit sparsity, in pruning we explicitly set some weights of the trained model to zero and then retrain the model to adjust the remaining weights. This reduces memory consumption, speeds up inference, and helps the planet :)\n"
      ]
    },
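    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick sanity check on a hand count of parameters, `torch.nn.Module` exposes them all, so you can total them in one line. A minimal sketch on a toy stand-in model (hypothetical; the same line works on `Big_Animal_Net`):\n",
        "\n",
        "```python\n",
        "import torch.nn as nn\n",
        "\n",
        "# toy stand-in: Linear(3072 -> 32) has 3072*32 + 32 = 98336 params,\n",
        "# Linear(32 -> 3) has 32*3 + 3 = 99, so 98435 in total\n",
        "toy = nn.Sequential(nn.Linear(3*32*32, 32), nn.Linear(32, 3))\n",
        "n_params = sum(p.numel() for p in toy.parameters())\n",
        "print(n_params)  # 98435\n",
        "```"
      ]
    },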
    {
      "cell_type": "code",
      "metadata": {
        "id": "y5IUWMKh8k_q",
        "cellView": "form"
      },
      "source": [
        "#@title Video :  Pruning\n",
        "try: t3;\n",
        "except NameError: t3=time.time()\n",
        "\n",
        "from IPython.display import YouTubeVideo\n",
        "video = YouTubeVideo(id=\"7S5LA2OFdzs\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "_eJuNAOiMJuN"
      },
      "source": [
        "One of the most common ways to prune a neural network is to zero out a certain percentage of its parameters based on their L1 norm. We don't actually remove the parameters, because that would make the forward computation difficult; we just mask them to zero.\n",
        "\n",
        "Luckily, PyTorch's `torch.nn.utils.prune` module gives us methods to play around with and test pruning."
      ]
    },
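    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Before writing the full loop, it helps to see what `prune.l1_unstructured` does to a single layer. A minimal sketch on a toy `nn.Linear` (the layer here is illustrative only):\n",
        "\n",
        "```python\n",
        "import torch\n",
        "import torch.nn as nn\n",
        "import torch.nn.utils.prune as prune\n",
        "\n",
        "torch.manual_seed(0)\n",
        "layer = nn.Linear(4, 2)  # 8 weights in total\n",
        "prune.l1_unstructured(layer, name='weight', amount=0.5)  # zero the 4 smallest-|w| entries\n",
        "\n",
        "# pruning reparameterizes the layer: weight = weight_orig * weight_mask\n",
        "print(layer.weight_mask.sum().item())    # 4.0 entries survive\n",
        "print((layer.weight == 0).sum().item())  # 4 entries are zeroed\n",
        "```\n",
        "\n",
        "The original values live on in `weight_orig`; only the mask enforces the sparsity, which is why pruned models can be retrained (or the mask later made permanent with `prune.remove`)."
      ]
    },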
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6Avjo2i_mKDf"
      },
      "source": [
        "## Exercise 2: L1 Prune\n",
        "Before we train a pruned model, let us write a function that prunes a model, using the [prune.l1_unstructured](https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.l1_unstructured.html#torch.nn.utils.prune.l1_unstructured) method."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "BTz0Ae1cmjXy"
      },
      "source": [
        "def prune_l1_unstructured(model, prune_percent_weight, prune_percent_bias=0):\n",
        "    ####################################################################\n",
        "    # Fill in all missing code below (...),\n",
        "    # then remove or comment the line below to test your function\n",
        "    raise NotImplementedError(\"Define the prune_l1_unstructured function\")\n",
        "    ####################################################################\n",
        "\n",
        "    for name, module in model.named_modules():\n",
        "        if isinstance(module, torch.nn.Conv2d) or isinstance(module, torch.nn.Linear):\n",
        "            #Prune both weight and bias using prune_percent_weight and prune_percent_bias\n",
        "            ...\n",
        "            ...\n",
        "\n",
        "            print(\n",
        "                \"Sparsity in {}: {:.2f}%\".format(name,\n",
        "                    100. * float(torch.sum(module.weight == 0))\n",
        "                    / float(module.weight.nelement())\n",
        "                )\n",
        "            )\n",
        "##uncomment to run the test\n",
        "# test_model = Animal_Net()\n",
        "# prune_percent = 0.15\n",
        "# prune_l1_unstructured(test_model,0.15)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "BDpqNdJFqS5O"
      },
      "source": [
        "[Click for Solution](https://github.com/CIS-522/course-content/blob/main/tutorials/W05_Regularization/solutions/W5_Tutorial2_Ex02.py)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2iNU0Uo5mIpC"
      },
      "source": [
        "In the section below, you will see a very simple pruning technique at work: it prunes a percentage of the parameters based on their magnitudes."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "OcROLun4YbeG"
      },
      "source": [
        "args = {'test_batch_size': 1000,\n",
        "        'epochs': 200,\n",
        "        'lr': 5e-3,\n",
        "        'momentum': 0.9,\n",
        "        'no_cuda': False\n",
        "        }\n",
        "\n",
        "acc_dict = {}\n",
        "model = Big_Animal_Net()\n",
        "prune_percent = 0.5\n",
        "\n",
        "print(\"Training a randomly initialized model\")\n",
        "val_acc, train_acc, _, trained_model ,_ = main(args,model,train_loader,val_loader,img_test_dataset)\n",
        "\n",
        "##pruning a model\n",
        "print('Pruning and verifying the model:')\n",
        "prune_l1_unstructured(trained_model,prune_percent)\n",
        "\n",
        "#training the pruned model\n",
        "print(\"Training a pruned model\")\n",
        "val_acc_prune, train_acc_prune, _, pruned_model ,_ = main(args,trained_model.to('cpu'),train_loader,val_loader,img_test_dataset)\n",
        "\n",
        "val_acc_prune = [val_acc_prune[0]]*args['epochs'] + val_acc_prune\n",
        "train_acc_prune = [train_acc_prune[0]]*args['epochs'] + train_acc_prune\n",
        "plt.plot(val_acc,label='Val',c='blue',ls = 'dashed')\n",
        "plt.plot(train_acc,label='Train',c='blue',ls = 'solid')\n",
        "plt.plot(val_acc_prune,label='Val Prune',c='red',ls = 'dashed')\n",
        "plt.plot(train_acc_prune,label='Train Prune',c='red',ls = 'solid')\n",
        "plt.title('Pruning')\n",
        "plt.ylabel('Accuracy (%)')\n",
        "plt.xlabel('Epoch')\n",
        "plt.legend()\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "720HNoTNrcLQ"
      },
      "source": [
        "Now change `prune_percent` and report the percentage at which the model underfits."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "YQm_yEshrnMp",
        "cellView": "form"
      },
      "source": [
        "#@markdown Write down your discussion\n",
        "pruning_percent = '' #@param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "xyTIvCgsvZ3x"
      },
      "source": [
        "Suppose you create a new model whose number of parameters equals the number left after pruning. Do you think this model will work as well as the one we get by pruning the larger network?"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Fsr3vqmMvZ3z",
        "cellView": "form"
      },
      "source": [
        "#@markdown Write down your discussion\n",
        "under_parameterized_model = '' #@param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "z8eoVO3_SUCa"
      },
      "source": [
        "In the pruning technique above, how do you think the performance of the network will change if we re-initialize the weights after pruning while maintaining the prune mask?"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "aOoExJGgTB_n",
        "cellView": "form"
      },
      "source": [
        "#@markdown Write down your discussion\n",
        "pruning_re_init = '' #@param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "qp3TtsUsZT_N"
      },
      "source": [
        "## Lottery Tickets\n",
        "(Time Estimate: 65 min from start)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "jyFEk3srVf39"
      },
      "source": [
        "The lottery ticket hypothesis claims that \"a dense, randomly initialized NN contains a subnetwork that is initialized such that, when trained in isolation, it can match the test accuracy of the original network after training for at most the same number of iterations.\" In other words, a pruned model, when reinitialized to the same initial weights, can match the test accuracy of the denser model. If the initialization changes, the accuracy match is no longer guaranteed.\n",
        "\n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7qwr_pTEVVne"
      },
      "source": [
        "Here we train the following networks:\n",
        "\n",
        "\n",
        "1.   An unpruned model, trained for 200 epochs with Xavier initialization of the weights.\n",
        "2.   A pruned model with the weights reinitialized to new random values.\n",
        "3.   A pruned model with the weights set to the same Xavier initialization as the unpruned model.\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Su8AFvyZMxsY"
      },
      "source": [
        "args = {'test_batch_size': 1000,\n",
        "        'epochs': 200,\n",
        "        'lr': 1e-3,\n",
        "        'momentum': 0.9,\n",
        "        'no_cuda': False,\n",
        "        }\n",
        "\n",
        "acc_dict = {}\n",
        "init_model = Big_Animal_Net()\n",
        "xavier_model = Big_Animal_Net()\n",
        "prune_percent = 0.4\n",
        "\n",
        "#Xavier initialization for one of the two models\n",
        "for name, module in xavier_model.named_modules():\n",
        "    if isinstance(module, torch.nn.Conv2d) or isinstance(module, torch.nn.Linear):\n",
        "        torch.nn.init.xavier_uniform_(module.weight)\n",
        "\n",
        "print('Training the full model')\n",
        "val_acc, train_acc, _, trained_model ,_ = main(args,copy.deepcopy(xavier_model),train_loader,val_loader,img_test_dataset)\n",
        "\n",
        "\n",
        "#prune the trained model\n",
        "prune_l1_unstructured(trained_model,prune_percent)\n",
        "\n",
        "#initialize masks for the randomly initialized model and the Xavier model\n",
        "for name, module in init_model.named_modules():\n",
        "    if isinstance(module, torch.nn.Conv2d) or isinstance(module, torch.nn.Linear):\n",
        "        prune.identity(module, name='weight')\n",
        "        prune.identity(module, name='bias')\n",
        "\n",
        "for name, module in xavier_model.named_modules():\n",
        "    if isinstance(module, torch.nn.Conv2d) or isinstance(module, torch.nn.Linear):\n",
        "        prune.identity(module, name='weight')\n",
        "        prune.identity(module, name='bias')\n",
        "\n",
        "init_modules = [[name,module] for name, module in init_model.named_modules()]\n",
        "xavier_modules = [[name,module] for name, module in xavier_model.named_modules()]\n",
        "trained_modules = [[name,module] for name, module in trained_model.named_modules()]\n",
        "\n",
        "for i in range(len(init_modules)):\n",
        "    if isinstance(init_modules[i][1], torch.nn.Conv2d) or isinstance(init_modules[i][1], torch.nn.Linear):\n",
        "        init_modules[i][1].weight_mask = copy.deepcopy(trained_modules[i][1].weight_mask)\n",
        "        init_modules[i][1].bias_mask = copy.deepcopy(trained_modules[i][1].bias_mask)\n",
        "\n",
        "for i in range(len(xavier_modules)):\n",
        "    if isinstance(xavier_modules[i][1], torch.nn.Conv2d) or isinstance(xavier_modules[i][1], torch.nn.Linear):\n",
        "        xavier_modules[i][1].weight_mask = copy.deepcopy(trained_modules[i][1].weight_mask)\n",
        "        xavier_modules[i][1].bias_mask = copy.deepcopy(trained_modules[i][1].bias_mask)\n",
        "\n",
        "\n",
        "print('Training the pruned and Xavier model')\n",
        "val_acc_lottery_x, train_acc_lottery_x, _, pruned_model_x ,_ = main(args,xavier_model,train_loader,val_loader,img_test_dataset)\n",
        "print('Training the pruned Init model')\n",
        "val_acc_lottery, train_acc_lottery, _, pruned_model ,_ = main(args,init_model,train_loader,val_loader,img_test_dataset)\n",
        "\n",
        "plt.plot(val_acc,c='blue',ls = 'dashed')\n",
        "plt.plot(train_acc,label='Train - full model',c='blue',ls = 'solid')\n",
        "plt.plot(val_acc_lottery,c='red',ls = 'dashed')\n",
        "plt.plot(train_acc_lottery,label='Train - Random',c='red',ls = 'solid')\n",
        "plt.plot(val_acc_lottery_x,c='green',ls = 'dashed')\n",
        "plt.plot(train_acc_lottery_x,label='Train - Xavier',c='green',ls = 'solid')\n",
        "plt.title('Lottery Tickets')\n",
        "plt.ylabel('Accuracy (%)')\n",
        "plt.xlabel('Epoch')\n",
        "plt.legend()\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "EYTqX4Xv0DNn"
      },
      "source": [
        "Why do you think the first training epoch of the lottery-ticket models has such a high training accuracy?"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "d8LCQbv80DNt",
        "cellView": "form"
      },
      "source": [
        "#@markdown Write down your discussion\n",
        "lottery_tickets = 'value' #@param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vUtwddoZ9J4P"
      },
      "source": [
        "# Section 3: Distillation\n",
        "(Time Estimate: 90 min from start)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0JPlxMQOpVmE"
      },
      "source": [
        "Bigger neural nets give better performance but require significant memory, while smaller networks tend to be less accurate but are easier to deploy and use. \n",
        "\n",
        "Distillation is a technique that lets us train a smaller network to mimic the outputs of a bigger one. The bigger network is called the teacher network, whereas the smaller one is the student network. \n",
        "\n",
        "Distillation begins by training a teacher network. It then trains the student network with both the original hard labels and \"soft\" labels, the outputs of the teacher model. This also lets us train the student network on unlabelled datasets, using the labels given by the teacher network. "
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "T0cLGyqf8oUd",
        "cellView": "form"
      },
      "source": [
        "#@title Video : Distillation\n",
        "try: t4;\n",
        "except NameError: t4=time.time()\n",
        "\n",
        "from IPython.display import YouTubeVideo\n",
        "video = YouTubeVideo(id=\"y_KzDpklMmE\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "FX6oKO3oDS5e"
      },
      "source": [
        "Let's begin by designing a smaller network, then training both the teacher model and the small model on hard labels."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "gHA10EuJrmjV"
      },
      "source": [
        "class Small_Animal_Net(nn.Module):\n",
        "    def __init__(self):\n",
        "        torch.manual_seed(32)\n",
        "        super(Small_Animal_Net, self).__init__()\n",
        "        self.fc1 = nn.Linear(3*32*32, 32)\n",
        "        self.fc2 = nn.Linear(32, 3)\n",
        "\n",
        "    def forward(self, x):\n",
        "        x = x.view(x.shape[0],-1)\n",
        "        x = F.leaky_relu(self.fc1(x))\n",
        "        x = self.fc2(x)\n",
        "        output = F.log_softmax(x, dim=1)\n",
        "        return output"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "n-zsV74w3P8m",
        "cellView": "form"
      },
      "source": [
        "#@title Train the models (Run Me!)\n",
        "args = {'test_batch_size': 1000,\n",
        "        'epochs': 200,\n",
        "        'momentum': 0.9,\n",
        "        'no_cuda': False,\n",
        "        'lr' : 5e-3,\n",
        "        'cross_entropy':True\n",
        "        }\n",
        "\n",
        "Bmodel = Big_Animal_Net()\n",
        "Smodel = Small_Animal_Net()\n",
        "\n",
        "val_acc_big, train_acc_big, _, trained_big_model ,_ = main(args,Bmodel,train_loader,val_loader,img_test_dataset)\n",
        "val_acc_small, train_acc_small, _, _ ,_ = main(args,Smodel,train_loader,val_loader,img_test_dataset)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6b8BnQzOrb7t"
      },
      "source": [
        "Loss Function:\n",
        "\n",
        "We use the same cross-entropy loss as before, but now compute it against both hard and soft labels. PyTorch's cross-entropy loss cannot be applied directly to soft labels, so we exploit the relation between cross entropy and KL divergence. \n",
        "\n",
        "They are related by H(p, q) = H(p) + KL(p || q), where H(p, q) is the cross entropy between distributions p and q and KL(p || q) is the KL divergence. Here p is the probability distribution of the teacher's soft outputs, which is constant, so H(p) can be omitted from the loss function:\n",
        "\n",
        "        L = (1 - alpha)*CE(outputs, ground_truth) + alpha * (T**2) * CE(outputs, soft_targets)\n",
        "          = (1 - alpha)*CE(outputs, ground_truth) + alpha * (T**2) * KL(soft_targets, outputs) + const\n",
        "\n",
        "Here alpha and the temperature T are hyperparameters; the temperature is used to smooth the outputs of the teacher network. \n",
        "\n",
        "[Click to learn more about the relation](https://adventuresinmachinelearning.com/cross-entropy-kl-divergence/)"
      ]
    },
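    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A quick numeric check of the identity H(p, q) = H(p) + KL(p || q), using `F.kl_div` (which, by convention, takes the student's log-probabilities as its first argument). This is a sketch with random distributions:\n",
        "\n",
        "```python\n",
        "import torch\n",
        "import torch.nn.functional as F\n",
        "\n",
        "torch.manual_seed(0)\n",
        "p = F.softmax(torch.randn(3), dim=0)          # teacher's soft distribution (constant)\n",
        "log_q = F.log_softmax(torch.randn(3), dim=0)  # student's log-probabilities\n",
        "\n",
        "cross_entropy = -(p * log_q).sum()        # H(p, q)\n",
        "entropy = -(p * p.log()).sum()            # H(p)\n",
        "kl = F.kl_div(log_q, p, reduction='sum')  # KL(p || q)\n",
        "print(torch.allclose(cross_entropy, entropy + kl))  # True\n",
        "```\n",
        "\n",
        "Since H(p) does not depend on the student, minimizing the cross entropy against the soft targets is the same as minimizing the KL term."
      ]
    },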
    {
      "cell_type": "code",
      "metadata": {
        "id": "nKhrkVwY6Wlq"
      },
      "source": [
        "def distillation_loss(args,soft_outputs, pred_logits, target):\n",
        "\n",
        "    alpha = args['alpha']\n",
        "    T = args['temperature']\n",
        "    dist_loss = (1. - alpha) * F.cross_entropy(pred_logits, target) + \\\n",
        "                    (alpha * (T ** 2)) * F.kl_div(F.log_softmax(pred_logits/T, dim=1),\n",
        "                             F.softmax(soft_outputs/T, dim=1),\n",
        "                             reduction='batchmean')  # 'batchmean' matches the mathematical KL definition\n",
        "\n",
        "    return dist_loss"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "dH5qZx-HQcDT",
        "cellView": "form"
      },
      "source": [
        "#@title Modified Train Functions (Run Me!)\n",
        "def train_softmax_distillation(args, student_model,parent_model, device, train_loader, optimizer, epoch):\n",
        "    \n",
        "    student_model.train()\n",
        "    parent_model.eval()\n",
        "    for batch_idx, (data, target) in enumerate(train_loader):\n",
        "        data, target = data.to(device), target.to(device)\n",
        "        optimizer.zero_grad()\n",
        "        soft_outputs = parent_model(data)\n",
        "        pred_logits = student_model(data)\n",
        "        loss = distillation_loss(args,soft_outputs.detach(), pred_logits, target)\n",
        "        loss.backward()\n",
        "        optimizer.step()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "mXaMJOfVP0TL",
        "cellView": "form"
      },
      "source": [
        "#@title Modified Main Function (Run Me!)\n",
        "def distilation_main(args, teacher_model,student_model,train_loader,val_loader):\n",
        "\n",
        "    use_cuda = not args['no_cuda'] and torch.cuda.is_available()\n",
        "    device = torch.device('cuda' if use_cuda else 'cpu')\n",
        "\n",
        "    student_model = student_model.to(device)\n",
        "    teacher_model = teacher_model.to(device)\n",
        "    optimizer = optim.SGD(student_model.parameters(), lr=args['lr'], momentum=args['momentum'])\n",
        "    val_acc_list = []\n",
        "    train_acc_list = []\n",
        "    for epoch in tqdm(range(1, args['epochs'] + 1)):\n",
        "        train_softmax_distillation(args, student_model,teacher_model, device, train_loader, optimizer, epoch)\n",
        "        train_acc = test(student_model,device,train_loader,'Train')\n",
        "        val_acc = test(student_model,device,val_loader,'Val')\n",
        "        val_acc_list.append(val_acc)\n",
        "        train_acc_list.append(train_acc)\n",
        "\n",
        "    return val_acc_list, train_acc_list, student_model"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "o13VleaKFAtE"
      },
      "source": [
        "Now that we have everything ready let's train the student network."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Rp-5pGaeubLc"
      },
      "source": [
        "args = {'test_batch_size': 1000,\n",
        "        'epochs': 200,\n",
        "        'momentum': 0.9,\n",
        "        'no_cuda': False,\n",
        "        'lr' : 5e-3,\n",
        "        'alpha': 1,\n",
        "        'temperature': 40\n",
        "        }\n",
        "\n",
        "student_model = Small_Animal_Net()\n",
        "\n",
        "val_acc_st, train_acc_st, _, = distilation_main(args,trained_big_model,student_model,train_loader,val_loader)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "YtRFEj_Uwq1l",
        "cellView": "form"
      },
      "source": [
        "#@title Plot the validation curves of student and the original small model \n",
        "plt.plot(val_acc_small,label='Val - Small Model',c='red',ls = 'dashed')\n",
        "plt.axhline(y = max(val_acc_small),c = 'red')\n",
        "plt.plot(val_acc_st,label='Val - Student Model',c='green',ls = 'dashed')\n",
        "plt.axhline(y = max(val_acc_st),c = 'green')\n",
        "plt.title('Distillation')\n",
        "plt.ylabel('Accuracy (%)')\n",
        "plt.xlabel('Epoch')\n",
        "plt.legend()\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ZXhCP6P5Snd_"
      },
      "source": [
        "This method not only provides a way to train small yet better-performing networks, but also gives us a chance to train small networks on more data, using the soft labels from heavily optimized teacher networks."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "zUu5orrg0UHc"
      },
      "source": [
        "What other techniques can you use to reduce the dimensionality or size of the big network or the input data?"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ayCEX2zp0UHl",
        "cellView": "form"
      },
      "source": [
        "#@markdown Write down your discussion\n",
        "distillation = 'value' #@param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "8SsdI6tVz96F"
      },
      "source": [
        "Do you think regularization helps when you have infinite data available?"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "8rLGFyIPz96J",
        "cellView": "form"
      },
      "source": [
        "#@markdown Write down your discussion\n",
        "data = '' #@param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "MJZMeMTm1NG6"
      },
      "source": [
        "Which regularization technique from this week do you think had the biggest effect on the network, and why? Can you apply all of them to the same network?"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "2d99_VOK1NHG",
        "cellView": "form"
      },
      "source": [
        "#@markdown Write down your discussion\n",
        "complete = 'value' #@param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "b5HDVVEljKGQ"
      },
      "source": [
        "# Section 4: Adversarial  Attacks\n",
        "Time Estimate: (115 minutes from start)"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "03YLRgGemuN6",
        "cellView": "form"
      },
      "source": [
        "#@title Video : Adversarial\n",
        "try: t5;\n",
        "except NameError: t5=time.time()\n",
        "\n",
        "from IPython.display import YouTubeVideo\n",
        "video = YouTubeVideo(id=\"2e-PxxlGfpM\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "FcaFjyzR4aJo"
      },
      "source": [
        "Designing perturbations of the input data to trick a machine learning model is called an adversarial attack. These attacks are an inevitable consequence of learning in high-dimensional search spaces with complex decision boundaries. Depending on the application, these attacks can be very dangerous.\n",
        "\n",
        "![Adversarial Examples of a Stop Sign](https://media.springernature.com/lw685/springer-static/image/art%3A10.1186%2Fs13638-020-01775-5/MediaObjects/13638_2020_1775_Fig1_HTML.png?as=webp)\n",
        "\n",
        "Hence, it is important for us to build models which can defend against such attacks. One possibility is to reduce the effective dimensionality of the network, which is also a byproduct of good regularization. A few ways of building models robust to such attacks are:\n",
        "\n",
        "\n",
        "\n",
        "*   [Defensive Distillation](https://deepai.org/machine-learning-glossary-and-terms/defensive-distillation): Models trained via distillation are less prone to such attacks, since training on the teacher's soft labels smooths the decision boundaries.\n",
        "*   [Feature Squeezing](https://evademl.org/squeezing/): Identifies adversarial inputs to a deployed classifier by comparing the model's predictions before and after squeezing the input. \n",
        "* [Adversarial Training](https://arxiv.org/abs/1706.06083): You can also pick weights that minimize what the adversary is trying to maximize, via SGD.\n",
        "\n",
        "In this week's HW you will design an attack and also defend your model against it using the regularization techniques you learned this week. \n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "VtRxB698CTfG"
      },
      "source": [
        "---\n",
        "# Wrap up"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "P5-HZSWcCbr3"
      },
      "source": [
        "## Submit responses"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "FCJJf7OFk8SU",
        "cellView": "form"
      },
      "source": [
        "#@markdown #Run Cell to Show Airtable Form\n",
        "#@markdown ##**Confirm your answers and then click \"Submit\"**\n",
        "\n",
        "import time\n",
        "import numpy as np\n",
        "import urllib.parse\n",
        "from IPython.display import IFrame\n",
        "def prefill_form(src, fields: dict):\n",
        "  '''\n",
        "  src: the original src url to embed the form\n",
        "  fields: a dictionary of field:value pairs,\n",
        "  e.g. {\"pennkey\": my_pennkey, \"location\": my_location}\n",
        "  '''\n",
        "  prefill_fields = {}\n",
        "  for key in fields:\n",
        "      new_key = 'prefill_' + key\n",
        "      prefill_fields[new_key] = fields[key]\n",
        "  prefills = urllib.parse.urlencode(prefill_fields)\n",
        "  src = src + prefills\n",
        "  return src\n",
        "\n",
        "\n",
        "#autofill time if it is not present\n",
        "try: t0;\n",
        "except NameError: t0 = time.time()\n",
        "try: t1;\n",
        "except NameError: t1 = time.time()\n",
        "try: t2;\n",
        "except NameError: t2 = time.time()\n",
        "try: t3;\n",
        "except NameError: t3 = time.time()\n",
        "try: t4;\n",
        "except NameError: t4 = time.time()\n",
        "try: t5;\n",
        "except NameError: t5 = time.time()\n",
        "\n",
        "#autofill fields if they are not present\n",
        "#a missing pennkey and pod will result in an Airtable warning\n",
        "#which is easily fixed user-side.\n",
        "try: my_pennkey;\n",
        "except NameError: my_pennkey = \"\"\n",
        "try: my_pod;\n",
        "except NameError: my_pod = \"\"\n",
        "try: question_of_the_week;\n",
        "except NameError: question_of_the_week = \"\"\n",
        "try: learning_rate;\n",
        "except NameError: learning_rate = \"\"\n",
        "try: batch_size;\n",
        "except NameError: batch_size = \"\"\n",
        "try: number_of_parameters;\n",
        "except NameError: number_of_parameters = \"\"\n",
        "try: pruning_percent;\n",
        "except NameError: pruning_percent = \"\"\n",
        "try: under_parameterized_model;\n",
        "except NameError: under_parameterized_model = \"\"\n",
        "try: pruning_re_init;\n",
        "except NameError: pruning_re_init = \"\"\n",
        "try: lottery_tickets;\n",
        "except NameError: lottery_tickets = \"\"\n",
        "try: distillation;\n",
        "except NameError: distillation = \"\"\n",
        "try: cumilative_times;\n",
        "except NameError: cumilative_times = \"\"\n",
        "try: complete;\n",
        "except NameError: complete = \"\"\n",
        "try: data;\n",
        "except NameError: data = \"\"\n",
        "\n",
        "times = [(t-t0) for t in [t1,t2,t3,t4,t5]]\n",
        "\n",
        "fields = {\"my_pennkey\": my_pennkey,\n",
        "          \"my_pod\": my_pod,\n",
        "          \"question_of_the_week\":question_of_the_week,\n",
        "          \"learning_rate\":learning_rate,\n",
        "          \"batch_size\":batch_size,\n",
        "          \"number_of_parameters\":number_of_parameters,\n",
        "          \"pruning_percent\":pruning_percent,\n",
        "          \"under_parameterized_model\":under_parameterized_model,\n",
        "          \"pruning_re_init\":pruning_re_init,\n",
        "          \"lottery_tickets\": lottery_tickets,\n",
        "          \"distillation\": distillation,\n",
        "          \"cumilative_times\": times\n",
        "        }\n",
        "\n",
        "src=\"https://airtable.com/embed/shr2XivycAmugUavT?\"\n",
        "\n",
        "# embed the form using the prefilled url rather than the original src\n",
        "display(IFrame(src = prefill_form(src, fields), width = 800, height = 400))\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "HKn5d3CCC05w"
      },
      "source": [
        "## Feedback\n",
        "How could this session have been better? How happy are you in your group? How do you feel right now?\n",
        "\n",
        "Feel free to use the embedded form below, or use this link:\n",
        "<a target=\"_blank\" rel=\"noopener noreferrer\" href=\"https://airtable.com/shrNSJ5ECXhNhsYss\">https://airtable.com/shrNSJ5ECXhNhsYss</a>"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "HIvhG6VZ8zez"
      },
      "source": [
        "display(IFrame(src=\"https://airtable.com/embed/shrNSJ5ECXhNhsYss?backgroundColor=red\", width = 800, height = 400))"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7WcXYkb0vDvl"
      },
      "source": [
        "## Homeworks\n",
        "\n",
        "1.   [Understanding Generalization](https://docs.google.com/document/d/1XOaTXYBleQlDNFM1-t512RHfJXRwA4-LIejuBA6pbLY/edit)\n",
        "2.   [Adversarial Attacks](https://)"
      ]
    }
  ]
}