{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "name": "W6_Tutorial1.ipynb",
      "provenance": [],
      "collapsed_sections": [
        "X2kPrtVyXtaP",
        "PSYvQarTh86r",
        "Iek_jzssb_H7",
        "zgTT-A3kgMe0",
        "0kglwCDy7x71",
        "yGTLB5JFJ4-k"
      ],
      "toc_visible": true,
      "machine_shape": "hm",
      "include_colab_link": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/CIS-522/course-content/blob/main/tutorials/W06_ConvNets/W6_Tutorial1.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "X2kPrtVyXtaP"
      },
      "source": [
        "# CIS-522 Week 6 Part 1\r\n",
        "# Convolutional Neural Networks: The Fundamentals\r\n",
        "\r\n",
        "__Instructor__: Konrad Kording\r\n",
        "\r\n",
        "__Content creators:__ Hmrishav Bandyopadhyay, Rahul Shekhar, Tejas Srivastava\r\n",
        "\r\n",
        "__Content reviewers:__  Tejas Srivastava"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "p2DuJddRXxYY"
      },
      "source": [
        "---\r\n",
        "# Tutorial Objectives\r\n",
        "By the end of this tutorial, you will be able to:\r\n",
        "- define what a convolution is\r\n",
        "- implement convolution as an operation\r\n",
        "- explain what pooling does and why it is used\r\n",
        "- code a simple CNN in PyTorch"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "e6GnDC12UtUD",
        "cellView": "form"
      },
      "source": [
        "#@markdown What is your Pennkey and pod? (text, not numbers, e.g. bfranklin)\n",
        "\n",
        "my_pennkey = '' #@param {type:\"string\"}\n",
        "my_pod = 'Select' #@param ['Select', 'euclidean-wombat', 'sublime-newt', 'buoyant-unicorn', 'lackadaisical-manatee','indelible-stingray','superfluous-lyrebird','discreet-reindeer','quizzical-goldfish','astute-jellyfish','ubiquitous-cheetah','nonchalant-crocodile','fashionable-lemur','spiffy-eagle','electric-emu','quotidian-lion']"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vgOQxVS1X2dB"
      },
      "source": [
        "---\r\n",
        "# Setup\r\n",
        "\r\n",
        "\r\n",
        "[Here](https://drive.google.com/file/d/1okbxJdaKwi1klnkSkrYoBriDmng_wk0q/view?usp=sharing) are the slides for today's videos (in case you want to take notes). **Do not read them now.**"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "azn8UA_IgJT4",
        "cellView": "form"
      },
      "source": [
        "#@title Dependencies\n",
        "!pip install livelossplot --quiet"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "fZue0ldzOv_J"
      },
      "source": [
        "# Imports\n",
        "\n",
        "import os\n",
        "import cv2\n",
        "from tqdm.auto import tqdm\n",
        "\n",
        "import time\n",
        "import torch\n",
        "import pathlib\n",
        "import numpy as np\n",
        "import matplotlib.pyplot as plt\n",
        "\n",
        "import torch.nn as nn\n",
        "import torch.optim as optim\n",
        "import torch.nn.functional as F\n",
        "import torchvision.transforms as transforms\n",
        "from torchvision.datasets import ImageFolder\n",
        "import torchvision.datasets as datasets\n",
        "from torch.utils.data import DataLoader, TensorDataset\n",
        "from torchvision.utils import make_grid\n",
        "from IPython.display import HTML, display\n",
        "\n",
        "from tqdm.notebook import tqdm, trange\n",
        "from time import sleep\n",
        "\n",
        "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
        "device, torch.get_num_threads()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "YRZ-A9qBMDWF"
      },
      "source": [
        "# @title Figure Settings\n",
        "%config InlineBackend.figure_format = 'retina'\n",
        "%matplotlib inline \n",
        "\n",
        "fig_w, fig_h = (8, 6)\n",
        "plt.rcParams.update({'figure.figsize': (fig_w, fig_h)})\n",
        "\n",
        "try:\n",
        "    plt.rcParams[\"mpl_toolkits.legacy_colorbar\"] = False\n",
        "except KeyError:\n",
        "    pass  # this rcParam was removed in newer Matplotlib versions\n",
        "\n",
        "import warnings\n",
        "warnings.filterwarnings(\"ignore\", category=UserWarning, module=\"matplotlib\")\n",
        "\n",
        "\n",
        "plt.style.use(\"https://raw.githubusercontent.com/NeuromatchAcademy/\"\n",
        "              \"course-content/master/nma.mplstyle\")"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "d0xGiq-bL_Qk"
      },
      "source": [
        "# @title Set seed for reproducibility\n",
        "seed = 2021\n",
        "torch.manual_seed(seed)\n",
        "torch.cuda.manual_seed_all(seed)  # seeds all GPUs, including the current one\n",
        "np.random.seed(seed)\n",
        "torch.backends.cudnn.deterministic = True\n",
        "torch.backends.cudnn.benchmark = False\n",
        "import random\n",
        "\n",
        "def seed_worker(worker_id):\n",
        "  worker_seed = torch.initial_seed() % 2**32\n",
        "  np.random.seed(worker_seed)\n",
        "  random.seed(worker_seed)\n",
        "\n",
        "print ('Seed has been set.')"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "SG3iPDUaxSLv"
      },
      "source": [
        "---\r\n",
        "# Recap the Experience from Last Week\r\n",
        "\r\n",
        "Last week we saw how overparametrized ANNs are efficient universal approximators thanks to adaptive basis functions, and how ANNs can memorize some data while still generalizing well. We also looked at several regularization techniques such as *L1*, *L2*, *Data Augmentation*, and *Dropout*. \r\n",
        "\r\n",
        "*Estimated Completion Time: 5 minutes from start of the tutorial*\r\n",
        "\r\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "pXIP2n6DtpSh",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Discussing Week 5 - Regularization\n",
        "import time\n",
        "try: t0;\n",
        "except NameError: t0=time.time()\n",
        "\n",
        "from IPython.display import YouTubeVideo\n",
        "\n",
        "video = YouTubeVideo(id=\"xMKKVMjQNhY\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "h7jAGOEdo67G",
        "cellView": "form"
      },
      "source": [
        "#@markdown Tell us your thoughts about what you have learned. What questions do you have?\r\n",
        "last_week_recap = '' #@param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "GPRsrvzvbuGY"
      },
      "source": [
        "---\n",
        "# Section 1: Background on CNNs \n",
        "\n",
        "*Estimated Completion Time: 20 minutes from start of the tutorial*"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "E3r0QSfiw7b_",
        "cellView": "form"
      },
      "source": [
        "#@title Video: History of CNNs and Good Representations\n",
        "import time\n",
        "try: t1;\n",
        "except NameError: t1=time.time()\n",
        "\n",
        "from IPython.display import YouTubeVideo\n",
        "\n",
        "video = YouTubeVideo(id=\"x-SYSSBmEX4\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "fbNdrkv-Ja18"
      },
      "source": [
        "At a high level, a CNN attempts to generate a feature representation of an image that can be used to determine what the object is. What does this mean exactly?<br> \n",
        "<br>\n",
        "It means that the network will try to decipher which characteristics are important for differentiating between objects - for a human, these could be the detection of two legs or the structure of the face. "
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "2NIEVs2kapmK",
        "cellView": "form"
      },
      "source": [
        "#@markdown ## Exercise 1 \r\n",
        "#@markdown ImageNet is one of the largest image databases. Let us take 4 categories within this dataset - animal, plant, activity, and food. What features do you think you would use to classify an image into one of these categories? Discuss with your pod.\r\n",
        "imagenet_features = '' #@param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "nfn_ge-PN4jL"
      },
      "source": [
        "---\n",
        "# Section 2: Neural Analogy\n",
        "\n",
        "*Estimated Completion Time: 30 minutes from start of the tutorial*"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "EpMayL-qNnIj",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Electrophysiology Introduction\n",
        "import time\n",
        "try: t2;\n",
        "except NameError: t2=time.time()\n",
        "\n",
        "from IPython.display import YouTubeVideo\n",
        "\n",
        "video = YouTubeVideo(id=\"cndkwecg52M\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "lkuxYZ_-lJIr"
      },
      "source": [
        "---\n",
        "# Section 3: Spatial Invariance\n",
        "Imagine that you want to detect an object in an image. Whatever method we use to recognize objects should not be overly concerned with the precise location of the object in the image, and ideally our system should exploit this. Pigs usually do not fly and planes usually do not swim; nonetheless, we should still recognize a pig were one to appear at the top of the image. This is what we refer to as spatial invariance, i.e., the position of the object in the image does not play a role in recognizing the object itself.\n",
        "\n",
        "*Estimated Completion Time: 35 minutes from the start of the tutorial*"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "LVPuyjQXZz43",
        "cellView": "form"
      },
      "source": [
        "#@markdown ## Exercise 3\r\n",
        "#@markdown Do you think the fully connected networks that you have been using till now accomplish this task? Why/why not?\r\n",
        "fcn_invariance = '' #@param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vZWoqL2jbZbx"
      },
      "source": [
        "---\r\n",
        "# Section 4: Convolutions\r\n",
        "\r\n",
        "*Estimated Completion Time: 65 minutes from start of the tutorial*"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "xyYq3oxdJmm2",
        "cellView": "form"
      },
      "source": [
        "#@title Video: How to Perform Convolutions?\n",
        "import time\n",
        "try: t3;\n",
        "except NameError: t3=time.time()\n",
        "\n",
        "from IPython.display import YouTubeVideo\n",
        "\n",
        "video = YouTubeVideo(id=\"5cbf6wRjvJc\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Iek_jzssb_H7"
      },
      "source": [
        "## Exercise 4.1: Perform Convolution by Hand\r\n",
        "\r\n",
        "Do a convolution by hand using this simple 2x2 kernel. Discuss how the size of the output changes (assume no padding).\r\n",
        "\r\n",
        "$$ \\textbf{Image} = \r\n",
        "\\begin{bmatrix}0 &1 &2 \\\\3 &4 &5 \\\\ 6 &7 &8   \r\n",
        "\\end{bmatrix} \r\n",
        "$$\r\n",
        "\r\n",
        "$$ \\textbf{Kernel} = \r\n",
        "\\begin{bmatrix} 0 &1 \\\\2 & 3\r\n",
        "\\end{bmatrix} \r\n",
        "$$"
      ]
    },
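    {
      "cell_type": "code",
      "metadata": {
        "id": "check_hand_conv_sketch"
      },
      "source": [
        "# A minimal check (added sketch) for Exercise 4.1: SciPy's correlate2d\n",
        "# computes the same sliding dot product that deep-learning libraries\n",
        "# call \"convolution\" (no kernel flip). Compare its output against your\n",
        "# hand computation before moving on.\n",
        "import numpy as np\n",
        "from scipy.signal import correlate2d\n",
        "\n",
        "image = np.arange(9).reshape(3, 3)\n",
        "kernel = np.arange(4).reshape(2, 2)\n",
        "print(correlate2d(image, kernel, mode='valid'))  # 2x2 output, no padding\n"
      ],
      "execution_count": null,
      "outputs": []
    },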
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "b4L0Kt9fefOY"
      },
      "source": [
        "## Exercise 4.2: Code a Convolution\n",
        "Below is a partially completed function that performs a convolution on a provided image. Fill in the missing lines of code."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "1sfK2fO8gT9z"
      },
      "source": [
        "def convolution2d(image, kernel):\n",
        "    m, n = kernel.shape\n",
        "    # Note: We are dealing with a square kernel here, only for simplicity.\n",
        "    # It is very much possible to perform the same on kernels with different heights and widths!\n",
        "    if (m == n):\n",
        "        y, x = image.shape\n",
        "        ####################################################################\n",
        "        # Fill in missing code below (...),\n",
        "        # then remove or comment the line below to test your function\n",
        "        raise NotImplementedError(\"perform the convolution\")\n",
        "        ####################################################################\n",
        "        # Hint: x_op and y_op will be the output dimensions, initialize them\n",
        "        y_op = ... \n",
        "        x_op = ...\n",
        "        convolved_image = np.zeros((y_op, x_op))\n",
        "        # Hint: Now perform the actual convolution\n",
        "        for i in range(y_op):\n",
        "            for j in range(x_op):\n",
        "                ####################################################################\n",
        "                # Fill in missing code below (...),\n",
        "                # then remove or comment the line below to test your function\n",
        "                raise NotImplementedError(\"perform the convolution\")\n",
        "                ####################################################################\n",
        "                convolved_image[i][j] = ...\n",
        "    \n",
        "    return convolved_image\n",
        "\n",
        "### Uncomment below to test your function\n",
        "# image = np.arange(9).reshape(3, 3)\n",
        "# print(\"Image:\\n\",image)\n",
        "# kernel = np.arange(4).reshape(2, 2)\n",
        "# print(\"Kernel:\\n\",kernel)\n",
        "# print(\"Convolved output:\\n\",convolution2d(image, kernel))"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "m3AQOER5ZZGk"
      },
      "source": [
        "# to_remove solution\n",
        "\n",
        "def convolution2d(image, kernel):\n",
        "    m, n = kernel.shape\n",
        "    if (m == n):\n",
        "        y, x = image.shape\n",
        "        y_op = y - m + 1\n",
        "        x_op = x - m + 1\n",
        "        convolved_image = np.zeros((y_op, x_op))\n",
        "        for i in range(y_op):\n",
        "            for j in range(x_op):\n",
        "                convolved_image[i][j] = np.sum(image[i:i+m, j:j+m]*kernel)\n",
        "    \n",
        "    return convolved_image\n",
        "\n",
        "\n",
        "image = np.arange(9).reshape(3, 3)\n",
        "print(\"Image:\\n\",image)\n",
        "kernel = np.arange(4).reshape(2, 2)\n",
        "print(\"Kernel:\\n\",kernel)\n",
        "print(\"Convolved output:\\n\",convolution2d(image, kernel))"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "snrmqx37eWlv"
      },
      "source": [
        "Great! At this point, you should have a fair idea of how to perform a convolution on an image given a kernel. In the following cell, we will show you how you can set up a CNN to perform the exact same convolution as above."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Wp8wKR0dGJcG"
      },
      "source": [
        "class Net(nn.Module):\n",
        "  def __init__(self, kernel=None, padding=0):\n",
        "    super(Net, self).__init__()\n",
        "    # kernel_size=3 is a default; if a kernel is passed in, assigning\n",
        "    # self.conv1.weight below replaces both its values and its shape\n",
        "    self.conv1 = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=3, padding=padding)\n",
        "      \n",
        "    # set up kernel \n",
        "    if kernel is not None:\n",
        "      dim1, dim2 = kernel.shape[0], kernel.shape[1]\n",
        "      kernel = kernel.reshape(1, 1, dim1, dim2)\n",
        "  \n",
        "      self.conv1.weight = torch.nn.Parameter(kernel)\n",
        "      self.conv1.bias = torch.nn.Parameter(torch.zeros_like(self.conv1.bias))\n",
        "            \n",
        "  def forward(self, x):\n",
        "    x = self.conv1(x)\n",
        "    return x\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "zG4wfFG4GVpV"
      },
      "source": [
        "kernel = torch.Tensor(np.arange(4).reshape(2, 2))\n",
        "net = Net(kernel=kernel, padding=0).to(device)\n",
        "\n",
        "# set up image \n",
        "image = torch.Tensor(np.arange(9).reshape(3, 3))\n",
        "image = image.reshape(1, 1, 3, 3).to(device) # BatchSize X Channels X Height X Width\n",
        "\n",
        "output = net(image)\n",
        "output"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "t-y4CYXlzMM2"
      },
      "source": [
        "As a quick aside, notice the difference between the input and output sizes. The input had a size of $3 \\times 3$ while the output is of size $2 \\times 2$. This is because the kernel can only be placed at positions where it fits entirely inside the image, so the border pixels contribute to fewer output values. If we don't want to lose that information, we can pad the image with $0$s along the edges. This process is called padding."
      ]
    },
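    {
      "cell_type": "markdown",
      "metadata": {
        "id": "conv_output_size_note"
      },
      "source": [
        "In general (with stride $1$), convolving an $n \\times n$ input with a $k \\times k$ kernel and padding $p$ on each side gives an output of size\n",
        "\n",
        "$$ (n - k + 2p + 1) \\times (n - k + 2p + 1) $$\n",
        "\n",
        "Here $n = 3$, $k = 2$, $p = 0$, so the output is $2 \\times 2$. With padding $p = 1$, as in the next cell, the output grows to $4 \\times 4$.\n"
      ]
    },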
    {
      "cell_type": "code",
      "metadata": {
        "id": "1ErUf9DrjAlY"
      },
      "source": [
        "net = Net(kernel=kernel, padding=1).to(device)\n",
        "output = net(image)\n",
        "output"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "MrK6ajMyf8Aq"
      },
      "source": [
        "---\r\n",
        "# Section 5: Edge Detection\r\n",
        "\r\n",
        "\r\n",
        "*Estimated Completion Time: 80 minutes from start of the tutorial*\r\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "iZ_Me8UCLXPC",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Padding and Edge Detection \n",
        "import time\n",
        "try: t4;\n",
        "except NameError: t4=time.time()\n",
        "\n",
        "from IPython.display import YouTubeVideo\n",
        "\n",
        "video = YouTubeVideo(id=\"mNbYB0C8OO0\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "zSBWXIVFL6p1"
      },
      "source": [
        "One of the simpler tasks performed by a convolutional layer is edge detection: detecting changes in pixel values. This is usually learned by the first couple of layers in the network. Look at the following very simple kernel and discuss whether it will detect vertical or horizontal edges.\n",
        "\n",
        "\n",
        "$$ \\textbf{Kernel} = \n",
        "\\begin{bmatrix} 1 & -1 \\\\ 1 & -1\n",
        "\\end{bmatrix} \n",
        "$$"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "TxsH98yNgiZR",
        "cellView": "form"
      },
      "source": [
        "#@markdown ## Exercise 5.1\r\n",
        "#@markdown What kind of edges do you think this kernel will detect?\r\n",
        "edge_generate = '' #@param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DZ88bQ7fdguH"
      },
      "source": [
        "Here, we start with an image that has three vertical regions with the darker shade in the middle and the lighter shade in the surrounding regions. It can also be thought of as a very zoomed-in vertical edge within an image!"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "jrEhsH_HjazE"
      },
      "source": [
        "X = np.ones((6, 8))\n",
        "X[:, 2:6] = 0\n",
        "plt.imshow(X, cmap=plt.get_cmap('gray'))\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ESs3KXw_kNZ1"
      },
      "source": [
        "image = torch.from_numpy(X)\n",
        "image = image.reshape(1, 1, 6, 8) # BatchSize X Channels X Height X Width\n",
        "kernel = torch.Tensor([[1.0, -1.0],[1.0, -1.0]])\n",
        "net = Net(kernel=kernel)\n",
        "edges = net(image.float())"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "YML7E2ionFn4"
      },
      "source": [
        "plt.imshow(edges.reshape(5, 7).detach().numpy(), cmap=plt.get_cmap('gray'))\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "StMvgaCweLot"
      },
      "source": [
        "The convolved output highlights both kinds of vertical transition: from a lighter shade to a darker one, and from darker back to lighter. This helps us detect the vertical edges in the input image."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "auMcCa-vgMfB",
        "cellView": "form"
      },
      "source": [
        "#@markdown ## Exercise 5.2\r\n",
        "#@markdown As you can see, this kernel detects vertical edges. If the kernel were transposed, what would be produced by running it on the same image?\r\n",
        "transpose_kernel = '' #@param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "kQW6YoF6DvmJ"
      },
      "source": [
        "We will come back to strides in the MaxPool section."
      ]
    },
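    {
      "cell_type": "code",
      "metadata": {
        "id": "stride_preview_sketch"
      },
      "source": [
        "# A quick preview of strides (added sketch, plain NumPy): instead of\n",
        "# sliding the kernel one pixel at a time, we move it `stride` pixels,\n",
        "# which shrinks the output: out_size = (in_size - k) // stride + 1.\n",
        "import numpy as np\n",
        "\n",
        "image = np.arange(16).reshape(4, 4)\n",
        "kernel = np.ones((2, 2))\n",
        "stride = 2\n",
        "out_size = (4 - 2) // stride + 1  # = 2\n",
        "strided = np.zeros((out_size, out_size))\n",
        "for i in range(out_size):\n",
        "    for j in range(out_size):\n",
        "        window = image[i*stride:i*stride+2, j*stride:j*stride+2]\n",
        "        strided[i, j] = np.sum(window * kernel)\n",
        "print(strided)\n"
      ],
      "execution_count": null,
      "outputs": []
    },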
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0B-vTn5ipEhh"
      },
      "source": [
        "---\n",
        "# Section 6: Visualizing the Primary Components of a CNN\n",
        "\n",
        "*Estimated Completion Time: 110 minutes from start of the tutorial*"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "8d5Tx2_6kopU"
      },
      "source": [
        "To visualize the various components of a CNN, we will build a simple CNN step by step. Recall that we have already used the MNIST dataset, which consists of grayscale images of handwritten digits. This time we will use the EMNIST letters dataset, which consists of handwritten characters ($A \\rightarrow Z$).\n",
        "\n",
        "We will simplify the problem further by keeping only the images that correspond to $X$ (labeled 24 in the dataset) and $O$ (labeled 15 in the dataset), and building a CNN that classifies an image as either $X$ or $O$."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "mpQsS7VsMN28",
        "cellView": "form"
      },
      "source": [
        "#@title Dataset/DataLoader Functions\n",
        "#@markdown Run this cell\n",
        "\n",
        "# loading the dataset \n",
        "def get_Xvs0_dataset():\n",
        "\n",
        "    transform = transforms.Compose([\n",
        "        transforms.ToTensor(),\n",
        "        transforms.Normalize((0.1307,), (0.3081,))\n",
        "        ])\n",
        "    emnist_train = datasets.EMNIST(root='./data', split='letters', download=True, train=True, transform=transform)\n",
        "    emnist_test = datasets.EMNIST(root='./data', split='letters', download=True, train=False, transform=transform)\n",
        "\n",
        "    # only want O (15) and X (24) labels \n",
        "    train_idx = (emnist_train.targets == 15) | (emnist_train.targets == 24)\n",
        "    emnist_train.targets = emnist_train.targets[train_idx]\n",
        "    emnist_train.data = emnist_train.data[train_idx]  \n",
        "\n",
        "    # convert Xs predictions to 1, Os predictions to 0\n",
        "    emnist_train.targets = (emnist_train.targets == 24).type(torch.int64)\n",
        "\n",
        "    test_idx = (emnist_test.targets == 15) | (emnist_test.targets == 24)\n",
        "    emnist_test.targets = emnist_test.targets[test_idx]\n",
        "    emnist_test.data = emnist_test.data[test_idx]\n",
        "\n",
        "    # convert Xs predictions to 1, Os predictions to 0\n",
        "    emnist_test.targets = (emnist_test.targets == 24).type(torch.int64)\n",
        "\n",
        "    return emnist_train, emnist_test\n",
        "\n",
        "def get_data_loaders(train_dataset, test_dataset, batch_size=32):\n",
        "    train_loader = DataLoader(train_dataset, batch_size=batch_size,\n",
        "                         shuffle=True, num_workers=0)\n",
        "    test_loader = DataLoader(test_dataset, batch_size=batch_size,\n",
        "                         shuffle=True, num_workers=0)\n",
        "    \n",
        "    return train_loader, test_loader\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Tvf-CCYjQPyO"
      },
      "source": [
        "emnist_train, emnist_test = get_Xvs0_dataset()\n",
        "train_loader, test_loader = get_data_loaders(emnist_train, emnist_test)\n",
        "\n",
        "# index of an image in the dataset that corresponds to an X and O\n",
        "x_img_idx = 11\n",
        "o_img_idx = 0"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "r0KvshHFoDLW"
      },
      "source": [
        "Let's view a couple of samples from the dataset."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "P5r5TWYan04J"
      },
      "source": [
        "fig, (ax1, ax2, ax3, ax4) = plt.subplots(1, 4)\n",
        "ax1.imshow(emnist_train[0][0].reshape(28, 28), cmap=plt.get_cmap('gray'))\n",
        "ax2.imshow(emnist_train[10][0].reshape(28, 28), cmap=plt.get_cmap('gray'))\n",
        "ax3.imshow(emnist_train[4][0].reshape(28, 28), cmap=plt.get_cmap('gray'))\n",
        "ax4.imshow(emnist_train[6][0].reshape(28, 28), cmap=plt.get_cmap('gray'))\n",
        "fig.set_size_inches(18.5, 10.5)\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DQms9Z4loJtH"
      },
      "source": [
        "Great! Now, it's time to watch a video explaining the different components of a CNN."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "55mnSnoDO928",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Multiple Filters, ReLU and Max Pool\n",
        "import time\n",
        "try: t5;\n",
        "except NameError: t5=time.time()\n",
        "\n",
        "from IPython.display import YouTubeVideo\n",
        "\n",
        "video = YouTubeVideo(id=\"kMCZKMbT5rk\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "_QcK1TwrPfRa"
      },
      "source": [
        "## Section 6.1 Multiple Filters\n",
        "\n",
        "The following network sets up 3 fixed filters and runs them on an $X$ image and an $O$ image from the dataset."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Omn-FNa79Vvv"
      },
      "source": [
        "class Net2(nn.Module):\n",
        "  def __init__(self, kernel=None, padding=0):\n",
        "    super(Net2, self).__init__()\n",
        "    self.conv1 = nn.Conv2d(in_channels=1, out_channels=3, kernel_size=3,\n",
        "                           padding=padding)\n",
        "\n",
        "    # first kernel - leading diagonal\n",
        "    kernel_1 = torch.unsqueeze(torch.Tensor([[1, -1, -1],\n",
        "                                             [-1, 1, -1],\n",
        "                                             [-1, -1, 1]]), 0)\n",
        "\n",
        "    # second kernel -checkerboard pattern\n",
        "    kernel_2 = torch.unsqueeze(torch.Tensor([[1, -1, 1],\n",
        "                                             [-1, 1, -1],\n",
        "                                             [1, -1, 1]]), 0)\n",
        "\n",
        "    # third kernel - other diagonal\n",
        "    kernel_3 = torch.unsqueeze(torch.Tensor([[-1, -1, 1],\n",
        "                                             [-1, 1, -1],\n",
        "                                             [1, -1, -1]]), 0)\n",
        "\n",
        "    multiple_kernels = torch.stack([kernel_1, kernel_2, kernel_3], dim=0)\n",
        "\n",
        "    self.conv1.weight = torch.nn.Parameter(multiple_kernels)\n",
        "    self.conv1.bias = torch.nn.Parameter(torch.zeros_like(self.conv1.bias))\n",
        "\n",
        "  def forward(self, x):\n",
        "    x = self.conv1(x)\n",
        "    return x"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "7koDhK8YB7eR"
      },
      "source": [
        "net2 = Net2().to(device)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "a6cFWhDpFJwy"
      },
      "source": [
        "x_img = emnist_train[x_img_idx][0].unsqueeze(dim=0).to(device)\n",
        "output_x = net2(x_img)\n",
        "output_x = output_x.squeeze(dim=0).detach().cpu().numpy()\n",
        "\n",
        "o_img = emnist_train[o_img_idx][0].unsqueeze(dim=0).to(device)\n",
        "output_o = net2(o_img)\n",
        "output_o = output_o.squeeze(dim=0).detach().cpu().numpy()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "yqQl9T29pGtZ"
      },
      "source": [
        "Let us view the images of $X$ and $O$ that we want to run the filters on."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "zbjfWFL2EDC_"
      },
      "source": [
        "print(\"ORIGINAL IMAGES\")\n",
        "plt.imshow(emnist_train[x_img_idx][0].reshape(28, 28),\n",
        "           cmap=plt.get_cmap('gray'))\n",
        "plt.show()\n",
        "plt.imshow(emnist_train[o_img_idx][0].reshape(28, 28),\n",
        "           cmap=plt.get_cmap('gray'))\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "iBNdyIFBHzhB"
      },
      "source": [
        "print(\"CONVOLVED IMAGES\")\n",
        "fig, (ax1, ax2, ax3) = plt.subplots(1, 3)\n",
        "ax1.imshow(output_x[0], cmap=plt.get_cmap('gray'))\n",
        "ax2.imshow(output_x[1], cmap=plt.get_cmap('gray'))\n",
        "ax3.imshow(output_x[2], cmap=plt.get_cmap('gray'))\n",
        "fig.set_size_inches(18.5, 10.5)\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "x1fQKiiIiYf4"
      },
      "source": [
        "fig, (ax1, ax2, ax3) = plt.subplots(1, 3)\r\n",
        "ax1.imshow(output_o[0], cmap=plt.get_cmap('gray'))\r\n",
        "ax2.imshow(output_o[1], cmap=plt.get_cmap('gray'))\r\n",
        "ax3.imshow(output_o[2], cmap=plt.get_cmap('gray'))\r\n",
        "fig.set_size_inches(18.5, 10.5)\r\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "HsyHVgXaoqK3",
        "cellView": "form"
      },
      "source": [
        "#@markdown ## Exercise 6.1\n",
        "#@markdown Do you see how these filters would help recognize an X?\n",
        "multiple_filters = 'multiple filters sample response, 95 mins elapsed' #@param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "HB_7ZQPOqTTH"
      },
      "source": [
        "## Section 6.2 ReLU after convolutions\r\n",
        "\r\n",
        "Note that the convolution operation itself is linear. However, the patterns we want our CNN to recognize in images are inherently non-linear and complex. Applying the rectifier (ReLU) function introduces non-linearity into the model, helping it learn these complex features.\r\n",
        "\r\n",
        "When you look at any image, you'll find it contains many non-linear features (e.g., transitions between pixels, borders, colors, etc.).\r\n",
        "\r\n",
        "The rectifier breaks up the linearity even further, compensating for any linearity we impose on an image when we put it through the convolution operation. To see how this plays out, we can apply the convolution operation followed by rectification to our images and observe the changes below.\r\n"
      ]
    },
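    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Before doing so, here is a minimal sketch of what the rectifier does on its own: it clips negative responses to zero and leaves positive responses unchanged. (The toy tensor below is purely illustrative.)"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "import torch\n",
        "import torch.nn.functional as F\n",
        "\n",
        "# Hypothetical filter responses: negative values indicate a poor match\n",
        "responses = torch.tensor([-2.0, -0.5, 0.0, 1.5])\n",
        "\n",
        "# ReLU clips everything below zero, keeping only positive evidence\n",
        "print(F.relu(responses))"
      ],
      "execution_count": null,
      "outputs": []
    },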
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "KN4j46aqnBMA"
      },
      "source": [
        "Now let us apply ReLU to our previous model and visualize it."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Iw7BRyclJI5V"
      },
      "source": [
        "class Net3(nn.Module):\n",
        "  def __init__(self, kernel=None, padding=0):\n",
        "    super(Net3, self).__init__()\n",
        "    self.conv1 = nn.Conv2d(in_channels=1, out_channels=3, kernel_size=3,\n",
        "                           padding=padding)\n",
        "    \n",
        "    # first kernel - leading diagonal\n",
        "    kernel_1 = torch.unsqueeze(torch.Tensor([[1, -1, -1],\n",
        "                                             [-1, 1, -1],\n",
        "                                             [-1, -1, 1]]), 0)\n",
        "\n",
        "    # second kernel - checkerboard pattern\n",
        "    kernel_2 = torch.unsqueeze(torch.Tensor([[1, -1, 1],\n",
        "                                             [-1, 1, -1],\n",
        "                                             [1, -1, 1]]), 0)\n",
        "\n",
        "    # third kernel - other diagonal\n",
        "    kernel_3 = torch.unsqueeze(torch.Tensor([[-1, -1, 1],\n",
        "                                             [-1, 1, -1],\n",
        "                                             [1, -1, -1]]), 0)\n",
        "\n",
        "    multiple_kernels = torch.stack([kernel_1, kernel_2, kernel_3], dim=0)\n",
        "\n",
        "    self.conv1.weight = torch.nn.Parameter(multiple_kernels)\n",
        "    self.conv1.bias = torch.nn.Parameter(torch.zeros_like(self.conv1.bias))\n",
        "\n",
        "  def forward(self, x):\n",
        "    x = self.conv1(x)\n",
        "    x = F.relu(x)\n",
        "    return x"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "CN0Nl6yRKIQP"
      },
      "source": [
        "net3 = Net3().to(device)\n",
        "x_img = emnist_train[x_img_idx][0].unsqueeze(dim=0).to(device)\n",
        "output_x = net3(x_img)\n",
        "output_x = output_x.squeeze(dim=0).detach().cpu().numpy()\n",
        "\n",
        "o_img = emnist_train[o_img_idx][0].unsqueeze(dim=0).to(device)\n",
        "output_o = net3(o_img)\n",
        "output_o = output_o.squeeze(dim=0).detach().cpu().numpy()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "niWbfTYDKP5q"
      },
      "source": [
        "print(\"RECTIFIED OUTPUTS\")\n",
        "fig, (ax1, ax2, ax3) = plt.subplots(1, 3)\n",
        "ax1.imshow(output_x[0], cmap=plt.get_cmap('gray'))\n",
        "ax2.imshow(output_x[1], cmap=plt.get_cmap('gray'))\n",
        "ax3.imshow(output_x[2], cmap=plt.get_cmap('gray'))\n",
        "fig.set_size_inches(18.5, 10.5)\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Tm8dnVIrjd_i"
      },
      "source": [
        "fig, (ax1, ax2, ax3) = plt.subplots(1, 3)\r\n",
        "ax1.imshow(output_o[0], cmap=plt.get_cmap('gray'))\r\n",
        "ax2.imshow(output_o[1], cmap=plt.get_cmap('gray'))\r\n",
        "ax3.imshow(output_o[2], cmap=plt.get_cmap('gray'))\r\n",
        "fig.set_size_inches(18.5, 10.5)\r\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "G_0Ha3OBpTz9"
      },
      "source": [
        "Discuss how the ReLU activations help strengthen the features necessary to detect an $X$."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "1sNJljVRLcoJ"
      },
      "source": [
        "## Section 6.3: Pooling\r\n",
        "\r\n",
        "Convolutional layers in a convolutional neural network systematically apply learned filters to input images to create feature maps that summarize the presence of those features in the input. However, a limitation of these feature maps is that they record the precise position of each feature in the input. This means that small movements of a feature in the input image will produce a different feature map. Such movements can result from re-cropping, rotation, shifting, and other minor changes to the input image.\r\n",
        "\r\n",
        "A common approach to addressing this problem, borrowed from signal processing, is downsampling: creating a lower-resolution version of the input signal that retains the large or important structural elements without the fine detail that may not be as useful to the task. Downsampling also addresses the problem of having too many features in the output of the convolutional layers; techniques such as max pooling and average pooling downsample the feature maps, shrinking the layers and the number of features while adding spatial invariance."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ThHxWfixN_0r"
      },
      "source": [
        "Like convolutional layers, pooling operators consist of a fixed-shape window slid over all regions in the input according to its stride, computing a single output for each location traversed by the fixed-shape window (sometimes known as the pooling window). \r\n",
        "\r\n",
        "Thus, pooling is a method of information compression in which we replace each output of a convolutional layer with a summary statistic of its neighborhood.\r\n",
        "- In max pooling, we replace each pixel with the maximum value among its immediate neighbors fitting inside the pooling kernel.\r\n",
        "- In average pooling, we replace each pixel with the average value of its immediate neighbors fitting inside the pooling kernel.\r\n",
        "\r\n",
        "<figure>\r\n",
        "    <center><img src=https://developers.google.com/machine-learning/glossary/images/PoolingConvolution.svg?hl=fr width=400px>\r\n",
        "    <figcaption>An Example of Pooling with a kernel size of 2</figcaption>\r\n",
        "    </center>\r\n",
        "</figure>\r\n",
        "\r\n",
        "Pooling helps us maintain translational invariance in our network, as it selects a statistical summary of the values residing in the kernel window. Thus, a small displacement of the value that contributes most to that summary does not make a huge difference.\r\n",
        "\r\n",
        "Note that the pooling layer contains no parameters (there is no kernel), unlike the convolutional layer. \r\n"
      ]
    },
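    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick sketch of these summary statistics (using a toy tensor, not our EMNIST data), we can compare max pooling and average pooling with a kernel size of 2:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "import torch\n",
        "import torch.nn as nn\n",
        "\n",
        "# A single-channel 4x4 input with one large activation per 2x2 block\n",
        "t = torch.tensor([[[[1., 2., 0., 0.],\n",
        "                    [3., 4., 0., 1.],\n",
        "                    [0., 0., 5., 0.],\n",
        "                    [1., 0., 0., 6.]]]])\n",
        "\n",
        "max_pool = nn.MaxPool2d(kernel_size=2, stride=2)\n",
        "avg_pool = nn.AvgPool2d(kernel_size=2, stride=2)\n",
        "\n",
        "print(max_pool(t))  # each 2x2 block replaced by its maximum\n",
        "print(avg_pool(t))  # each 2x2 block replaced by its mean"
      ],
      "execution_count": null,
      "outputs": []
    },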
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "SGDDIpr0BkJw"
      },
      "source": [
        "We usually use MaxPool with a stride of two to lower the dimensionality. This is essentially how we downsample. The following two animations depict how stride makes a difference."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "AfMVQolJNA6w"
      },
      "source": [
        "\r\n",
        "\r\n",
        "\r\n",
        "<figure>\r\n",
        "    <center><img src=https://raw.githubusercontent.com/vdumoulin/conv_arithmetic/master/gif/no_padding_no_strides.gif>\r\n",
        "    <figcaption> Stride 1 </figcaption>\r\n",
        "    </center>    \r\n",
        "</figure>\r\n",
        "\r\n",
        "\r\n",
        "<figure>\r\n",
        "    <center><img src=https://raw.githubusercontent.com/vdumoulin/conv_arithmetic/master/gif/no_padding_strides.gif>\r\n",
        "    <figcaption> Stride 2 </figcaption>\r\n",
        "    </center>    \r\n",
        "</figure>\r\n"
      ]
    },
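    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The output size of a convolution or pooling window follows $\\lfloor (n + 2p - k)/s \\rfloor + 1$, where $n$ is the input size, $p$ the padding, $k$ the kernel size, and $s$ the stride. A minimal sketch on a random $5\\times5$ input (matching the second animation above) checks this:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "import torch\n",
        "import torch.nn as nn\n",
        "\n",
        "x = torch.rand(1, 1, 5, 5)  # batch of one 5x5 single-channel image\n",
        "\n",
        "conv_s1 = nn.Conv2d(1, 1, kernel_size=3, stride=1)  # (5 - 3)/1 + 1 = 3\n",
        "conv_s2 = nn.Conv2d(1, 1, kernel_size=3, stride=2)  # (5 - 3)/2 + 1 = 2\n",
        "\n",
        "print(conv_s1(x).shape)  # torch.Size([1, 1, 3, 3])\n",
        "print(conv_s2(x).shape)  # torch.Size([1, 1, 2, 2])"
      ],
      "execution_count": null,
      "outputs": []
    },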
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "SaTH6D2tbx2t"
      },
      "source": [
        "## Exercise 6.2: Implement MaxPooling \r\n",
        "\r\n",
        "Let us now implement MaxPooling in PyTorch and observe the effects of Pooling on the dimension of the input image. Use a kernel of size 2 and stride of 2."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "St-4IK9nECmP"
      },
      "source": [
        "class Net3(nn.Module):\n",
        "  def __init__(self, kernel=None, padding=0, stride=2):\n",
        "    super(Net3, self).__init__()\n",
        "    self.conv1 = nn.Conv2d(in_channels=1, out_channels=3, kernel_size=3,\n",
        "                           padding=padding)\n",
        "  \n",
        "    # first kernel \n",
        "    kernel_1 = torch.unsqueeze(torch.Tensor([[1, -1, -1],\n",
        "                                             [-1, 1, -1],\n",
        "                                             [-1, -1, 1]]), 0)\n",
        "\n",
        "    # second kernel \n",
        "    kernel_2 = torch.unsqueeze(torch.Tensor([[1, -1, 1],\n",
        "                                             [-1, 1, -1],\n",
        "                                             [1, -1, 1]]), 0)\n",
        "\n",
        "    # third kernel \n",
        "    kernel_3 = torch.unsqueeze(torch.Tensor([[-1, -1, 1],\n",
        "                                             [-1, 1, -1],\n",
        "                                             [1, -1, -1]]), 0)\n",
        "\n",
        "    multiple_kernels = torch.stack([kernel_1, kernel_2, kernel_3], dim=0)\n",
        "\n",
        "    self.conv1.weight = torch.nn.Parameter(multiple_kernels)\n",
        "    self.conv1.bias = torch.nn.Parameter(torch.zeros_like(self.conv1.bias))\n",
        "\n",
        "    ####################################################################\n",
        "    # Fill in missing code below (...),\n",
        "    # then remove or comment the line below to test your function\n",
        "    raise NotImplementedError(\"Define the maxpool layer\")\n",
        "    ####################################################################\n",
        "    # Hint: Use nn.MaxPool2d\n",
        "    self.pool = ...\n",
        "            \n",
        "  def forward(self, x):\n",
        "    x = self.conv1(x)\n",
        "    x = F.relu(x)\n",
        "    ####################################################################\n",
        "    # Fill in missing code below (...),\n",
        "    # then remove or comment the line below to test your function\n",
        "    raise NotImplementedError(\"Define the maxpool layer\")\n",
        "    ####################################################################\n",
        "    x = ...\n",
        "    return x\n",
        "\n",
        "\n",
        "### Uncomment the lines below to run the network and then run the next cell\n",
        "### to plot the images  \n",
        "# net = Net3().to(device)\n",
        "# x_img = emnist_train[x_img_idx][0].unsqueeze(dim=0).to(device)\n",
        "# output = net(x_img)\n",
        "# output = output.squeeze(dim=0).detach().cpu().numpy()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "RgI_C12XkCqR"
      },
      "source": [
        "# to_remove solution\n",
        "class Net3(nn.Module):\n",
        "  def __init__(self, kernel=None, padding=0, stride=2):\n",
        "    super(Net3, self).__init__()\n",
        "    self.conv1 = nn.Conv2d(in_channels=1, out_channels=3, kernel_size=3,\n",
        "                            padding=padding)\n",
        "\n",
        "    # first kernel \n",
        "    kernel_1 = torch.unsqueeze(torch.Tensor([[1, -1, -1],\n",
        "                                              [-1, 1, -1],\n",
        "                                              [-1, -1, 1]]), 0)\n",
        "\n",
        "    # second kernel \n",
        "    kernel_2 = torch.unsqueeze(torch.Tensor([[1, -1, 1],\n",
        "                                              [-1, 1, -1],\n",
        "                                              [1, -1, 1]]), 0)\n",
        "\n",
        "    # third kernel \n",
        "    kernel_3 = torch.unsqueeze(torch.Tensor([[-1, -1, 1],\n",
        "                                              [-1, 1, -1],\n",
        "                                              [1, -1, -1]]), 0)\n",
        "\n",
        "    multiple_kernels = torch.stack([kernel_1, kernel_2, kernel_3], dim=0)\n",
        "\n",
        "    self.conv1.weight = torch.nn.Parameter(multiple_kernels)\n",
        "    self.conv1.bias = torch.nn.Parameter(torch.zeros_like(self.conv1.bias))\n",
        "    self.pool = nn.MaxPool2d(2, stride=stride)\n",
        "            \n",
        "  def forward(self, x):\n",
        "    x = self.conv1(x)\n",
        "    x = F.relu(x)\n",
        "    x = self.pool(x)\n",
        "    return x\n",
        "\n",
        "\n",
        "net = Net3().to(device)\n",
        "x_img = emnist_train[x_img_idx][0].unsqueeze(dim=0).to(device)\n",
        "output_x = net(x_img)\n",
        "output_x = output_x.squeeze(dim=0).detach().cpu().numpy()\n",
        "\n",
        "o_img = emnist_train[o_img_idx][0].unsqueeze(dim=0).to(device)\n",
        "output_o = net(o_img)\n",
        "output_o = output_o.squeeze(dim=0).detach().cpu().numpy()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "AoBknQ-4kCqZ"
      },
      "source": [
        "fig, (ax1, ax2, ax3) = plt.subplots(1, 3)\n",
        "ax1.imshow(output_x[0], cmap=plt.get_cmap('gray'))\n",
        "ax2.imshow(output_x[1], cmap=plt.get_cmap('gray'))\n",
        "ax3.imshow(output_x[2], cmap=plt.get_cmap('gray'))\n",
        "fig.set_size_inches(18.5, 10.5)\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "MPfolIkUkcz_"
      },
      "source": [
        "fig, (ax1, ax2, ax3) = plt.subplots(1, 3)\r\n",
        "ax1.imshow(output_o[0], cmap=plt.get_cmap('gray'))\r\n",
        "ax2.imshow(output_o[1], cmap=plt.get_cmap('gray'))\r\n",
        "ax3.imshow(output_o[2], cmap=plt.get_cmap('gray'))\r\n",
        "fig.set_size_inches(18.5, 10.5)\r\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "9oGHNqBMkKtV"
      },
      "source": [
        "You should observe that the output is half the size of what you saw after the ReLU section, which is due to the MaxPool layer.\r\n",
        "\r\n",
        "Despite the reduction in the size of the output, the important high-level features still remain intact."
      ]
    },
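    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a sanity check of these shapes, we can trace a single $28\\times28$ image-sized tensor through an untrained conv + pool stack (a sketch with fresh, randomly initialized layers, not the hand-set kernels above):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "import torch\n",
        "import torch.nn as nn\n",
        "\n",
        "x = torch.rand(1, 1, 28, 28)                    # one EMNIST-sized image\n",
        "conv_out = nn.Conv2d(1, 3, kernel_size=3)(x)    # 28 -> 26 (no padding)\n",
        "pool_out = nn.MaxPool2d(2, stride=2)(conv_out)  # 26 -> 13\n",
        "\n",
        "print(conv_out.shape)  # torch.Size([1, 3, 26, 26])\n",
        "print(pool_out.shape)  # torch.Size([1, 3, 13, 13])"
      ],
      "execution_count": null,
      "outputs": []
    },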
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "XvuHXTFRnEAu"
      },
      "source": [
        "---\r\n",
        "# Section 7: Number of Parameters in CNNs\r\n",
        "*Estimated Completion Time: 125 minutes from start of the tutorial*\r\n",
        "\r\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "U5R9OjRASBm5",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Reduction in Parameters to Learn Compared to Fully Connected Networks\n",
        "import time\n",
        "try: t6;\n",
        "except NameError: t6=time.time()\n",
        "\n",
        "from IPython.display import YouTubeVideo\n",
        "\n",
        "video = YouTubeVideo(id=\"oJN_migdZus\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "KuOq2iDxeES_"
      },
      "source": [
        "\r\n",
        "## Exercise 7: Number of parameters in Convolutional vs Fully connected Models\r\n",
        "Convolutional networks encourage weight sharing by learning a single kernel that is repeated over the entire input image. In general, this kernel has barely a few parameters compared to the huge number of parameters in a linear network. Let's count the parameters of a few-layer network on random image data of shape $32\\times32$, using both convolutional layers and linear layers. In this exercise, Num_Linears is the number of linear layers we use in the network, with each linear layer having the same input and output dimensions. Similarly, Num_Convs is the number of convolutional blocks we use, with each block containing a single kernel. The kernel size is the length and width of this kernel.\r\n",
        "\r\n",
        "\r\n",
        "<figure>\r\n",
        "    <center><img src=https://raw.githubusercontent.com/CIS-522/course-content/main/tutorials/W06_ConvNets/static/img_params.png>\r\n",
        "    <figcaption> Parameter comparison </figcaption>\r\n",
        "    </center>    \r\n",
        "</figure>"
      ]
    },
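    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Before playing with the sliders, it may help to hand-check the counting: a linear layer has in_features $\\times$ out_features weights, while a conv layer has out_channels $\\times$ in_channels $\\times$ k $\\times$ k. A minimal sketch (with bias disabled for clarity):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "import torch.nn as nn\n",
        "\n",
        "linear = nn.Linear(32 * 32, 32 * 32, bias=False)   # 1024 * 1024 weights\n",
        "conv = nn.Conv2d(1, 3, kernel_size=3, bias=False)  # 3 * 1 * 3 * 3 weights\n",
        "\n",
        "n_linear = sum(p.numel() for p in linear.parameters())\n",
        "n_conv = sum(p.numel() for p in conv.parameters())\n",
        "\n",
        "print(n_linear)  # 1048576\n",
        "print(n_conv)    # 27"
      ],
      "execution_count": null,
      "outputs": []
    },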
    {
      "cell_type": "code",
      "metadata": {
        "id": "wrDA0ARxdVf9",
        "cellView": "form"
      },
      "source": [
        "#@title Reduced_Params == Reduced_time_to_train\r\n",
        "\r\n",
        "batch_size= 1 #@param {type:\"integer\"}\r\n",
        "sample_image=torch.rand(batch_size,1,32,32)\r\n",
        "print(\"Input Shape {}\".format(sample_image.shape))\r\n",
        "\r\n",
        "Num_Linears = 1 #@param {type:\"slider\", min:1, max:3, step:1}\r\n",
        "linear_layer = []\r\n",
        "for i in range(Num_Linears):\r\n",
        "    linear_layer.append(nn.Linear(32*32*1,32*32*1,bias=False))\r\n",
        "linear_layer = nn.Sequential(*linear_layer)\r\n",
        "\r\n",
        "kernel_size= 3 #@param {type:\"slider\", min:3, max:21, step:2}\r\n",
        "\r\n",
        "Num_Convs = 1 #@param {type:\"slider\", min:1, max:20, step:1}\r\n",
        "\r\n",
        "conv_layer = []\r\n",
        "conv_layer.append(nn.Conv2d(in_channels=1,out_channels=3,kernel_size=kernel_size,padding=1,bias=False))\r\n",
        "for i in range(Num_Convs-1):\r\n",
        "    conv_layer.append(nn.Conv2d(in_channels=3,out_channels=3,kernel_size=kernel_size,padding=1,bias=False))\r\n",
        "conv_layer = nn.Sequential(*conv_layer)\r\n",
        "\r\n",
        "\r\n",
        "t_1=time.time()\r\n",
        "print(\"\\nOutput From Linear {}\".format(linear_layer(torch.flatten(sample_image,1)).shape))\r\n",
        "t_2=time.time()\r\n",
        "print(\"Output From Conv Layer {}\".format(conv_layer(sample_image).shape))\r\n",
        "t_3=time.time()\r\n",
        "\r\n",
        "print(\"Time taken by Linear Layer {}\".format(t_2-t_1))\r\n",
        "print(\"Time taken by Conv Layer {}\".format(t_3-t_2))\r\n",
        "print(\"\\nTime Performance improvement by Conv Layer {:.2f} %\".format(((t_2-t_1)-(t_3-t_2))*100.0/(t_2-t_1)))\r\n",
        "\r\n",
        "\r\n",
        "print(\"\\nTotal Parameters in Linear Layer {}\".format(sum(p.numel() for p in linear_layer.parameters())))\r\n",
        "print(\"Total Parameters in Conv Layer {}\".format(sum(p.numel() for p in conv_layer.parameters())))\r\n",
        "\r\n",
        "\r\n",
        "del linear_layer,conv_layer"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "l39x7KcbdXF3"
      },
      "source": [
        "The difference in parameters is huge and keeps growing as the input image size increases, because the linear layer must build a matrix that can be directly multiplied with the input pixels.\r\n",
        "\r\n",
        "<br>\r\n",
        "\r\n",
        "The number of CNN parameters, however, is invariant to the image size: irrespective of the input it receives, the network slides the same learnable filter over the images. <br>The reduced parameter set not only brings down memory usage substantially but also allows the model to generalize better.\r\n"
      ]
    },
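    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A small sketch makes this invariance concrete: the same conv layer processes inputs of any spatial size with an unchanged parameter count, whereas a linear layer would need a new weight matrix for each input size."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "import torch\n",
        "import torch.nn as nn\n",
        "\n",
        "conv = nn.Conv2d(1, 3, kernel_size=3, padding=1, bias=False)\n",
        "n_params = sum(p.numel() for p in conv.parameters())\n",
        "\n",
        "# The same 27 weights slide over any input resolution\n",
        "small = conv(torch.rand(1, 1, 32, 32))\n",
        "large = conv(torch.rand(1, 1, 256, 256))\n",
        "\n",
        "print(n_params)  # 27, regardless of input size\n",
        "print(small.shape, large.shape)"
      ],
      "execution_count": null,
      "outputs": []
    },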
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "rIrcL2Bpk_pU"
      },
      "source": [
        "---\r\n",
        "# Section 8: Stacking up the Layers\r\n",
        "\r\n",
        "\r\n",
        "\r\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "knRAWpGNTwAU",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Putting it All Together\n",
        "import time\n",
        "try: t7;\n",
        "except NameError: t7=time.time()\n",
        "\n",
        "from IPython.display import YouTubeVideo\n",
        "\n",
        "video = YouTubeVideo(id=\"vX4u3gQN730\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "YgRyNJbLw1iB",
        "cellView": "form"
      },
      "source": [
        "#@title Train/Test Functions\n",
        "\n",
        "def train(model, device, train_loader, epochs):\n",
        "    model.train()\n",
        "\n",
        "    criterion = nn.CrossEntropyLoss()\n",
        "    optimizer = torch.optim.SGD(model.parameters(),\n",
        "                              lr=0.01)\n",
        "    for epoch in range(epochs):\n",
        "        with tqdm(train_loader, unit='batch') as tepoch:\n",
        "            for data, target in tepoch:\n",
        "                data, target = data.to(device), target.to(device)\n",
        "                optimizer.zero_grad()\n",
        "                output = model(data)\n",
        "                \n",
        "                loss = criterion(output, target)\n",
        "                loss.backward()\n",
        "                optimizer.step()\n",
        "                tepoch.set_postfix(loss=loss.item())\n",
        "                sleep(0.1)\n",
        "\n",
        "def test(model, device, data_loader):\n",
        "    model.eval()\n",
        "    correct = 0\n",
        "    total = 0\n",
        "    for data in data_loader:\n",
        "        inputs, labels = data\n",
        "        inputs = inputs.to(device).float()\n",
        "        labels = labels.to(device).long()\n",
        "\n",
        "        outputs = model(inputs)\n",
        "        _, predicted = torch.max(outputs, 1)\n",
        "        total += labels.size(0)\n",
        "        correct += (predicted == labels).sum().item()\n",
        "\n",
        "    acc = 100 * correct / total\n",
        "    return acc"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "QPQONEyiMCpP"
      },
      "source": [
        "## Exercise 8: Implement your own CNN.\r\n",
        "\r\n",
        "Let's stack up all we have learnt. Create a CNN with the following structure. <br>\r\n",
        "- Convolution (in_channels=1, out_channels=32, kernel_size=3)\r\n",
        "- Convolution (in_channels=32, out_channels=64, kernel_size=3)\r\n",
        "- Pool Layer \r\n",
        "- Fully Connected Layer (9216, 128)\r\n",
        "- Fully Connected layer (128, 2)\r\n",
        "\r\n",
        "Note: As discussed in the video, we would like to flatten the output from the convolutional layers before passing it on to the linear layers, thereby converting an input of shape [BatchSize, Channels, Height, Width] to [BatchSize, Channels\\*Height\\*Width], which in this case would be from [32, 64, 12, 12] (output of the second convolution layer) to [32, 64\\*12\\*12] = [32, 9216].<br> Hint: You could use torch.flatten in order to flatten the input at this stage. \r\n",
        "\r\n",
        "Also, don't forget the ReLUs! There is no need to add a ReLU after the final fully connected layer.\r\n",
        "\r\n",
        "\r\n",
        "*Estimated Completion Time: 140 minutes from start of the tutorial*\r\n",
        "\r\n"
      ]
    },
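    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The flattening step described above can be sketched on its own with a random tensor of the stated shape:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "import torch\n",
        "\n",
        "x = torch.rand(32, 64, 12, 12)        # shape after the second conv layer\n",
        "flat = torch.flatten(x, start_dim=1)  # keep the batch dimension\n",
        "\n",
        "print(flat.shape)  # torch.Size([32, 9216]), since 64 * 12 * 12 = 9216"
      ],
      "execution_count": null,
      "outputs": []
    },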
    {
      "cell_type": "code",
      "metadata": {
        "id": "zhKnreKCxF2q"
      },
      "source": [
        "class EMNIST_Net(nn.Module):\n",
        "  def __init__(self):\n",
        "    super(EMNIST_Net, self).__init__()\n",
        "\n",
        "    ####################################################################\n",
        "    # Fill in missing code below (...),\n",
        "    # then remove or comment the line below to test your function\n",
        "    raise NotImplementedError(\"Define the required layers\")\n",
        "    ####################################################################\n",
        "    self.conv1 = nn.Conv2d(...)\n",
        "    self.conv2 = nn.Conv2d(...)\n",
        "    self.fc1 = nn.Linear(...)\n",
        "    self.fc2 = nn.Linear(...)\n",
        "    self.pool = nn.MaxPool2d(...)\n",
        "\n",
        "  def forward(self, x):\n",
        "    ####################################################################\n",
        "    # Fill in missing code below (...),\n",
        "    # then remove or comment the line below to test your function\n",
        "    # Hint: Do not forget to flatten the image as it goes from Convolution Layers to Linear Layers!\n",
        "    raise NotImplementedError(\"Define forward pass for any input x\")\n",
        "    ####################################################################\n",
        "    x = ...\n",
        "    return x\n",
        "\n",
        "\n",
        "### Uncomment the lines below to train your network   \n",
        "# emnist_net = EMNIST_Net().to(device)\n",
        "# train(emnist_net, device, train_loader, 1)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "QqvsTLB7vg3E"
      },
      "source": [
        "# to_remove solution\n",
        "class EMNIST_Net(nn.Module):\n",
        "  def __init__(self):\n",
        "    super(EMNIST_Net, self).__init__()\n",
        "    self.conv1 = nn.Conv2d(1, 32, 3)\n",
        "    self.conv2 = nn.Conv2d(32, 64, 3)\n",
        "    self.fc1 = nn.Linear(9216, 128)\n",
        "    self.fc2 = nn.Linear(128, 2)\n",
        "    self.pool = nn.MaxPool2d(2)\n",
        "\n",
        "  def forward(self, x):\n",
        "    x = self.conv1(x)\n",
        "    x = F.relu(x)\n",
        "    x = self.conv2(x)\n",
        "    x = F.relu(x)\n",
        "    x = self.pool(x)\n",
        "    x = torch.flatten(x, 1)\n",
        "    x = self.fc1(x)\n",
        "    x = F.relu(x)        \n",
        "    x = self.fc2(x)\n",
        "    return x\n",
        "\n",
        "\n",
        "emnist_net = EMNIST_Net().to(device)\n",
        "train(emnist_net, device, train_loader, 1)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "mo_2SN-Tx3cb"
      },
      "source": [
        "Now, let's run the network on the test data!"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "2rkw2vpwL7-a"
      },
      "source": [
        "test(emnist_net, device, test_loader)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "IC9vzLMVyzIP"
      },
      "source": [
        "You should be able to reach a test accuracy of around $99\\%$!"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "idhFFB4MPjUN"
      },
      "source": [
        "x_img = emnist_train[x_img_idx][0].unsqueeze(dim=0).to(device)\n",
        "plt.imshow(emnist_train[x_img_idx][0].reshape(28, 28),\n",
        "           cmap=plt.get_cmap('gray'))\n",
        "plt.show()\n",
        "output = emnist_net(x_img)\n",
        "F.softmax(output, dim=1)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "I5LGEQflrFWU"
      },
      "source": [
        "The network is quite confident that this image is an $X$! <br>This is evident from the softmax output, which shows the probability of the image belonging to each of the classes: the probability of belonging to class 1, i.e., class $X$, is higher. <br><br>Let us also test the network against an $O$ image. "
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ki50Tqt9PpMO"
      },
      "source": [
        "o_img = emnist_train[o_img_idx][0].unsqueeze(dim=0).to(device)\r\n",
        "plt.imshow(emnist_train[o_img_idx][0].reshape(28, 28),\r\n",
        "           cmap=plt.get_cmap('gray'))\r\n",
        "plt.show()\r\n",
        "output = emnist_net(o_img)\r\n",
        "F.softmax(output, dim=1)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "W65aPbuhsNqm"
      },
      "source": [
        "Again, the predicted label is correct for the $O$ image. "
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "uyHy54ZxPjPt"
      },
      "source": [
        "# Submit your responses\r\n",
        "Please run the following cell and then press \"Submit\" so we can record your responses."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "JAiufZ39Lpq8",
        "cellView": "form"
      },
      "source": [
        "import time\r\n",
        "import numpy as np\r\n",
        "from IPython.display import IFrame\r\n",
        "#@markdown #Run Cell to Show Airtable Form\r\n",
        "#@markdown ##**Confirm your answers and then click \"Submit\"**\r\n",
        "\r\n",
        "def prefill_form(src, fields: dict):\r\n",
        "  '''\r\n",
        "  src: the original src url to embed the form\r\n",
        "  fields: a dictionary of field:value pairs,\r\n",
        "  e.g. {\"pennkey\": my_pennkey, \"location\": my_location}\r\n",
        "  '''\r\n",
        "  prefills = \"&\".join([\"prefill_%s=%s\"%(key, fields[key]) for key in fields])\r\n",
        "  src = src + prefills\r\n",
        "  src = \"+\".join(src.split(\" \"))\r\n",
        "  return src\r\n",
        "\r\n",
        "\r\n",
        "#autofill time if it is not present\r\n",
        "try: t0;\r\n",
        "except NameError: t0 = time.time()\r\n",
        "try: t1;\r\n",
        "except NameError: t1 = time.time()\r\n",
        "try: t2;\r\n",
        "except NameError: t2 = time.time()\r\n",
        "try: t3;\r\n",
        "except NameError: t3 = time.time()\r\n",
        "try: t4;\r\n",
        "except NameError: t4 = time.time()\r\n",
        "try: t5;\r\n",
        "except NameError: t5 = time.time()\r\n",
        "try: t6;\r\n",
        "except NameError: t6 = time.time()\r\n",
        "try: t7;\r\n",
        "except NameError: t7 = time.time()\r\n",
        "\r\n",
        "#autofill fields if they are not present\r\n",
        "#a missing pennkey and pod will result in an Airtable warning\r\n",
        "#which is easily fixed user-side.\r\n",
        "try: my_pennkey;\r\n",
        "except NameError: my_pennkey = \"\"\r\n",
        "\r\n",
        "try: my_pod;\r\n",
        "except NameError: my_pod = \"Select\"\r\n",
        "\r\n",
        "try: last_week_recap;\r\n",
        "except NameError: last_week_recap = \"\"\r\n",
        "\r\n",
        "try: imagenet_features;\r\n",
        "except NameError: imagenet_features = \"\"\r\n",
        "\r\n",
        "try: fcn_invariance;\r\n",
        "except NameError: fcn_invariance = \"\"\r\n",
        "\r\n",
        "try: edge_generate;\r\n",
        "except NameError: edge_generate = \"\"\r\n",
        "\r\n",
        "try: transpose_kernel;\r\n",
        "except NameError: transpose_kernel = \"\"\r\n",
        "\r\n",
        "try: multiple_filters;\r\n",
        "except NameError: multiple_filters = \"\"\r\n",
        "\r\n",
        "times = np.array([t1, t2, t3, t4, t5, t6, t7]) - t0\r\n",
        "\r\n",
        "fields = {\"my_pennkey\": my_pennkey,\r\n",
        "          \"my_pod\": my_pod,\r\n",
        "          \"last_week_recap\": last_week_recap,\r\n",
        "          \"imagenet_features\": imagenet_features,\r\n",
        "          \"fcn_invariance\": fcn_invariance,\r\n",
        "          \"edge_generate\": edge_generate,\r\n",
        "          \"transpose_kernel\": transpose_kernel,\r\n",
        "          \"multiple_filters\": multiple_filters,\r\n",
        "          \"cumulative_times\": times}\r\n",
        "\r\n",
        "src = \"https://airtable.com/embed/shrBAPp6Ma1khiT4I?\"\r\n",
        "\r\n",
        "#now instead of the original source url, we do: src = prefill_form(src, fields)\r\n",
        "display(IFrame(src = prefill_form(src, fields), width = 800, height = 400))"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0kglwCDy7x71"
      },
      "source": [
        "## Feedback\r\n",
        "How could this session have been better? How happy are you in your group? How do you feel right now?\r\n",
        "\r\n",
        "Feel free to use the embedded form below or use this link:\r\n",
        "<a target=\"_blank\" rel=\"noopener noreferrer\" href=\"https://airtable.com/shrNSJ5ECXhNhsYss\">https://airtable.com/shrNSJ5ECXhNhsYss</a>"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "0-1urfTaUILm"
      },
      "source": [
        "display(IFrame(src=\"https://airtable.com/embed/shrNSJ5ECXhNhsYss?backgroundColor=red\", width = 800, height = 400))"
      ],
      "execution_count": null,
      "outputs": []
    }
  ]
}