{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "name": "W7_Tutorial",
      "provenance": [],
      "collapsed_sections": [],
      "toc_visible": true,
      "authorship_tag": "ABX9TyPRHX1aqhPYXWdnXtUesKPb",
      "include_colab_link": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "accelerator": "GPU"
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/CIS-522/course-content/blob/main/tutorials/W07_Vision_TL/W7_Tutorial.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "qhcP2K8WtVNl"
      },
      "source": [
        "# **CIS-522 Week 7: Transfer Learning and Image Models**\n",
        "\n",
        "**Instructor:** Konrad Kording\n",
        "\n",
        "**Content Creator:** [Ben Heil](https://twitter.com/autobencoder)"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "XcQ08tdKGu97",
        "cellView": "form"
      },
      "source": [
        "#@markdown What is your Pennkey and pod? (text, not numbers, e.g. bfranklin)\n",
        "my_pennkey = '' #@param {type:\"string\"}\n",
        "my_pod = 'Select' #@param ['Select', 'euclidean-wombat', 'sublime-newt', 'buoyant-unicorn', 'lackadaisical-manatee','indelible-stingray','superfluous-lyrebird','discreet-reindeer','quizzical-goldfish','astute-jellyfish','ubiquitous-cheetah','nonchalant-crocodile','fashionable-lemur','spiffy-eagle','electric-emu','quotidian-lion', 'quantum-herring']\n",
        "\n",
        "import time\n",
        "t0 = time.time()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "lcq8QjKJ5F1O"
      },
      "source": [
        "## Setup and Imports"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "4aRvUzld5Hz6"
      },
      "source": [
        "# Install facenet, a model used to do facial recognition (used in exercise 5)\n",
        "!pip -q install facenet-pytorch"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "29qL3gCzwXmz"
      },
      "source": [
        "import copy\n",
        "import glob\n",
        "import random\n",
        "import time\n",
        "\n",
        "import ipywidgets as widgets\n",
        "import matplotlib.patches as mpatches\n",
        "import matplotlib.pyplot as plt\n",
        "import numpy as np\n",
        "import sklearn.decomposition\n",
        "import torch\n",
        "import torch.nn as nn\n",
        "import torch.nn.functional as F\n",
        "import torchvision\n",
        "import tqdm\n",
        "import urllib\n",
        "from facenet_pytorch import MTCNN, InceptionResnetV1\n",
        "from matplotlib.colors import ListedColormap\n",
        "from IPython import display\n",
        "from PIL import Image\n",
        "from torchvision.datasets import ImageFolder\n",
        "from torchvision.utils import make_grid\n",
        "from torchvision import transforms"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "hhnDUeuF9Huu"
      },
      "source": [
        "seed = 522\n",
        "random.seed(seed)\n",
        "torch.manual_seed(seed)\n",
        "torch.cuda.manual_seed_all(seed)\n",
        "torch.cuda.manual_seed(seed)\n",
        "np.random.seed(seed)\n",
        "torch.backends.cudnn.deterministic = True\n",
        "torch.backends.cudnn.benchmark = False\n",
        "\n",
        "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "QslRylUxoBvU"
      },
      "source": [
        "## Tutorial Objectives:\n",
        "\n",
        "1. Be able to list 3 historical state-of-the-art DL computer vision architectures\n",
        "2. Understand how architectures incorporate ideas we have about the world\n",
        "3. Know what ResNets are and how they are built\n",
        "4. Learn to recognize opportunities for transfer learning and domain adaptation"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "zzpsl_eGPjG_"
      },
      "source": [
        "# @title Week 7 Slides\n",
        "from IPython.display import HTML\n",
        "HTML('<iframe src=\"https://docs.google.com/presentation/d/e/2PACX-1vRRPg9GFwwUTkQPfOdT3JfAXeWCndm7wDS4fksOoGurLEHEaEcYX4lxcbcUevz4bKCa2IQ8TxXNbwc6/embed?start=false&loop=false&delayms=3000\" frameborder=\"0\" width=\"960\" height=\"569\" allowfullscreen=\"true\" mozallowfullscreen=\"true\" webkitallowfullscreen=\"true\"></iframe>')"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "PQmim4zYtimK"
      },
      "source": [
        "## Recap Week 6 "
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "h-EZIpI4Cz5s"
      },
      "source": [
        "#@title Video: Week 6 Recap\n",
        "from IPython.display import YouTubeVideo\n",
        "video = YouTubeVideo(id=\"Ksle76Mon3c\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "tLmC6cX7G44j"
      },
      "source": [
        "### Recap the experience from last week\n",
        "\n",
        "What did you learn last week? What questions do you have? [5 min discussion]"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ZoYQYb9Rtuie"
      },
      "source": [
        "---\n",
        "# Section 1: What can we do to make convnets scale?\n",
        "*Estimated time: 10 minutes since start*"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "thLPE9vsDLBC",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Making Convnets Scale\n",
        "from IPython.display import YouTubeVideo\n",
        "video = YouTubeVideo(id=\"kQj1kFk_taw\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5jUgdW4WlQMF"
      },
      "source": [
        "Images are high-dimensional. That is to say, `image_length` * `image_width` * `image_channels` is a big number, and multiplying that big number by a normal-sized fully connected layer leads to a ton of parameters to learn. Last week we learned about convolutional neural networks, one way of working around high dimensionality in images and other domains.\n",
        "\n",
        "The widget below calculates the parameters required for a single convolutional or fully connected layer that operates on an image of a certain height and width. Adjust the sliders to gain an intuition for how different model and data characteristics affect the number of parameters your model needs to fit.\n",
        "\n",
        "Note: these classes are designed to show parameter scaling in the first layer of a network; to be actually useful they would need more layers, an activation function, etc."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "I1dGy3GMxxmM"
      },
      "source": [
        "class FullyConnectedNet(nn.Module):\n",
        "    def __init__(self):\n",
        "        super(FullyConnectedNet, self).__init__()\n",
        "\n",
        "        image_width = 128\n",
        "        image_channels = 3\n",
        "        self.input_size = image_channels * image_width ** 2\n",
        "\n",
        "        self.fc1 = nn.Linear(self.input_size, 256)\n",
        "\n",
        "    def forward(self, x):\n",
        "        x = x.view(-1, self.input_size)\n",
        "        return self.fc1(x)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "agV-S2YryaMR"
      },
      "source": [
        "class ConvNet(nn.Module):\n",
        "    def __init__(self):\n",
        "        super(ConvNet, self).__init__()\n",
        "\n",
        "        self.conv1 = nn.Conv2d(in_channels=3, \n",
        "                               out_channels=256, \n",
        "                               kernel_size=(3,3),\n",
        "                               padding=1)\n",
        "        \n",
        "    def forward(self, x):\n",
        "        return self.conv1(x)\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "US-4H0Pn5y__"
      },
      "source": [
        "### Exercise 1.1 Calculate FCNN Parameters"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "nogfdV4L4Xw0"
      },
      "source": [
        "def get_fcnn_parameter_count() -> int:\n",
        "    \"\"\"\n",
        "    Calculate the number of parameters used by the fully connected network.\n",
        "    Hint: Casting the result of fc_net.parameters() to a list may make it \n",
        "          easier to work with\n",
        "\n",
        "    Returns:\n",
        "        param_count: The number of parameters in the network\n",
        "    \"\"\"\n",
        "    ####################################################################\n",
        "    # Fill in all missing code below (...),\n",
        "    # then remove or comment the line below to test your function\n",
        "    raise NotImplementedError(\"Calculate the number of parameters in the fully connected network\")\n",
        "    #################################################################### \n",
        "\n",
        "    fc_net = FullyConnectedNet()\n",
        "    fc_net_parameters = ...\n",
        "    param_count = ...\n",
        "\n",
        "    return param_count\n",
        "\n",
        "### Uncomment below to test your function\n",
        "# print(get_fcnn_parameter_count())"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "HWyfB8fJ4uq2"
      },
      "source": [
        "# to_remove solution\n",
        "def get_fcnn_parameter_count() -> int:\n",
        "    \"\"\"\n",
        "    Calculate the number of parameters used by the fully connected network.\n",
        "    Hint: Casting the result of fc_net.parameters() to a list may make it \n",
        "          easier to work with\n",
        "\n",
        "    Returns:\n",
        "        param_count: The number of parameters in the network\n",
        "    \"\"\"\n",
        "\n",
        "    fc_net = FullyConnectedNet()\n",
        "\n",
        "    fc_net_parameters = fc_net.parameters()\n",
        "    param_count = 0\n",
        "    for layer in fc_net_parameters:\n",
        "        current_layer_params = None\n",
        "        for dimension in layer.shape:\n",
        "            if current_layer_params is None:\n",
        "                current_layer_params = dimension\n",
        "            else:\n",
        "                current_layer_params *= dimension\n",
        "        param_count += current_layer_params\n",
        "\n",
        "    # Alternatively, there's a convenient torch function to count the number of items in a tensor:\n",
        "    fc_net_parameters = fc_net.parameters()\n",
        "\n",
        "    param_count = 0\n",
        "    for layer in fc_net_parameters:\n",
        "        param_count += torch.numel(layer)\n",
        "\n",
        "    return param_count\n",
        "\n",
        "### Uncomment below to test your function\n",
        "print(get_fcnn_parameter_count())"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Zd2I_1th9sNP"
      },
      "source": [
        "### Exercise 1.2 Calculate CNN Parameters"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "FfVAyzJG9sNY"
      },
      "source": [
        "def get_cnn_parameter_count() -> int:\n",
        "    \"\"\"\n",
        "    Calculate the number of parameters used by the convolutional network.\n",
        "    Hint: Casting the result of cnn_net.parameters() to a list may make it \n",
        "          easier to work with\n",
        "\n",
        "    Returns:\n",
        "        param_count: The number of parameters in the network\n",
        "    \"\"\"\n",
        "    ####################################################################\n",
        "    # Fill in all missing code below (...),\n",
        "    # then remove or comment the line below to test your function\n",
        "    raise NotImplementedError(\"Calculate the number of parameters in the convolutional network\")\n",
        "    #################################################################### \n",
        "\n",
        "    convnet = ConvNet()\n",
        "    convnet_parameters = ...\n",
        "    conv_shape = ...\n",
        "\n",
        "    param_count = ...\n",
        "\n",
        "    return param_count\n",
        "\n",
        "### Uncomment below to test your function\n",
        "#print(get_cnn_parameter_count())"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "dL2Q9QeK9sNZ"
      },
      "source": [
        "# to_remove solution\n",
        "def get_cnn_parameter_count() -> int:\n",
        "    \"\"\"\n",
        "    Calculate the number of parameters used by the convolutional network.\n",
        "    Hint: Casting the result of cnn_net.parameters() to a list may make it \n",
        "          easier to work with\n",
        "\n",
        "    Returns:\n",
        "        param_count: The number of parameters in the network\n",
        "    \"\"\"\n",
        "\n",
        "    convnet = ConvNet()\n",
        "    convnet_parameters = list(convnet.parameters())\n",
        "\n",
        "    param_count = 0\n",
        "    for layer in convnet_parameters:\n",
        "        param_count += torch.numel(layer)\n",
        "\n",
        "    return param_count\n",
        "\n",
        "### Uncomment below to test your function\n",
        "print(get_cnn_parameter_count())"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Es3mU2tv7K58"
      },
      "source": [
        "## Check your results\n",
        "The widget below calculates the number of parameters in a FCNN and CNN with the same architecture as our models above. Our models had an input image that was 128x128, and used 256 filters (or 256 nodes in the FCNN case).\n",
        "\n",
        "Note how few parameters the convolutional network needs, especially as you increase the input image size."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "WBK7w03qoX3n",
        "cellView": "form"
      },
      "source": [
        "# @title Parameter Calculator\n",
        "\n",
        "def calculate_parameters(filter_count, image_width, fcnn_nodes):\n",
        "    # Convnet math: Implement how parameters scale as a function of image size between convnets and FCNN\n",
        "\n",
        "    filter_width = 3\n",
        "    image_channels = 3\n",
        "\n",
        "    # Assuming a square, RGB image\n",
        "    image_area = image_width ** 2\n",
        "    image_volume = image_area * image_channels\n",
        "\n",
        "    # If we're using padding=same, the output of a convnet will be the same shape as the original\n",
        "    # image, but with more features\n",
        "    fcnn_parameters = image_volume * fcnn_nodes\n",
        "    cnn_parameters = image_channels * filter_count * filter_width ** 2 \n",
        "\n",
        "    # Add bias\n",
        "    fcnn_parameters += fcnn_nodes\n",
        "    cnn_parameters += filter_count\n",
        "\n",
        "    print('CNN parameters: {}'.format(cnn_parameters))\n",
        "    print('Fully Connected parameters: {}'.format(fcnn_parameters))\n",
        "\n",
        "    return None\n",
        "\n",
        "_ = widgets.interact(calculate_parameters, \n",
        "                     filter_count=(16, 512, 16), \n",
        "                     image_width=(16, 512, 16),\n",
        "                     fcnn_nodes=(16, 512, 16))\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Nt5NBWiNumtf"
      },
      "source": [
        "--- \n",
        "# Section 2: The History of Convnets\n",
        "*Estimated Time Elapsed: 25 minutes*"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "BC7d3njFqjpf"
      },
      "source": [
        "Convolutional neural networks have been around for a long time. [The first CNN model](https://www.rctn.org/bruno/public/papers/Fukushima1980.pdf) was published in 1980, and was based on ideas in neuroscience that [predated it by decades](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1359523/). Why is it then that [AlexNet](https://proceedings.neurips.cc/paper/2012/hash/c399862d3b9d6b76c8436e924a68c45b-Abstract.html), a CNN model published in 2012, is generally considered to mark the start of the deep learning revolution?\n",
        "\n",
        "Watch the video below to get a better idea of the role that hardware has played in progressing deep learning."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "abPTNUAhDVlf",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Historical Convnets\n",
        "from IPython.display import YouTubeVideo\n",
        "video = YouTubeVideo(id=\"wxLjzfAkp-U\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "TzwSi0W_ugCl"
      },
      "source": [
        "### Exercise 2:\n",
        "Based on previous classes and our discussion of convnets so far, what additions might we make to improve CNNs?\n",
        "What causes them to have the parameter efficiency we saw in the previous section and lecture?\n",
        "\n",
        "Discuss this with your group for ~15 minutes, then record your answers in the form below:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "lIh-nQ42upK9",
        "cellView": "form"
      },
      "source": [
        "parameter_efficiency = '' #@param {type:\"string\"}\n",
        "\n",
        "t1 = time.time()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "5Bfq7Utx1oTH"
      },
      "source": [
        "# to_remove solution\n",
        "\n",
        "# Improvements on vanilla CNNs include skip connections to improve gradient flow, special \n",
        "# architectures like U-nets that are good for image segmentation, or maybe finding a way to get \n",
        "# rotational invariance\n",
        "\n",
        "# The parameter efficiency of CNNs comes from sharing weights across the entire input instead of \n",
        "# having each weight correspond to a particular pixel."
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "43pAjvBdupyG"
      },
      "source": [
        "---\n",
        "# Section 3: Big and Deep Convnets\n",
        "*Estimated Time Elapsed: 50 minutes*"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Gf8sepS84fdj"
      },
      "source": [
        "### Introduction to AlexNet\n",
        "\n",
        "AlexNet arguably marked the start of the current age of deep learning.\n",
        "It incorporates a number of the defining characteristics of successful DL today: deep networks, GPU-powered parallelization, and building blocks encoding task-specific priors.\n",
        "In this section you'll have the opportunity to play with AlexNet and see the world through its eyes."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "fiqfNnwPuvDj",
        "cellView": "form"
      },
      "source": [
        "# @title Import Alexnet\n",
        "# This cell gives you the `alexnet` model as well as the \n",
        "# `input_image` and `input_batch` variables used below\n",
        "\n",
        "# Code from https://pytorch.org/hub/pytorch_vision_alexnet/\n",
        "\n",
        "alexnet = torch.hub.load('pytorch/vision:v0.6.0', 'alexnet', pretrained=True)\n",
        "\n",
        "\n",
        "url, filename = (\"https://github.com/pytorch/hub/raw/master/images/dog.jpg\", \"dog.jpg\")\n",
        "try: urllib.URLopener().retrieve(url, filename)\n",
        "except: urllib.request.urlretrieve(url, filename)\n",
        "\n",
        "input_image = Image.open(filename)\n",
        "preprocess = transforms.Compose([\n",
        "    transforms.Resize(256),\n",
        "    transforms.CenterCrop(224),\n",
        "    transforms.ToTensor(),\n",
        "    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),\n",
        "])\n",
        "input_tensor = preprocess(input_image)\n",
        "input_batch = input_tensor.unsqueeze(0) # create a mini-batch as expected by the model\n",
        "\n",
        "# move the input and model to GPU for speed if available\n",
        "if torch.cuda.is_available():\n",
        "    input_batch = input_batch.to(device)\n",
        "    alexnet.to(device)\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "rzmSLfxS43Ju"
      },
      "source": [
        "### What does AlexNet learn?\n",
        "This code visualizes the first-layer filters learned by AlexNet.\n",
        "What do these filters remind you of? Hint: they were discussed in Week 6."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "nCEe1deLEEfe"
      },
      "source": [
        "with torch.no_grad():\n",
        "    params = list(alexnet.parameters())\n",
        "    fig, axs = plt.subplots(8, 8)\n",
        "    for filter_index in range(params[0].shape[0]):\n",
        "        row_index = filter_index % 8\n",
        "        col_index = filter_index // 8\n",
        "\n",
        "        filt = params[0][filter_index, :, :, :]\n",
        "        filter_image = filt.permute(1, 2, 0)\n",
        "        scaled_image = (filter_image + 1) / 2\n",
        "        scaled_image = scaled_image.round()\n",
        "        axs[row_index, col_index].imshow(scaled_image.cpu())\n",
        "        axs[row_index, col_index].axis('off')\n",
        "    plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
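    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a minimal sketch, the same first-layer activations can also be captured with a standard PyTorch forward hook, which generalizes to layers deeper in the network. The names `activations` and `save_activation` below are made up for illustration; only `register_forward_hook` is part of the PyTorch API."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Sketch: capture intermediate activations with a forward hook\n",
        "# (`activations` and `save_activation` are illustrative names)\n",
        "activations = {}\n",
        "\n",
        "def save_activation(name):\n",
        "    def hook(module, inputs, output):\n",
        "        activations[name] = output.detach()\n",
        "    return hook\n",
        "\n",
        "handle = alexnet.features[0].register_forward_hook(save_activation('conv1'))\n",
        "with torch.no_grad():\n",
        "    alexnet(input_batch)\n",
        "handle.remove()\n",
        "\n",
        "# One activation map per filter in the first conv layer\n",
        "print(activations['conv1'].shape)"
      ],
      "execution_count": null,
      "outputs": []
    },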
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vH4Uj3itxm1G"
      },
      "source": [
        "### Exercise 3.1: Filter Similarity"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ET-tVKccHIxf",
        "cellView": "form"
      },
      "source": [
        "#@markdown What do these filters remind you of?\n",
        "#@markdown (hint: we discussed them in week 6)\n",
        "filter_response = '' #@param {type:\"string\"}\n",
        "\n",
        "t2 = time.time()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "3bCwOfLl2zLl"
      },
      "source": [
        "# to_remove solution\n",
        "\n",
        "# Some of the filters look like edge detectors, as they are color insensitive and consist of \n",
        "# vertical or horizontal patterns like those seen in week 6"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "cRMfcHIdA-XI"
      },
      "source": [
        "## What does AlexNet see?\n",
        "One way of visualizing CNNs is to look at the output of individual filters for a given image. Below is a widget that lets you examine the outputs of various filters used in AlexNet."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "NYYs-uwpzMVN",
        "cellView": "form"
      },
      "source": [
        "# @title Image Widget Code\n",
        "def alexnet_intermediate_output(net, image):\n",
        "    return F.relu(net.features[0](image))\n",
        "\n",
        "def browse_images(input_batch):\n",
        "    intermediate_output = alexnet_intermediate_output(alexnet, input_batch)\n",
        "    n = intermediate_output.shape[1]\n",
        "\n",
        "    def view_image(i):\n",
        "        with torch.no_grad():\n",
        "            channel = intermediate_output[0, i,:].squeeze()\n",
        "            plt.figure(figsize=(6,6))\n",
        "            plt.imshow(channel.cpu())\n",
        "            plt.title('Filter {}'.format(i))\n",
        "            plt.axis('off')\n",
        "\n",
        "            plt.show()\n",
        "        \n",
        "    widgets.interact(view_image, i=(0,n-1))"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "bjr3Wf_ezwug"
      },
      "source": [
        "plt.imshow(input_image)\n",
        "plt.axis('off')\n",
        "plt.show()\n",
        "browse_images(input_batch)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "mA33aCxvzpvM"
      },
      "source": [
        "## Exercise 3.2 Filter Purpose\n",
        "What do these filters appear to be doing? Note that different filters play different roles so there are several good answers."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "To6Jx0zNHj43"
      },
      "source": [
        "filter_roles = '' #@param {type:\"string\"}\n",
        "\n",
        "t3 = time.time()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "yPyJiiK33goP"
      },
      "source": [
        "# to_remove solution\n",
        "\n",
        "# Based on the areas that are highlighted in the output, some filters seem to be detecting edges, \n",
        "# while others seem to react to the white color of the dog or the green of the background"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "QLYL1RWQ7-EC"
      },
      "source": [
        "### Further Reading\n",
        "If the question \"what are neural network filters looking for\" is at all interesting to you, or if you like geometric art, you'll enjoy [this post](https://distill.pub/2017/feature-visualization/), which creates images that maximize the output of various CNN neurons. There is also a good article showing what the space of images looks like as models train [here](https://distill.pub/2020/grand-tour/)."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "CEhRBorquwCy"
      },
      "source": [
        "---\n",
        "# Section 4: Adding Skip connections and discussing the optimization landscape\n",
        "\n",
        "*Estimated Time Elapsed: 65 minutes*"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "GrCIIIIvDq8D"
      },
      "source": [
        "#@title Video: Convnets After AlexNet\n",
        "from IPython.display import YouTubeVideo\n",
        "video = YouTubeVideo(id=\"SPNYj5jMxcg\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "uRbJqQ1H1Jrk"
      },
      "source": [
        "In this section we'll be working with a state-of-the-art CNN model called [ResNet](https://arxiv.org/abs/1512.03385). ResNet has two particularly interesting features. First, it uses skip connections to avoid the vanishing gradient problem. Second, each block (collection of layers) in a ResNet can be treated as learning a residual function.\n",
        "\n",
        "Mathematically, a neural network can be thought of as a series of operations that maps an input (like an image of a dog) to an output (like the label \"dog\"). In math-speak a mapping from an input to an output is called a function.\n",
        "\n",
        "In the functional interpretation of machine learning, there is a ground truth function in your brain somewhere that maps images of dogs to the label \"dog\", and neural networks are just elaborate ways of learning that function.\n",
        "\n",
        "If you were to subtract out the true function from the function learned by a network, you'd be left with the residual error or \"residual function\". ResNet tries to learn the original function, then the residual function, then the residual of the residual, and so on through creative use of skip connections.\n",
        "\n",
        "One fortunate side-effect of this structure is that the model is somewhat robust to damage. If you were to remove one of the middle layers, the model would still work, just less well.\n",
        "\n",
        "In this section we'll remove various layers of a pre-trained ResNet and see what happens."
      ]
    },
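    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The residual idea above can be sketched in a few lines. This is a simplified illustration, not the exact ResNet block (which also uses batch normalization and downsampling): the block learns a correction `F(x)` that is added back onto its input via the skip connection."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "class ResidualBlock(nn.Module):\n",
        "    \"\"\"A simplified residual block: output = relu(F(x) + x)\"\"\"\n",
        "    def __init__(self, channels):\n",
        "        super(ResidualBlock, self).__init__()\n",
        "        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)\n",
        "        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)\n",
        "\n",
        "    def forward(self, x):\n",
        "        residual = self.conv2(F.relu(self.conv1(x)))\n",
        "        # The skip connection: if the convs learn nothing, the block\n",
        "        # passes its input through unchanged\n",
        "        return F.relu(residual + x)\n",
        "\n",
        "block = ResidualBlock(channels=8)\n",
        "x = torch.randn(1, 8, 32, 32)\n",
        "print(block(x).shape)  # same shape as the input"
      ],
      "execution_count": null,
      "outputs": []
    },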
    {
      "cell_type": "code",
      "metadata": {
        "id": "9L76OdyxjLbx",
        "cellView": "form"
      },
      "source": [
        "# @title Download imagenette\n",
        "!rm -r imagenette*\n",
        "!wget https://s3.amazonaws.com/fast-ai-imageclas/imagenette2-320.tgz\n",
        "!tar -xf imagenette2-320.tgz\n",
        "!rm -r imagenette2-320.tgz"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "1NJV1MHBjbOr"
      },
      "source": [
        "#@title Set Up Textual ImageNet labels\n",
        "dict_map={0: 'tench, Tinca tinca',\n",
        " 1: 'goldfish, Carassius auratus',\n",
        " 2: 'great white shark, white shark, man-eater, man-eating shark, Carcharodon carcharias',\n",
        " 3: 'tiger shark, Galeocerdo cuvieri',\n",
        " 4: 'hammerhead, hammerhead shark',\n",
        " 5: 'electric ray, crampfish, numbfish, torpedo',\n",
        " 6: 'stingray',\n",
        " 7: 'cock',\n",
        " 8: 'hen',\n",
        " 9: 'ostrich, Struthio camelus',\n",
        " 10: 'brambling, Fringilla montifringilla',\n",
        " 11: 'goldfinch, Carduelis carduelis',\n",
        " 12: 'house finch, linnet, Carpodacus mexicanus',\n",
        " 13: 'junco, snowbird',\n",
        " 14: 'indigo bunting, indigo finch, indigo bird, Passerina cyanea',\n",
        " 15: 'robin, American robin, Turdus migratorius',\n",
        " 16: 'bulbul',\n",
        " 17: 'jay',\n",
        " 18: 'magpie',\n",
        " 19: 'chickadee',\n",
        " 20: 'water ouzel, dipper',\n",
        " 21: 'kite',\n",
        " 22: 'bald eagle, American eagle, Haliaeetus leucocephalus',\n",
        " 23: 'vulture',\n",
        " 24: 'great grey owl, great gray owl, Strix nebulosa',\n",
        " 25: 'European fire salamander, Salamandra salamandra',\n",
        " 26: 'common newt, Triturus vulgaris',\n",
        " 27: 'eft',\n",
        " 28: 'spotted salamander, Ambystoma maculatum',\n",
        " 29: 'axolotl, mud puppy, Ambystoma mexicanum',\n",
        " 30: 'bullfrog, Rana catesbeiana',\n",
        " 31: 'tree frog, tree-frog',\n",
        " 32: 'tailed frog, bell toad, ribbed toad, tailed toad, Ascaphus trui',\n",
        " 33: 'loggerhead, loggerhead turtle, Caretta caretta',\n",
        " 34: 'leatherback turtle, leatherback, leathery turtle, Dermochelys coriacea',\n",
        " 35: 'mud turtle',\n",
        " 36: 'terrapin',\n",
        " 37: 'box turtle, box tortoise',\n",
        " 38: 'banded gecko',\n",
        " 39: 'common iguana, iguana, Iguana iguana',\n",
        " 40: 'American chameleon, anole, Anolis carolinensis',\n",
        " 41: 'whiptail, whiptail lizard',\n",
        " 42: 'agama',\n",
        " 43: 'frilled lizard, Chlamydosaurus kingi',\n",
        " 44: 'alligator lizard',\n",
        " 45: 'Gila monster, Heloderma suspectum',\n",
        " 46: 'green lizard, Lacerta viridis',\n",
        " 47: 'African chameleon, Chamaeleo chamaeleon',\n",
        " 48: 'Komodo dragon, Komodo lizard, dragon lizard, giant lizard, Varanus komodoensis',\n",
        " 49: 'African crocodile, Nile crocodile, Crocodylus niloticus',\n",
        " 50: 'American alligator, Alligator mississipiensis',\n",
        " 51: 'triceratops',\n",
        " 52: 'thunder snake, worm snake, Carphophis amoenus',\n",
        " 53: 'ringneck snake, ring-necked snake, ring snake',\n",
        " 54: 'hognose snake, puff adder, sand viper',\n",
        " 55: 'green snake, grass snake',\n",
        " 56: 'king snake, kingsnake',\n",
        " 57: 'garter snake, grass snake',\n",
        " 58: 'water snake',\n",
        " 59: 'vine snake',\n",
        " 60: 'night snake, Hypsiglena torquata',\n",
        " 61: 'boa constrictor, Constrictor constrictor',\n",
        " 62: 'rock python, rock snake, Python sebae',\n",
        " 63: 'Indian cobra, Naja naja',\n",
        " 64: 'green mamba',\n",
        " 65: 'sea snake',\n",
        " 66: 'horned viper, cerastes, sand viper, horned asp, Cerastes cornutus',\n",
        " 67: 'diamondback, diamondback rattlesnake, Crotalus adamanteus',\n",
        " 68: 'sidewinder, horned rattlesnake, Crotalus cerastes',\n",
        " 69: 'trilobite',\n",
        " 70: 'harvestman, daddy longlegs, Phalangium opilio',\n",
        " 71: 'scorpion',\n",
        " 72: 'black and gold garden spider, Argiope aurantia',\n",
        " 73: 'barn spider, Araneus cavaticus',\n",
        " 74: 'garden spider, Aranea diademata',\n",
        " 75: 'black widow, Latrodectus mactans',\n",
        " 76: 'tarantula',\n",
        " 77: 'wolf spider, hunting spider',\n",
        " 78: 'tick',\n",
        " 79: 'centipede',\n",
        " 80: 'black grouse',\n",
        " 81: 'ptarmigan',\n",
        " 82: 'ruffed grouse, partridge, Bonasa umbellus',\n",
        " 83: 'prairie chicken, prairie grouse, prairie fowl',\n",
        " 84: 'peacock',\n",
        " 85: 'quail',\n",
        " 86: 'partridge',\n",
        " 87: 'African grey, African gray, Psittacus erithacus',\n",
        " 88: 'macaw',\n",
        " 89: 'sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita',\n",
        " 90: 'lorikeet',\n",
        " 91: 'coucal',\n",
        " 92: 'bee eater',\n",
        " 93: 'hornbill',\n",
        " 94: 'hummingbird',\n",
        " 95: 'jacamar',\n",
        " 96: 'toucan',\n",
        " 97: 'drake',\n",
        " 98: 'red-breasted merganser, Mergus serrator',\n",
        " 99: 'goose',\n",
        " 100: 'black swan, Cygnus atratus',\n",
        " 101: 'tusker',\n",
        " 102: 'echidna, spiny anteater, anteater',\n",
        " 103: 'platypus, duckbill, duckbilled platypus, duck-billed platypus, Ornithorhynchus anatinus',\n",
        " 104: 'wallaby, brush kangaroo',\n",
        " 105: 'koala, koala bear, kangaroo bear, native bear, Phascolarctos cinereus',\n",
        " 106: 'wombat',\n",
        " 107: 'jellyfish',\n",
        " 108: 'sea anemone, anemone',\n",
        " 109: 'brain coral',\n",
        " 110: 'flatworm, platyhelminth',\n",
        " 111: 'nematode, nematode worm, roundworm',\n",
        " 112: 'conch',\n",
        " 113: 'snail',\n",
        " 114: 'slug',\n",
        " 115: 'sea slug, nudibranch',\n",
        " 116: 'chiton, coat-of-mail shell, sea cradle, polyplacophore',\n",
        " 117: 'chambered nautilus, pearly nautilus, nautilus',\n",
        " 118: 'Dungeness crab, Cancer magister',\n",
        " 119: 'rock crab, Cancer irroratus',\n",
        " 120: 'fiddler crab',\n",
        " 121: 'king crab, Alaska crab, Alaskan king crab, Alaska king crab, Paralithodes camtschatica',\n",
        " 122: 'American lobster, Northern lobster, Maine lobster, Homarus americanus',\n",
        " 123: 'spiny lobster, langouste, rock lobster, crawfish, crayfish, sea crawfish',\n",
        " 124: 'crayfish, crawfish, crawdad, crawdaddy',\n",
        " 125: 'hermit crab',\n",
        " 126: 'isopod',\n",
        " 127: 'white stork, Ciconia ciconia',\n",
        " 128: 'black stork, Ciconia nigra',\n",
        " 129: 'spoonbill',\n",
        " 130: 'flamingo',\n",
        " 131: 'little blue heron, Egretta caerulea',\n",
        " 132: 'American egret, great white heron, Egretta albus',\n",
        " 133: 'bittern',\n",
        " 134: 'crane',\n",
        " 135: 'limpkin, Aramus pictus',\n",
        " 136: 'European gallinule, Porphyrio porphyrio',\n",
        " 137: 'American coot, marsh hen, mud hen, water hen, Fulica americana',\n",
        " 138: 'bustard',\n",
        " 139: 'ruddy turnstone, Arenaria interpres',\n",
        " 140: 'red-backed sandpiper, dunlin, Erolia alpina',\n",
        " 141: 'redshank, Tringa totanus',\n",
        " 142: 'dowitcher',\n",
        " 143: 'oystercatcher, oyster catcher',\n",
        " 144: 'pelican',\n",
        " 145: 'king penguin, Aptenodytes patagonica',\n",
        " 146: 'albatross, mollymawk',\n",
        " 147: 'grey whale, gray whale, devilfish, Eschrichtius gibbosus, Eschrichtius robustus',\n",
        " 148: 'killer whale, killer, orca, grampus, sea wolf, Orcinus orca',\n",
        " 149: 'dugong, Dugong dugon',\n",
        " 150: 'sea lion',\n",
        " 151: 'Chihuahua',\n",
        " 152: 'Japanese spaniel',\n",
        " 153: 'Maltese dog, Maltese terrier, Maltese',\n",
        " 154: 'Pekinese, Pekingese, Peke',\n",
        " 155: 'Shih-Tzu',\n",
        " 156: 'Blenheim spaniel',\n",
        " 157: 'papillon',\n",
        " 158: 'toy terrier',\n",
        " 159: 'Rhodesian ridgeback',\n",
        " 160: 'Afghan hound, Afghan',\n",
        " 161: 'basset, basset hound',\n",
        " 162: 'beagle',\n",
        " 163: 'bloodhound, sleuthhound',\n",
        " 164: 'bluetick',\n",
        " 165: 'black-and-tan coonhound',\n",
        " 166: 'Walker hound, Walker foxhound',\n",
        " 167: 'English foxhound',\n",
        " 168: 'redbone',\n",
        " 169: 'borzoi, Russian wolfhound',\n",
        " 170: 'Irish wolfhound',\n",
        " 171: 'Italian greyhound',\n",
        " 172: 'whippet',\n",
        " 173: 'Ibizan hound, Ibizan Podenco',\n",
        " 174: 'Norwegian elkhound, elkhound',\n",
        " 175: 'otterhound, otter hound',\n",
        " 176: 'Saluki, gazelle hound',\n",
        " 177: 'Scottish deerhound, deerhound',\n",
        " 178: 'Weimaraner',\n",
        " 179: 'Staffordshire bullterrier, Staffordshire bull terrier',\n",
        " 180: 'American Staffordshire terrier, Staffordshire terrier, American pit bull terrier, pit bull terrier',\n",
        " 181: 'Bedlington terrier',\n",
        " 182: 'Border terrier',\n",
        " 183: 'Kerry blue terrier',\n",
        " 184: 'Irish terrier',\n",
        " 185: 'Norfolk terrier',\n",
        " 186: 'Norwich terrier',\n",
        " 187: 'Yorkshire terrier',\n",
        " 188: 'wire-haired fox terrier',\n",
        " 189: 'Lakeland terrier',\n",
        " 190: 'Sealyham terrier, Sealyham',\n",
        " 191: 'Airedale, Airedale terrier',\n",
        " 192: 'cairn, cairn terrier',\n",
        " 193: 'Australian terrier',\n",
        " 194: 'Dandie Dinmont, Dandie Dinmont terrier',\n",
        " 195: 'Boston bull, Boston terrier',\n",
        " 196: 'miniature schnauzer',\n",
        " 197: 'giant schnauzer',\n",
        " 198: 'standard schnauzer',\n",
        " 199: 'Scotch terrier, Scottish terrier, Scottie',\n",
        " 200: 'Tibetan terrier, chrysanthemum dog',\n",
        " 201: 'silky terrier, Sydney silky',\n",
        " 202: 'soft-coated wheaten terrier',\n",
        " 203: 'West Highland white terrier',\n",
        " 204: 'Lhasa, Lhasa apso',\n",
        " 205: 'flat-coated retriever',\n",
        " 206: 'curly-coated retriever',\n",
        " 207: 'golden retriever',\n",
        " 208: 'Labrador retriever',\n",
        " 209: 'Chesapeake Bay retriever',\n",
        " 210: 'German short-haired pointer',\n",
        " 211: 'vizsla, Hungarian pointer',\n",
        " 212: 'English setter',\n",
        " 213: 'Irish setter, red setter',\n",
        " 214: 'Gordon setter',\n",
        " 215: 'Brittany spaniel',\n",
        " 216: 'clumber, clumber spaniel',\n",
        " 217: 'English springer, English springer spaniel',\n",
        " 218: 'Welsh springer spaniel',\n",
        " 219: 'cocker spaniel, English cocker spaniel, cocker',\n",
        " 220: 'Sussex spaniel',\n",
        " 221: 'Irish water spaniel',\n",
        " 222: 'kuvasz',\n",
        " 223: 'schipperke',\n",
        " 224: 'groenendael',\n",
        " 225: 'malinois',\n",
        " 226: 'briard',\n",
        " 227: 'kelpie',\n",
        " 228: 'komondor',\n",
        " 229: 'Old English sheepdog, bobtail',\n",
        " 230: 'Shetland sheepdog, Shetland sheep dog, Shetland',\n",
        " 231: 'collie',\n",
        " 232: 'Border collie',\n",
        " 233: 'Bouvier des Flandres, Bouviers des Flandres',\n",
        " 234: 'Rottweiler',\n",
        " 235: 'German shepherd, German shepherd dog, German police dog, alsatian',\n",
        " 236: 'Doberman, Doberman pinscher',\n",
        " 237: 'miniature pinscher',\n",
        " 238: 'Greater Swiss Mountain dog',\n",
        " 239: 'Bernese mountain dog',\n",
        " 240: 'Appenzeller',\n",
        " 241: 'EntleBucher',\n",
        " 242: 'boxer',\n",
        " 243: 'bull mastiff',\n",
        " 244: 'Tibetan mastiff',\n",
        " 245: 'French bulldog',\n",
        " 246: 'Great Dane',\n",
        " 247: 'Saint Bernard, St Bernard',\n",
        " 248: 'Eskimo dog, husky',\n",
        " 249: 'malamute, malemute, Alaskan malamute',\n",
        " 250: 'Siberian husky',\n",
        " 251: 'dalmatian, coach dog, carriage dog',\n",
        " 252: 'affenpinscher, monkey pinscher, monkey dog',\n",
        " 253: 'basenji',\n",
        " 254: 'pug, pug-dog',\n",
        " 255: 'Leonberg',\n",
        " 256: 'Newfoundland, Newfoundland dog',\n",
        " 257: 'Great Pyrenees',\n",
        " 258: 'Samoyed, Samoyede',\n",
        " 259: 'Pomeranian',\n",
        " 260: 'chow, chow chow',\n",
        " 261: 'keeshond',\n",
        " 262: 'Brabancon griffon',\n",
        " 263: 'Pembroke, Pembroke Welsh corgi',\n",
        " 264: 'Cardigan, Cardigan Welsh corgi',\n",
        " 265: 'toy poodle',\n",
        " 266: 'miniature poodle',\n",
        " 267: 'standard poodle',\n",
        " 268: 'Mexican hairless',\n",
        " 269: 'timber wolf, grey wolf, gray wolf, Canis lupus',\n",
        " 270: 'white wolf, Arctic wolf, Canis lupus tundrarum',\n",
        " 271: 'red wolf, maned wolf, Canis rufus, Canis niger',\n",
        " 272: 'coyote, prairie wolf, brush wolf, Canis latrans',\n",
        " 273: 'dingo, warrigal, warragal, Canis dingo',\n",
        " 274: 'dhole, Cuon alpinus',\n",
        " 275: 'African hunting dog, hyena dog, Cape hunting dog, Lycaon pictus',\n",
        " 276: 'hyena, hyaena',\n",
        " 277: 'red fox, Vulpes vulpes',\n",
        " 278: 'kit fox, Vulpes macrotis',\n",
        " 279: 'Arctic fox, white fox, Alopex lagopus',\n",
        " 280: 'grey fox, gray fox, Urocyon cinereoargenteus',\n",
        " 281: 'tabby, tabby cat',\n",
        " 282: 'tiger cat',\n",
        " 283: 'Persian cat',\n",
        " 284: 'Siamese cat, Siamese',\n",
        " 285: 'Egyptian cat',\n",
        " 286: 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor',\n",
        " 287: 'lynx, catamount',\n",
        " 288: 'leopard, Panthera pardus',\n",
        " 289: 'snow leopard, ounce, Panthera uncia',\n",
        " 290: 'jaguar, panther, Panthera onca, Felis onca',\n",
        " 291: 'lion, king of beasts, Panthera leo',\n",
        " 292: 'tiger, Panthera tigris',\n",
        " 293: 'cheetah, chetah, Acinonyx jubatus',\n",
        " 294: 'brown bear, bruin, Ursus arctos',\n",
        " 295: 'American black bear, black bear, Ursus americanus, Euarctos americanus',\n",
        " 296: 'ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus',\n",
        " 297: 'sloth bear, Melursus ursinus, Ursus ursinus',\n",
        " 298: 'mongoose',\n",
        " 299: 'meerkat, mierkat',\n",
        " 300: 'tiger beetle',\n",
        " 301: 'ladybug, ladybeetle, lady beetle, ladybird, ladybird beetle',\n",
        " 302: 'ground beetle, carabid beetle',\n",
        " 303: 'long-horned beetle, longicorn, longicorn beetle',\n",
        " 304: 'leaf beetle, chrysomelid',\n",
        " 305: 'dung beetle',\n",
        " 306: 'rhinoceros beetle',\n",
        " 307: 'weevil',\n",
        " 308: 'fly',\n",
        " 309: 'bee',\n",
        " 310: 'ant, emmet, pismire',\n",
        " 311: 'grasshopper, hopper',\n",
        " 312: 'cricket',\n",
        " 313: 'walking stick, walkingstick, stick insect',\n",
        " 314: 'cockroach, roach',\n",
        " 315: 'mantis, mantid',\n",
        " 316: 'cicada, cicala',\n",
        " 317: 'leafhopper',\n",
        " 318: 'lacewing, lacewing fly',\n",
        " 319: \"dragonfly, darning needle, devil's darning needle, sewing needle, snake feeder, snake doctor, mosquito hawk, skeeter hawk\",\n",
        " 320: 'damselfly',\n",
        " 321: 'admiral',\n",
        " 322: 'ringlet, ringlet butterfly',\n",
        " 323: 'monarch, monarch butterfly, milkweed butterfly, Danaus plexippus',\n",
        " 324: 'cabbage butterfly',\n",
        " 325: 'sulphur butterfly, sulfur butterfly',\n",
        " 326: 'lycaenid, lycaenid butterfly',\n",
        " 327: 'starfish, sea star',\n",
        " 328: 'sea urchin',\n",
        " 329: 'sea cucumber, holothurian',\n",
        " 330: 'wood rabbit, cottontail, cottontail rabbit',\n",
        " 331: 'hare',\n",
        " 332: 'Angora, Angora rabbit',\n",
        " 333: 'hamster',\n",
        " 334: 'porcupine, hedgehog',\n",
        " 335: 'fox squirrel, eastern fox squirrel, Sciurus niger',\n",
        " 336: 'marmot',\n",
        " 337: 'beaver',\n",
        " 338: 'guinea pig, Cavia cobaya',\n",
        " 339: 'sorrel',\n",
        " 340: 'zebra',\n",
        " 341: 'hog, pig, grunter, squealer, Sus scrofa',\n",
        " 342: 'wild boar, boar, Sus scrofa',\n",
        " 343: 'warthog',\n",
        " 344: 'hippopotamus, hippo, river horse, Hippopotamus amphibius',\n",
        " 345: 'ox',\n",
        " 346: 'water buffalo, water ox, Asiatic buffalo, Bubalus bubalis',\n",
        " 347: 'bison',\n",
        " 348: 'ram, tup',\n",
        " 349: 'bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain sheep, Ovis canadensis',\n",
        " 350: 'ibex, Capra ibex',\n",
        " 351: 'hartebeest',\n",
        " 352: 'impala, Aepyceros melampus',\n",
        " 353: 'gazelle',\n",
        " 354: 'Arabian camel, dromedary, Camelus dromedarius',\n",
        " 355: 'llama',\n",
        " 356: 'weasel',\n",
        " 357: 'mink',\n",
        " 358: 'polecat, fitch, foulmart, foumart, Mustela putorius',\n",
        " 359: 'black-footed ferret, ferret, Mustela nigripes',\n",
        " 360: 'otter',\n",
        " 361: 'skunk, polecat, wood pussy',\n",
        " 362: 'badger',\n",
        " 363: 'armadillo',\n",
        " 364: 'three-toed sloth, ai, Bradypus tridactylus',\n",
        " 365: 'orangutan, orang, orangutang, Pongo pygmaeus',\n",
        " 366: 'gorilla, Gorilla gorilla',\n",
        " 367: 'chimpanzee, chimp, Pan troglodytes',\n",
        " 368: 'gibbon, Hylobates lar',\n",
        " 369: 'siamang, Hylobates syndactylus, Symphalangus syndactylus',\n",
        " 370: 'guenon, guenon monkey',\n",
        " 371: 'patas, hussar monkey, Erythrocebus patas',\n",
        " 372: 'baboon',\n",
        " 373: 'macaque',\n",
        " 374: 'langur',\n",
        " 375: 'colobus, colobus monkey',\n",
        " 376: 'proboscis monkey, Nasalis larvatus',\n",
        " 377: 'marmoset',\n",
        " 378: 'capuchin, ringtail, Cebus capucinus',\n",
        " 379: 'howler monkey, howler',\n",
        " 380: 'titi, titi monkey',\n",
        " 381: 'spider monkey, Ateles geoffroyi',\n",
        " 382: 'squirrel monkey, Saimiri sciureus',\n",
        " 383: 'Madagascar cat, ring-tailed lemur, Lemur catta',\n",
        " 384: 'indri, indris, Indri indri, Indri brevicaudatus',\n",
        " 385: 'Indian elephant, Elephas maximus',\n",
        " 386: 'African elephant, Loxodonta africana',\n",
        " 387: 'lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens',\n",
        " 388: 'giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca',\n",
        " 389: 'barracouta, snoek',\n",
        " 390: 'eel',\n",
        " 391: 'coho, cohoe, coho salmon, blue jack, silver salmon, Oncorhynchus kisutch',\n",
        " 392: 'rock beauty, Holocanthus tricolor',\n",
        " 393: 'anemone fish',\n",
        " 394: 'sturgeon',\n",
        " 395: 'gar, garfish, garpike, billfish, Lepisosteus osseus',\n",
        " 396: 'lionfish',\n",
        " 397: 'puffer, pufferfish, blowfish, globefish',\n",
        " 398: 'abacus',\n",
        " 399: 'abaya',\n",
        " 400: \"academic gown, academic robe, judge's robe\",\n",
        " 401: 'accordion, piano accordion, squeeze box',\n",
        " 402: 'acoustic guitar',\n",
        " 403: 'aircraft carrier, carrier, flattop, attack aircraft carrier',\n",
        " 404: 'airliner',\n",
        " 405: 'airship, dirigible',\n",
        " 406: 'altar',\n",
        " 407: 'ambulance',\n",
        " 408: 'amphibian, amphibious vehicle',\n",
        " 409: 'analog clock',\n",
        " 410: 'apiary, bee house',\n",
        " 411: 'apron',\n",
        " 412: 'ashcan, trash can, garbage can, wastebin, ash bin, ash-bin, ashbin, dustbin, trash barrel, trash bin',\n",
        " 413: 'assault rifle, assault gun',\n",
        " 414: 'backpack, back pack, knapsack, packsack, rucksack, haversack',\n",
        " 415: 'bakery, bakeshop, bakehouse',\n",
        " 416: 'balance beam, beam',\n",
        " 417: 'balloon',\n",
        " 418: 'ballpoint, ballpoint pen, ballpen, Biro',\n",
        " 419: 'Band Aid',\n",
        " 420: 'banjo',\n",
        " 421: 'bannister, banister, balustrade, balusters, handrail',\n",
        " 422: 'barbell',\n",
        " 423: 'barber chair',\n",
        " 424: 'barbershop',\n",
        " 425: 'barn',\n",
        " 426: 'barometer',\n",
        " 427: 'barrel, cask',\n",
        " 428: 'barrow, garden cart, lawn cart, wheelbarrow',\n",
        " 429: 'baseball',\n",
        " 430: 'basketball',\n",
        " 431: 'bassinet',\n",
        " 432: 'bassoon',\n",
        " 433: 'bathing cap, swimming cap',\n",
        " 434: 'bath towel',\n",
        " 435: 'bathtub, bathing tub, bath, tub',\n",
        " 436: 'beach wagon, station wagon, wagon, estate car, beach waggon, station waggon, waggon',\n",
        " 437: 'beacon, lighthouse, beacon light, pharos',\n",
        " 438: 'beaker',\n",
        " 439: 'bearskin, busby, shako',\n",
        " 440: 'beer bottle',\n",
        " 441: 'beer glass',\n",
        " 442: 'bell cote, bell cot',\n",
        " 443: 'bib',\n",
        " 444: 'bicycle-built-for-two, tandem bicycle, tandem',\n",
        " 445: 'bikini, two-piece',\n",
        " 446: 'binder, ring-binder',\n",
        " 447: 'binoculars, field glasses, opera glasses',\n",
        " 448: 'birdhouse',\n",
        " 449: 'boathouse',\n",
        " 450: 'bobsled, bobsleigh, bob',\n",
        " 451: 'bolo tie, bolo, bola tie, bola',\n",
        " 452: 'bonnet, poke bonnet',\n",
        " 453: 'bookcase',\n",
        " 454: 'bookshop, bookstore, bookstall',\n",
        " 455: 'bottlecap',\n",
        " 456: 'bow',\n",
        " 457: 'bow tie, bow-tie, bowtie',\n",
        " 458: 'brass, memorial tablet, plaque',\n",
        " 459: 'brassiere, bra, bandeau',\n",
        " 460: 'breakwater, groin, groyne, mole, bulwark, seawall, jetty',\n",
        " 461: 'breastplate, aegis, egis',\n",
        " 462: 'broom',\n",
        " 463: 'bucket, pail',\n",
        " 464: 'buckle',\n",
        " 465: 'bulletproof vest',\n",
        " 466: 'bullet train, bullet',\n",
        " 467: 'butcher shop, meat market',\n",
        " 468: 'cab, hack, taxi, taxicab',\n",
        " 469: 'caldron, cauldron',\n",
        " 470: 'candle, taper, wax light',\n",
        " 471: 'cannon',\n",
        " 472: 'canoe',\n",
        " 473: 'can opener, tin opener',\n",
        " 474: 'cardigan',\n",
        " 475: 'car mirror',\n",
        " 476: 'carousel, carrousel, merry-go-round, roundabout, whirligig',\n",
        " 477: \"carpenter's kit, tool kit\",\n",
        " 478: 'carton',\n",
        " 479: 'car wheel',\n",
        " 480: 'cash machine, cash dispenser, automated teller machine, automatic teller machine, automated teller, automatic teller, ATM',\n",
        " 481: 'cassette',\n",
        " 482: 'cassette player',\n",
        " 483: 'castle',\n",
        " 484: 'catamaran',\n",
        " 485: 'CD player',\n",
        " 486: 'cello, violoncello',\n",
        " 487: 'cellular telephone, cellular phone, cellphone, cell, mobile phone',\n",
        " 488: 'chain',\n",
        " 489: 'chainlink fence',\n",
        " 490: 'chain mail, ring mail, mail, chain armor, chain armour, ring armor, ring armour',\n",
        " 491: 'chain saw, chainsaw',\n",
        " 492: 'chest',\n",
        " 493: 'chiffonier, commode',\n",
        " 494: 'chime, bell, gong',\n",
        " 495: 'china cabinet, china closet',\n",
        " 496: 'Christmas stocking',\n",
        " 497: 'church, church building',\n",
        " 498: 'cinema, movie theater, movie theatre, movie house, picture palace',\n",
        " 499: 'cleaver, meat cleaver, chopper',\n",
        " 500: 'cliff dwelling',\n",
        " 501: 'cloak',\n",
        " 502: 'clog, geta, patten, sabot',\n",
        " 503: 'cocktail shaker',\n",
        " 504: 'coffee mug',\n",
        " 505: 'coffeepot',\n",
        " 506: 'coil, spiral, volute, whorl, helix',\n",
        " 507: 'combination lock',\n",
        " 508: 'computer keyboard, keypad',\n",
        " 509: 'confectionery, confectionary, candy store',\n",
        " 510: 'container ship, containership, container vessel',\n",
        " 511: 'convertible',\n",
        " 512: 'corkscrew, bottle screw',\n",
        " 513: 'cornet, horn, trumpet, trump',\n",
        " 514: 'cowboy boot',\n",
        " 515: 'cowboy hat, ten-gallon hat',\n",
        " 516: 'cradle',\n",
        " 517: 'crane',\n",
        " 518: 'crash helmet',\n",
        " 519: 'crate',\n",
        " 520: 'crib, cot',\n",
        " 521: 'Crock Pot',\n",
        " 522: 'croquet ball',\n",
        " 523: 'crutch',\n",
        " 524: 'cuirass',\n",
        " 525: 'dam, dike, dyke',\n",
        " 526: 'desk',\n",
        " 527: 'desktop computer',\n",
        " 528: 'dial telephone, dial phone',\n",
        " 529: 'diaper, nappy, napkin',\n",
        " 530: 'digital clock',\n",
        " 531: 'digital watch',\n",
        " 532: 'dining table, board',\n",
        " 533: 'dishrag, dishcloth',\n",
        " 534: 'dishwasher, dish washer, dishwashing machine',\n",
        " 535: 'disk brake, disc brake',\n",
        " 536: 'dock, dockage, docking facility',\n",
        " 537: 'dogsled, dog sled, dog sleigh',\n",
        " 538: 'dome',\n",
        " 539: 'doormat, welcome mat',\n",
        " 540: 'drilling platform, offshore rig',\n",
        " 541: 'drum, membranophone, tympan',\n",
        " 542: 'drumstick',\n",
        " 543: 'dumbbell',\n",
        " 544: 'Dutch oven',\n",
        " 545: 'electric fan, blower',\n",
        " 546: 'electric guitar',\n",
        " 547: 'electric locomotive',\n",
        " 548: 'entertainment center',\n",
        " 549: 'envelope',\n",
        " 550: 'espresso maker',\n",
        " 551: 'face powder',\n",
        " 552: 'feather boa, boa',\n",
        " 553: 'file, file cabinet, filing cabinet',\n",
        " 554: 'fireboat',\n",
        " 555: 'fire engine, fire truck',\n",
        " 556: 'fire screen, fireguard',\n",
        " 557: 'flagpole, flagstaff',\n",
        " 558: 'flute, transverse flute',\n",
        " 559: 'folding chair',\n",
        " 560: 'football helmet',\n",
        " 561: 'forklift',\n",
        " 562: 'fountain',\n",
        " 563: 'fountain pen',\n",
        " 564: 'four-poster',\n",
        " 565: 'freight car',\n",
        " 566: 'French horn, horn',\n",
        " 567: 'frying pan, frypan, skillet',\n",
        " 568: 'fur coat',\n",
        " 569: 'garbage truck, dustcart',\n",
        " 570: 'gasmask, respirator, gas helmet',\n",
        " 571: 'gas pump, gasoline pump, petrol pump, island dispenser',\n",
        " 572: 'goblet',\n",
        " 573: 'go-kart',\n",
        " 574: 'golf ball',\n",
        " 575: 'golfcart, golf cart',\n",
        " 576: 'gondola',\n",
        " 577: 'gong, tam-tam',\n",
        " 578: 'gown',\n",
        " 579: 'grand piano, grand',\n",
        " 580: 'greenhouse, nursery, glasshouse',\n",
        " 581: 'grille, radiator grille',\n",
        " 582: 'grocery store, grocery, food market, market',\n",
        " 583: 'guillotine',\n",
        " 584: 'hair slide',\n",
        " 585: 'hair spray',\n",
        " 586: 'half track',\n",
        " 587: 'hammer',\n",
        " 588: 'hamper',\n",
        " 589: 'hand blower, blow dryer, blow drier, hair dryer, hair drier',\n",
        " 590: 'hand-held computer, hand-held microcomputer',\n",
        " 591: 'handkerchief, hankie, hanky, hankey',\n",
        " 592: 'hard disc, hard disk, fixed disk',\n",
        " 593: 'harmonica, mouth organ, harp, mouth harp',\n",
        " 594: 'harp',\n",
        " 595: 'harvester, reaper',\n",
        " 596: 'hatchet',\n",
        " 597: 'holster',\n",
        " 598: 'home theater, home theatre',\n",
        " 599: 'honeycomb',\n",
        " 600: 'hook, claw',\n",
        " 601: 'hoopskirt, crinoline',\n",
        " 602: 'horizontal bar, high bar',\n",
        " 603: 'horse cart, horse-cart',\n",
        " 604: 'hourglass',\n",
        " 605: 'iPod',\n",
        " 606: 'iron, smoothing iron',\n",
        " 607: \"jack-o'-lantern\",\n",
        " 608: 'jean, blue jean, denim',\n",
        " 609: 'jeep, landrover',\n",
        " 610: 'jersey, T-shirt, tee shirt',\n",
        " 611: 'jigsaw puzzle',\n",
        " 612: 'jinrikisha, ricksha, rickshaw',\n",
        " 613: 'joystick',\n",
        " 614: 'kimono',\n",
        " 615: 'knee pad',\n",
        " 616: 'knot',\n",
        " 617: 'lab coat, laboratory coat',\n",
        " 618: 'ladle',\n",
        " 619: 'lampshade, lamp shade',\n",
        " 620: 'laptop, laptop computer',\n",
        " 621: 'lawn mower, mower',\n",
        " 622: 'lens cap, lens cover',\n",
        " 623: 'letter opener, paper knife, paperknife',\n",
        " 624: 'library',\n",
        " 625: 'lifeboat',\n",
        " 626: 'lighter, light, igniter, ignitor',\n",
        " 627: 'limousine, limo',\n",
        " 628: 'liner, ocean liner',\n",
        " 629: 'lipstick, lip rouge',\n",
        " 630: 'Loafer',\n",
        " 631: 'lotion',\n",
        " 632: 'loudspeaker, speaker, speaker unit, loudspeaker system, speaker system',\n",
        " 633: \"loupe, jeweler's loupe\",\n",
        " 634: 'lumbermill, sawmill',\n",
        " 635: 'magnetic compass',\n",
        " 636: 'mailbag, postbag',\n",
        " 637: 'mailbox, letter box',\n",
        " 638: 'maillot',\n",
        " 639: 'maillot, tank suit',\n",
        " 640: 'manhole cover',\n",
        " 641: 'maraca',\n",
        " 642: 'marimba, xylophone',\n",
        " 643: 'mask',\n",
        " 644: 'matchstick',\n",
        " 645: 'maypole',\n",
        " 646: 'maze, labyrinth',\n",
        " 647: 'measuring cup',\n",
        " 648: 'medicine chest, medicine cabinet',\n",
        " 649: 'megalith, megalithic structure',\n",
        " 650: 'microphone, mike',\n",
        " 651: 'microwave, microwave oven',\n",
        " 652: 'military uniform',\n",
        " 653: 'milk can',\n",
        " 654: 'minibus',\n",
        " 655: 'miniskirt, mini',\n",
        " 656: 'minivan',\n",
        " 657: 'missile',\n",
        " 658: 'mitten',\n",
        " 659: 'mixing bowl',\n",
        " 660: 'mobile home, manufactured home',\n",
        " 661: 'Model T',\n",
        " 662: 'modem',\n",
        " 663: 'monastery',\n",
        " 664: 'monitor',\n",
        " 665: 'moped',\n",
        " 666: 'mortar',\n",
        " 667: 'mortarboard',\n",
        " 668: 'mosque',\n",
        " 669: 'mosquito net',\n",
        " 670: 'motor scooter, scooter',\n",
        " 671: 'mountain bike, all-terrain bike, off-roader',\n",
        " 672: 'mountain tent',\n",
        " 673: 'mouse, computer mouse',\n",
        " 674: 'mousetrap',\n",
        " 675: 'moving van',\n",
        " 676: 'muzzle',\n",
        " 677: 'nail',\n",
        " 678: 'neck brace',\n",
        " 679: 'necklace',\n",
        " 680: 'nipple',\n",
        " 681: 'notebook, notebook computer',\n",
        " 682: 'obelisk',\n",
        " 683: 'oboe, hautboy, hautbois',\n",
        " 684: 'ocarina, sweet potato',\n",
        " 685: 'odometer, hodometer, mileometer, milometer',\n",
        " 686: 'oil filter',\n",
        " 687: 'organ, pipe organ',\n",
        " 688: 'oscilloscope, scope, cathode-ray oscilloscope, CRO',\n",
        " 689: 'overskirt',\n",
        " 690: 'oxcart',\n",
        " 691: 'oxygen mask',\n",
        " 692: 'packet',\n",
        " 693: 'paddle, boat paddle',\n",
        " 694: 'paddlewheel, paddle wheel',\n",
        " 695: 'padlock',\n",
        " 696: 'paintbrush',\n",
        " 697: \"pajama, pyjama, pj's, jammies\",\n",
        " 698: 'palace',\n",
        " 699: 'panpipe, pandean pipe, syrinx',\n",
        " 700: 'paper towel',\n",
        " 701: 'parachute, chute',\n",
        " 702: 'parallel bars, bars',\n",
        " 703: 'park bench',\n",
        " 704: 'parking meter',\n",
        " 705: 'passenger car, coach, carriage',\n",
        " 706: 'patio, terrace',\n",
        " 707: 'pay-phone, pay-station',\n",
        " 708: 'pedestal, plinth, footstall',\n",
        " 709: 'pencil box, pencil case',\n",
        " 710: 'pencil sharpener',\n",
        " 711: 'perfume, essence',\n",
        " 712: 'Petri dish',\n",
        " 713: 'photocopier',\n",
        " 714: 'pick, plectrum, plectron',\n",
        " 715: 'pickelhaube',\n",
        " 716: 'picket fence, paling',\n",
        " 717: 'pickup, pickup truck',\n",
        " 718: 'pier',\n",
        " 719: 'piggy bank, penny bank',\n",
        " 720: 'pill bottle',\n",
        " 721: 'pillow',\n",
        " 722: 'ping-pong ball',\n",
        " 723: 'pinwheel',\n",
        " 724: 'pirate, pirate ship',\n",
        " 725: 'pitcher, ewer',\n",
        " 726: \"plane, carpenter's plane, woodworking plane\",\n",
        " 727: 'planetarium',\n",
        " 728: 'plastic bag',\n",
        " 729: 'plate rack',\n",
        " 730: 'plow, plough',\n",
        " 731: \"plunger, plumber's helper\",\n",
        " 732: 'Polaroid camera, Polaroid Land camera',\n",
        " 733: 'pole',\n",
        " 734: 'police van, police wagon, paddy wagon, patrol wagon, wagon, black Maria',\n",
        " 735: 'poncho',\n",
        " 736: 'pool table, billiard table, snooker table',\n",
        " 737: 'pop bottle, soda bottle',\n",
        " 738: 'pot, flowerpot',\n",
        " 739: \"potter's wheel\",\n",
        " 740: 'power drill',\n",
        " 741: 'prayer rug, prayer mat',\n",
        " 742: 'printer',\n",
        " 743: 'prison, prison house',\n",
        " 744: 'projectile, missile',\n",
        " 745: 'projector',\n",
        " 746: 'puck, hockey puck',\n",
        " 747: 'punching bag, punch bag, punching ball, punchball',\n",
        " 748: 'purse',\n",
        " 749: 'quill, quill pen',\n",
        " 750: 'quilt, comforter, comfort, puff',\n",
        " 751: 'racer, race car, racing car',\n",
        " 752: 'racket, racquet',\n",
        " 753: 'radiator',\n",
        " 754: 'radio, wireless',\n",
        " 755: 'radio telescope, radio reflector',\n",
        " 756: 'rain barrel',\n",
        " 757: 'recreational vehicle, RV, R.V.',\n",
        " 758: 'reel',\n",
        " 759: 'reflex camera',\n",
        " 760: 'refrigerator, icebox',\n",
        " 761: 'remote control, remote',\n",
        " 762: 'restaurant, eating house, eating place, eatery',\n",
        " 763: 'revolver, six-gun, six-shooter',\n",
        " 764: 'rifle',\n",
        " 765: 'rocking chair, rocker',\n",
        " 766: 'rotisserie',\n",
        " 767: 'rubber eraser, rubber, pencil eraser',\n",
        " 768: 'rugby ball',\n",
        " 769: 'rule, ruler',\n",
        " 770: 'running shoe',\n",
        " 771: 'safe',\n",
        " 772: 'safety pin',\n",
        " 773: 'saltshaker, salt shaker',\n",
        " 774: 'sandal',\n",
        " 775: 'sarong',\n",
        " 776: 'sax, saxophone',\n",
        " 777: 'scabbard',\n",
        " 778: 'scale, weighing machine',\n",
        " 779: 'school bus',\n",
        " 780: 'schooner',\n",
        " 781: 'scoreboard',\n",
        " 782: 'screen, CRT screen',\n",
        " 783: 'screw',\n",
        " 784: 'screwdriver',\n",
        " 785: 'seat belt, seatbelt',\n",
        " 786: 'sewing machine',\n",
        " 787: 'shield, buckler',\n",
        " 788: 'shoe shop, shoe-shop, shoe store',\n",
        " 789: 'shoji',\n",
        " 790: 'shopping basket',\n",
        " 791: 'shopping cart',\n",
        " 792: 'shovel',\n",
        " 793: 'shower cap',\n",
        " 794: 'shower curtain',\n",
        " 795: 'ski',\n",
        " 796: 'ski mask',\n",
        " 797: 'sleeping bag',\n",
        " 798: 'slide rule, slipstick',\n",
        " 799: 'sliding door',\n",
        " 800: 'slot, one-armed bandit',\n",
        " 801: 'snorkel',\n",
        " 802: 'snowmobile',\n",
        " 803: 'snowplow, snowplough',\n",
        " 804: 'soap dispenser',\n",
        " 805: 'soccer ball',\n",
        " 806: 'sock',\n",
        " 807: 'solar dish, solar collector, solar furnace',\n",
        " 808: 'sombrero',\n",
        " 809: 'soup bowl',\n",
        " 810: 'space bar',\n",
        " 811: 'space heater',\n",
        " 812: 'space shuttle',\n",
        " 813: 'spatula',\n",
        " 814: 'speedboat',\n",
        " 815: \"spider web, spider's web\",\n",
        " 816: 'spindle',\n",
        " 817: 'sports car, sport car',\n",
        " 818: 'spotlight, spot',\n",
        " 819: 'stage',\n",
        " 820: 'steam locomotive',\n",
        " 821: 'steel arch bridge',\n",
        " 822: 'steel drum',\n",
        " 823: 'stethoscope',\n",
        " 824: 'stole',\n",
        " 825: 'stone wall',\n",
        " 826: 'stopwatch, stop watch',\n",
        " 827: 'stove',\n",
        " 828: 'strainer',\n",
        " 829: 'streetcar, tram, tramcar, trolley, trolley car',\n",
        " 830: 'stretcher',\n",
        " 831: 'studio couch, day bed',\n",
        " 832: 'stupa, tope',\n",
        " 833: 'submarine, pigboat, sub, U-boat',\n",
        " 834: 'suit, suit of clothes',\n",
        " 835: 'sundial',\n",
        " 836: 'sunglass',\n",
        " 837: 'sunglasses, dark glasses, shades',\n",
        " 838: 'sunscreen, sunblock, sun blocker',\n",
        " 839: 'suspension bridge',\n",
        " 840: 'swab, swob, mop',\n",
        " 841: 'sweatshirt',\n",
        " 842: 'swimming trunks, bathing trunks',\n",
        " 843: 'swing',\n",
        " 844: 'switch, electric switch, electrical switch',\n",
        " 845: 'syringe',\n",
        " 846: 'table lamp',\n",
        " 847: 'tank, army tank, armored combat vehicle, armoured combat vehicle',\n",
        " 848: 'tape player',\n",
        " 849: 'teapot',\n",
        " 850: 'teddy, teddy bear',\n",
        " 851: 'television, television system',\n",
        " 852: 'tennis ball',\n",
        " 853: 'thatch, thatched roof',\n",
        " 854: 'theater curtain, theatre curtain',\n",
        " 855: 'thimble',\n",
        " 856: 'thresher, thrasher, threshing machine',\n",
        " 857: 'throne',\n",
        " 858: 'tile roof',\n",
        " 859: 'toaster',\n",
        " 860: 'tobacco shop, tobacconist shop, tobacconist',\n",
        " 861: 'toilet seat',\n",
        " 862: 'torch',\n",
        " 863: 'totem pole',\n",
        " 864: 'tow truck, tow car, wrecker',\n",
        " 865: 'toyshop',\n",
        " 866: 'tractor',\n",
        " 867: 'trailer truck, tractor trailer, trucking rig, rig, articulated lorry, semi',\n",
        " 868: 'tray',\n",
        " 869: 'trench coat',\n",
        " 870: 'tricycle, trike, velocipede',\n",
        " 871: 'trimaran',\n",
        " 872: 'tripod',\n",
        " 873: 'triumphal arch',\n",
        " 874: 'trolleybus, trolley coach, trackless trolley',\n",
        " 875: 'trombone',\n",
        " 876: 'tub, vat',\n",
        " 877: 'turnstile',\n",
        " 878: 'typewriter keyboard',\n",
        " 879: 'umbrella',\n",
        " 880: 'unicycle, monocycle',\n",
        " 881: 'upright, upright piano',\n",
        " 882: 'vacuum, vacuum cleaner',\n",
        " 883: 'vase',\n",
        " 884: 'vault',\n",
        " 885: 'velvet',\n",
        " 886: 'vending machine',\n",
        " 887: 'vestment',\n",
        " 888: 'viaduct',\n",
        " 889: 'violin, fiddle',\n",
        " 890: 'volleyball',\n",
        " 891: 'waffle iron',\n",
        " 892: 'wall clock',\n",
        " 893: 'wallet, billfold, notecase, pocketbook',\n",
        " 894: 'wardrobe, closet, press',\n",
        " 895: 'warplane, military plane',\n",
        " 896: 'washbasin, handbasin, washbowl, lavabo, wash-hand basin',\n",
        " 897: 'washer, automatic washer, washing machine',\n",
        " 898: 'water bottle',\n",
        " 899: 'water jug',\n",
        " 900: 'water tower',\n",
        " 901: 'whiskey jug',\n",
        " 902: 'whistle',\n",
        " 903: 'wig',\n",
        " 904: 'window screen',\n",
        " 905: 'window shade',\n",
        " 906: 'Windsor tie',\n",
        " 907: 'wine bottle',\n",
        " 908: 'wing',\n",
        " 909: 'wok',\n",
        " 910: 'wooden spoon',\n",
        " 911: 'wool, woolen, woollen',\n",
        " 912: 'worm fence, snake fence, snake-rail fence, Virginia fence',\n",
        " 913: 'wreck',\n",
        " 914: 'yawl',\n",
        " 915: 'yurt',\n",
        " 916: 'web site, website, internet site, site',\n",
        " 917: 'comic book',\n",
        " 918: 'crossword puzzle, crossword',\n",
        " 919: 'street sign',\n",
        " 920: 'traffic light, traffic signal, stoplight',\n",
        " 921: 'book jacket, dust cover, dust jacket, dust wrapper',\n",
        " 922: 'menu',\n",
        " 923: 'plate',\n",
        " 924: 'guacamole',\n",
        " 925: 'consomme',\n",
        " 926: 'hot pot, hotpot',\n",
        " 927: 'trifle',\n",
        " 928: 'ice cream, icecream',\n",
        " 929: 'ice lolly, lolly, lollipop, popsicle',\n",
        " 930: 'French loaf',\n",
        " 931: 'bagel, beigel',\n",
        " 932: 'pretzel',\n",
        " 933: 'cheeseburger',\n",
        " 934: 'hotdog, hot dog, red hot',\n",
        " 935: 'mashed potato',\n",
        " 936: 'head cabbage',\n",
        " 937: 'broccoli',\n",
        " 938: 'cauliflower',\n",
        " 939: 'zucchini, courgette',\n",
        " 940: 'spaghetti squash',\n",
        " 941: 'acorn squash',\n",
        " 942: 'butternut squash',\n",
        " 943: 'cucumber, cuke',\n",
        " 944: 'artichoke, globe artichoke',\n",
        " 945: 'bell pepper',\n",
        " 946: 'cardoon',\n",
        " 947: 'mushroom',\n",
        " 948: 'Granny Smith',\n",
        " 949: 'strawberry',\n",
        " 950: 'orange',\n",
        " 951: 'lemon',\n",
        " 952: 'fig',\n",
        " 953: 'pineapple, ananas',\n",
        " 954: 'banana',\n",
        " 955: 'jackfruit, jak, jack',\n",
        " 956: 'custard apple',\n",
        " 957: 'pomegranate',\n",
        " 958: 'hay',\n",
        " 959: 'carbonara',\n",
        " 960: 'chocolate sauce, chocolate syrup',\n",
        " 961: 'dough',\n",
        " 962: 'meat loaf, meatloaf',\n",
        " 963: 'pizza, pizza pie',\n",
        " 964: 'potpie',\n",
        " 965: 'burrito',\n",
        " 966: 'red wine',\n",
        " 967: 'espresso',\n",
        " 968: 'cup',\n",
        " 969: 'eggnog',\n",
        " 970: 'alp',\n",
        " 971: 'bubble',\n",
        " 972: 'cliff, drop, drop-off',\n",
        " 973: 'coral reef',\n",
        " 974: 'geyser',\n",
        " 975: 'lakeside, lakeshore',\n",
        " 976: 'promontory, headland, head, foreland',\n",
        " 977: 'sandbar, sand bar',\n",
        " 978: 'seashore, coast, seacoast, sea-coast',\n",
        " 979: 'valley, vale',\n",
        " 980: 'volcano',\n",
        " 981: 'ballplayer, baseball player',\n",
        " 982: 'groom, bridegroom',\n",
        " 983: 'scuba diver',\n",
        " 984: 'rapeseed',\n",
        " 985: 'daisy',\n",
        " 986: \"yellow lady's slipper, yellow lady-slipper, Cypripedium calceolus, Cypripedium parviflorum\",\n",
        " 987: 'corn',\n",
        " 988: 'acorn',\n",
        " 989: 'hip, rose hip, rosehip',\n",
        " 990: 'buckeye, horse chestnut, conker',\n",
        " 991: 'coral fungus',\n",
        " 992: 'agaric',\n",
        " 993: 'gyromitra',\n",
        " 994: 'stinkhorn, carrion fungus',\n",
        " 995: 'earthstar',\n",
        " 996: 'hen-of-the-woods, hen of the woods, Polyporus frondosus, Grifola frondosa',\n",
        " 997: 'bolete',\n",
        " 998: 'ear, spike, capitulum',\n",
        " 999: 'toilet tissue, toilet paper, bathroom tissue'}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "daWTpEh5nTl5"
      },
      "source": [
        "# @title Map Imagenette Labels to Imagenet Labels\n",
        "dir_to_imagenet_index = {\n",
        "    'n03888257': 1,\n",
        "    'n03425413': 571,\n",
        "    'n03394916': 566,\n",
        "    'n03000684': 491,\n",
        "    'n02102040': 217,\n",
        "    'n03445777': 574,\n",
        "    'n03417042': 569,\n",
        "    'n03028079': 497,\n",
        "    'n02979186': 482,\n",
        "    'n01440764': 701\n",
        "}\n",
        "\n",
        "dir_index_to_imagenet_label = {}\n",
        "ordered_dirs = sorted(list(dir_to_imagenet_index.keys()))\n",
        "\n",
        "for dir_index, dir_name in enumerate(ordered_dirs):\n",
        "    dir_index_to_imagenet_label[dir_index] = dir_to_imagenet_index[dir_name]\n",
        "\n",
        "dir_index_to_imagenet_label\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "w_RnqY64qcpL",
        "cellView": "form"
      },
      "source": [
        "# @title Prepare Imagenette Data\n",
        "val_transform = transforms.Compose((transforms.Resize((256, 256)),\n",
        "                                     transforms.ToTensor()))    \n",
        "\n",
        "imagenette_val = ImageFolder('imagenette2-320/val', transform=val_transform)\n",
        "\n",
        "train_transform = transforms.Compose((transforms.Resize((256, 256)),\n",
        "                                     transforms.ToTensor()))    \n",
        "\n",
        "imagenette_train = ImageFolder('imagenette2-320/train', transform=train_transform)\n",
        "random_indices = random.sample(range(len(imagenette_train)), 400)\n",
        "imagenette_train_subset = torch.utils.data.Subset(imagenette_train, random_indices)\n",
        "\n",
        "\n",
        "\n",
        "\n",
        "# Subset to only one tenth of the data for faster runtime\n",
        "random_indices = random.sample(range(len(imagenette_val)), int(len(imagenette_val) * .1))\n",
        "imagenette_val = torch.utils.data.Subset(imagenette_val, random_indices)\n",
        "\n",
        "\n",
        "\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "9MpCIRBLrpg7"
      },
      "source": [
        "imagenette_train_loader = torch.utils.data.DataLoader(imagenette_train_subset, \n",
        "                                                      batch_size=16,\n",
        "                                                      shuffle=True)\n",
        "\n",
        "imagenette_val_loader = torch.utils.data.DataLoader(imagenette_val, \n",
        "                                                    batch_size=16, \n",
        "                                                    shuffle=True)\n",
        "\n",
        "dataiter = iter(imagenette_val_loader)\n",
        "images, labels = dataiter.next()\n",
        "\n",
        "# show images\n",
        "plt.figure(figsize=(8,8))\n",
        "plt.imshow(make_grid(images, nrow=4).permute(1,2,0))"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "eb3uemoHAwqo",
        "cellView": "form"
      },
      "source": [
        "# @title eval_imagenette function\n",
        "def eval_imagenette(resnet, data_loader, dataset_length):\n",
        "    resnet.eval()\n",
        "    with torch.no_grad():\n",
        "        loss_sum = 0\n",
        "        total_1_correct = 0\n",
        "        total_5_correct = 0\n",
        "        total = dataset_length\n",
        "        for batch in tqdm.notebook.tqdm(data_loader):\n",
        "            images, labels = batch\n",
        "            \n",
        "            # Map the imagenette labels onto the network's output\n",
        "            for i, label in enumerate(labels):\n",
        "                labels[i] = dir_index_to_imagenet_label[label.item()]\n",
        "            \n",
        "            images = images.to(device)\n",
        "            labels = labels.to(device)\n",
        "            output = resnet(images)\n",
        "\n",
        "            # Calculate top-5 accuracy\n",
        "            # Implementation from https://github.com/bearpaw/pytorch-classification/blob/cc9106d598ff1fe375cc030873ceacfea0499d77/utils/eval.py\n",
        "            batch_size = labels.size(0)\n",
        "\n",
        "            _, predictions = output.topk(5, 1, True, True)\n",
        "            predictions = predictions.t()\n",
        "\n",
        "            top_k_correct = predictions.eq(labels.view(1, -1).expand_as(predictions))\n",
        "            top_k_correct = top_k_correct.sum()\n",
        " \n",
        "            predictions = torch.argmax(output, dim=1)\n",
        "            top_1_correct = torch.sum(predictions == labels)\n",
        "            total_1_correct += top_1_correct\n",
        "            total_5_correct += top_k_correct\n",
        "\n",
        "        top_1_acc = total_1_correct / total\n",
        "        top_5_acc = total_5_correct / total\n",
        "\n",
        "        return top_1_acc, top_5_acc\n",
        "\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "7VOtJfmnCAO7",
        "cellView": "form"
      },
      "source": [
        "# @title Imagenette Train Loop\n",
        "def imagenette_train_loop(model, optimizer, train_loader, loss_fn):\n",
        "    loss_fn = nn.CrossEntropyLoss()\n",
        "    for epoch in tqdm.notebook.tqdm(range(5)):\n",
        "    # Set model to use the imagenette classifier head\n",
        "        model.train()\n",
        "        # Train on a batch of images\n",
        "        for imagenette_batch in train_loader:\n",
        "            images, labels = imagenette_batch\n",
        "\n",
        "            # Convert labels from imagenette indices to imagenet labels\n",
        "            for i, label in enumerate(labels):\n",
        "                labels[i] = dir_index_to_imagenet_label[label.item()]\n",
        "\n",
        "            images = images.to(device)\n",
        "            labels = labels.to(device)\n",
        "            output = model(images)\n",
        "            optimizer.zero_grad()\n",
        "            loss = loss_fn(output, labels)\n",
        "            loss.backward()\n",
        "            optimizer.step()\n",
        "        \n",
        "    return model"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "KppDUL9U5SiA"
      },
      "source": [
        "# This identity module returns the input, and so can be used to delete intermediate layers\n",
        "# without rewriting the forward function.\n",
        "# For more info read this discussion \n",
        "# https://discuss.pytorch.org/t/how-to-delete-layer-in-pretrained-model/17648\n",
        "class Identity(nn.Module):\n",
        "    def __init__(self):\n",
        "        super(Identity, self).__init__()\n",
        "        \n",
        "    def forward(self, x):\n",
        "        return x"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "zsXTOWsU5dHH"
      },
      "source": [
        "This cell creates a ResNet model pretrained on [ImageNet](http://www.image-net.org/), a 1000 class image prediction dataset. The model is then trained to make predictions on [Imagenette](https://github.com/fastai/imagenette), a small subset of ImageNet classes that is useful for demonstrations and prototyping."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "OZ0w4PoT0ZEk"
      },
      "source": [
        "# Original network\n",
        "top_1_accuracies = []\n",
        "top_5_accuracies = []\n",
        "\n",
        "# Instantiate a pretrained resnet model\n",
        "resnet = torchvision.models.resnet18(pretrained=True).to(device)\n",
        "resnet_opt = torch.optim.Adam(resnet.parameters(), lr=1e-4)\n",
        "loss_fn = nn.CrossEntropyLoss()\n",
        "\n",
        "imagenette_train_loop(resnet, resnet_opt, imagenette_train_loader, loss_fn)\n",
        "\n",
        "top_1_acc, top_5_acc = eval_imagenette(resnet, imagenette_val_loader, len(imagenette_val))\n",
        "top_1_accuracies.append(top_1_acc.item())\n",
        "top_5_accuracies.append(top_5_acc.item())"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "SOq-ha6QISwp"
      },
      "source": [
        "To find out which layers are most important, we'll remove a block (roughly half the layer) from each and see how they perform."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "hX70ms044m9G"
      },
      "source": [
        "layers = ['unchanged', 'layer1', 'layer2', 'layer3', 'layer4']\n",
        "\n",
        "for layer in layers[1:]:\n",
        "    layer_to_remove = getattr(resnet, layer)\n",
        "    # Store block so we can add it back after we're done evaluating\n",
        "    block_backup = copy.deepcopy(layer_to_remove[1])\n",
        "\n",
        "    # Remove one of the blocks in the layer by setting it to the identity function\n",
        "    layer_to_remove[1] = Identity()\n",
        "\n",
        "    top_1_acc, top_5_acc = eval_imagenette(resnet, imagenette_val_loader, len(imagenette_val))\n",
        "    top_1_accuracies.append(top_1_acc.item())\n",
        "    top_5_accuracies.append(top_5_acc.item())\n",
        "\n",
        "    # Restore the block that was removed\n",
        "    layer_to_remove[1] = block_backup"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "TtaGh79R3axZ"
      },
      "source": [
        "plt.plot(layers, top_1_accuracies, label='Top-1 Accuracy')\n",
        "plt.plot(layers, top_5_accuracies, label='Top-5 Accuracy')\n",
        "plt.ylabel('Accuracy')\n",
        "plt.legend()\n",
        "\n",
        "plt.title('The Effects of Deleting Blocks in a ResNet')"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "xQXtDt6j6MSK"
      },
      "source": [
        "Removing layers hurts accuracy, but the model stil performs way above random chance."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "9EcDIcmN6b5i"
      },
      "source": [
        "### Exercise 4\n",
        "Why do you think different layers have different importances? "
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "mXpZK-FS6qQj",
        "cellView": "form"
      },
      "source": [
        "layer_importance = '' #@param {type:\"string\"}\n",
        "\n",
        "t4 = time.time()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "25m9tfRF3tNN"
      },
      "source": [
        "# to_remove solution\n",
        "\n",
        "# In the ResNet frame of thinking later residuals represent less error than earlier residuals.\n",
        "# However, this trend (removing earlier layers does more damage than removing later layers) \n",
        "# should occur other architectures as well as the later layers build off of the features represented\n",
        "# in the earlier ones"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "I64SYYF_NAAf"
      },
      "source": [
        "## Futher Reading\n",
        "\n",
        "This line of thinking also leads to other interesting research topics like the network saturation experiments in [this paper](https://arxiv.org/abs/2010.15327)."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "18J4y7LOu5XO"
      },
      "source": [
        "---\n",
        "# Section 5: Compute/Performance Tradeoff and Facial Recognition\n",
        "\n",
        "*Estimated Time Elapsed: 90 minutes*"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Q3DJcjD4D8As",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Large Convnets and Facial Recognition\n",
        "from IPython.display import YouTubeVideo\n",
        "video = YouTubeVideo(id=\"W25_R44LvR4\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Gq5uMCtm1AOB"
      },
      "source": [
        "As CNNs continued to develop, the ideas in the ResNet and [Highway Network](https://arxiv.org/abs/1505.00387) papers were taken to their logical extreme: skip-connections everywhere."
      ]
    },
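    {
      "cell_type": "markdown",
      "metadata": {
        "id": "skip-conn-sketch-md"
      },
      "source": [
        "As a refresher, here is a minimal sketch of the residual idea being extended: a block whose output is its input plus a learned correction. It is illustrative only (the name `SimpleResidualBlock` is ours; torchvision's actual blocks also include batch norm and downsampling logic)."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "skip-conn-sketch-code"
      },
      "source": [
        "import torch\n",
        "import torch.nn as nn\n",
        "\n",
        "class SimpleResidualBlock(nn.Module):\n",
        "    def __init__(self, channels):\n",
        "        super().__init__()\n",
        "        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)\n",
        "        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)\n",
        "        self.relu = nn.ReLU()\n",
        "\n",
        "    def forward(self, x):\n",
        "        # The skip-connection adds the input back onto the convolved output,\n",
        "        # so the block only needs to learn a residual on top of the identity\n",
        "        return self.relu(x + self.conv2(self.relu(self.conv1(x))))\n",
        "\n",
        "# The output has the same shape as the input, which is what makes these\n",
        "# blocks easy to stack (and to swap out for Identity, as we did earlier)\n",
        "SimpleResidualBlock(8)(torch.randn(1, 8, 32, 32)).shape"
      ],
      "execution_count": null,
      "outputs": []
    },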
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "mvcZBdF7xVM8"
      },
      "source": [
        "[ResNet vs ResNext](https://arxiv.org/abs/1611.05431)\n",
        "\n",
        "<img height=300 src=https://paperswithcode.com/media/methods/Screen_Shot_2020-06-06_at_4.32.52_PM_iXtkYE5.png>\n",
        "\n",
        "[Densenet Architecture](https://arxiv.org/pdf/1608.06993.pdf)\n",
        "\n",
        "<img src=https://paperswithcode.com/media/methods/Screen_Shot_2020-06-20_at_11.35.53_PM_KroVKVL.png>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "AumDaC9A1bmo"
      },
      "source": [
        "As the models got larger and number of connections increased so did the computational costs involved. In the modern era of image processing, there is a tradeoff between model performance and computational cost. Models can reach extremely high performance on many problems, but achieving state of the art results requires [huge amounts of compute power](https://arxiv.org/pdf/1810.00736.pdf).\n",
        "\n",
        "![compute_vs_performance.png]()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "CFc0_wRS4oov"
      },
      "source": [
        "## Facial Recognition\n",
        "\n",
        "One application of large CNNs is facial recognition. The problem forumulation in facial recognition is a little different from the image classification we've seen so far. In image recognition we don't want to have a fixed number of individuals that the model can learn. If that were the case then to learn a new person it would be necessary to modify the output portion of the architecture and retrain to account for the new person. \n",
        "\n",
        "Instead, we train a model to learn an embedding where images from the same individual are close to each other embedded space, and images corresponding to different people are far apart. To achieve this, facial recognitions typically use a triplet loss that compares and two images from the same individual (the \"anchor\" and \"positive\" images) and a negative image from a different individual (the \"negative\" image). The loss requires the distance between the anchor and negative points to be greater than a margin $\\alpha$ + the distance between the anchor and positive points.\n",
        "\n",
        "In this section we'll load a pretrained facial recognition model called [FaceNet](https://github.com/timesler/facenet-pytorch), then use it to demonstrate how it differentiates between individuals."
      ]
    },
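    {
      "cell_type": "markdown",
      "metadata": {
        "id": "triplet-loss-sketch-md"
      },
      "source": [
        "Before loading the pretrained model, here is a minimal sketch of the triplet loss described above, applied to small hypothetical embedding tensors (the function name `triplet_loss` is ours; real FaceNet training also involves mining hard triplets, and PyTorch ships a built-in equivalent, `nn.TripletMarginLoss`)."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "triplet-loss-sketch-code"
      },
      "source": [
        "import torch\n",
        "import torch.nn.functional as F\n",
        "\n",
        "def triplet_loss(anchor, positive, negative, alpha=0.2):\n",
        "    # Penalize triplets where the anchor-negative distance does not exceed\n",
        "    # the anchor-positive distance by at least the margin alpha\n",
        "    d_pos = F.pairwise_distance(anchor, positive)\n",
        "    d_neg = F.pairwise_distance(anchor, negative)\n",
        "    return torch.clamp(d_pos - d_neg + alpha, min=0).mean()\n",
        "\n",
        "# A well-separated triplet (identical anchor/positive, distant negative)\n",
        "# incurs zero loss\n",
        "triplet_loss(torch.zeros(4, 8), torch.zeros(4, 8), torch.ones(4, 8))"
      ],
      "execution_count": null,
      "outputs": []
    },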
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "Yag0L7hO7GN0"
      },
      "source": [
        "# @title Download Data\n",
        "!git clone --quiet https://github.com/ben-heil/cis_522_data.git\n",
        "!tar -xzf cis_522_data/archive.tar.gz\n",
        "!tar -xzf cis_522_data/faces.tar.gz"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "Vjxv6xT46aAe"
      },
      "source": [
        "# @title Display Images \n",
        "# @markdown Here are the source images of Bruce Lee, Neil Patrick Harris, and Pam Grier\n",
        "train_transform = transforms.Compose((transforms.Resize((256, 256)),\n",
        "                                     transforms.ToTensor()))    \n",
        "\n",
        "face_dataset = ImageFolder('faces', transform=train_transform)\n",
        "\n",
        "image_count = len(face_dataset) \n",
        "\n",
        "face_loader = torch.utils.data.DataLoader(face_dataset,\n",
        "                                          batch_size=45,\n",
        "                                          shuffle=False)\n",
        "\n",
        "dataiter = iter(face_loader)\n",
        "images, labels = dataiter.next()\n",
        "\n",
        "# show images\n",
        "plt.figure(figsize=(15,15))\n",
        "plt.imshow(make_grid(images, nrow=15).permute(1,2,0))"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Iqb29ME68Nk6"
      },
      "source": [
        "# Load network\n",
        "resnet = InceptionResnetV1(pretrained='vggface2').eval()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "m8DAf9cbKba4"
      },
      "source": [
        "# @title Image Preprocessing Function\n",
        "def process_images(image_dir: str):\n",
        "    \"\"\"\n",
        "    This function returns two tensors for the given image dir: one usable for inputting into the \n",
        "    facenet model, and one that is [0,1] scaled for visualizing\n",
        "\n",
        "    Parameters:\n",
        "        image_dir: The glob corresponding to images in a directory\n",
        "\n",
        "    Returns:\n",
        "        model_tensor: A image_count x channels x height x width tensor scaled to between -1 and 1,\n",
        "                      with the faces detected and cropped to the center using mtcnn\n",
        "        display_tensor: A transformed version of the model tensor scaled to between 0 and 1\n",
        "    \"\"\"\n",
        "    mtcnn = MTCNN(image_size=256, margin=32)\n",
        "    images = []\n",
        "    for img_path in glob.glob(image_dir):\n",
        "        img = Image.open(img_path)\n",
        "        # Normalize and crop image\n",
        "        img_cropped = mtcnn(img)\n",
        "        images.append(img_cropped)\n",
        "\n",
        "    model_tensor = torch.stack(images)\n",
        "    display_tensor = model_tensor / (model_tensor.max() * 2)\n",
        "    display_tensor += .5\n",
        "\n",
        "    return model_tensor, display_tensor"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "F4u16dAuniUt"
      },
      "source": [
        "Now that we have our images loaded, we need to preprocess them. To make the images easier for the network to learn, we crop them to include just faces."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "DqQjyTf5IGnW"
      },
      "source": [
        "bruce_tensor, bruce_display = process_images('faces/bruce/*.jpg')\n",
        "neil_tensor, neil_display = process_images('faces/neil/*.jpg')\n",
        "pam_tensor, pam_display = process_images('faces/pam/*.jpg')\n",
        "\n",
        "\n",
        "display_tensor = torch.cat((bruce_display, neil_display, pam_display))\n",
        "\n",
        "plt.figure(figsize=(15,15))\n",
        "plt.imshow(make_grid(display_tensor, nrow=15).permute(1, 2,0,))"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "fDkksaMVomiV"
      },
      "source": [
        "## Embedding the Images\n",
        "We can now take the pictures and feed them into FaceNet to see where they are embedded relative to each other."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "VuEb9nbgH85B"
      },
      "source": [
        "# Calculate embedding (unsqueeze to add batch dimension)\n",
        "resnet.classify = False\n",
        "bruce_embeddings = resnet(bruce_tensor)\n",
        "neil_embeddings = resnet(neil_tensor)\n",
        "pam_embeddings = resnet(pam_tensor)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "x16peSq_uBZd"
      },
      "source": [
        "We run PCA on the embedding to reduce the number of dimensions to 2 for easier plotting. Be aware that the true embeddings are more than 2-dimensional."
      ]
    },
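    {
      "cell_type": "markdown",
      "metadata": {
        "id": "pcaSanityMd"
      },
      "source": [
        "As a sanity check on any 2D PCA plot, it's worth asking how much of the embedding variance the two plotted components actually capture. Here's a minimal, self-contained sketch on random stand-in data (the names below are illustrative, not from this notebook):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "pcaSanityCode"
      },
      "source": [
        "import numpy as np\n",
        "import sklearn.decomposition\n",
        "\n",
        "# Hypothetical stand-in for an embedding matrix: 45 images x 512 dimensions\n",
        "rng = np.random.default_rng(0)\n",
        "fake_embeddings = rng.normal(size=(45, 512))\n",
        "\n",
        "pca_check = sklearn.decomposition.PCA(n_components=2)\n",
        "pca_check.fit(fake_embeddings)\n",
        "\n",
        "# Fraction of total variance captured by the two plotted components\n",
        "print(pca_check.explained_variance_ratio_.sum())"
      ],
      "execution_count": null,
      "outputs": []
    },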
    {
      "cell_type": "code",
      "metadata": {
        "id": "4TcSc37PUBhA"
      },
      "source": [
        "embedding_tensor = torch.cat((bruce_embeddings, neil_embeddings, pam_embeddings))\n",
        "pca = sklearn.decomposition.PCA(n_components=2)\n",
        "pca_tensor = pca.fit_transform(embedding_tensor.detach().numpy())\n",
        "\n",
        "labels = ['Bruce Lee'] * 15 + ['Neil Patrick Harris'] * 15 + ['Pam Grier'] * 15\n",
        "colors = ['green'] * 15 + ['orange'] * 15 + ['purple'] * 15"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "sFKhHpZkUeo-"
      },
      "source": [
        "plt.scatter(pca_tensor[:,0], pca_tensor[:,1], c=colors)\n",
        "green_patch = mpatches.Patch(color='green', label='Bruce Lee')\n",
        "orange_patch = mpatches.Patch(color='orange', label='Neil Patrick Harris')\n",
        "purple_patch = mpatches.Patch(color='purple', label='Pam Grier')\n",
        "\n",
        "plt.title('PCA Representation of the Image Embeddings')\n",
        "plt.legend(handles=[green_patch, orange_patch, purple_patch])"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vgraLhUuuQGd"
      },
      "source": [
        "Great! The images corresponding to each individual are separated from each other in the embedding space!"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "fd4tfaokudg1"
      },
      "source": [
        "### Exercise 5\n",
        "PCA gives a rough idea of the distances between points, but we can be more precise. In this exercise you'll calculate the pairwise distances between the image embeddings."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "wkXvwITcvrma"
      },
      "source": [
        "def calculate_pairwise_distances(embedding_tensor: torch.Tensor):\n",
        "    \"\"\"\n",
        "    This function calculates the pairwise distance between each image embedding in a tensor\n",
        "\n",
        "    Parameters:\n",
        "        embedding_tensor: A num_images x embedding_dimension tensor\n",
        "\n",
        "    Returns:\n",
        "        distances: A num_images x num_images tensor containing the pairwise distances between each \n",
        "                   image embedding\n",
        "\n",
        "    Hint: the function torch.cdist makes this exercise a one-liner, though there are several ways to\n",
        "          calculate pairwise distances\n",
        "    \"\"\"\n",
        "    ####################################################################\n",
        "    # Fill in all missing code below (...),\n",
        "    # then remove or comment the line below to test your function\n",
        "    raise NotImplementedError(\"Implement the pairwise distance function\")\n",
        "    #################################################################### \n",
        "    distances = ...\n",
        "\n",
        "    return distances\n",
        "\n",
        "### Uncomment below to test your function\n",
        "\n",
        "#distances = calculate_pairwise_distances(embedding_tensor)\n",
        "\n",
        "# plt.figure(figsize=(8,8))\n",
        "# plt.imshow(distances.detach().numpy())\n",
        "# plt.annotate('Bruce', (3,-.5), fontsize=24, va='bottom')\n",
        "# plt.annotate('Neil', (20,-.5), fontsize=24, va='bottom')\n",
        "# plt.annotate('Pam', (35,-.5), fontsize=24, va='bottom')\n",
        "# plt.annotate('Bruce', (-.5,10), fontsize=24, rotation=90, ha='right')\n",
        "# plt.annotate('Neil', (-.5,24), fontsize=24, rotation=90, ha='right')\n",
        "# plt.annotate('Pam', (-.5,39), fontsize=24, rotation=90, ha='right')\n",
        "# plt.colorbar()\n",
        "# plt.axis('off')"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "NZLY6fklX-JB"
      },
      "source": [
        "# to_remove solution\n",
        "def calculate_pairwise_distances(embedding_tensor: torch.Tensor):\n",
        "    \"\"\"\n",
        "    This function calculates the pairwise distance between each image embedding in a tensor\n",
        "\n",
        "    Parameters:\n",
        "        embedding_tensor: A num_images x embedding_dimension tensor\n",
        "\n",
        "    Returns:\n",
        "        distances: A num_images x num_images tensor containing the pairwise distances between each \n",
        "                   image embedding\n",
        "\n",
        "    Hint: the function torch.cdist makes this exercise a one-liner, though there are several ways to\n",
        "          calculate pairwise distances\n",
        "    \"\"\"\n",
        "    distances = torch.cdist(embedding_tensor, embedding_tensor)\n",
        "\n",
        "    return distances\n",
        "\n",
        "distances = calculate_pairwise_distances(embedding_tensor)\n",
        "\n",
        "plt.figure(figsize=(8,8))\n",
        "plt.imshow(distances.detach().numpy())\n",
        "plt.annotate('Bruce', (3,-.5), fontsize=24, va='bottom')\n",
        "plt.annotate('Neil', (20,-.5), fontsize=24, va='bottom')\n",
        "plt.annotate('Pam', (35,-.5), fontsize=24, va='bottom')\n",
        "plt.annotate('Bruce', (-.5,10), fontsize=24, rotation=90, ha='right')\n",
        "plt.annotate('Neil', (-.5,24), fontsize=24, rotation=90, ha='right')\n",
        "plt.annotate('Pam', (-.5,39), fontsize=24, rotation=90, ha='right')\n",
        "plt.colorbar()\n",
        "plt.axis('off')"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "h8ZJx8Ig0pJP"
      },
      "source": [
        "As you can see, FaceNet is working! It separates images from different individuals, and keeps images from the same individual together. "
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "cw_E96U-3bTZ"
      },
      "source": [
        "## Facial Recognition Ethics\n",
        "Popular facial recognition datasets like VGGFace2 and CASIA-WebFace consist primarily of Caucasian faces.\n",
        "As a result, even state-of-the-art facial recognition models [substantially underperform](https://openaccess.thecvf.com/content_ICCV_2019/papers/Wang_Racial_Faces_in_the_Wild_Reducing_Racial_Bias_by_Information_ICCV_2019_paper.pdf) when recognizing faces of other races.\n",
        "\n",
        "Given the implications that poor model performance can have in fields like security and criminal justice, it's very important to be aware of these limitations if you're going to be building facial recognition systems."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ROWqHGymvUro"
      },
      "source": [
        "---\n",
        "# Section 6: Transfer Learning and Domain Adaptation\n",
        "\n",
        "*Estimated Time Elapsed: 110 minutes*\n",
        "\n",
        "\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ScnAYB7fEHCX",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Domain Adaptation\n",
        "from IPython.display import YouTubeVideo\n",
        "video = YouTubeVideo(id=\"t4ZK2mhfIn8\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Gyu1d2Vu4urq"
      },
      "source": [
        "The most common way large image models are trained in practice is via a type of transfer learning called \"fine-tuning\": taking a network pretrained on a large dataset like ImageNet and retraining it on your task of choice.\n",
        "\n",
        "While training a network twice may sound strange, the model ends up converging faster on the target domain. There are also other benefits, such as [robustness to noise](https://arxiv.org/pdf/1901.09960.pdf), that are the subject of [active research](https://arxiv.org/abs/2008.11687).\n",
        "\n",
        "In this section we will demonstrate fine-tuning by taking a model trained on ImageNet and teaching it to classify Pokemon."
      ]
    },
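    {
      "cell_type": "markdown",
      "metadata": {
        "id": "finetuneSketchMd"
      },
      "source": [
        "Mechanically, fine-tuning just means loading pretrained weights and continuing gradient descent, often with early layers frozen so that only a new classifier head updates. The following toy sketch uses a tiny two-layer network as a stand-in for a pretrained backbone (`backbone` and `head` are illustrative names, not part of this notebook):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "finetuneSketchCode"
      },
      "source": [
        "import torch\n",
        "import torch.nn as nn\n",
        "\n",
        "# Toy stand-in for a pretrained feature extractor\n",
        "backbone = nn.Sequential(nn.Linear(8, 16), nn.ReLU())\n",
        "head = nn.Linear(16, 3)  # new task-specific classifier head\n",
        "\n",
        "# Freeze the backbone so only the head is updated during fine-tuning\n",
        "for param in backbone.parameters():\n",
        "    param.requires_grad = False\n",
        "\n",
        "optimizer = torch.optim.Adam(head.parameters(), lr=1e-4)\n",
        "loss_fn = nn.CrossEntropyLoss()\n",
        "\n",
        "x = torch.randn(4, 8)\n",
        "labels = torch.tensor([0, 1, 2, 0])\n",
        "\n",
        "optimizer.zero_grad()\n",
        "loss = loss_fn(head(backbone(x)), labels)\n",
        "loss.backward()\n",
        "optimizer.step()\n",
        "\n",
        "# The frozen backbone receives no gradients; the head does\n",
        "print(backbone[0].weight.grad is None, head.weight.grad is not None)"
      ],
      "execution_count": null,
      "outputs": []
    },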
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2J0mGTjMMO3E"
      },
      "source": [
        "## Download and prepare the data"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "iAQTBhfpGizm"
      },
      "source": [
        "# This dataset is downloaded in the facial recognition section\n",
        "!ls small_pokemon_dataset/"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "x5mYOJzWH6SL",
        "cellView": "form"
      },
      "source": [
        "# @title Display Example Images \n",
        "train_transform = transforms.Compose((transforms.Resize((256, 256)),\n",
        "                                     transforms.ToTensor()))    \n",
        "\n",
        "pokemon_dataset = ImageFolder('small_pokemon_dataset', transform=train_transform)\n",
        "\n",
        "image_count = len(pokemon_dataset) \n",
        "train_indices = []\n",
        "test_indices = []\n",
        "for i in range(image_count):\n",
        "    # Put ten percent of the images in the test set\n",
        "    if random.random() < .1:\n",
        "        test_indices.append(i)\n",
        "    else:\n",
        "        train_indices.append(i)\n",
        "\n",
        "pokemon_test_set = torch.utils.data.Subset(pokemon_dataset, test_indices)  \n",
        "pokemon_train_set = torch.utils.data.Subset(pokemon_dataset, train_indices)  \n",
        "\n",
        "pokemon_train_loader = torch.utils.data.DataLoader(pokemon_train_set,\n",
        "                                                   batch_size=16,\n",
        "                                                   shuffle=True,)\n",
        "pokemon_test_loader = torch.utils.data.DataLoader(pokemon_test_set,\n",
        "                                                  batch_size=16)\n",
        "\n",
        "dataiter = iter(pokemon_train_loader)\n",
        "images, labels = next(dataiter)\n",
        "\n",
        "# show images\n",
        "plt.imshow(make_grid(images, nrow=4).permute(1,2,0))"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0LyZoi80OL-i"
      },
      "source": [
        "## Pretrained ResNet\n",
        "\n",
        "It is common in computer vision to take a large model trained on a large dataset (often ImageNet) and fine-tune it to perform a different task. In this case we're using a pre-trained ResNet model to classify types of Pokemon."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "A1yYP4_TMogp"
      },
      "source": [
        "resnet = torchvision.models.resnet18(pretrained=True).to(device)\n",
        "optimizer = torch.optim.Adam(resnet.parameters(), lr=1e-4)\n",
        "loss_fn = nn.CrossEntropyLoss()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "B3_0LkquL2PV"
      },
      "source": [
        "pretrained_accs = []\n",
        "for epoch in range(10):\n",
        "    # Train loop\n",
        "    for batch in pokemon_train_loader:\n",
        "        images, labels = batch\n",
        "        images = images.to(device)\n",
        "        labels = labels.to(device)\n",
        "        \n",
        "        optimizer.zero_grad()\n",
        "        output = resnet(images)\n",
        "        loss = loss_fn(output, labels)\n",
        "        loss.backward()\n",
        "        optimizer.step()\n",
        "\n",
        "    # Eval loop\n",
        "    with torch.no_grad():\n",
        "        loss_sum = 0\n",
        "        total_correct = 0\n",
        "        total = len(pokemon_test_set)\n",
        "        for batch in pokemon_test_loader:\n",
        "            images, labels = batch\n",
        "            images = images.to(device)\n",
        "            labels = labels.to(device)\n",
        "            output = resnet(images)\n",
        "            loss = loss_fn(output, labels)\n",
        "            loss_sum += loss.item()\n",
        "\n",
        "            predictions = torch.argmax(output, dim=1)\n",
        "            \n",
        "            num_correct = torch.sum(predictions == labels)\n",
        "            total_correct += num_correct\n",
        "\n",
        "        # Plot accuracy\n",
        "        pretrained_accs.append((total_correct / total).item())\n",
        "        plt.plot(pretrained_accs)\n",
        "        plt.xlabel('epoch')\n",
        "        plt.ylabel('accuracy')\n",
        "        plt.title('Pokemon prediction accuracy')\n",
        "        display.clear_output(wait=True)\n",
        "        display.display(plt.gcf())"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "uRjh0LVEOPG_"
      },
      "source": [
        "## Randomly initialized ResNet"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "n37ElLOOOJkV"
      },
      "source": [
        "resnet = torchvision.models.resnet18(pretrained=False).to(device)\n",
        "\n",
        "optimizer = torch.optim.Adam(resnet.parameters(), lr=1e-4)\n",
        "\n",
        "loss_fn = nn.CrossEntropyLoss()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "NESEA-reOJkX"
      },
      "source": [
        "scratch_accs = []\n",
        "for epoch in range(10):\n",
        "    # Train loop\n",
        "    for batch in pokemon_train_loader:\n",
        "        images, labels = batch\n",
        "        images = images.to(device)\n",
        "        labels = labels.to(device)\n",
        "\n",
        "        optimizer.zero_grad()\n",
        "        output = resnet(images)\n",
        "        loss = loss_fn(output, labels)\n",
        "        loss.backward()\n",
        "        optimizer.step()\n",
        "\n",
        "    # Eval loop\n",
        "    with torch.no_grad():\n",
        "        loss_sum = 0\n",
        "        total_correct = 0\n",
        "        total = len(pokemon_test_set)\n",
        "        for batch in pokemon_test_loader:\n",
        "            images, labels = batch\n",
        "            images = images.to(device)\n",
        "            labels = labels.to(device)\n",
        "            output = resnet(images)\n",
        "            loss = loss_fn(output, labels)\n",
        "            loss_sum += loss.item()\n",
        "\n",
        "            predictions = torch.argmax(output, dim=1)\n",
        "            \n",
        "            num_correct = torch.sum(predictions == labels)\n",
        "            total_correct += num_correct\n",
        "\n",
        "        scratch_accs.append((total_correct / total).item())\n",
        "        plt.plot(scratch_accs)\n",
        "        plt.xlabel('epoch')\n",
        "        plt.ylabel('accuracy')\n",
        "        plt.title('Pokemon prediction accuracy')\n",
        "        \n",
        "        display.clear_output(wait=True)\n",
        "        display.display(plt.gcf())"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "77nZqUDvZizN"
      },
      "source": [
        "## Head to Head Comparison\n",
        "Starting from a randomly initialized network works less well, especially on small datasets. Note that the model converges more slowly and less evenly."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "clJzmyy5X1ob"
      },
      "source": [
        "plt.plot(scratch_accs, label='From Scratch')\n",
        "plt.plot(pretrained_accs, label='Pretrained')\n",
        "plt.title('Pokemon prediction accuracy')\n",
        "plt.legend()\n",
        "plt.show()\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "NI8Bmg6NqF_T"
      },
      "source": [
        "## Exercise 6\n",
        "\n",
        "Why might pretrained models outperform models trained from scratch? In what cases would you expect them to be worse?"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "FkDvxkGmJuZL"
      },
      "source": [
        "when_pretraining_works = '' #@param {type:\"string\"}\n",
        "\n",
        "t5 = time.time()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "wdTh7qov4JHu"
      },
      "source": [
        "# to_remove solution\n",
        "\n",
        "# 1. The closer your pretraining and target data domains are, the better pretraining will work\n",
        "# 2. The more pretraining data you have, the better pretraining will work\n",
        "# 3. The better your model is able to take advantage of your pretraining data (that is,\n",
        "#    the larger your model is, assuming you have enough data), the better pretraining will work\n",
        "\n",
        "# Pretraining isn't necessarily always a benefit though. If your source domain is very different from\n",
        "# the domain you're trying to predict, your model might learn unhelpful features.\n",
        "# Additionally, if you have a lot of training data in your target domain, pretraining might\n",
        "# cause your model to converge to a local minimum (this process is referred to as ossification in\n",
        "# the Scaling Laws for Transfer paper cited in the Further Reading section)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "GTDXJkly-oDx"
      },
      "source": [
        "### Further Reading\n",
        "Supervised pretraining as you've seen here is useful, but there are several other ways of using outside data to improve your models. The ones that are particularly popular right now are [contrastive learning](https://arxiv.org/pdf/2002.05709.pdf) techniques that learn features from unsupervised image data and all sorts of methods of masking input for training [transformer models](https://arxiv.org/pdf/1810.04805.pdf).\n",
        "\n",
        "There is also a [recent paper](https://arxiv.org/abs/2102.01293) that seeks to quantify the relationship between model size, pretraining dataset size, training dataset size, and performance."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "zZ70aYrkvYjS"
      },
      "source": [
        "---\n",
        "# Section 7: Lifelong Learning\n",
        "*Estimated Time Elapsed: 130 minutes*\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "lYDhFj_5ERAY"
      },
      "source": [
        "#@title Video: Continual Learning\n",
        "from IPython.display import YouTubeVideo\n",
        "video = YouTubeVideo(id=\"50prJUXLOPw\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "SY8BpZ57PtoS"
      },
      "source": [
        "Pretraining is great, but how does fine-tuning affect the model's ability to perform the original task?\n",
        "For humans, learning one skill doesn't usually erase another, but what about deep learning models?\n",
        "Let's try it out."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "GL_Zvy3iRP25"
      },
      "source": [
        "## Model setup"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6VfQxy65n5K8"
      },
      "source": [
        "In this section we use different classifier heads (fully connected layers) for the different tasks. One head looks at image features produced by the ResNet and tries to determine which ImageNet class they correspond to, while the other looks at the features and determines which Pokemon they correspond to.\n",
        "\n",
        "We can switch these heads out at will based on which task we are attempting. This method is more frequently used in [multi-task learning](https://ruder.io/multi-task/), but is useful here as well."
      ]
    },
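    {
      "cell_type": "markdown",
      "metadata": {
        "id": "headSwapSketchMd"
      },
      "source": [
        "The head-swapping idea can be sketched with a shared trunk and two interchangeable linear heads. This is a toy illustration (the names and the input size are made up; only the 512-dimensional feature size and the 1000/9 class counts mirror this section's setup):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "headSwapSketchCode"
      },
      "source": [
        "import torch\n",
        "import torch.nn as nn\n",
        "\n",
        "trunk = nn.Sequential(nn.Linear(8, 512), nn.ReLU())  # shared feature extractor\n",
        "imagenet_head = nn.Linear(512, 1000)  # one head per task\n",
        "pokemon_head = nn.Linear(512, 9)\n",
        "\n",
        "features = trunk(torch.randn(2, 8))\n",
        "\n",
        "# The same features feed whichever head matches the current task\n",
        "imagenet_logits = imagenet_head(features)\n",
        "pokemon_logits = pokemon_head(features)\n",
        "print(imagenet_logits.shape, pokemon_logits.shape)"
      ],
      "execution_count": null,
      "outputs": []
    },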
    {
      "cell_type": "code",
      "metadata": {
        "id": "X42TGrqcQO92"
      },
      "source": [
        "resnet = torchvision.models.resnet18(pretrained=True).to(device)\n",
        "\n",
        "imagenet_classifier_head = resnet.fc\n",
        "pokemon_classifier_head = nn.Linear(in_features=512, out_features=9).to(device)\n",
        "\n",
        "pokemon_parameters = list(resnet.parameters()) + list(pokemon_classifier_head.parameters())\n",
        "pokemon_optimizer = torch.optim.Adam(pokemon_parameters, lr=1e-4)\n",
        "\n",
        "imagenette_optimizer = torch.optim.Adam(resnet.parameters(), lr=1e-4)\n",
        "\n",
        "loss_fn = nn.CrossEntropyLoss()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "UIubpQBh8uZo"
      },
      "source": [
        "### Training on Imagenette\n",
        "To ensure that our model performs well at predicting our Imagenette classes, we train it on Imagenette."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "gyDO-fUr_AAu"
      },
      "source": [
        "top_1_accs = []\n",
        "top_5_accs = []\n",
        "\n",
        "for epoch in tqdm.notebook.tqdm(range(5)):\n",
        "    # Set model to use the imagenette classifier head\n",
        "    \n",
        "    resnet.train()\n",
        "    # Train on a batch of images\n",
        "    for imagenette_batch in imagenette_train_loader:\n",
        "        images, labels = imagenette_batch\n",
        "\n",
        "        # Convert labels from imagenette indices to imagenet labels\n",
        "        for i, label in enumerate(labels):\n",
        "            labels[i] = dir_index_to_imagenet_label[label.item()]\n",
        "\n",
        "        images = images.to(device)\n",
        "        labels = labels.to(device)\n",
        "        output = resnet(images)\n",
        "        imagenette_optimizer.zero_grad()\n",
        "        loss = loss_fn(output, labels)\n",
        "        loss.backward()\n",
        "        imagenette_optimizer.step()\n",
        "\n",
        "    resnet.eval()\n",
        "    top_1_acc, top_5_acc = eval_imagenette(resnet, imagenette_val_loader, len(imagenette_val))\n",
        "    top_1_accs.append(top_1_acc.item())\n",
        "    top_5_accs.append(top_5_acc.item())\n",
        "\n",
        "    plt.plot(top_1_accs, label='Top-1 Accuracy')\n",
        "    plt.plot(top_5_accs, label='Top-5 Accuracy')\n",
        "    plt.xlabel('epoch')\n",
        "    plt.ylabel('accuracy')\n",
        "    plt.legend()\n",
        "    plt.title('Imagenette Prediction Accuracy')\n",
        "    display.clear_output(wait=True)\n",
        "    display.display(plt.gcf())\n",
        "    plt.clf()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "XW9eeY-S9AhP"
      },
      "source": [
        "## Training on Pokemon images"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Cqv2qt4JPpO-"
      },
      "source": [
        "## Exercise 7\n",
        "Now we've trained our model and we know it works well on Imagenette. What do you expect to happen as we train the model on a different dataset? "
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "iiEyzdII9gJ8",
        "cellView": "form"
      },
      "source": [
        "model_transfer = '' #@param {type:\"string\"}\n",
        "\n",
        "t6 = time.time()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "fpFamoZn775Y"
      },
      "source": [
        "# to_remove solution\n",
        "\n",
        "# There is no correct answer here, just guess before you run the code :)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "MB9uZRTHNtdU",
        "cellView": "form"
      },
      "source": [
        "# @title Retraining the Network on Pokemon\n",
        "\n",
        "pokemon_accs = []\n",
        "imagenette_top_1 = []\n",
        "imagenette_top_5 = []\n",
        "\n",
        "for epoch in range(5):\n",
        "    # Train loop\n",
        "    resnet.fc = pokemon_classifier_head\n",
        "    resnet.train()\n",
        "    for batch in pokemon_train_loader:\n",
        "        images, labels = batch\n",
        "        images = images.to(device)\n",
        "        labels = labels.to(device)\n",
        "\n",
        "        output = resnet(images)\n",
        "\n",
        "        pokemon_optimizer.zero_grad()\n",
        "\n",
        "        loss = loss_fn(output, labels)\n",
        "        loss.backward()\n",
        "        pokemon_optimizer.step()\n",
        "\n",
        "    # Eval model on pokemon \n",
        "    resnet.eval()\n",
        "    with torch.no_grad():\n",
        "        loss_sum = 0\n",
        "        total_correct = 0\n",
        "        total = len(pokemon_test_set)\n",
        "        for batch in pokemon_test_loader:\n",
        "            images, labels = batch\n",
        "            images = images.to(device)\n",
        "            labels = labels.to(device)\n",
        "            output = resnet(images)\n",
        "            loss = loss_fn(output, labels)\n",
        "            loss_sum += loss.item()\n",
        "\n",
        "            predictions = torch.argmax(output, dim=1)\n",
        "            \n",
        "            num_correct = torch.sum(predictions == labels)\n",
        "            total_correct += num_correct\n",
        "        pokemon_accs.append((total_correct / total).item())\n",
        "    \n",
        "    # Eval model on imagenette\n",
        "    resnet.fc = imagenet_classifier_head.to(device)\n",
        "    top_1, top_5 = eval_imagenette(resnet, imagenette_val_loader, len(imagenette_val))\n",
        "    imagenette_top_1.append(top_1.item())\n",
        "    imagenette_top_5.append(top_5.item())\n",
        "\n",
        "    trained_top_1 = [top_1_accs[-1]] * len(imagenette_top_1)\n",
        "    plt.figure(figsize=(6,6))\n",
        "    plt.plot(pokemon_accs, label='Pokemon Prediction Accuracy')\n",
        "    plt.plot(imagenette_top_1, label='Imagenette Top-1 Accuracy')\n",
        "    plt.plot(imagenette_top_5, label='Imagenette Top-5 Accuracy')\n",
        "    plt.plot(trained_top_1, '--', label='Imagenette T-1 Accuracy Before Start')\n",
        "    plt.xlabel('epoch')\n",
        "    plt.ylabel('accuracy')\n",
        "    plt.legend()\n",
        "    plt.title('Forgetting Imagenette by Learning Pokemon')\n",
        "    \n",
        "    display.clear_output(wait=True)\n",
        "    display.display(plt.gcf())\n",
        "    plt.clf()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ZGIdkNMK9pLk"
      },
      "source": [
        "Our network is now less accurate at predicting Imagenette classes! This phenomenon is known as catastrophic forgetting."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "tAhlHwbxEk8A"
      },
      "source": [
        "---\n",
        "# Section 7.5: Wrap Up"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "ywMS2G-pEq4o"
      },
      "source": [
        "#@title Video: Week 7 Wrap Up\n",
        "from IPython.display import YouTubeVideo\n",
        "video = YouTubeVideo(id=\"YJPKCrV7Tkk\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "EREwBG43va3l"
      },
      "source": [
        "---\n",
        "# Section 8 (Time Permitting): Preventing Catastrophic Forgetting\n",
        "\n",
        "*Estimated Time Elapsed: 140 minutes*\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "EWFPG5mu-EbA"
      },
      "source": [
        "In this exercise we'll implement a simple but effective method of preventing catastrophic forgetting known as replay. In replay, instead of only training on our task of interest, we also go back and train on previous tasks so they aren't forgotten."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "4alveMzSm1OS"
      },
      "source": [
        "# Prep model\n",
        "resnet = torchvision.models.resnet18(pretrained=True).to(device)\n",
        "\n",
        "imagenette_classifier_head = resnet.fc\n",
        "pokemon_classifier_head = nn.Linear(in_features=512, out_features=9).to(device)\n",
        "\n",
        "pokemon_parameters = list(resnet.parameters()) + list(pokemon_classifier_head.parameters())\n",
        "pokemon_optimizer = torch.optim.Adam(pokemon_parameters, lr=1e-4)\n",
        "\n",
        "imagenette_optimizer = torch.optim.Adam(resnet.parameters(), lr=1e-4)\n",
        "\n",
        "loss_fn = nn.CrossEntropyLoss()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "PkkpfDTozLh_"
      },
      "source": [
        "def eval_pokemon(model, pokemon_test_loader, total_images):\n",
        "    with torch.no_grad():\n",
        "        total_correct = 0\n",
        "        total = 0\n",
        "        for batch in pokemon_test_loader:\n",
        "            images, labels = batch\n",
        "            images = images.to(device)\n",
        "            labels = labels.to(device)\n",
        "            output = model(images)\n",
        "\n",
        "            predictions = torch.argmax(output, dim=1)\n",
        "            \n",
        "            num_correct = torch.sum(predictions == labels)\n",
        "            total += labels.shape[0]\n",
        "            total_correct += num_correct\n",
        "\n",
        "        return total_correct / total_images\n",
        "\n",
        "def train_one_epoch_pokemon(model, loader, optimizer, loss_fn):\n",
        "    for batch in tqdm.notebook.tqdm(loader):\n",
        "        images, labels = batch\n",
        "        images = images.to(device)\n",
        "        labels = labels.to(device)\n",
        "        output = model(images)\n",
        "        optimizer.zero_grad()\n",
        "        loss = loss_fn(output, labels)\n",
        "        loss.backward()\n",
        "        optimizer.step()\n",
        "\n",
        "def train_one_epoch_imagenette(model, loader, optimizer, loss_fn):\n",
        "    for batch in tqdm.notebook.tqdm(loader):\n",
        "        images, labels = batch\n",
        "\n",
        "        # Convert labels from imagenette indices to imagenet labels\n",
        "        for i, label in enumerate(labels):\n",
        "            # dir_index_to_imagenet_label was defined a few sections ago. In good\n",
        "            # programming practice it would be passed as a parameter, but that would\n",
        "            # be more confusing than helpful here. If this line throws a NameError,\n",
        "            # use \"Run all above\" to define it.\n",
        "            labels[i] = dir_index_to_imagenet_label[label.item()]\n",
        "\n",
        "        images = images.to(device)\n",
        "        labels = labels.to(device)\n",
        "        output = model(images)\n",
        "        optimizer.zero_grad()\n",
        "        loss = loss_fn(output, labels)\n",
        "        loss.backward()\n",
        "        optimizer.step()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "3HxKJ5NI7nB4"
      },
      "source": [
        "### Exercise 8"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "fU3Gpgd19mWu"
      },
      "source": [
        "def train_one_epoch_with_replay(model: nn.Module,\n",
        "                                pokemon_train_loader: torch.utils.data.DataLoader,\n",
        "                                imagenette_train_loader: torch.utils.data.DataLoader,\n",
        "                                pokemon_classifier_head: nn.Module,\n",
        "                                imagenette_classifier_head: nn.Module,\n",
        "                                pokemon_optimizer: torch.optim.Optimizer,\n",
        "                                imagenette_optimizer: torch.optim.Optimizer\n",
        "                                ):\n",
        "    \"\"\"\n",
        "    Write the code required to train for one epoch on both the pokemon and imagenette datasets.\n",
        "    This is called \"replay\" because you continue training on old data. In practice replay would be\n",
        "    done less often than every epoch, but alternating epochs of pokemon and imagenette training\n",
        "    works for our purposes.\n",
        "\n",
        "    Tips: The `train_one_epoch_pokemon` and `train_one_epoch_imagenette` functions will be useful \n",
        "          here.\n",
        "          Don't forget to use `model.fc = pokemon_classifier_head` or `imagenette_classifier_head`\n",
        "          before training on their respective datasets.\n",
        "\n",
        "    Parameters:\n",
        "        model: The model to train\n",
        "        pokemon_train_loader: The DataLoader with pokemon data to be used in training\n",
        "        imagenette_train_loader: The DataLoader with image data to be used in training\n",
        "        pokemon_classifier_head: The fully connected layer used for predicting which pokemon is \n",
        "                                 present in an image\n",
        "        imagenette_classifier_head: The fully connected layer used to predict which imagenette\n",
        "                                    class is present in an image\n",
        "        pokemon_optimizer: The optimizer used to update the weights of the resnet and pokemon \n",
        "                           classifier head\n",
        "        imagenette_optimizer: The optimizer used to update the weights of the resnet and imagenette\n",
        "                              classifier head\n",
        "    \n",
        "    Returns:\n",
        "        None\n",
        "    \"\"\"\n",
        "\n",
        "    ####################################################################\n",
        "    # Fill in all missing code below (...),\n",
        "    # then remove or comment the line below to test your function\n",
        "    raise NotImplementedError(\"Implement the train_one_epoch_with_replay function\")\n",
        "    #################################################################### \n",
        "\n",
        "    model.fc = ...\n",
        "    model.train()\n",
        "\n",
        "    ...\n",
        "\n",
        "    # Set model to use the imagenette classifier head\n",
        "    model.fc = ...\n",
        "    model.train()\n",
        "\n",
        "    ...\n",
        "\n",
        "\n",
        "### Uncomment below to test your function\n",
        "# top_1_accs = []\n",
        "# top_5_accs = []\n",
        "# pokemon_accs = []\n",
        "\n",
        "# for epoch in range(5):\n",
        "#     # Set model to use the pokemon classification output layer\n",
        "    \n",
        "#     train_one_epoch_with_replay(resnet, pokemon_train_loader, imagenette_train_loader, \n",
        "#                                 pokemon_classifier_head, imagenette_classifier_head, \n",
        "#                                 pokemon_optimizer, imagenette_optimizer)\n",
        "\n",
        "#     resnet.eval()\n",
        "#     top_1, top_5 = eval_imagenette(resnet, imagenette_val_loader, len(imagenette_val))\n",
        "#     top_1_accs.append(top_1.item())\n",
        "#     top_5_accs.append(top_5.item())\n",
        "\n",
        "#     # Evaluate model on pokemon \n",
        "#     resnet.eval()\n",
        "#     resnet.fc = pokemon_classifier_head\n",
        "#     pokemon_acc = eval_pokemon(resnet, pokemon_test_loader, len(pokemon_test_set))\n",
        "#     pokemon_accs.append(pokemon_acc)\n",
        "    \n",
        "#     plt.plot(top_1_accs, label='Top-1 Accuracy')\n",
        "#     plt.plot(top_5_accs, label='Top-5 Accuracy')\n",
        "#     plt.plot(pokemon_accs, label='Pokemon Prediction Accuracy')\n",
        "#     plt.xlabel('epoch')\n",
        "#     plt.ylabel('accuracy')\n",
        "#     plt.legend()\n",
        "#     plt.title('Preventing Catastrophic Forgetting')\n",
        "#     display.clear_output(wait=True)\n",
        "#     display.display(plt.gcf())\n",
        "#     plt.clf()\n",
        "    "
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "1jpQwbo1kbd-"
      },
      "source": [
        "# to_remove solution\n",
        "\n",
        "def train_one_epoch_with_replay(model: nn.Module,\n",
        "                                pokemon_train_loader: torch.utils.data.DataLoader,\n",
        "                                imagenette_train_loader: torch.utils.data.DataLoader,\n",
        "                                pokemon_classifier_head: nn.Module,\n",
        "                                imagenette_classifier_head: nn.Module,\n",
        "                                pokemon_optimizer: torch.optim.Optimizer,\n",
        "                                imagenette_optimizer: torch.optim.Optimizer\n",
        "                                ):\n",
        "    \"\"\"\n",
        "    Write the code required to train for one epoch on both the pokemon and imagenette datasets.\n",
        "    This is called \"replay\" because you continue training on old data. In practice replay would be\n",
        "    done less often than every epoch, but alternating epochs of pokemon and imagenette training\n",
        "    works for our purposes.\n",
        "\n",
        "    Tips: The `train_one_epoch_pokemon` and `train_one_epoch_imagenette` functions will be useful \n",
        "          here.\n",
        "          Don't forget to use `model.fc = pokemon_classifier_head` or `imagenette_classifier_head`\n",
        "          before training on their respective datasets.\n",
        "\n",
        "    Parameters:\n",
        "        model: The model to train\n",
        "        pokemon_train_loader: The DataLoader with pokemon data to be used in training\n",
        "        imagenette_train_loader: The DataLoader with image data to be used in training\n",
        "        pokemon_classifier_head: The fully connected layer used for predicting which pokemon is \n",
        "                                 present in an image\n",
        "        imagenette_classifier_head: The fully connected layer used to predict which imagenette\n",
        "                                    class is present in an image\n",
        "        pokemon_optimizer: The optimizer used to update the weights of the resnet and pokemon \n",
        "                           classifier head\n",
        "        imagenette_optimizer: The optimizer used to update the weights of the resnet and imagenette\n",
        "                              classifier head\n",
        "    \n",
        "    Returns:\n",
        "        None\n",
        "    \"\"\"\n",
        "\n",
        "    model.fc = pokemon_classifier_head\n",
        "    model.train()\n",
        "\n",
        "    train_one_epoch_pokemon(model, pokemon_train_loader, pokemon_optimizer, loss_fn)\n",
        "\n",
        "    # Set model to use the imagenette classifier head\n",
        "    model.fc = imagenette_classifier_head\n",
        "    model.train()\n",
        "\n",
        "    train_one_epoch_imagenette(model, imagenette_train_loader, imagenette_optimizer, loss_fn)\n",
        "\n",
        "\n",
        "### Test the function\n",
        "top_1_accs = []\n",
        "top_5_accs = []\n",
        "pokemon_accs = []\n",
        "\n",
        "for epoch in range(5):\n",
        "    # Set model to use the pokemon classification output layer\n",
        "    \n",
        "    train_one_epoch_with_replay(resnet, pokemon_train_loader, imagenette_train_loader, \n",
        "                                pokemon_classifier_head, imagenette_classifier_head, \n",
        "                                pokemon_optimizer, imagenette_optimizer)\n",
        "\n",
        "    resnet.eval()\n",
        "    top_1, top_5 = eval_imagenette(resnet, imagenette_val_loader, len(imagenette_val))\n",
        "    top_1_accs.append(top_1.item())\n",
        "    top_5_accs.append(top_5.item())\n",
        "\n",
        "    # Evaluate model on pokemon \n",
        "    resnet.eval()\n",
        "    resnet.fc = pokemon_classifier_head\n",
        "    pokemon_acc = eval_pokemon(resnet, pokemon_test_loader, len(pokemon_test_set))\n",
        "    pokemon_accs.append(pokemon_acc)\n",
        "    \n",
        "    plt.plot(top_1_accs, label='Top-1 Accuracy')\n",
        "    plt.plot(top_5_accs, label='Top-5 Accuracy')\n",
        "    plt.plot(pokemon_accs, label='Pokemon Prediction Accuracy')\n",
        "    plt.xlabel('epoch')\n",
        "    plt.ylabel('accuracy')\n",
        "    plt.legend()\n",
        "    plt.title('Preventing Catastrophic Forgetting')\n",
        "    display.clear_output(wait=True)\n",
        "    display.display(plt.gcf())\n",
        "    plt.clf()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ENDv6kJNvfUt"
      },
      "source": [
        "# Response Form"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "VySnMd0POirw",
        "cellView": "form"
      },
      "source": [
        "import time\n",
        "import numpy as np\n",
        "import urllib.parse\n",
        "from IPython.display import IFrame\n",
        "\n",
        "t7 = time.time()\n",
        "\n",
        "#@markdown #Run Cell to Show Airtable Form\n",
        "#@markdown ##**Confirm your answers and then click \"Submit\"**\n",
        "\n",
        "\n",
        "def prefill_form(src, fields: dict):\n",
        "  '''\n",
        "  src: the original src url to embed the form\n",
        "  fields: a dictionary of field:value pairs,\n",
        "  e.g. {\"pennkey\": my_pennkey, \"location\": my_location}\n",
        "  '''\n",
        "  prefill_fields = {}\n",
        "  for key in fields:\n",
        "      new_key = 'prefill_' + key\n",
        "      prefill_fields[new_key] = fields[key]\n",
        "  prefills = urllib.parse.urlencode(prefill_fields)\n",
        "  src = src + prefills\n",
        "  return src\n",
        "\n",
        "\n",
        "#autofill time if it is not present\n",
        "try: t0;\n",
        "except NameError: t0 = time.time()\n",
        "try: t1;\n",
        "except NameError: t1 = time.time()\n",
        "try: t2;\n",
        "except NameError: t2 = time.time()\n",
        "try: t3;\n",
        "except NameError: t3 = time.time()\n",
        "try: t4;\n",
        "except NameError: t4 = time.time()\n",
        "try: t5;\n",
        "except NameError: t5 = time.time()\n",
        "try: t6;\n",
        "except NameError: t6 = time.time()\n",
        "try: t7;\n",
        "except NameError: t7 = time.time()\n",
        "\n",
        "#autofill fields if they are not present\n",
        "#a missing pennkey and pod will result in an Airtable warning\n",
        "#which is easily fixed user-side.\n",
        "try: my_pennkey;\n",
        "except NameError: my_pennkey = \"\"\n",
        "\n",
        "try: my_pod;\n",
        "except NameError: my_pod = \"Select\"\n",
        "\n",
        "try: parameter_efficiency;\n",
        "except NameError: parameter_efficiency = \"\"\n",
        "\n",
        "try: filter_response;\n",
        "except NameError: filter_response = \"\"\n",
        "\n",
        "try: filter_roles;\n",
        "except NameError: filter_roles = \"\"\n",
        "\n",
        "try: layer_importance;\n",
        "except NameError: layer_importance = \"\"\n",
        "\n",
        "try: when_pretraining_works;\n",
        "except NameError: when_pretraining_works = \"\"\n",
        "\n",
        "try: model_transfer;\n",
        "except NameError: model_transfer = \"\"\n",
        "\n",
        "\n",
        "times = np.array([t1,t2,t3,t4,t5,t6,t7])-t0\n",
        "\n",
        "fields = {\"pennkey\": my_pennkey,\n",
        "          \"pod\": my_pod,\n",
        "          \"parameter_efficiency\":parameter_efficiency,\n",
        "          \"filter_response\": filter_response,\n",
        "          \"filter_roles\":filter_roles,\n",
        "          \"layer_importance\": layer_importance,\n",
        "          \"when_pretraining_works\": when_pretraining_works,\n",
        "          \"model_transfer\": model_transfer,\n",
        "          \"cumulative_times\": times}\n",
        "\n",
        "src = \"https://airtable.com/embed/shrG83qaDeu4ugzUT?\"\n",
        "\n",
        "\n",
        "#now instead of the original source url, we do: src = prefill_form(src, fields)\n",
        "display.display(IFrame(src = prefill_form(src, fields), width = 800, height = 400))"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "HKn5d3CCC05w"
      },
      "source": [
        "## Feedback\n",
        "How could this session have been better? How happy are you in your group? How do you feel right now?\n",
        "\n",
        "Feel free to use the embedded form below or use this link:\n",
        "<a target=\"_blank\" rel=\"noopener noreferrer\" href=\"https://airtable.com/shrNSJ5ECXhNhsYss\">https://airtable.com/shrNSJ5ECXhNhsYss</a>"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "HIvhG6VZ8zez"
      },
      "source": [
        "display.display(IFrame(src=\"https://airtable.com/embed/shrNSJ5ECXhNhsYss?backgroundColor=red\", width = 800, height = 400))"
      ],
      "execution_count": null,
      "outputs": []
    }
  ]
}
