{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "name": "convnet-lenet5-mnist.ipynb",
      "provenance": [],
      "collapsed_sections": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.7.3"
    },
    "toc": {
      "nav_menu": {},
      "number_sections": true,
      "sideBar": true,
      "skip_h1_title": false,
      "title_cell": "Table of Contents",
      "title_sidebar": "Contents",
      "toc_cell": true,
      "toc_position": {
        "height": "calc(100% - 180px)",
        "left": "10px",
        "top": "150px",
        "width": "371px"
      },
      "toc_section_display": true,
      "toc_window_display": true
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "UEBilEjLj5wY"
      },
      "source": [
        "Deep Learning Models -- A collection of various deep learning architectures, models, and tips for TensorFlow and PyTorch in Jupyter Notebooks.\n",
        "- Author: Sebastian Raschka\n",
        "- GitHub Repository: https://github.com/rasbt/deeplearning-models"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "vD5Yd8QdbjLu",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "!pip install -q IPython\n",
        "!pip install -q ipykernel\n",
        "!pip install -q watermark\n",
        "!pip install -q matplotlib\n",
        "!pip install -q scikit-learn\n",
        "!pip install -q pandas\n",
        "!pip install -q pydot\n",
        "!pip install -q hiddenlayer\n",
        "!pip install -q graphviz"
      ],
      "execution_count": 8,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab_type": "code",
        "id": "GOzuY8Yvj5wb",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 155
        },
        "outputId": "d2b1a65d-d2ed-45b3-88a7-f0a8da497364"
      },
      "source": [
        "%load_ext watermark\n",
        "%watermark -a 'Sebastian Raschka' -v -p torch"
      ],
      "execution_count": 9,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "The watermark extension is already loaded. To reload it, use:\n",
            "  %reload_ext watermark\n",
            "Sebastian Raschka \n",
            "\n",
            "CPython 3.6.9\n",
            "IPython 5.5.0\n",
            "\n",
            "torch 1.5.1+cu101\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "rH4XmErYj5wm"
      },
      "source": [
        "# LeNet-5 MNIST Digits Classifier"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "KZ13j6-wbgap",
        "colab_type": "text"
      },
      "source": [
        "This notebook implements the classic LeNet-5 convolutional network [1] and applies it to MNIST digit classification. The basic architecture is shown in the figure below:\n",
        "\n",
        "![](https://github.com/DeepSE/deeplearning-models/blob/master/pytorch_ipynb/images/lenet/lenet-5_1.jpg?raw=1)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "W56oRydEbgaq",
        "colab_type": "text"
      },
      "source": [
        "\n",
        "\n",
        "LeNet-5 is commonly regarded as the pioneer of convolutional neural networks. Its architecture is very simple by modern standards: in total, LeNet-5 consists of only 7 layers. 3 of these 7 layers are convolutional layers (C1, C3, C5), which are connected by two average pooling layers (S2 & S4). The penultimate layer is a fully connected layer (F6), which is followed by the final output layer. Additional details are summarized below:\n",
        "\n",
        "- All convolutional layers use 5x5 kernels with stride 1.\n",
        "- The two average pooling (subsampling) layers are 2x2 pixels wide with stride 2. (**In this notebook, we use max pooling instead**)\n",
        "- Throughout the network, tanh activation functions are used.\n",
        "- The original output layer consists of 10 Euclidean radial basis function (RBF) units. (**In this notebook, we replace these with a fully connected layer followed by softmax**)\n",
        "- The input size is 32x32; here, we rescale the MNIST images from 28x28 to 32x32 to match this input dimension. Alternatively, we would have to adjust the architecture accordingly.\n",
        "\n",
        "LeNet-5 achieved an error rate below 1% on the MNIST data set, which was very close to the state of the art at the time (held by a boosted ensemble of three LeNet-4 networks).\n",
        "\n",
        "\n",
        "### References\n",
        "\n",
        "- [1] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, November 1998."
      ]
    },
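    {
      "cell_type": "code",
      "metadata": {
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Sanity check of the layer-size arithmetic described above\n",
        "# (a small illustrative sketch, not part of the original notebook):\n",
        "# a 5x5 convolution with stride 1 shrinks each side by 4, and\n",
        "# 2x2 pooling with stride 2 halves it.\n",
        "size = 32\n",
        "size = size - 5 + 1  # C1: 5x5 conv, stride 1 -> 28\n",
        "size = size // 2     # S2: 2x2 pooling       -> 14\n",
        "size = size - 5 + 1  # C3: 5x5 conv, stride 1 -> 10\n",
        "size = size // 2     # S4: 2x2 pooling       -> 5\n",
        "print(size, 16 * size * size)  # 5 400 (flattened size fed to the classifier)"
      ],
      "execution_count": null,
      "outputs": []
    },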
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "MkoGLH_Tj5wn"
      },
      "source": [
        "## Imports"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab_type": "code",
        "id": "ORj09gnrj5wp",
        "colab": {}
      },
      "source": [
        "import os\n",
        "import time\n",
        "\n",
        "import numpy as np\n",
        "import pandas as pd\n",
        "\n",
        "import torch\n",
        "import torch.nn as nn\n",
        "import torch.nn.functional as F\n",
        "from torch.utils.data import DataLoader\n",
        "\n",
        "from torchvision import datasets\n",
        "from torchvision import transforms\n",
        "\n",
        "import matplotlib.pyplot as plt\n",
        "from PIL import Image\n",
        "\n",
        "\n",
        "if torch.cuda.is_available():\n",
        "    torch.backends.cudnn.deterministic = True"
      ],
      "execution_count": 10,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "I6hghKPxj5w0"
      },
      "source": [
        "## Model Settings"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab_type": "code",
        "id": "NnT0sZIwj5wu",
        "colab": {}
      },
      "source": [
        "##########################\n",
        "### SETTINGS\n",
        "##########################\n",
        "\n",
        "# Hyperparameters\n",
        "RANDOM_SEED = 1\n",
        "LEARNING_RATE = 0.001\n",
        "BATCH_SIZE = 128\n",
        "NUM_EPOCHS = 10\n",
        "\n",
        "# Architecture\n",
        "NUM_FEATURES = 32*32\n",
        "NUM_CLASSES = 10\n",
        "\n",
        "# Other\n",
        "DEVICE = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
        "GRAYSCALE = True"
      ],
      "execution_count": 11,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "nqGHxsLYbgax",
        "colab_type": "text"
      },
      "source": [
        "### MNIST Dataset"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "GrXDcKY_bgax",
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 52
        },
        "outputId": "0550f902-18ce-4afa-9017-535593f837b1"
      },
      "source": [
        "##########################\n",
        "### MNIST DATASET\n",
        "##########################\n",
        "\n",
        "resize_transform = transforms.Compose([transforms.Resize((32, 32)),\n",
        "                                       transforms.ToTensor()])\n",
        "\n",
        "# Note transforms.ToTensor() scales input images\n",
        "# to 0-1 range\n",
        "train_dataset = datasets.MNIST(root='data', \n",
        "                               train=True, \n",
        "                               transform=resize_transform,\n",
        "                               download=True)\n",
        "\n",
        "test_dataset = datasets.MNIST(root='data', \n",
        "                              train=False, \n",
        "                              transform=resize_transform)\n",
        "\n",
        "\n",
        "train_loader = DataLoader(dataset=train_dataset, \n",
        "                          batch_size=BATCH_SIZE, \n",
        "                          shuffle=True)\n",
        "\n",
        "test_loader = DataLoader(dataset=test_dataset, \n",
        "                         batch_size=BATCH_SIZE, \n",
        "                         shuffle=False)\n",
        "\n",
        "# Checking the dataset\n",
        "for images, labels in train_loader:  \n",
        "    print('Image batch dimensions:', images.shape)\n",
        "    print('Image label dimensions:', labels.shape)\n",
        "    break"
      ],
      "execution_count": 12,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Image batch dimensions: torch.Size([128, 1, 32, 32])\n",
            "Image label dimensions: torch.Size([128])\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "0I2z-IQObga0",
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 52
        },
        "outputId": "34983c6b-134f-4870-9dd1-67f462baea89"
      },
      "source": [
        "device = torch.device(DEVICE)\n",
        "torch.manual_seed(RANDOM_SEED)\n",
        "\n",
        "for epoch in range(2):\n",
        "\n",
        "    for batch_idx, (x, y) in enumerate(train_loader):\n",
        "        \n",
        "        print('Epoch:', epoch+1, end='')\n",
        "        print(' | Batch index:', batch_idx, end='')\n",
        "        print(' | Batch size:', y.size()[0])\n",
        "        \n",
        "        x = x.to(device)\n",
        "        y = y.to(device)\n",
        "        break"
      ],
      "execution_count": 13,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Epoch: 1 | Batch index: 0 | Batch size: 128\n",
            "Epoch: 2 | Batch index: 0 | Batch size: 128\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "zPD2Colubga2",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "##########################\n",
        "### MODEL\n",
        "##########################\n",
        "\n",
        "\n",
        "class LeNet5(nn.Module):\n",
        "\n",
        "    def __init__(self, num_classes, grayscale=False):\n",
        "        super(LeNet5, self).__init__()\n",
        "        \n",
        "        self.grayscale = grayscale\n",
        "        self.num_classes = num_classes\n",
        "\n",
        "        if self.grayscale:\n",
        "            in_channels = 1\n",
        "        else:\n",
        "            in_channels = 3\n",
        "\n",
        "        self.features = nn.Sequential(\n",
        "            \n",
        "            nn.Conv2d(in_channels, 6, kernel_size=5),\n",
        "            nn.Tanh(),\n",
        "            nn.MaxPool2d(kernel_size=2),\n",
        "            nn.Conv2d(6, 16, kernel_size=5),\n",
        "            nn.Tanh(),\n",
        "            nn.MaxPool2d(kernel_size=2)\n",
        "        )\n",
        "\n",
        "        self.classifier = nn.Sequential(\n",
        "            nn.Linear(16*5*5, 120),\n",
        "            nn.Tanh(),\n",
        "            nn.Linear(120, 84),\n",
        "            nn.Tanh(),\n",
        "            nn.Linear(84, num_classes),\n",
        "        )\n",
        "\n",
        "\n",
        "    def forward(self, x):\n",
        "        x = self.features(x)\n",
        "        x = torch.flatten(x, 1)\n",
        "        logits = self.classifier(x)\n",
        "        probas = F.softmax(logits, dim=1)\n",
        "        return logits, probas"
      ],
      "execution_count": 14,
      "outputs": []
    },
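    {
      "cell_type": "code",
      "metadata": {
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Optional shape check (an illustrative sketch, not part of the\n",
        "# original notebook): push a dummy batch through an untrained\n",
        "# LeNet5 instance and confirm the output dimensions.\n",
        "_tmp_model = LeNet5(NUM_CLASSES, grayscale=True)\n",
        "_logits, _probas = _tmp_model(torch.zeros(4, 1, 32, 32))\n",
        "print(_logits.shape)        # torch.Size([4, 10])\n",
        "print(_probas.sum(dim=1))   # each softmax row sums to 1"
      ],
      "execution_count": null,
      "outputs": []
    },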
    {
      "cell_type": "code",
      "metadata": {
        "colab_type": "code",
        "id": "_lza9t_uj5w1",
        "colab": {}
      },
      "source": [
        "torch.manual_seed(RANDOM_SEED)\n",
        "\n",
        "model = LeNet5(NUM_CLASSES, GRAYSCALE)\n",
        "model.to(DEVICE)\n",
        "\n",
        "optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)  "
      ],
      "execution_count": 15,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "SsCKIE-3bssZ",
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 185
        },
        "outputId": "33cab188-b78e-4979-c524-bc34d80c9ace"
      },
      "source": [
        "import hiddenlayer as hl\n",
        "hl.build_graph(model, torch.zeros([128, 1, 32, 32]).to(device))"
      ],
      "execution_count": 16,
      "outputs": [
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "<hiddenlayer.graph.Graph at 0x7fec03390f98>"
            ],
            "image/svg+xml": "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"no\"?>\n<!DOCTYPE svg PUBLIC \"-//W3C//DTD SVG 1.1//EN\"\n \"http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd\">\n<!-- Generated by graphviz version 2.40.1 (20161225.0304)\n -->\n<!-- Title: %3 Pages: 1 -->\n<svg width=\"1874pt\" height=\"108pt\"\n viewBox=\"0.00 0.00 1874.00 108.00\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">\n<g id=\"graph0\" class=\"graph\" transform=\"scale(1 1) rotate(0) translate(72 72)\">\n<title>%3</title>\n<polygon fill=\"#ffffff\" stroke=\"transparent\" points=\"-72,36 -72,-72 1802,-72 1802,36 -72,36\"/>\n<!-- /outputs/11 -->\n<g id=\"node1\" class=\"node\">\n<title>/outputs/11</title>\n<polygon fill=\"#e8e8e8\" stroke=\"#000000\" points=\"55,-36 0,-36 0,0 55,0 55,-36\"/>\n<text text-anchor=\"start\" x=\"8.5\" y=\"-15\" font-family=\"Times\" font-size=\"10.00\" fill=\"#000000\">Conv5x5</text>\n</g>\n<!-- /outputs/12 -->\n<g id=\"node2\" class=\"node\">\n<title>/outputs/12</title>\n<polygon fill=\"#e8e8e8\" stroke=\"#000000\" points=\"203,-36 149,-36 149,0 203,0 203,-36\"/>\n<text text-anchor=\"start\" x=\"166\" y=\"-15\" font-family=\"Times\" font-size=\"10.00\" fill=\"#000000\">Tanh</text>\n</g>\n<!-- /outputs/11&#45;&gt;/outputs/12 -->\n<g id=\"edge1\" class=\"edge\">\n<title>/outputs/11&#45;&gt;/outputs/12</title>\n<path fill=\"none\" stroke=\"#000000\" d=\"M55.0837,-18C78.6004,-18 112.6203,-18 138.5884,-18\"/>\n<polygon fill=\"#000000\" stroke=\"#000000\" points=\"138.8148,-21.5001 148.8148,-18 138.8148,-14.5001 138.8148,-21.5001\"/>\n<text text-anchor=\"middle\" x=\"102\" y=\"-21\" font-family=\"Times\" font-size=\"10.00\" fill=\"#000000\">128x6x28x28</text>\n</g>\n<!-- /outputs/13 -->\n<g id=\"node3\" class=\"node\">\n<title>/outputs/13</title>\n<polygon fill=\"#e8e8e8\" stroke=\"#000000\" points=\"367,-36 297,-36 297,0 367,0 367,-36\"/>\n<text text-anchor=\"start\" x=\"305\" y=\"-15\" font-family=\"Times\" 
font-size=\"10.00\" fill=\"#000000\">MaxPool2x2</text>\n</g>\n<!-- /outputs/12&#45;&gt;/outputs/13 -->\n<g id=\"edge2\" class=\"edge\">\n<title>/outputs/12&#45;&gt;/outputs/13</title>\n<path fill=\"none\" stroke=\"#000000\" d=\"M203.1731,-18C226.2401,-18 259.8114,-18 286.9005,-18\"/>\n<polygon fill=\"#000000\" stroke=\"#000000\" points=\"286.9298,-21.5001 296.9298,-18 286.9298,-14.5001 286.9298,-21.5001\"/>\n<text text-anchor=\"middle\" x=\"250\" y=\"-21\" font-family=\"Times\" font-size=\"10.00\" fill=\"#000000\">128x6x28x28</text>\n</g>\n<!-- /outputs/14 -->\n<g id=\"node4\" class=\"node\">\n<title>/outputs/14</title>\n<polygon fill=\"#e8e8e8\" stroke=\"#000000\" points=\"516,-36 461,-36 461,0 516,0 516,-36\"/>\n<text text-anchor=\"start\" x=\"469.5\" y=\"-15\" font-family=\"Times\" font-size=\"10.00\" fill=\"#000000\">Conv5x5</text>\n</g>\n<!-- /outputs/13&#45;&gt;/outputs/14 -->\n<g id=\"edge3\" class=\"edge\">\n<title>/outputs/13&#45;&gt;/outputs/14</title>\n<path fill=\"none\" stroke=\"#000000\" d=\"M367.1217,-18C391.9926,-18 425.317,-18 450.7558,-18\"/>\n<polygon fill=\"#000000\" stroke=\"#000000\" points=\"450.7787,-21.5001 460.7786,-18 450.7786,-14.5001 450.7787,-21.5001\"/>\n<text text-anchor=\"middle\" x=\"414\" y=\"-21\" font-family=\"Times\" font-size=\"10.00\" fill=\"#000000\">128x6x14x14</text>\n</g>\n<!-- /outputs/15 -->\n<g id=\"node5\" class=\"node\">\n<title>/outputs/15</title>\n<polygon fill=\"#e8e8e8\" stroke=\"#000000\" points=\"670,-36 616,-36 616,0 670,0 670,-36\"/>\n<text text-anchor=\"start\" x=\"633\" y=\"-15\" font-family=\"Times\" font-size=\"10.00\" fill=\"#000000\">Tanh</text>\n</g>\n<!-- /outputs/14&#45;&gt;/outputs/15 -->\n<g id=\"edge4\" class=\"edge\">\n<title>/outputs/14&#45;&gt;/outputs/15</title>\n<path fill=\"none\" stroke=\"#000000\" d=\"M516.1215,-18C541.1348,-18 578.1865,-18 605.8004,-18\"/>\n<polygon fill=\"#000000\" stroke=\"#000000\" points=\"605.8768,-21.5001 615.8768,-18 605.8767,-14.5001 605.8768,-21.5001\"/>\n<text 
text-anchor=\"middle\" x=\"566\" y=\"-21\" font-family=\"Times\" font-size=\"10.00\" fill=\"#000000\">128x16x10x10</text>\n</g>\n<!-- /outputs/16 -->\n<g id=\"node6\" class=\"node\">\n<title>/outputs/16</title>\n<polygon fill=\"#e8e8e8\" stroke=\"#000000\" points=\"840,-36 770,-36 770,0 840,0 840,-36\"/>\n<text text-anchor=\"start\" x=\"778\" y=\"-15\" font-family=\"Times\" font-size=\"10.00\" fill=\"#000000\">MaxPool2x2</text>\n</g>\n<!-- /outputs/15&#45;&gt;/outputs/16 -->\n<g id=\"edge5\" class=\"edge\">\n<title>/outputs/15&#45;&gt;/outputs/16</title>\n<path fill=\"none\" stroke=\"#000000\" d=\"M670.1152,-18C694.5025,-18 730.8163,-18 759.6234,-18\"/>\n<polygon fill=\"#000000\" stroke=\"#000000\" points=\"759.8609,-21.5001 769.8609,-18 759.8609,-14.5001 759.8609,-21.5001\"/>\n<text text-anchor=\"middle\" x=\"720\" y=\"-21\" font-family=\"Times\" font-size=\"10.00\" fill=\"#000000\">128x16x10x10</text>\n</g>\n<!-- /outputs/17 -->\n<g id=\"node7\" class=\"node\">\n<title>/outputs/17</title>\n<polygon fill=\"#e8e8e8\" stroke=\"#000000\" points=\"983,-36 929,-36 929,0 983,0 983,-36\"/>\n<text text-anchor=\"start\" x=\"942\" y=\"-15\" font-family=\"Times\" font-size=\"10.00\" fill=\"#000000\">Flatten</text>\n</g>\n<!-- /outputs/16&#45;&gt;/outputs/17 -->\n<g id=\"edge6\" class=\"edge\">\n<title>/outputs/16&#45;&gt;/outputs/17</title>\n<path fill=\"none\" stroke=\"#000000\" d=\"M840.022,-18C863.5486,-18 894.5202,-18 918.5391,-18\"/>\n<polygon fill=\"#000000\" stroke=\"#000000\" points=\"918.6988,-21.5001 928.6988,-18 918.6987,-14.5001 918.6988,-21.5001\"/>\n<text text-anchor=\"middle\" x=\"884.5\" y=\"-21\" font-family=\"Times\" font-size=\"10.00\" fill=\"#000000\">128x16x5x5</text>\n</g>\n<!-- /outputs/18 -->\n<g id=\"node8\" class=\"node\">\n<title>/outputs/18</title>\n<polygon fill=\"#e8e8e8\" stroke=\"#000000\" points=\"1110,-36 1056,-36 1056,0 1110,0 1110,-36\"/>\n<text text-anchor=\"start\" x=\"1070\" y=\"-15\" font-family=\"Times\" font-size=\"10.00\" 
fill=\"#000000\">Linear</text>\n</g>\n<!-- /outputs/17&#45;&gt;/outputs/18 -->\n<g id=\"edge7\" class=\"edge\">\n<title>/outputs/17&#45;&gt;/outputs/18</title>\n<path fill=\"none\" stroke=\"#000000\" d=\"M983.2447,-18C1001.492,-18 1025.746,-18 1045.7693,-18\"/>\n<polygon fill=\"#000000\" stroke=\"#000000\" points=\"1045.8004,-21.5001 1055.8004,-18 1045.8003,-14.5001 1045.8004,-21.5001\"/>\n<text text-anchor=\"middle\" x=\"1019.5\" y=\"-21\" font-family=\"Times\" font-size=\"10.00\" fill=\"#000000\">128x400</text>\n</g>\n<!-- /outputs/19 -->\n<g id=\"node9\" class=\"node\">\n<title>/outputs/19</title>\n<polygon fill=\"#e8e8e8\" stroke=\"#000000\" points=\"1237,-36 1183,-36 1183,0 1237,0 1237,-36\"/>\n<text text-anchor=\"start\" x=\"1200\" y=\"-15\" font-family=\"Times\" font-size=\"10.00\" fill=\"#000000\">Tanh</text>\n</g>\n<!-- /outputs/18&#45;&gt;/outputs/19 -->\n<g id=\"edge8\" class=\"edge\">\n<title>/outputs/18&#45;&gt;/outputs/19</title>\n<path fill=\"none\" stroke=\"#000000\" d=\"M1110.2447,-18C1128.492,-18 1152.746,-18 1172.7693,-18\"/>\n<polygon fill=\"#000000\" stroke=\"#000000\" points=\"1172.8004,-21.5001 1182.8004,-18 1172.8003,-14.5001 1172.8004,-21.5001\"/>\n<text text-anchor=\"middle\" x=\"1146.5\" y=\"-21\" font-family=\"Times\" font-size=\"10.00\" fill=\"#000000\">128x120</text>\n</g>\n<!-- /outputs/20 -->\n<g id=\"node10\" class=\"node\">\n<title>/outputs/20</title>\n<polygon fill=\"#e8e8e8\" stroke=\"#000000\" points=\"1364,-36 1310,-36 1310,0 1364,0 1364,-36\"/>\n<text text-anchor=\"start\" x=\"1324\" y=\"-15\" font-family=\"Times\" font-size=\"10.00\" fill=\"#000000\">Linear</text>\n</g>\n<!-- /outputs/19&#45;&gt;/outputs/20 -->\n<g id=\"edge9\" class=\"edge\">\n<title>/outputs/19&#45;&gt;/outputs/20</title>\n<path fill=\"none\" stroke=\"#000000\" d=\"M1237.2447,-18C1255.492,-18 1279.746,-18 1299.7693,-18\"/>\n<polygon fill=\"#000000\" stroke=\"#000000\" points=\"1299.8004,-21.5001 1309.8004,-18 1299.8003,-14.5001 1299.8004,-21.5001\"/>\n<text 
text-anchor=\"middle\" x=\"1273.5\" y=\"-21\" font-family=\"Times\" font-size=\"10.00\" fill=\"#000000\">128x120</text>\n</g>\n<!-- /outputs/21 -->\n<g id=\"node11\" class=\"node\">\n<title>/outputs/21</title>\n<polygon fill=\"#e8e8e8\" stroke=\"#000000\" points=\"1486,-36 1432,-36 1432,0 1486,0 1486,-36\"/>\n<text text-anchor=\"start\" x=\"1449\" y=\"-15\" font-family=\"Times\" font-size=\"10.00\" fill=\"#000000\">Tanh</text>\n</g>\n<!-- /outputs/20&#45;&gt;/outputs/21 -->\n<g id=\"edge10\" class=\"edge\">\n<title>/outputs/20&#45;&gt;/outputs/21</title>\n<path fill=\"none\" stroke=\"#000000\" d=\"M1364.0758,-18C1381.0553,-18 1403.1767,-18 1421.7924,-18\"/>\n<polygon fill=\"#000000\" stroke=\"#000000\" points=\"1422,-21.5001 1431.9999,-18 1421.9999,-14.5001 1422,-21.5001\"/>\n<text text-anchor=\"middle\" x=\"1398\" y=\"-21\" font-family=\"Times\" font-size=\"10.00\" fill=\"#000000\">128x84</text>\n</g>\n<!-- /outputs/22 -->\n<g id=\"node12\" class=\"node\">\n<title>/outputs/22</title>\n<polygon fill=\"#e8e8e8\" stroke=\"#000000\" points=\"1608,-36 1554,-36 1554,0 1608,0 1608,-36\"/>\n<text text-anchor=\"start\" x=\"1568\" y=\"-15\" font-family=\"Times\" font-size=\"10.00\" fill=\"#000000\">Linear</text>\n</g>\n<!-- /outputs/21&#45;&gt;/outputs/22 -->\n<g id=\"edge11\" class=\"edge\">\n<title>/outputs/21&#45;&gt;/outputs/22</title>\n<path fill=\"none\" stroke=\"#000000\" d=\"M1486.0758,-18C1503.0553,-18 1525.1767,-18 1543.7924,-18\"/>\n<polygon fill=\"#000000\" stroke=\"#000000\" points=\"1544,-21.5001 1553.9999,-18 1543.9999,-14.5001 1544,-21.5001\"/>\n<text text-anchor=\"middle\" x=\"1520\" y=\"-21\" font-family=\"Times\" font-size=\"10.00\" fill=\"#000000\">128x84</text>\n</g>\n<!-- /outputs/23 -->\n<g id=\"node13\" class=\"node\">\n<title>/outputs/23</title>\n<polygon fill=\"#e8e8e8\" stroke=\"#000000\" points=\"1730,-36 1676,-36 1676,0 1730,0 1730,-36\"/>\n<text text-anchor=\"start\" x=\"1686\" y=\"-15\" font-family=\"Times\" font-size=\"10.00\" 
fill=\"#000000\">Softmax</text>\n</g>\n<!-- /outputs/22&#45;&gt;/outputs/23 -->\n<g id=\"edge12\" class=\"edge\">\n<title>/outputs/22&#45;&gt;/outputs/23</title>\n<path fill=\"none\" stroke=\"#000000\" d=\"M1608.0758,-18C1625.0553,-18 1647.1767,-18 1665.7924,-18\"/>\n<polygon fill=\"#000000\" stroke=\"#000000\" points=\"1666,-21.5001 1675.9999,-18 1665.9999,-14.5001 1666,-21.5001\"/>\n<text text-anchor=\"middle\" x=\"1642\" y=\"-21\" font-family=\"Times\" font-size=\"10.00\" fill=\"#000000\">128x10</text>\n</g>\n</g>\n</svg>\n"
          },
          "metadata": {
            "tags": []
          },
          "execution_count": 16
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "RAodboScj5w6"
      },
      "source": [
        "## Training"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab_type": "code",
        "id": "Dzh3ROmRj5w7",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 1000
        },
        "outputId": "325158bc-5f09-4fd9-99e7-7c867cfa976d"
      },
      "source": [
        "def compute_accuracy(model, data_loader, device):\n",
        "    correct_pred, num_examples = 0, 0\n",
        "    for i, (features, targets) in enumerate(data_loader):\n",
        "            \n",
        "        features = features.to(device)\n",
        "        targets = targets.to(device)\n",
        "\n",
        "        logits, probas = model(features)\n",
        "        _, predicted_labels = torch.max(probas, 1)\n",
        "        num_examples += targets.size(0)\n",
        "        correct_pred += (predicted_labels == targets).sum()\n",
        "    return correct_pred.float()/num_examples * 100\n",
        "    \n",
        "\n",
        "start_time = time.time()\n",
        "for epoch in range(NUM_EPOCHS):\n",
        "    \n",
        "    model.train()\n",
        "    for batch_idx, (features, targets) in enumerate(train_loader):\n",
        "        \n",
        "        features = features.to(DEVICE)\n",
        "        targets = targets.to(DEVICE)\n",
        "            \n",
        "        ### FORWARD AND BACK PROP\n",
        "        logits, probas = model(features)\n",
        "        cost = F.cross_entropy(logits, targets)\n",
        "        optimizer.zero_grad()\n",
        "        \n",
        "        cost.backward()\n",
        "        \n",
        "        ### UPDATE MODEL PARAMETERS\n",
        "        optimizer.step()\n",
        "        \n",
        "        ### LOGGING\n",
        "        if not batch_idx % 50:\n",
        "            print ('Epoch: %03d/%03d | Batch %04d/%04d | Cost: %.4f' \n",
        "                   %(epoch+1, NUM_EPOCHS, batch_idx, \n",
        "                     len(train_loader), cost))\n",
        "\n",
        "        \n",
        "\n",
        "    model.eval()\n",
        "    with torch.set_grad_enabled(False): # save memory during inference\n",
        "        print('Epoch: %03d/%03d | Train: %.3f%%' % (\n",
        "              epoch+1, NUM_EPOCHS, \n",
        "              compute_accuracy(model, train_loader, device=DEVICE)))\n",
        "        \n",
        "    print('Time elapsed: %.2f min' % ((time.time() - start_time)/60))\n",
        "    \n",
        "print('Total Training Time: %.2f min' % ((time.time() - start_time)/60))"
      ],
      "execution_count": 17,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Epoch: 001/010 | Batch 0000/0469 | Cost: 2.3055\n",
            "Epoch: 001/010 | Batch 0050/0469 | Cost: 0.5465\n",
            "Epoch: 001/010 | Batch 0100/0469 | Cost: 0.3708\n",
            "Epoch: 001/010 | Batch 0150/0469 | Cost: 0.3408\n",
            "Epoch: 001/010 | Batch 0200/0469 | Cost: 0.1298\n",
            "Epoch: 001/010 | Batch 0250/0469 | Cost: 0.1857\n",
            "Epoch: 001/010 | Batch 0300/0469 | Cost: 0.0940\n",
            "Epoch: 001/010 | Batch 0350/0469 | Cost: 0.1851\n",
            "Epoch: 001/010 | Batch 0400/0469 | Cost: 0.1424\n",
            "Epoch: 001/010 | Batch 0450/0469 | Cost: 0.0622\n",
            "Epoch: 001/010 | Train: 96.660%\n",
            "Time elapsed: 0.19 min\n",
            "Epoch: 002/010 | Batch 0000/0469 | Cost: 0.0661\n",
            "Epoch: 002/010 | Batch 0050/0469 | Cost: 0.1021\n",
            "Epoch: 002/010 | Batch 0100/0469 | Cost: 0.0811\n",
            "Epoch: 002/010 | Batch 0150/0469 | Cost: 0.1708\n",
            "Epoch: 002/010 | Batch 0200/0469 | Cost: 0.0639\n",
            "Epoch: 002/010 | Batch 0250/0469 | Cost: 0.0768\n",
            "Epoch: 002/010 | Batch 0300/0469 | Cost: 0.0424\n",
            "Epoch: 002/010 | Batch 0350/0469 | Cost: 0.0944\n",
            "Epoch: 002/010 | Batch 0400/0469 | Cost: 0.0303\n",
            "Epoch: 002/010 | Batch 0450/0469 | Cost: 0.0687\n",
            "Epoch: 002/010 | Train: 98.228%\n",
            "Time elapsed: 0.37 min\n",
            "Epoch: 003/010 | Batch 0000/0469 | Cost: 0.0859\n",
            "Epoch: 003/010 | Batch 0050/0469 | Cost: 0.0320\n",
            "Epoch: 003/010 | Batch 0100/0469 | Cost: 0.0312\n",
            "Epoch: 003/010 | Batch 0150/0469 | Cost: 0.0588\n",
            "Epoch: 003/010 | Batch 0200/0469 | Cost: 0.0503\n",
            "Epoch: 003/010 | Batch 0250/0469 | Cost: 0.0484\n",
            "Epoch: 003/010 | Batch 0300/0469 | Cost: 0.0493\n",
            "Epoch: 003/010 | Batch 0350/0469 | Cost: 0.1142\n",
            "Epoch: 003/010 | Batch 0400/0469 | Cost: 0.0166\n",
            "Epoch: 003/010 | Batch 0450/0469 | Cost: 0.0300\n",
            "Epoch: 003/010 | Train: 98.742%\n",
            "Time elapsed: 0.56 min\n",
            "Epoch: 004/010 | Batch 0000/0469 | Cost: 0.1137\n",
            "Epoch: 004/010 | Batch 0050/0469 | Cost: 0.0244\n",
            "Epoch: 004/010 | Batch 0100/0469 | Cost: 0.0169\n",
            "Epoch: 004/010 | Batch 0150/0469 | Cost: 0.0103\n",
            "Epoch: 004/010 | Batch 0200/0469 | Cost: 0.0486\n",
            "Epoch: 004/010 | Batch 0250/0469 | Cost: 0.0435\n",
            "Epoch: 004/010 | Batch 0300/0469 | Cost: 0.0154\n",
            "Epoch: 004/010 | Batch 0350/0469 | Cost: 0.0602\n",
            "Epoch: 004/010 | Batch 0400/0469 | Cost: 0.0333\n",
            "Epoch: 004/010 | Batch 0450/0469 | Cost: 0.1404\n",
            "Epoch: 004/010 | Train: 98.900%\n",
            "Time elapsed: 0.75 min\n",
            "Epoch: 005/010 | Batch 0000/0469 | Cost: 0.0128\n",
            "Epoch: 005/010 | Batch 0050/0469 | Cost: 0.0533\n",
            "Epoch: 005/010 | Batch 0100/0469 | Cost: 0.0491\n",
            "Epoch: 005/010 | Batch 0150/0469 | Cost: 0.0052\n",
            "Epoch: 005/010 | Batch 0200/0469 | Cost: 0.0456\n",
            "Epoch: 005/010 | Batch 0250/0469 | Cost: 0.0046\n",
            "Epoch: 005/010 | Batch 0300/0469 | Cost: 0.0513\n",
            "Epoch: 005/010 | Batch 0350/0469 | Cost: 0.0696\n",
            "Epoch: 005/010 | Batch 0400/0469 | Cost: 0.0772\n",
            "Epoch: 005/010 | Batch 0450/0469 | Cost: 0.0186\n",
            "Epoch: 005/010 | Train: 99.383%\n",
            "Time elapsed: 0.93 min\n",
            "Epoch: 006/010 | Batch 0000/0469 | Cost: 0.0357\n",
            "Epoch: 006/010 | Batch 0050/0469 | Cost: 0.0069\n",
            "Epoch: 006/010 | Batch 0100/0469 | Cost: 0.0158\n",
            "Epoch: 006/010 | Batch 0150/0469 | Cost: 0.0710\n",
            "Epoch: 006/010 | Batch 0200/0469 | Cost: 0.0097\n",
            "Epoch: 006/010 | Batch 0250/0469 | Cost: 0.0364\n",
            "Epoch: 006/010 | Batch 0300/0469 | Cost: 0.0043\n",
            "Epoch: 006/010 | Batch 0350/0469 | Cost: 0.0227\n",
            "Epoch: 006/010 | Batch 0400/0469 | Cost: 0.0093\n",
            "Epoch: 006/010 | Batch 0450/0469 | Cost: 0.0924\n",
            "Epoch: 006/010 | Train: 99.445%\n",
            "Time elapsed: 1.12 min\n",
            "Epoch: 007/010 | Batch 0000/0469 | Cost: 0.0091\n",
            "Epoch: 007/010 | Batch 0050/0469 | Cost: 0.0074\n",
            "Epoch: 007/010 | Batch 0100/0469 | Cost: 0.0052\n",
            "Epoch: 007/010 | Batch 0150/0469 | Cost: 0.0106\n",
            "Epoch: 007/010 | Batch 0200/0469 | Cost: 0.0277\n",
            "Epoch: 007/010 | Batch 0250/0469 | Cost: 0.0222\n",
            "Epoch: 007/010 | Batch 0300/0469 | Cost: 0.0284\n",
            "Epoch: 007/010 | Batch 0350/0469 | Cost: 0.0064\n",
            "Epoch: 007/010 | Batch 0400/0469 | Cost: 0.0231\n",
            "Epoch: 007/010 | Batch 0450/0469 | Cost: 0.0030\n",
            "Epoch: 007/010 | Train: 99.568%\n",
            "Time elapsed: 1.31 min\n",
            "Epoch: 008/010 | Batch 0000/0469 | Cost: 0.0032\n",
            "Epoch: 008/010 | Batch 0050/0469 | Cost: 0.0060\n",
            "Epoch: 008/010 | Batch 0100/0469 | Cost: 0.0035\n",
            "Epoch: 008/010 | Batch 0150/0469 | Cost: 0.0079\n",
            "Epoch: 008/010 | Batch 0200/0469 | Cost: 0.0293\n",
            "Epoch: 008/010 | Batch 0250/0469 | Cost: 0.0188\n",
            "Epoch: 008/010 | Batch 0300/0469 | Cost: 0.0048\n",
            "Epoch: 008/010 | Batch 0350/0469 | Cost: 0.0091\n",
            "Epoch: 008/010 | Batch 0400/0469 | Cost: 0.0166\n",
            "Epoch: 008/010 | Batch 0450/0469 | Cost: 0.0083\n",
            "Epoch: 008/010 | Train: 99.750%\n",
            "Time elapsed: 1.50 min\n",
            "Epoch: 009/010 | Batch 0000/0469 | Cost: 0.0017\n",
            "Epoch: 009/010 | Batch 0050/0469 | Cost: 0.0049\n",
            "Epoch: 009/010 | Batch 0100/0469 | Cost: 0.0067\n",
            "Epoch: 009/010 | Batch 0150/0469 | Cost: 0.0008\n",
            "Epoch: 009/010 | Batch 0200/0469 | Cost: 0.0137\n",
            "Epoch: 009/010 | Batch 0250/0469 | Cost: 0.0136\n",
            "Epoch: 009/010 | Batch 0300/0469 | Cost: 0.0031\n",
            "Epoch: 009/010 | Batch 0350/0469 | Cost: 0.0038\n",
            "Epoch: 009/010 | Batch 0400/0469 | Cost: 0.0059\n",
            "Epoch: 009/010 | Batch 0450/0469 | Cost: 0.0009\n",
            "Epoch: 009/010 | Train: 99.782%\n",
            "Time elapsed: 1.69 min\n",
            "Epoch: 010/010 | Batch 0000/0469 | Cost: 0.0047\n",
            "Epoch: 010/010 | Batch 0050/0469 | Cost: 0.0339\n",
            "Epoch: 010/010 | Batch 0100/0469 | Cost: 0.0058\n",
            "Epoch: 010/010 | Batch 0150/0469 | Cost: 0.0185\n",
            "Epoch: 010/010 | Batch 0200/0469 | Cost: 0.0123\n",
            "Epoch: 010/010 | Batch 0250/0469 | Cost: 0.0188\n",
            "Epoch: 010/010 | Batch 0300/0469 | Cost: 0.0094\n",
            "Epoch: 010/010 | Batch 0350/0469 | Cost: 0.0065\n",
            "Epoch: 010/010 | Batch 0400/0469 | Cost: 0.0028\n",
            "Epoch: 010/010 | Batch 0450/0469 | Cost: 0.0095\n",
            "Epoch: 010/010 | Train: 99.850%\n",
            "Time elapsed: 1.87 min\n",
            "Total Training Time: 1.87 min\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "colab_type": "text",
        "id": "paaeEQHQj5xC"
      },
      "source": [
        "## Evaluation"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "colab_type": "code",
        "id": "gzQMWKq5j5xE",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 35
        },
        "outputId": "d29dd2e1-3b5c-47f1-e75a-4cca0b643a4a"
      },
      "source": [
        "with torch.set_grad_enabled(False): # save memory during inference\n",
        "    print('Test accuracy: %.2f%%' % (compute_accuracy(model, test_loader, device=DEVICE)))"
      ],
      "execution_count": 18,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Test accuracy: 98.86%\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Zyk2AnqHbgbB",
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 266
        },
        "outputId": "5ac02079-5293-4c16-fde3-dce63ebc1343"
      },
      "source": [
        "# fetch the first batch from the test loader\n",
        "features, targets = next(iter(test_loader))\n",
        "\n",
        "# convert the first image from CHW to HWC and drop the channel axis for plotting\n",
        "nhwc_img = np.transpose(features[0], axes=(1, 2, 0))\n",
        "nhw_img = np.squeeze(nhwc_img.numpy(), axis=2)\n",
        "plt.imshow(nhw_img, cmap='Greys');"
      ],
      "execution_count": 19,
      "outputs": [
        {
          "output_type": "display_data",
          "data": {
            "image/png": "iVBORw0KGgoAAAANSUhEUgAAAPsAAAD5CAYAAADhukOtAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjIsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+WH4yJAAAQZ0lEQVR4nO3dfYxUZZbH8e+h7VZAAYFe7PCyjYgaM1lBOwQVJ+hklCWTIMlKNMEoMcNkMyarGf/wJUE32Rhns2r4Y+MGFxzcIOqOGojRXV9CQgjGoXGRV1cR0RGxX0QDGhRpzv5Rl0zD1lPdXVW3qrvP75N0uuo5dfueXPj1rbpP33vN3RGR4W9EvRsQkdpQ2EWCUNhFglDYRYJQ2EWCUNhFgjinkoXNbAGwEmgA/t3dHy/1+okTJ3pra2slqxSREg4ePEh3d7cVq5UddjNrAP4V+CXwBbDNzDa6+97UMq2trbS3t5e7ShHpQ1tbW7JWydv4OcB+dz/g7ieAF4BFFfw8EclRJWGfDPy51/MvsjERGYRyP0BnZsvNrN3M2ru6uvJenYgkVBL2Q8DUXs+nZGNncPdV7t7m7m3Nzc0VrE5EKlFJ2LcBM81supk1AbcBG6vTlohUW9lH4939pJndA/w3ham3Ne6+p2qdiUhVVTTP7u6vA69XqRcRyZH+gk4kCIVdJAiFXSQIhV0kCIVdJAiFXSQIhV0kCIVdJAiFXSQIhV0kCIVdJAiFXSQIhV0kCIVdJAiFXSQIhV0kCIVdJAiFXSQIhV0kCIVdJAiFXSQIhV0kCIVdJAiFXSQIhV0kiIruCGNmB4FjQA9w0t3Td4IXkbqqKOyZG9y9uwo/R0RypLfxIkFUGnYH3jSz7Wa2vBoNiUg+Kn0bP8/dD5nZXwFvmdmH7r659wuyXwLLAaZNm1bh6kSkXBXt2d39UPa9E3gVmFPkNavcvc3d25qbmytZnYhUoOywm9loM7vg9GPgJmB3tRoTkeqq5G38JOBVMzv9c5539/+qSlciUnVlh93dDwBXVrEXEcmRpt5EglDYRYJQ2EWCUNhFglDYRYKoxokww4q7J2unTp0qOt7T05NcJpuaHLARI9K/h0v9zFSt3D5k+NCeXSQIhV0kCIVdJAiFXSQIhV0kCB2NP8uPP/6YrO3YsaPo+HPPPZdcZsyYMcna6NGjk7V58+Yla5dddlmyNnbs2AGvS2LQnl0kCIVdJAiFXSQIhV0kCIVdJAiFXSQITb2d5fvvv0/WVqxYUXR8+/btyWVKnYDS0NCQrD377LPJ2oQJE5K11OW6h/NlvM85J/3fePLkyUXHb7311uQykyZNKmtdg5327CJBKOwiQSjsIkEo7CJBKOwiQSjsIkH0OY9gZmuAXwGd7v6zbGw88CLQChwElrj7N/m1WTvnnXdesrZ06dKi47Nnz04uU2oa59tvv03WPv/882Ttww8/TNa2bt06oHGA8ePHJ2vd3d3JWqlr76WUmm4ste1LLVdqunTcuHEDGgdYsmRJsjbcp97+ACw4a+wB4B13nwm8kz0XkUGsz7Bn91s/ctbwImBt9ngtcEuV+xKRKiv3M/skdz+cPf6Kwh1dRWQQq/gAnRcutJ682LqZLTezdjNr7+rqqnR1IlKmcsPeYWYtANn3ztQL3X2Vu7e5e1tzc3OZqxORSpUb9o3AndnjO4EN1WlHRPLSn6m39cB8YKKZfQE8AjwOvGRmdwOfAem5iiGm1PTPokWLio7feOONyWVGjRqVrJ04cSJZO3bsWLLW0dGRrH300UdFxzs7k2++mDlzZrK2e/fuZK3aU2+lzuY7dOhQsrZy5cpkLbUdv/vuu+QypW4BNpT1GXZ3vz1R+kWVexGRHOkv6ESCUNhFglDYRYJQ2EWCUNhFghi6p/DkpNTUUOo+aqnxSlx00UXJ2owZM5K1q6++uuh4qXvYlboP3A033JCsnTp1Kl
lLGTEivX8pNZW3efPmZK3UmWipP+S65ppryvp5Q5n27CJBKOwiQSjsIkEo7CJBKOwiQSjsIkEMzzmGYa7U9NXIkSMHNN6XUhejLEep6boDBw4ka2+//Xay1tjYmKylzlS8/PLLk8s0NTUla0OZ9uwiQSjsIkEo7CJBKOwiQSjsIkHoaLzU1PHjx5O1N998M1lbt25dsjZlypRkbdmyZUXHS11r0MyStaFMe3aRIBR2kSAUdpEgFHaRIBR2kSAUdpEg+nP7pzXAr4BOd/9ZNvYo8Gvg9G1ZH3L31/NqUoae1Akv+/btSy6zadOmZO3kyZPJWqnbV7W0tBQdH67Ta6X0Z8/+B2BBkfGn3H1W9qWgiwxyfYbd3TcDR2rQi4jkqJLP7PeY2U4zW2NmF1atIxHJRblhfxqYAcwCDgNPpF5oZsvNrN3M2ru6ulIvE5GclRV2d+9w9x53PwU8A8wp8dpV7t7m7m2pC/aLSP7KCruZ9T7EuRjYXZ12RCQv/Zl6Ww/MByaa2RfAI8B8M5sFOHAQ+E2OPcoQdORI8WO6zz//fHKZ1157LVmbP39+svbEE8lPkclr6JW6jt9w1WfY3f32IsOrc+hFRHIU79ebSFAKu0gQCrtIEAq7SBAKu0gQuuCk5OKTTz4pOv7xxx8nl5kwYUKyNm/evGRt6tSpyVrEKbYUbQmRIBR2kSAUdpEgFHaRIBR2kSAUdpEgNPUmZevp6UnWtm/fXnR8//79yWWuv/76ZG3x4sXJWlNTU7Imf6E9u0gQCrtIEAq7SBAKu0gQCrtIEDoaL2X79NNPk7UtW7YUHU/dFgrg2muvTdamT5+erEW8lVM5tGcXCUJhFwlCYRcJQmEXCUJhFwlCYRcJoj+3f5oKPAdMonC7p1XuvtLMxgMvAq0UbgG1xN2/ya9VyYu7J2tHjx5N1lavTt8YaOvWrUXHS53sUuoWTyNHjkzWpH/6s2c/CfzO3a8A5gK/NbMrgAeAd9x9JvBO9lxEBqk+w+7uh939/ezxMWAfMBlYBKzNXrYWuCWvJkWkcgP6zG5mrcBs4D1gkrsfzkpfUXibLyKDVL/DbmbnAy8D97r7GR/kvPChr+gHPzNbbmbtZtbe1dVVUbMiUr5+hd3MGikEfZ27v5INd5hZS1ZvATqLLevuq9y9zd3bmpubq9GziJShz7Bb4SyD1cA+d3+yV2kjcGf2+E5gQ/XbE5Fq6c9Zb9cBdwC7zGxHNvYQ8DjwkpndDXwGLMmnRamGUtNrJ06cSNbeeOONZG39+vUDXt9NN92UXObiiy9O1qRyfYbd3bcAqXMIf1HddkQkL/oLOpEgFHaRIBR2kSAUdpEgFHaRIHTBySB++umnZK3UhSMfe+yxZK27uztZW7Kk+Ezs3Llzk8uMHj06WZPKac8uEoTCLhKEwi4ShMIuEoTCLhKEwi4ShKbehpnU2WZHjhxJLnPXXXcla3v37k3WLr300mRt2bJlRcenTZuWXEbypT27SBAKu0gQCrtIEAq7SBAKu0gQOho/zKROePnyyy+Ty7S3tydrPT09ydqDDz6YrF155ZVFx5uampLLSL60ZxcJQmEXCUJhFwlCYRcJQmEXCUJhFwmiz6k3M5sKPEfhlswOrHL3lWb2KPBr4PStWR9y99fzalT+otTtmvbs2VN0/P77708u09jYmKw9/PDDydqCBQuStdT15Aq3DpR66M88+0ngd+7+vpldAGw3s7ey2lPu/i/5tSci1dKfe70dBg5nj4+Z2T5gct6NiUh1Degzu5m1ArOB97Khe8xsp5mtMbMLq9ybiFRRv8NuZucDLwP3uvtR4GlgBjCLwp7/icRyy82s3czau7q6ir1ERGqgX2E3s0YKQV/n7q8AuHuHu/e4+yngGWBOsWXdfZW7t7l7W3Nzc7X6FpEB6jPsVjh8uhrY5+5P9hpv6fWyxcDu6rcnItXSn6Px1wF3ALvMbEc29hBwu5nNojAddxD4TS4dyv9T6npyGzduLDr+7r
vvJpcpNfW2cOHCZG3MmDHJWkNDQ7Im9dGfo/FbgGKTo5pTFxlC9Bd0IkEo7CJBKOwiQSjsIkEo7CJB6IKTg9Tx48eTtW3btiVrGzZsKDr+ww8/JJc599xzk7WxY8cmayNGaF8xlOhfSyQIhV0kCIVdJAiFXSQIhV0kCIVdJAhNvQ1SX3/9dbK2efPmZG3Xrl1Fx0udhTZu3LhkrdQZcTK0aM8uEoTCLhKEwi4ShMIuEoTCLhKEwi4ShKbeBqmjR48max0dHcla6l5q06ZNSy6zdOnSZO3CC9P3/tBZb0OL/rVEglDYRYJQ2EWCUNhFglDYRYLo82i8mZ0HbAbOzV7/R3d/xMymAy8AE4DtwB3ufiLPZiMZNWpUsnbJJZckazfffHPR8euuuy65zH333ZesNTU1JWupI/8yOPVnz/4jcKO7X0nh9swLzGwu8HvgKXe/BPgGuDu/NkWkUn2G3Qu+y542Zl8O3Aj8MRtfC9ySS4ciUhX9vT97Q3YH107gLeAT4Ft3P5m95Atgcj4tikg19Cvs7t7j7rOAKcAc4PL+rsDMlptZu5m1d3V1ldmmiFRqQEfj3f1bYBNwDTDOzE4f4JsCHEoss8rd29y9rbm5uaJmRaR8fYbdzJrNbFz2eCTwS2AfhdD/XfayO4HityIRkUGhPyfCtABrzayBwi+Hl9z9NTPbC7xgZv8E/A+wOsc+w2ltbU3WVqxYUbtGZNjoM+zuvhOYXWT8AIXP7yIyBOgv6ESCUNhFglDYRYJQ2EWCUNhFgjB3r93KzLqAz7KnE4Humq08TX2cSX2caaj18dfuXvSv12oa9jNWbNbu7m11Wbn6UB8B+9DbeJEgFHaRIOoZ9lV1XHdv6uNM6uNMw6aPun1mF5Ha0tt4kSDqEnYzW2Bm/2tm+83sgXr0kPVx0Mx2mdkOM2uv4XrXmFmnme3uNTbezN4ys4+z7+n7LuXbx6NmdijbJjvMbGEN+phqZpvMbK+Z7TGzf8jGa7pNSvRR021iZueZ2Z/M7IOsj3/Mxqeb2XtZbl40s/TVQItx95p+AQ0ULmt1MdAEfABcUes+sl4OAhPrsN6fA1cBu3uN/TPwQPb4AeD3derjUeD+Gm+PFuCq7PEFwEfAFbXeJiX6qOk2AQw4P3vcCLwHzAVeAm7Lxv8N+PuB/Nx67NnnAPvd/YAXLj39ArCoDn3UjbtvBo6cNbyIwoU7oUYX8Ez0UXPuftjd388eH6NwcZTJ1HiblOijpryg6hd5rUfYJwN/7vW8nherdOBNM9tuZsvr1MNpk9z9cPb4K2BSHXu5x8x2Zm/zc/840ZuZtVK4fsJ71HGbnNUH1Hib5HGR1+gH6Oa5+1XA3wK/NbOf17shKPxmp/CLqB6eBmZQuEfAYeCJWq3YzM4HXgbudfcz7lldy21SpI+abxOv4CKvKfUI+yFgaq/nyYtV5s3dD2XfO4FXqe+VdzrMrAUg+95ZjybcvSP7j3YKeIYabRMza6QQsHXu/ko2XPNtUqyPem2TbN0DvshrSj3Cvg2YmR1ZbAJuAzbWugkzG21mF5x+DNwE7C69VK42UrhwJ9TxAp6nw5VZTA22iRXuI7Ua2OfuT/Yq1XSbpPqo9TbJ7SKvtTrCeNbRxoUUjnR+Ajxcpx4upjAT8AGwp5Z9AOspvB38icJnr7sp3DPvHeBj4G1gfJ36+A9gF7CTQthaatDHPApv0XcCO7KvhbXeJiX6qOk2Af6GwkVcd1L4xbKi1//ZPwH7gf8Ezh3Iz9Vf0IkEEf0AnUgYCrtIEAq7SBAKu0gQCrtIEAq7SBAKu0gQCrtIEP8HTvEq5iT1YwQAAAAASUVORK5CYII=\n",
            "text/plain": [
              "<Figure size 432x288 with 1 Axes>"
            ]
          },
          "metadata": {
            "tags": [],
            "needs_background": "light"
          }
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "0_CBz0VUbgbE",
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 35
        },
        "outputId": "c8733a47-ad4d-454f-f605-312c7b246c0f"
      },
      "source": [
        "model.eval()\n",
        "with torch.set_grad_enabled(False): # save memory during inference\n",
        "    logits, probas = model(features.to(DEVICE)[0, None])\n",
        "print('Probability 7 %.2f%%' % (probas[0][7]*100))"
      ],
      "execution_count": 20,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "Probability 7 100.00%\n"
          ],
          "name": "stdout"
        }
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "50V6aybobgbG",
        "colab_type": "code",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 104
        },
        "outputId": "be6678ea-1a6e-46ea-fd2b-915af91ca9a0"
      },
      "source": [
        "%watermark -iv"
      ],
      "execution_count": 21,
      "outputs": [
        {
          "output_type": "stream",
          "text": [
            "PIL.Image 7.0.0\n",
            "torch     1.5.1+cu101\n",
            "numpy     1.18.5\n",
            "pandas    1.0.5\n",
            "\n"
          ],
          "name": "stdout"
        }
      ]
    }
  ]
}