{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "name": "W4_Tutorial1.ipynb",
      "provenance": [],
      "collapsed_sections": [],
      "toc_visible": true,
      "include_colab_link": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/CIS-522/course-content/blob/main/tutorials/W04_Optimization/W4_Tutorial1.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vMFtkpqfCN2f"
      },
      "source": [
        "# CIS-522 Week 4 Part 1\n",
        "# Optimization\n",
        "\n",
        "__Instructor__: Lyle Ungar\n",
        "\n",
        "__Content creator:__ Rongguang Wang\n",
        "\n",
        "__Content reviewer:__ Pooja Consul\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vNsW36ZEQNB4"
      },
      "source": [
        "---\n",
        "# Intro: Optimization and why it matters\n",
        "\n",
        "Video to be watched **before** the pod meets.\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "aUg_CzvgDDp0",
        "cellView": "form"
      },
      "source": [
        "#@title Video : Introduction video\n",
        "from IPython.display import YouTubeVideo\n",
        "video = YouTubeVideo(id=\"cqQ7dVSYn7c\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "import time\n",
        "try: t0;\n",
        "except NameError: t0=time.time()\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "854uzpETDJtP"
      },
      "source": [
        "## Objectives for today\n",
        "\n",
        "We show how gradient descent can be tweaked using minibatches, adaptive learning rates, and other techniques to substantially speed up the optimization process, and hint at the theory behind it.\n",
        "\n",
        "0.   Make sure to optimize the right thing!\n",
        "1.   The optimization landscape; geometric intuition behind Stochastic Gradient Descent (SGD) and momentum\n",
        "2.   Select batch size for minibatch gradient descent \n",
        "3.   Know batch normalization strengths and weaknesses"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "cxCD0oTfKpXY",
        "cellView": "form"
      },
      "source": [
        "#@markdown What is your Pennkey and pod? (text, not numbers, e.g. bfranklin)\n",
        "my_pennkey = '' #@param {type:\"string\"}\n",
        "my_pod = 'Select' #@param ['Select', 'euclidean-wombat', 'sublime-newt', 'buoyant-unicorn', 'lackadaisical-manatee','indelible-stingray','superfluous-lyrebird','discreet-reindeer','quizzical-goldfish','astute-jellyfish','ubiquitous-cheetah','nonchalant-crocodile','fashionable-lemur','spiffy-eagle','electric-emu','quotidian-lion']\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0LMCVKg3EOSn"
      },
      "source": [
        "## Recap the experience from last week\n",
        "\n",
        "What did you learn last week? What questions do you have? [15 min discussion]"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "0q2tL2Q5CAde",
        "cellView": "form"
      },
      "source": [
        "learning_from_previous_week = '' #@param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "CBIJniOMXGpQ"
      },
      "source": [
        "*Estimated time: 20 minutes since start*"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "RV8-x3YyGDRw"
      },
      "source": [
        "---\n",
        "# Setup\n",
        "Note that some of the code for today can take up to an hour to run. We have therefore \"hidden\" that code and shown the resulting outputs.\n",
        "\n",
        "[Here](https://docs.google.com/presentation/d/1NSE9VQPKhWQMlRniuxrUiKeVbjhEf7_rxjAwbQ-qzeE/edit?usp=sharing) are the slides for today's videos (in case you want to take notes). **Do not read them now.**"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "upSbWUROGFvk"
      },
      "source": [
        "# imports\n",
        "from __future__ import print_function\n",
        "import torch\n",
        "import torch.nn as nn\n",
        "import torch.nn.functional as F\n",
        "import torch.optim as optim\n",
        "from torchvision import datasets, transforms\n",
        "from torch.optim.lr_scheduler import StepLR\n",
        "from torch.utils.data import Dataset\n",
        "import time\n",
        "import numpy as np\n",
        "import matplotlib.pyplot as plt\n",
        "from collections import defaultdict\n",
        "import requests\n",
        "import io\n",
        "from urllib.request import urlopen\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "yDKo45Q-Gn7I",
        "cellView": "form"
      },
      "source": [
        "# @title Figure Settings\n",
        "import ipywidgets as widgets\n",
        "%matplotlib inline \n",
        "fig_w, fig_h = (8, 6)\n",
        "plt.rcParams.update({'figure.figsize': (fig_w, fig_h)})\n",
        "%config InlineBackend.figure_format = 'retina'\n",
        "SMALL_SIZE = 12\n",
        "\n",
        "\n",
        "plt.style.use(\"https://raw.githubusercontent.com/NeuromatchAcademy/\"\n",
        "              \"course-content/master/nma.mplstyle\")\n",
        "\n",
        "# plt.rcParams.update(plt.rcParamsDefault)\n",
        "# plt.rc('font', size=SMALL_SIZE)          # controls default text sizes\n",
        "# plt.rc('axes', titlesize=SMALL_SIZE)     # fontsize of the axes title\n",
        "# plt.rc('axes', labelsize=SMALL_SIZE)    # fontsize of the x and y labels\n",
        "# plt.rc('xtick', labelsize=SMALL_SIZE)    # fontsize of the tick labels\n",
        "# plt.rc('ytick', labelsize=SMALL_SIZE)    # fontsize of the tick labels\n",
        "# plt.rc('legend', fontsize=SMALL_SIZE)    # legend fontsize\n",
        "# plt.rc('figure', titlesize=SMALL_SIZE)  # fontsize of the figure title"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "3zcM51BP0lH8"
      },
      "source": [
        "---\n",
        "# Section 0: Before we get to work...\n",
        "# A cautionary tale\n",
        "Generally speaking, when optimizing, make sure that you:\n",
        "- optimize for the right thing, and\n",
        "- are aware of the potential unintended consequences that can arise from your optimization scheme.\n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "KdBcepKW1QRw"
      },
      "source": [
        "Consider the following example:\n",
        "\n",
        "You are optimizing a food delivery service that will deliver 12 meals over a 3-hour evening period. Each person you deliver to will give you a rating of 1-3 stars, mostly based on whether the food is on time or late. Assume customers start at 3 stars and deduct one star for each quarter hour that the delivery is late. You have all 12 meals at hand (they are precooked, delivered cold for people to heat up) and it takes 15 minutes to deliver each meal.\n",
        "On days that things go right, you’re perfectly on time with all meals. \n",
        "\n",
        "If you get delayed by 15 minutes at the start of the night, what is an optimal strategy for maximizing the number of stars you get?\n",
        "\n",
        "What might be a better loss function?\n",
        "\n",
        "[~3 minutes discussion]"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "jzmsYABdLeij",
        "cellView": "form"
      },
      "source": [
        "strategy = '' #@param {type:\"string\"}\n",
        "loss_fn = '' #@param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "qgrf319TwKEJ"
      },
      "source": [
        "## Question of the week"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "uL1YfyL4wOfc"
      },
      "source": [
        "**How can we improve gradient descent?** [10 min discussion]"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "y2DJBD6FESwG"
      },
      "source": [
        "Our aim is to find the lowest point of the loss function. However, the actual direction to this minimum is not known. We can only look locally, repeatedly taking small steps in the direction of steepest descent -- the negative of the gradient of the loss function w.r.t. the weights.\n",
        "\n",
        "We want to solve for\n",
        "$$\\min_wf(w)$$\n",
        "where the loss function $f$ is a continuous and differentiable function.\n",
        "\n",
        "Gradient descent:\n",
        "$$w_{t+1}=w_t-\\eta\\nabla f(w_t)$$\n",
        "where \n",
        "\n",
        "*   $w_{t+1}$ is the updated weight after the $t$-th iteration,\n",
        "*   $w_t$ is the weight before the $t$-th iteration,\n",
        "*   $\\eta$ is the step size,\n",
        "*   $\\nabla f(w_t)$ is the gradient of the loss function $f$ with respect to all the weights $w_j$, with components $\\partial f/\\partial w_j$, evaluated at the current weights $w_t$.\n",
        "\n",
        "In standard gradient descent, the loss function $f$ is the loss (L2 or maxent) between the neural net output and the correct answer, averaged over all the observations. In stochastic gradient descent, the loss is computed for just a single observation. In minibatch gradient descent, it is averaged over all of the points in the minibatch $B$.\n",
        "\n"
      ]
    },
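    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a toy illustration of the update rule above (our own sketch, not part of the exercises), we can run gradient descent on the one-dimensional quadratic $f(w)=(w-3)^2$, whose gradient is $2(w-3)$:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Toy illustration: gradient descent on f(w) = (w - 3)**2\n",
        "w = 0.0    # initial weight\n",
        "eta = 0.1  # step size\n",
        "for t in range(100):\n",
        "    grad = 2 * (w - 3)  # gradient of f at the current w\n",
        "    w = w - eta * grad  # gradient descent update\n",
        "print(round(w, 4))      # approaches the minimum at w = 3"
      ],
      "execution_count": null,
      "outputs": []
    },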
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "fXcdsdlaXdEm"
      },
      "source": [
        "*Estimated time: 35 minutes since start*"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "3qgCvTSsHQSw"
      },
      "source": [
        "---\n",
        "# Section 1: Why is optimization hard?\n",
        "\n",
        "[~2min discussion]"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "GVbpq2e5VO_-",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Example of an optimization landscape\n",
        "\n",
        "try: t1;\n",
        "except NameError: t1=time.time()\n",
        "\n",
        "video = YouTubeVideo(id=\"g0zOEcPix2w\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "_LKWjcdrvaLz"
      },
      "source": [
        "Hint: think about what the optimization landscape can look like for more complex functions, then take a look at the video and the interactive plot below to better understand why it is hard to find the global minimum and not get stuck in a local minimum."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "geVOjKbo5t1K"
      },
      "source": [
        "[Interactive visualization](https://losslandscape.com/explorer)\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Rj6hrFFZ5xcq"
      },
      "source": [
        "So what do you think about the difficulty of optimization now?\n",
        "\n",
        "[~2min discussion]"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "WAlYl6--RMmK"
      },
      "source": [
        "#@title Video : The difficulty of training a deep neural network\n",
        "video = YouTubeVideo(id=\"68VFZeWWe-s\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "WeM9kVm2t95g"
      },
      "source": [
        "As you've just seen, the function we are looking to minimize can have a very complex landscape."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "GD8jBTXPzstJ",
        "cellView": "form"
      },
      "source": [
        "#@markdown **Student response**: Can you come up with some basic characteristics that we need for a good gradient descent algorithm?\n",
        "characteristics_for_gd = '' #@param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Ai0i9O9fXrDL"
      },
      "source": [
        "*Estimated time: 45 minutes since start*"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "83AdYyW_XHpQ"
      },
      "source": [
        "---\n",
        "# Section 2: Minibatch stochastic gradient descent (SGD)"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "LfpAMeyARW8a",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Minibatch\n",
        "\n",
        "try: t2;\n",
        "except NameError: t2=time.time()\n",
        "\n",
        "video = YouTubeVideo(id=\"l4n7BZjNbTI\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ZCCvrYFbMULT"
      },
      "source": [
        "In stochastic gradient descent, we replace the actual gradient vector with a stochastic estimation of the gradient vector. Specifically for a neural network, the stochastic estimation uses the gradient of the loss for a single data point (single instance).\n",
        "\n",
        "Given the per-observation loss $f_i=l(x_i, y_i, w)$, we update the weights according to the gradient of $f_i$ (as opposed to the gradient of the total loss $f$):\n",
        "\n",
        "$$w_{t+1}=w_t-\\eta\\nabla f_i(w_t)$$\n",
        "\n",
        "where $i$ is chosen uniformly at random, so $\\nabla f_i$ is a noisy but unbiased estimator of $\\nabla f$. The expected value of the $t$-th step of SGD is therefore the same as the $t$-th step of full gradient descent:\n",
        "\n",
        "$$\\mathbb{E}[w_{t+1}]=w_t-\\eta \\mathbb{E}[\\nabla f_i(w_t)]=w_t-\\eta\\nabla f(w_t)$$\n",
        "\n",
        "SGD advantages:\n",
        "*   The noise in the SGD update can prevent convergence to a bad (shallow) local minimum.\n",
        "*   It is drastically cheaper to compute (as you don’t go over all data points).\n",
        "\n",
        "\n",
        "### Minibatching\n",
        "\n",
        "Often we can make better use of our hardware by using minibatches instead of single instances. We compute the loss over a minibatch -- a set of randomly selected instances -- instead of calculating it over just one instance. This reduces the noise in the step update.\n",
        "\n",
        "Given the $t$-th minibatch $B_t$ consisting of $k$ observations:\n",
        "\n",
        "$$w_{t+1}=w_t-\\eta \\frac{1}{|B_t|}\\sum_{i\\in B_t}\\nabla f_i(w_t)$$\n"
      ]
    },
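    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A minimal numerical sketch of the minibatch update above (our own illustration; the data and variable names are made up and not part of the exercises), fitting a one-parameter linear model $y = wx$ with squared loss:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "import numpy as np\n",
        "\n",
        "# Minibatch SGD sketch: fit y = w * x with squared loss\n",
        "rng = np.random.default_rng(0)\n",
        "x = rng.normal(size=1000)\n",
        "y = 2.0 * x  # true weight is 2\n",
        "\n",
        "w, eta, k = 0.0, 0.1, 32  # initial weight, step size, minibatch size\n",
        "for t in range(200):\n",
        "    idx = rng.integers(0, len(x), size=k)   # sample a minibatch B_t\n",
        "    xb, yb = x[idx], y[idx]\n",
        "    grad = np.mean(2 * (w * xb - yb) * xb)  # average gradient over B_t\n",
        "    w -= eta * grad                         # minibatch SGD update\n",
        "print(round(w, 3))  # approaches the true weight 2.0"
      ],
      "execution_count": null,
      "outputs": []
    },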
    {
      "cell_type": "code",
      "metadata": {
        "id": "Vc7Yvs2mP0fB",
        "cellView": "form"
      },
      "source": [
        "#@markdown How would the plot of training error vs epochs differ for minibatch gradient descent when compared to stochastic gradient descent?\n",
        "sgd_vs_minibatch_plot = '' #@param {type:\"string\"}\n",
        "\n",
        "#@markdown What are the advantages of minibatch gradient descent over stochastic gradient descent? (Select all that apply)\n",
        "noisy_learning_process = False #@param {type:\"boolean\"}\n",
        "computationally_efficient = False #@param {type:\"boolean\"}\n",
        "increased_model_update_frequency  = False #@param {type:\"boolean\"}\n",
        "memory_efficient = False #@param {type:\"boolean\"}\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "gUVn7rFF62uA"
      },
      "source": [
        "## Exercise 1: Finding the optimal minibatch size "
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "wor4X13N66GA"
      },
      "source": [
        "### Exercise 1.1"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "xDZ_GaUJ7DGd"
      },
      "source": [
        "One of the main constraints on training deep neural networks is the relatively limited size of GPU memory. Being able to quickly estimate whether your minibatch can be held in that memory will save you time and out-of-memory errors.\n",
        "\n",
        "What do we need to store at training time?\n",
        "- outputs of intermediate layers (forward pass)\n",
        "- model parameters\n",
        "- error signal at each neuron\n",
        "- the gradient of the parameters\n",
        "- any extra memory needed by the optimizer (e.g., for momentum)\n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "353hjdOQS4rQ"
      },
      "source": [
        "\n",
        "Fully connected layers\n",
        "- #weights = #outputs x #inputs\n",
        "- #biases = #outputs"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-_2MRwC9Jnyt"
      },
      "source": [
        "\n",
        "This is dominated by the weights and their gradients. (You can confirm that there are far fewer node outputs than weights.) Assume we need to store the weights, their gradients, and momentum, at 4 bytes per value.\n"
      ]
    },
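    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The counting rule above can be sketched with a quick helper (our own illustration on a made-up 100 → 50 → 10 network, so as not to give away the exercise answer):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Sketch: parameter count and memory for a toy 100 -> 50 -> 10 network\n",
        "layers = [(100, 50), (50, 10)]  # (inputs, outputs) of each FC layer\n",
        "params = sum(n_in * n_out + n_out for n_in, n_out in layers)  # weights + biases\n",
        "copies = 3           # weights + gradients + momentum\n",
        "bytes_per_value = 4  # float32\n",
        "total_mb = params * copies * bytes_per_value / 2**20\n",
        "print(params, round(total_mb, 3))  # 5560 parameters, about 0.064 MB"
      ],
      "execution_count": null,
      "outputs": []
    },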
    {
      "cell_type": "code",
      "metadata": {
        "id": "UXgo6xbBCs_g",
        "cellView": "form"
      },
      "source": [
        "#@markdown How many megabytes is this for the model specified in exercise 1.2?\n",
        "\n",
        "megabytes_1 = '' #@param {type:\"string\"}\n",
        "\n",
        "#@markdown If we also store a gradient for every observation in a minibatch of size 50 (to allow parallel processing), how many megabytes will now be needed?\n",
        "\n",
        "megabytes_2 = '' #@param {type:\"string\"}\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "PPkFgx687A1R"
      },
      "source": [
        "### Exercise 1.2"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ssTaVDT9RYZu"
      },
      "source": [
        "We find the optimal minibatch size using a 2-layer (one hidden layer) neural network on the hand-written digit classification dataset (MNIST). There are 10 classes (0-9) in the dataset. We use the stochastic gradient descent (SGD) algorithm to optimize the training phase.\n",
        "\n",
        "Plot test accuracy as a function of minibatch size (with constant wall time); explain the pattern."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "g4iAopPfGx6_"
      },
      "source": [
        "class Net(nn.Module):\n",
        "    def __init__(self):\n",
        "        super(Net, self).__init__()\n",
        "        self.fc1 = nn.Linear(784, 128)\n",
        "        self.fc2 = nn.Linear(128, 10)\n",
        "\n",
        "    def forward(self, x):\n",
        "        x = torch.flatten(x, 1)\n",
        "        x = self.fc1(x)\n",
        "        x = F.relu(x)\n",
        "        x = self.fc2(x)\n",
        "        output = F.log_softmax(x, dim=1)\n",
        "        return output\n",
        "\n",
        "def train(args, model, device, train_loader, optimizer, epoch):\n",
        "    model.train()\n",
        "    for batch_idx, (data, target) in enumerate(train_loader):\n",
        "        data, target = data.to(device), target.to(device)\n",
        "        optimizer.zero_grad()\n",
        "        output = model(data)\n",
        "        loss = F.nll_loss(output, target)\n",
        "        loss.backward()\n",
        "        optimizer.step()\n",
        "        if batch_idx % args['log_interval'] == 0:\n",
        "            print('Train Epoch: {} [{}/{} ({:.0f}%)]\\tLoss: {:.6f}'.format(\n",
        "                epoch, batch_idx * len(data), len(train_loader.dataset),\n",
        "                100. * batch_idx / len(train_loader), loss.item()))\n",
        "\n",
        "def test(model, device, test_loader):\n",
        "    model.eval()\n",
        "    test_loss = 0\n",
        "    correct = 0\n",
        "    with torch.no_grad():\n",
        "        for data, target in test_loader:\n",
        "            data, target = data.to(device), target.to(device)\n",
        "            output = model(data)\n",
        "            test_loss += F.nll_loss(output, target, reduction='sum').item()  # sum up batch loss\n",
        "            pred = output.argmax(dim=1, keepdim=True)  # get the index of the max log-probability\n",
        "            correct += pred.eq(target.view_as(pred)).sum().item()\n",
        "\n",
        "    test_loss /= len(test_loader.dataset)\n",
        "\n",
        "    print('\\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.4f}%)\\n'.format(\n",
        "        test_loss, correct, len(test_loader.dataset),\n",
        "        100. * correct / len(test_loader.dataset)))\n",
        "    return 100. * correct / len(test_loader.dataset)\n",
        "    \n",
        "def main(args):\n",
        "    use_cuda = not args['no_cuda'] and torch.cuda.is_available()\n",
        "    torch.manual_seed(args['seed'])\n",
        "    device = torch.device('cuda' if use_cuda else 'cpu')\n",
        "\n",
        "    train_kwargs = {'batch_size': args['batch_size']}\n",
        "    test_kwargs = {'batch_size': args['test_batch_size']}\n",
        "    if use_cuda:\n",
        "        cuda_kwargs = {'num_workers': 1,\n",
        "                       'pin_memory': True,\n",
        "                       'shuffle': True}\n",
        "        train_kwargs.update(cuda_kwargs)\n",
        "        test_kwargs.update(cuda_kwargs)\n",
        "\n",
        "    transform=transforms.Compose([\n",
        "        transforms.ToTensor(),\n",
        "        transforms.Normalize((0.1307,), (0.3081,))\n",
        "        ])\n",
        "    train_loader = torch.utils.data.DataLoader(datasets.MNIST('../data', train=True, download=True,\n",
        "                       transform=transform),**train_kwargs)\n",
        "    test_loader = torch.utils.data.DataLoader(datasets.MNIST('../data', train=False,\n",
        "                       transform=transform), **test_kwargs)\n",
        "\n",
        "    model = Net().to(device)\n",
        "    optimizer = optim.SGD(model.parameters(), lr=args['lr'], momentum=args['momentum'])\n",
        "\n",
        "    acc_list, time_list = [], []\n",
        "    start_time = time.time()\n",
        "    for epoch in range(1, args['epochs'] + 1):\n",
        "        train(args, model, device, train_loader, optimizer, epoch)\n",
        "        time_list.append(time.time()-start_time)\n",
        "        acc = test(model, device, test_loader)\n",
        "        acc_list.append(acc)\n",
        "\n",
        "    return acc_list, time_list"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "z50bBGUVCpx4"
      },
      "source": [
        "The training takes over 20 minutes. Please skip running the cells below for now and come back later if time allows."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "h0Pokv_OAqYp",
        "cellView": "form"
      },
      "source": [
        "# @markdown ### Train (run me)\n",
        "# Training settings\n",
        "args = {'batch_size': 32,\n",
        "        'test_batch_size': 1000,\n",
        "        'epochs': 3,\n",
        "        'lr': 0.01,\n",
        "        'momentum': 0.9,\n",
        "        'no_cuda': False,\n",
        "        'seed': 1,\n",
        "        'log_interval': 100\n",
        "        }\n",
        "\n",
        "batch_size = [8, 16, 32, 64, 256, 512, 1024]\n",
        "acc_dict = {}\n",
        "test_acc = []\n",
        "\n",
        "for i in range(len(batch_size)):\n",
        "    args['batch_size'] = batch_size[i]\n",
        "    acc, timer = main(args)\n",
        "    acc_dict['acc'+str(batch_size[i])] = acc\n",
        "    acc_dict['time'+str(batch_size[i])] = timer\n",
        "    test_acc.append(acc[-1])"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "SMkNtdQmLREc",
        "cellView": "form"
      },
      "source": [
        "# @markdown ### Plot (run me)\n",
        "with plt.xkcd():\n",
        "    plt.plot(batch_size, test_acc, linewidth=2)\n",
        "    plt.title('Optimal Minibatch Size')\n",
        "    plt.ylabel('Test Accuracy (%)')\n",
        "    plt.xscale('log')\n",
        "    plt.xlabel('Batch Size')\n",
        "    plt.savefig('minibatch.png')\n",
        "    plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DNSgxUdcDVUf"
      },
      "source": [
        "Plot the optimal batch size curve by running the cell below."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "cV-d8wUp5hlI"
      },
      "source": [
        "url = \"https://raw.githubusercontent.com/CIS-522/course-content/main/tutorials/W04_Optimization/static/W4_Tutorial1_Exercise1_minibatch.png\"\n",
        "img = plt.imread(io.BytesIO(requests.get(url).content), format='png')\n",
        "plt.imshow(img)\n",
        "plt.axis('off')"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "tYwQyOAnygw5",
        "cellView": "form"
      },
      "source": [
        "#@markdown How did the convergence speed vary with batch size? Why?\n",
        "convergence_speed = '' #@param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "8_LIZP2yYMoJ"
      },
      "source": [
        "*Estimated time: 70 minutes since start*"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-uSRuwiTXTFO"
      },
      "source": [
        "---\n",
        "# Section 3: Batch normalization"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "4hNogXMgSDBs",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Batch Normalization\n",
        "\n",
        "try: t3;\n",
        "except NameError: t3=time.time()\n",
        "\n",
        "video = YouTubeVideo(id=\"FAnd9Ra7v-E\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "GWo8HcvjR_-a"
      },
      "source": [
        "Rather than improving the optimization algorithms, batch normalization improves the network structure itself by adding additional layers in between existing layers. The goal is to improve the optimization and generalization performance.\n",
        "\n",
        "In neural networks, we typically alternate linear (weighted summation) operations with non-linear operations, the activation functions, such as ReLU. The most common practice is to put the normalization between the linear layers and the activation functions.\n",
        "\n",
        "More formally, normalization is as follows:\n",
        "$$\\tilde x_j = a\\frac{x_j-\\mu_j}{\\sigma_j}+b$$\n",
        "where\n",
        "*   $x_j$ is the output of a neuron or, equivalently, the input to the next layer,\n",
        "*   $\\tilde x_j$ is that same feature after being normalized,\n",
        "*   $\\mu_j$ is the mean of the feature $x_j$ over the minibatch,\n",
        "*   $\\sigma_j$ is the estimate of the standard deviation of $x_j$ over the minibatch (with $\\epsilon$ added, so we don't divide by zero),\n",
        "*   $a$ is the learnable scaling factor,\n",
        "*   $b$ is the learnable bias term.\n",
        "\n",
        "Batch normalization tries to reduce the “internal covariate shift” between the training and testing data. Internal covariate shift is the change in the distribution of network activations due to the change in parameters during training. In neural networks, the output of the first layer feeds into the second layer, the output of the second layer feeds into the third, and so on. When the parameters of a layer change, so does the distribution of inputs to subsequent layers. These shifts in input distributions can be problematic for neural networks, especially deep neural networks with a large number of layers. Batch normalization tries to mitigate this. You can check out [this](https://arxiv.org/abs/1502.03167) paper, where the idea of mitigating internal covariate shift with batch normalization was first introduced.\n",
        "\n",
        "\n",
        "The advantages of BN are as follows:\n",
        "\n",
        "*   Networks with normalization layers are easier to optimize, allowing for the use of larger learning rates, speeding up the training of neural networks.\n",
        "*   The mean/std deviation estimates are noisy due to the randomness of the samples in the batch. This extra “noise” sometimes results in better generalization: normalization has a regularization effect.\n",
        "*   Normalization reduces sensitivity to weight initialization.\n"
      ]
    },
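    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The normalization formula above can be sketched directly with NumPy (our own illustration; $a$ and $b$ are shown at their usual initial values of 1 and 0, and the minibatch is made-up data):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "import numpy as np\n",
        "\n",
        "# Sketch of batch normalization over one minibatch of pre-activations\n",
        "rng = np.random.default_rng(0)\n",
        "x = rng.normal(10.0, 5.0, size=(32, 128))  # minibatch of 32, 128 features\n",
        "mu = x.mean(axis=0)    # per-feature mean over the minibatch\n",
        "sigma = x.std(axis=0)  # per-feature std over the minibatch\n",
        "eps = 1e-5             # avoids division by zero\n",
        "a, b = 1.0, 0.0        # learnable scale and bias (initial values)\n",
        "x_tilde = a * (x - mu) / (sigma + eps) + b\n",
        "print(round(float(x_tilde.mean()), 4), round(float(x_tilde.std()), 4))  # mean ~ 0, std ~ 1"
      ],
      "execution_count": null,
      "outputs": []
    },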
    {
      "cell_type": "code",
      "metadata": {
        "id": "Db4NgmA0XUlg",
        "cellView": "form"
      },
      "source": [
        "#@markdown Why do we need learnable parameters a and b for batch normalization? Why isn't the unit gaussian form sufficient?\n",
        "batch_norm_ab = '' #@param {type:\"string\"}\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "j_BjOdOcSH4n"
      },
      "source": [
        "## Exercise 2: The joys and perils of batch normalization\n",
        "\n",
        "We implement four networks: a 2-layer network without batch norm, a 2-layer network with batch norm, a 5-layer network without batch norm, and a 5-layer network with batch norm, to see how BN works in a shallow network and a deep one."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "AyY-qXIclJIr"
      },
      "source": [
        "class BNShallowNet(nn.Module):\n",
        "    def __init__(self):\n",
        "        super(BNShallowNet, self).__init__()\n",
        "        self.fc1 = nn.Linear(784, 128)\n",
        "        self.fc2 = nn.Linear(128, 10)\n",
        "        self.bn = nn.BatchNorm1d(128)\n",
        "        self.dropout = nn.Dropout(0.5)\n",
        "        \n",
        "    def forward(self, x):\n",
        "        x = torch.flatten(x, 1)\n",
        "        x = self.fc1(x)\n",
        "        x = self.bn(x)\n",
        "        x = F.relu(x)\n",
        "        x = self.dropout(x)\n",
        "        x = self.fc2(x)\n",
        "        output = F.log_softmax(x, dim=1)\n",
        "        return output\n",
        "\n",
        "class BNDeepNet(nn.Module):\n",
        "    def __init__(self):\n",
        "        super(BNDeepNet, self).__init__()\n",
        "        self.fc1 = nn.Linear(784, 128)\n",
        "        self.fc2 = nn.Linear(128, 64)\n",
        "        self.fc3 = nn.Linear(64, 32)\n",
        "        self.fc4 = nn.Linear(32, 32)\n",
        "        self.fc5 = nn.Linear(32, 10)\n",
        "        self.bn1 = nn.BatchNorm1d(128)\n",
        "        self.bn2 = nn.BatchNorm1d(64)\n",
        "        self.bn3 = nn.BatchNorm1d(32)\n",
        "        self.bn4 = nn.BatchNorm1d(32)\n",
        "        self.dropout = nn.Dropout(0.5)\n",
        "        \n",
        "    def forward(self, x):\n",
        "        x = torch.flatten(x, 1)\n",
        "        x = self.fc1(x)\n",
        "        x = self.bn1(x)\n",
        "        x = F.relu(x)\n",
        "        x = self.fc2(x)\n",
        "        x = self.bn2(x)\n",
        "        x = F.relu(x)\n",
        "        x = self.fc3(x)\n",
        "        x = self.bn3(x)\n",
        "        x = F.relu(x)\n",
        "        x = self.fc4(x)\n",
        "        x = self.bn4(x)\n",
        "        x = F.relu(x)\n",
        "        x = self.dropout(x)\n",
        "        x = self.fc5(x)\n",
        "        output = F.log_softmax(x, dim=1)\n",
        "        return output\n",
        "\n",
        "class DeepNet(nn.Module):\n",
        "    def __init__(self):\n",
        "        super(DeepNet, self).__init__()\n",
        "        self.fc1 = nn.Linear(784, 128)\n",
        "        self.fc2 = nn.Linear(128, 64)\n",
        "        self.fc3 = nn.Linear(64, 32)\n",
        "        self.fc4 = nn.Linear(32, 32)\n",
        "        self.fc5 = nn.Linear(32, 10)\n",
        "        self.dropout = nn.Dropout(0.5)\n",
        "        \n",
        "    def forward(self, x):\n",
        "        x = torch.flatten(x, 1)\n",
        "        x = self.fc1(x)\n",
        "        x = F.relu(x)\n",
        "        x = self.fc2(x)\n",
        "        x = F.relu(x)\n",
        "        x = self.fc3(x)\n",
        "        x = F.relu(x)\n",
        "        x = self.fc4(x)\n",
        "        x = F.relu(x)\n",
        "        x = self.dropout(x)\n",
        "        x = self.fc5(x)\n",
        "        output = F.log_softmax(x, dim=1)\n",
        "        return output"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ymNvSuVORILQ",
        "cellView": "form"
      },
      "source": [
        "# @markdown ### helper functions (Run Me)\n",
        "def train(args, model, device, train_loader, optimizer, epoch):\n",
        "    model.train()\n",
        "    avg_loss = 0.\n",
        "    for batch_idx, (data, target) in enumerate(train_loader):\n",
        "        data, target = data.to(device), target.to(device)\n",
        "        optimizer.zero_grad()\n",
        "        output = model(data)\n",
        "        loss = F.nll_loss(output, target)\n",
        "        avg_loss += loss.item()\n",
        "        loss.backward()\n",
        "        optimizer.step()\n",
        "        if batch_idx % args['log_interval'] == 0:\n",
        "            print('Train Epoch: {} [{}/{} ({:.0f}%)]\\tLoss: {:.6f}'.format(\n",
        "                epoch, batch_idx * len(data), len(train_loader.dataset),\n",
        "                100. * batch_idx / len(train_loader), loss.item()))\n",
        "    avg_loss /= len(train_loader.dataset)\n",
        "    return avg_loss\n",
        "            \n",
        "def test(model, device, test_loader):\n",
        "    model.eval()\n",
        "    test_loss = 0\n",
        "    correct = 0\n",
        "    with torch.no_grad():\n",
        "        for data, target in test_loader:\n",
        "            data, target = data.to(device), target.to(device)\n",
        "            output = model(data)\n",
        "            test_loss += F.nll_loss(output, target, reduction='sum').item()  # sum up batch loss\n",
        "            pred = output.argmax(dim=1, keepdim=True)  # get the index of the max log-probability\n",
        "            correct += pred.eq(target.view_as(pred)).sum().item()\n",
        "\n",
        "    test_loss /= len(test_loader.dataset)\n",
        "\n",
        "    print('\\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.4f}%)\\n'.format(\n",
        "        test_loss, correct, len(test_loader.dataset),\n",
        "        100. * correct / len(test_loader.dataset)))\n",
        "    return test_loss\n",
        "\n",
        "def bn_eval(args):\n",
        "    use_cuda = not args['no_cuda'] and torch.cuda.is_available()\n",
        "    torch.manual_seed(args['seed'])\n",
        "    device = torch.device('cuda' if use_cuda else 'cpu')\n",
        "\n",
        "    train_kwargs = {'batch_size': args['batch_size']}\n",
        "    test_kwargs = {'batch_size': args['test_batch_size']}\n",
        "    if use_cuda:\n",
        "        cuda_kwargs = {'num_workers': 1,\n",
        "                       'pin_memory': True,\n",
        "                       'shuffle': True}\n",
        "        train_kwargs.update(cuda_kwargs)\n",
        "        test_kwargs.update(cuda_kwargs)\n",
        "\n",
        "    transform=transforms.Compose([\n",
        "        transforms.ToTensor(),\n",
        "        transforms.Normalize((0.1307,), (0.3081,))\n",
        "        ])\n",
        "    train_loader = torch.utils.data.DataLoader(datasets.MNIST('../data', train=True, download=True,\n",
        "                       transform=transform),**train_kwargs)\n",
        "    test_loader = torch.utils.data.DataLoader(datasets.MNIST('../data', train=False,\n",
        "                       transform=transform), **test_kwargs)\n",
        "\n",
        "    if args['net_type'] == 'Shallow':\n",
        "        model = Net().to(device)\n",
        "    elif args['net_type'] == 'BNShallow':\n",
        "        model = BNShallowNet().to(device)\n",
        "    elif args['net_type'] == 'Deep':\n",
        "        model = DeepNet().to(device)\n",
        "    elif args['net_type'] == 'BNDeep':\n",
        "        model = BNDeepNet().to(device)\n",
        "    optimizer = optim.SGD(model.parameters(), lr=args['lr'], momentum=args['momentum'])\n",
        "\n",
        "    train_list, test_list = [], []\n",
        "    for epoch in range(1, args['epochs'] + 1):\n",
        "        train_loss = train(args, model, device, train_loader, optimizer, epoch)\n",
        "        test_loss = test(model, device, test_loader)\n",
        "        train_list.append(train_loss)\n",
        "        test_list.append(test_loss)\n",
        "\n",
        "    return train_list, test_list"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "izyWPUb2MZNO"
      },
      "source": [
        "Training takes over 20 minutes. Skip the cells below for now and come back later if time allows."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Tdp0rFk_S5mq",
        "cellView": "form"
      },
      "source": [
        "# @markdown ### Train function (Run me)\n",
        "# Training settings\n",
        "args = {'batch_size': 64,\n",
        "        'test_batch_size': 1000,\n",
        "        'epochs': 10,\n",
        "        'lr': 0.01,\n",
        "        'momentum': 0.9,\n",
        "        'net_type': 'Shallow',\n",
        "        'no_cuda': False,\n",
        "        'seed': 1,\n",
        "        'log_interval': 100\n",
        "        }\n",
        "\n",
        "net = ['Shallow', 'BNShallow', 'Deep', 'BNDeep']\n",
        "loss_dict = {}\n",
        "\n",
        "for i in range(len(net)):\n",
        "    args['net_type'] = net[i]\n",
        "    train_loss, test_loss = bn_eval(args)\n",
        "    loss_dict['train' + str(net[i])] = train_loss\n",
        "    loss_dict['test' + str(net[i])] = test_loss"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Jw4p70xhUZ1X",
        "cellView": "form"
      },
      "source": [
        "# @markdown ### Plot (run me)\n",
        "with plt.xkcd():\n",
        "    fig, axs = plt.subplots(1, 2, figsize=(10,4))\n",
        "    axs[0].plot(loss_dict['trainShallow'], label='Shallow w/o BN', color='b')\n",
        "    axs[1].plot(loss_dict['testShallow'], label='Shallow w/o BN', color='b', linestyle='dashed')\n",
        "    axs[0].plot(loss_dict['trainBNShallow'], label='Shallow BN', color='r')\n",
        "    axs[1].plot(loss_dict['testBNShallow'], label='Shallow BN', color='r', linestyle='dashed')\n",
        "    axs[0].plot(loss_dict['trainDeep'], label='Deep w/o BN', color='g')\n",
        "    axs[1].plot(loss_dict['testDeep'], label='Deep w/o BN', color='g', linestyle='dashed')\n",
        "    axs[0].plot(loss_dict['trainBNDeep'], label='Deep BN', color='orange')\n",
        "    axs[1].plot(loss_dict['testBNDeep'], label='Deep BN', color='orange', linestyle='dashed')\n",
        "    axs[0].set_title('Train')\n",
        "    axs[1].set_title('Test')\n",
        "    axs[0].set_ylabel('Loss')\n",
        "    #plt.yscale('log')\n",
        "    axs[0].set_xlabel('Epoch')\n",
        "    axs[1].set_xlabel('Epoch')\n",
        "    axs[0].legend()\n",
        "    axs[1].legend()\n",
        "    plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "bX2wt88NMfLc"
      },
      "source": [
        "Run the cell below to display precomputed train and test convergence curves for the four networks (useful if you skipped the training above)."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "QsJA4O32Mfp2"
      },
      "source": [
        "url = \"https://raw.githubusercontent.com/CIS-522/course-content/main/tutorials/W04_Optimization/static/W4_Tutorial1_Exercise2_batchnorm.png\"\n",
        "img = plt.imread(requests.get(url, stream=True).raw)\n",
        "plt.imshow(img)\n",
        "plt.axis('off')"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "3g_TTTgTr7Ko",
        "cellView": "form"
      },
      "source": [
        "#@markdown You looked at a shallow and a deep network. When did BN help or hurt?  Why do you think that happens?\n",
        "\n",
        "bn_deep_shallow = '' #@param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vRc73jUPqF5B"
      },
      "source": [
        "## Momentum \n",
        "\n",
        "Momentum in gradient descent is similar to the concept of momentum in physics. The optimization process resembles a ball rolling down the hill. Momentum keeps the ball moving in the same direction that it is already moving in. The gradient can be thought of as a force pushing the ball in some other direction.\n",
        "\n",
        "<p align=\"center\">\n",
        "  <img width=\"460\" height=\"300\" src=\"https://miro.medium.com/max/640/1*i1Qc2E0TVlPHEKG7LepXgA.gif\">\n",
        "</p>\n",
        "\n",
        "Mathematically, momentum can be expressed as follows:\n",
        "$$w_{t+1}=w_t-\\eta (\\nabla f(w_t) +\\beta m_{t}) $$\n",
        "$$m_{t+1}= \\nabla f(w_t) +\\beta m_{t}$$\n",
        "or, equivalently\n",
        "$$w_{t+1}= w_t -\\eta\\nabla f(w_t) +\\beta (w_{t} -w_{t-1})$$\n",
        "\n",
        "where\n",
        "*   $m$ is the momentum (the running average of the past gradients, initialized at zero),\n",
        "*   $\\beta\\in [0,1)$ is the damping factor, usually $0.9$ or $0.99$."
      ]
    },
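    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The update rule above can be sketched in a few lines of NumPy; here we minimize the quadratic $f(w)=\\frac{1}{2}w^\\top A w$, where $A$, $\\eta$, and $\\beta$ are illustrative choices rather than values from this tutorial.\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "A = np.diag([1.0, 10.0])      # ill-conditioned quadratic f(w) = 0.5 * w.T @ A @ w\n",
        "w = np.array([10.0, 1.0])\n",
        "m = np.zeros_like(w)          # momentum buffer, initialized at zero\n",
        "eta, beta = 0.01, 0.9\n",
        "\n",
        "for _ in range(300):\n",
        "    grad = A @ w              # gradient of f at w_t\n",
        "    m = grad + beta * m       # m_{t+1} = grad(f)(w_t) + beta * m_t\n",
        "    w = w - eta * m           # w_{t+1} = w_t - eta * m_{t+1}\n",
        "```\n",
        "\n",
        "After a few hundred steps `w` is close to the minimum at the origin; setting `beta = 0` reduces the same loop to vanilla gradient descent."
      ]
    },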
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "MSYiHGpxHill"
      },
      "source": [
        "\n",
        "\n",
        "Let’s consider two extreme cases to understand this decay rate parameter better. If the decay rate is 0, then it is exactly the same as (vanilla) gradient descent (blue ball). If the decay rate is 1 (and provided that the learning rate is reasonably small), then it rocks back and forth endlessly like the frictionless ball we saw previously; you do not want that. Typically the decay rate is chosen around 0.8–0.9 — it’s like a surface with a little bit of friction so it eventually slows down and stops (purple ball).\n",
        "\n",
        "<p align=\"center\">\n",
        "  <img width=\"460\" height=\"300\" src=\"https://miro.medium.com/max/800/1*zVi4ayX9u0MQQwa90CnxVg.gif\">\n",
        "</p>\n",
        "\n"
      ]
    },
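    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "You can see the effect of the decay rate directly by counting how many steps each setting needs on an ill-conditioned quadratic; the problem, learning rate, and tolerance below are illustrative choices, not values from this tutorial.\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "def steps_to_converge(beta, eta=0.01, tol=1e-6, max_steps=20000):\n",
        "    A = np.diag([1.0, 10.0])           # curvatures differ by a factor of 10\n",
        "    w = np.array([10.0, 1.0])\n",
        "    m = np.zeros_like(w)\n",
        "    for t in range(1, max_steps + 1):\n",
        "        grad = A @ w\n",
        "        m = grad + beta * m            # beta = 0 recovers vanilla gradient descent\n",
        "        w = w - eta * m\n",
        "        if np.linalg.norm(grad) < tol:\n",
        "            return t\n",
        "    return max_steps\n",
        "\n",
        "print(steps_to_converge(beta=0.0), steps_to_converge(beta=0.9))\n",
        "```\n",
        "\n",
        "On this problem, momentum with `beta = 0.9` reaches the tolerance in several times fewer steps than plain gradient descent."
      ]
    },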
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "aALryw2eqHIQ"
      },
      "source": [
        "Go check this out --> [Interactive view](https://distill.pub/2017/momentum/)"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "MqqAQJVSaLHg",
        "cellView": "form"
      },
      "source": [
        "#@markdown What are the advantages of momentum compared to vanilla gradient descent? (Select all that apply)\n",
        "reduced_oscillations = False #@param {type:\"boolean\"}\n",
        "faster_convergence = False #@param {type:\"boolean\"}\n",
        "possibility_of_evading_local_minima  = False #@param {type:\"boolean\"}\n",
        "adapts_learning_rate_based_on_convergence_speed = False #@param {type:\"boolean\"}\n",
        "\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6D2Yzsn3YbZ3"
      },
      "source": [
        "*Estimated time: 90 minutes since start*"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "iZcTTso8J23M"
      },
      "source": [
        "---\n",
        "# Wrap up"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7SM1xlMdJ7xB"
      },
      "source": [
        "## Submit responses"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "7K1yAK8tKDuA",
        "cellView": "form"
      },
      "source": [
        "#@markdown #Run Cell to Show Airtable Form\n",
        "#@markdown ##**Confirm your answers and then click \"Submit\"**\n",
        "\n",
        "import time\n",
        "import numpy as np\n",
        "from IPython.display import IFrame\n",
        "\n",
        "def prefill_form(src, fields: dict):\n",
        "  '''\n",
        "  src: the original src url to embed the form\n",
        "  fields: a dictionary of field:value pairs,\n",
        "  e.g. {\"pennkey\": my_pennkey, \"location\": my_location}\n",
        "  '''\n",
        "  prefills = \"&\".join([\"prefill_%s=%s\"%(key, fields[key]) for key in fields])\n",
        "  src = src + prefills\n",
        "  src = \"+\".join(src.split(\" \"))\n",
        "  return src\n",
        "\n",
        "\n",
        "#autofill time if it is not present\n",
        "try: t0;\n",
        "except NameError: t0 = time.time()\n",
        "try: t1;\n",
        "except NameError: t1 = time.time()\n",
        "try: t2;\n",
        "except NameError: t2 = time.time()\n",
        "try: t3;\n",
        "except NameError: t3 = time.time()\n",
        "try: t4;\n",
        "except NameError: t4 = time.time()\n",
        "\n",
        "times = [(t-t0) for t in [t1,t2,t3,t4]]\n",
        "\n",
        "\n",
        "#autofill fields if they are not present\n",
        "#a missing pennkey and pod will result in an Airtable warning\n",
        "#which is easily fixed user-side.\n",
        "try: my_pennkey;\n",
        "except NameError: my_pennkey = \"\"\n",
        "try: my_pod;\n",
        "except NameError: my_pod = \"\"\n",
        "try: learning_from_previous_week;\n",
        "except NameError: learning_from_previous_week = \"\" \n",
        "try: strategy;\n",
        "except NameError: strategy = \"\";\n",
        "try: loss_fn;\n",
        "except NameError: loss_fn = \"\"\n",
        "try: characteristics_for_gd;\n",
        "except NameError: characteristics_for_gd = \"\"\n",
        "try: sgd_vs_minibtach_plot;\n",
        "except NameError: sgd_vs_minibtach_plot = \"\"\n",
        "try: noisy_learning_process;\n",
        "except NameError: noisy_learning_process = False\n",
        "try: computationally_efficient;\n",
        "except NameError: computationally_efficient = False\n",
        "try: increased_model_update_frequency;\n",
        "except NameError: increased_model_update_frequency = False\n",
        "try: memory_efficient;\n",
        "except NameError: memory_efficient = False\n",
        "try: convergence_speed;\n",
        "except NameError: convergence_speed = False\n",
        "try: batch_norm_ab;\n",
        "except NameError: batch_norm_ab = \"\"\n",
        "try: bn_deep_shallow;\n",
        "except NameError: bn_deep_shallow = False\n",
        "try: reduced_oscillations;\n",
        "except NameError: reduced_oscillations = False\n",
        "try: faster_convergence;\n",
        "except NameError: faster_convergence = False\n",
        "try: possibility_of_evading_local_minima;\n",
        "except NameError: possibility_of_evading_local_minima = False\n",
        "try: adaptive;\n",
        "except NameError: adaptive = False\n",
        "try: megabytes_1;\n",
        "except NameError: megabytes_1 = \"\"\n",
        "try: megabytes_2;\n",
        "except NameError: megabytes_2 = \"\"\n",
        "\n",
        "\n",
        "fields = {\n",
        "    \"my_pennkey\": my_pennkey,\n",
        "    \"my_pod\": my_pod, \n",
        "    \"learning_from_previous_week\": learning_from_previous_week,\n",
        "    \"strategy\": strategy, \n",
        "    \"loss_fn\": loss_fn, \n",
        "    \"characteristics_for_gd\": characteristics_for_gd,\n",
        "    \"sgd_vs_minibtach_plot\": sgd_vs_minibtach_plot,\n",
        "    \"noisy_learning_process\": noisy_learning_process,\n",
        "    \"computationally_efficient\": computationally_efficient,\n",
        "    \"increased_model_update_frequency\": increased_model_update_frequency,\n",
        "    \"memory_efficient\": memory_efficient,\n",
        "    \"convergence_speed\": convergence_speed,\n",
        "    \"batch_norm_ab\": batch_norm_ab,\n",
        "    \"bn_deep_shallow\": bn_deep_shallow,\n",
        "    \"reduced_oscillations\": reduced_oscillations,\n",
        "    \"faster_convergence\": faster_convergence,\n",
        "    \"possibility_of_evading_local_minima\": possibility_of_evading_local_minima,\n",
        "    \"adaptive\": adaptive,\n",
        "    \"cumulative_times\": times,\n",
        "    \"megabytes_1\": megabytes_1,\n",
        "    \"megabytes_2\": megabytes_2\n",
        "}\n",
        "\n",
        "src = \"https://airtable.com/embed/shrxiEcmPSW5yr7np?\"\n",
        "\n",
        "#now instead of the original source url, we do: src = prefill_form(src, fields)\n",
        "display(IFrame(src = prefill_form(src, fields), width = 800, height = 400))\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "pTlLIcI9TlbN"
      },
      "source": [
        "# Feedback\n",
        "\n",
        "*   How could this session have been better?\n",
        "*   How happy are you in your group?\n",
        "*   How do you feel right now?\n",
        "\n",
        "Feel free to use the embeded form below or use this link:\n",
        "<a target=\"_blank\" rel=\"noopener noreferrer\" href=\"https://airtable.com/shrNSJ5ECXhNhsYss\">https://airtable.com/shrNSJ5ECXhNhsYss</a>"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "5K-ILLzoT0XO"
      },
      "source": [
        "# report to Airtable\n",
        "display(IFrame(src=\"https://airtable.com/embed/shrNSJ5ECXhNhsYss?backgroundColor=red\", width = 800, height = 400))"
      ],
      "execution_count": null,
      "outputs": []
    }
  ]
}