{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "name": "W4_Tutorial2.ipynb",
      "provenance": [],
      "collapsed_sections": [],
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/CIS-522/course-content/blob/main/tutorials/W04_Optimization/W4Tutorial2.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vMFtkpqfCN2f"
      },
      "source": [
        "# CIS-522 Week 4 Part 2\n",
        "# Optimization\n",
        "\n",
        "__Instructor__: Lyle Ungar\n",
        "\n",
        "__Content creator:__ Rongguang Wang\n",
        "\n",
        "__Content reviewer:__ Pooja Consul"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "854uzpETDJtP"
      },
      "source": [
        "---\n",
        "# Objectives for today\n",
        "\n",
        "We show how gradient descent can be tweaked with adaptive learning rates and other techniques to substantially speed up optimization, and we hint at the theory behind them.\n",
        "\n",
        "1.   Why use adaptive learning rates?\n",
        "2.   Why does Adagrad help?\n",
        "3.   Understand and use RMSprop and Adam\n",
        "4.   The intuition behind natural gradients\n",
        "5.   Recognize and mitigate amplification and class error disparities  "
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "cxCD0oTfKpXY",
        "cellView": "form"
      },
      "source": [
        "#@markdown What is your Pennkey and pod? (text, not numbers, e.g. bfranklin)\n",
        "my_pennkey = '' #@param {type:\"string\"}\n",
        "my_pod = 'Select' #@param ['Select', 'euclidean-wombat', 'sublime-newt', 'buoyant-unicorn', 'lackadaisical-manatee','indelible-stingray','superfluous-lyrebird','discreet-reindeer','quizzical-goldfish','astute-jellyfish','ubiquitous-cheetah','nonchalant-crocodile','fashionable-lemur','spiffy-eagle','electric-emu','quotidian-lion']\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "RV8-x3YyGDRw"
      },
      "source": [
        "---\n",
        "# Setup\n",
        "Note that some of the code for today can take up to an hour to run. We have therefore \"hidden\" that code and shown the resulting outputs.\n",
        "\n",
        "[Here](https://docs.google.com/presentation/d/1NSE9VQPKhWQMlRniuxrUiKeVbjhEf7_rxjAwbQ-qzeE/edit?usp=sharing) are the slides for today's videos (in case you want to take notes). **Do not read them now.**"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "upSbWUROGFvk"
      },
      "source": [
        "# imports\n",
        "from __future__ import print_function\n",
        "import torch\n",
        "import torch.nn as nn\n",
        "import torch.nn.functional as F\n",
        "import torch.optim as optim\n",
        "from torchvision import datasets, transforms\n",
        "from torch.optim.lr_scheduler import StepLR\n",
        "from torch.utils.data import Dataset\n",
        "import time\n",
        "import numpy as np\n",
        "import matplotlib.pyplot as plt\n",
        "from collections import defaultdict\n",
        "\n",
        "import requests\n",
        "import io\n",
        "from urllib.request import urlopen"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "yDKo45Q-Gn7I",
        "cellView": "form"
      },
      "source": [
        "# @title Figure Settings\n",
        "import ipywidgets as widgets\n",
        "%matplotlib inline \n",
        "fig_w, fig_h = (8, 6)\n",
        "plt.rcParams.update({'figure.figsize': (fig_w, fig_h)})\n",
        "%config InlineBackend.figure_format = 'retina'\n",
        "SMALL_SIZE = 12\n",
        "\n",
        "\n",
        "plt.style.use(\"https://raw.githubusercontent.com/NeuromatchAcademy/\"\n",
        "              \"course-content/master/nma.mplstyle\")\n",
        "\n",
        "# plt.rcParams.update(plt.rcParamsDefault)\n",
        "# plt.rc('font', size=SMALL_SIZE)          # controls default text sizes\n",
        "# plt.rc('axes', titlesize=SMALL_SIZE)     # fontsize of the axes title\n",
        "# plt.rc('axes', labelsize=SMALL_SIZE)    # fontsize of the x and y labels\n",
        "# plt.rc('xtick', labelsize=SMALL_SIZE)    # fontsize of the tick labels\n",
        "# plt.rc('ytick', labelsize=SMALL_SIZE)    # fontsize of the tick labels\n",
        "# plt.rc('legend', fontsize=SMALL_SIZE)    # legend fontsize\n",
        "# plt.rc('figure', titlesize=SMALL_SIZE)  # fontsize of the figure title"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "IuzyGTvyXuhF"
      },
      "source": [
        "---\n",
        "# Section 1:  Learning rate scheduling"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "bzmdMYFHSdoe",
        "cellView": "form"
      },
      "source": [
        "#@title Video: The importance of the Learning Rate\n",
        "\n",
        "import time\n",
        "try: t0;\n",
        "except NameError: t0=time.time()\n",
        "\n",
        "from IPython.display import YouTubeVideo\n",
        "video = YouTubeVideo(id=\"aUmlgb38ABY\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5G3RimhHSZ52"
      },
      "source": [
        "If the learning rate is too large, optimization diverges; if it is too small, training takes too long or ends at a suboptimal result. People often start with a large learning rate and then 'decay' or 'anneal' (decrease) it. This can help both optimization and generalization.\n",
        "\n",
        "Common beliefs about how annealing works come from the optimization analysis of stochastic gradient descent:\n",
        "\n",
        "1.   An initial large learning rate accelerates training or helps the network escape spurious local minima.\n",
        "2.   Decaying the learning rate helps the network converge to a local minimum and avoid oscillation.\n",
        "\n",
        "The simplest learning rate schedule decreases the learning rate linearly from a large initial value to a small value. This allows large weight changes at the beginning of the learning process and small changes, or fine-tuning, towards the end. There are other schedules, such as square-root and exponential decay.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "pP_zfGmXSd5D"
      },
      "source": [
        "## Exercise 1: Compare different annealing schedules: constant, linear, sqrt(t) and exp(-t)\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "hVgS_5JA3ypq",
        "cellView": "form"
      },
      "source": [
        "#@markdown **Student response**: How do you think they differ on train and test error? Which annealing schedule do you think is best?\n",
        "annealing_difference = '' #@param {type:\"string\"}\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0jaja6Shbzpb"
      },
      "source": [
        "First, let's simulate and plot the different annealing schedules (constant, linear, sqrt(t), and exp(-t)) in the cell below."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "xlS2oqhznc7R"
      },
      "source": [
        "model = torch.nn.Linear(2, 1)\n",
        "lr_anneal = ['constant', 'linear', 'sqrt', 'exp']\n",
        "lr_dict = defaultdict(list)\n",
        "\n",
        "for idx in range(len(lr_anneal)):\n",
        "    optimizer = optim.SGD(model.parameters(), lr=1e-2)\n",
        "    if lr_anneal[idx] == 'constant':\n",
        "        lambda1 = lambda epoch: 1\n",
        "    elif lr_anneal[idx] == 'linear':\n",
        "        lambda1 = lambda epoch: max(1e-7, 1 - 0.1*epoch)\n",
        "    elif lr_anneal[idx] == 'sqrt':\n",
        "        lambda1 = lambda epoch: (epoch + 1.0) ** -0.5\n",
        "    elif lr_anneal[idx] == 'exp':\n",
        "        lambda1 = lambda epoch: 0.1 ** epoch\n",
        "    scheduler = optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda1)\n",
        "    for i in range(10):\n",
        "        optimizer.step()\n",
        "        lr_dict[lr_anneal[idx]].append(optimizer.param_groups[0][\"lr\"])\n",
        "        scheduler.step()\n",
        "\n",
        "with plt.xkcd():\n",
        "  plt.plot(range(10), lr_dict['constant'], label='Constant')\n",
        "  plt.plot(range(10), lr_dict['linear'], label='Linear')\n",
        "  plt.plot(range(10), lr_dict['sqrt'], label='Sqrt')\n",
        "  plt.plot(range(10), lr_dict['exp'], label='Exp')\n",
        "  plt.title('Annealing Schedules')\n",
        "  plt.ylabel('Learning Rate')\n",
        "  plt.xlabel('Epoch')\n",
        "  plt.legend()\n",
        "  plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "wzZZE9x8zcYw",
        "cellView": "form"
      },
      "source": [
        "#@markdown **Student response**: Now that you can see how the different learning rates change over the epochs (batches), do you still agree with your previous answer? Why or why not?\n",
        "annealing_answer = '' #@param {type:\"string\"}\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "zUVZ7u7Occlg"
      },
      "source": [
        "Now, check your assumption by running the digit classification example below with different learning rate schedulers: linear, sqrt(t), and exp(-t)."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Wlmg9nWggwTs",
        "cellView": "form"
      },
      "source": [
        "# @markdown ### helper functions (Run Me)\n",
        "class Net(nn.Module):\n",
        "    def __init__(self):\n",
        "        super(Net, self).__init__()\n",
        "        self.fc2 = nn.Linear(128, 10)\n",
        "        self.fc3 = nn.Linear(784, 128)\n",
        "\n",
        "    def forward(self, x):\n",
        "        x = torch.flatten(x, 1)\n",
        "        x = self.fc3(x)\n",
        "        x = F.relu(x)\n",
        "        x = self.fc2(x)\n",
        "        output = F.log_softmax(x, dim=1)\n",
        "        return output\n",
        "\n",
        "def train(args, model, device, train_loader, optimizer, epoch):\n",
        "    model.train()\n",
        "    avg_loss, correct = (0., 0.)\n",
        "    for batch_idx, (data, target) in enumerate(train_loader):\n",
        "        data, target = data.to(device), target.to(device)\n",
        "        optimizer.zero_grad()\n",
        "        output = model(data)\n",
        "        loss = F.nll_loss(output, target)\n",
        "        avg_loss += loss.item()\n",
        "        pred = output.argmax(dim=1, keepdim=True)  # get the index of the max log-probability\n",
        "        correct += pred.eq(target.view_as(pred)).sum().item()\n",
        "        loss.backward()\n",
        "        optimizer.step()\n",
        "        if batch_idx % args['log_interval'] == 0:\n",
        "            print('Train Epoch: {} [{}/{} ({:.0f}%)]\\tLoss: {:.6f}'.format(\n",
        "                epoch, batch_idx * len(data), len(train_loader.dataset),\n",
        "                100. * batch_idx / len(train_loader), loss.item()))\n",
        "    avg_loss /= len(train_loader.dataset)\n",
        "    return 100. * correct / len(train_loader.dataset)\n",
        "\n",
        "def test(model, device, test_loader):\n",
        "    model.eval()\n",
        "    test_loss = 0\n",
        "    correct = 0\n",
        "    with torch.no_grad():\n",
        "        for data, target in test_loader:\n",
        "            data, target = data.to(device), target.to(device)\n",
        "            output = model(data)\n",
        "            test_loss += F.nll_loss(output, target, reduction='sum').item()  # sum up batch loss\n",
        "            pred = output.argmax(dim=1, keepdim=True)  # get the index of the max log-probability\n",
        "            correct += pred.eq(target.view_as(pred)).sum().item()\n",
        "\n",
        "    test_loss /= len(test_loader.dataset)\n",
        "\n",
        "    print('\\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.4f}%)\\n'.format(\n",
        "        test_loss, correct, len(test_loader.dataset),\n",
        "        100. * correct / len(test_loader.dataset)))\n",
        "    return 100. * correct / len(test_loader.dataset)\n",
        "\n",
        "def schedular_eval(args):\n",
        "    use_cuda = not args['no_cuda'] and torch.cuda.is_available()\n",
        "    torch.manual_seed(args['seed'])\n",
        "    device = torch.device('cuda' if use_cuda else 'cpu')\n",
        "\n",
        "    train_kwargs = {'batch_size': args['batch_size']}\n",
        "    test_kwargs = {'batch_size': args['test_batch_size']}\n",
        "    if use_cuda:\n",
        "        cuda_kwargs = {'num_workers': 1,\n",
        "                       'pin_memory': True,\n",
        "                       'shuffle': True}\n",
        "        train_kwargs.update(cuda_kwargs)\n",
        "        test_kwargs.update(cuda_kwargs)\n",
        "\n",
        "    transform=transforms.Compose([\n",
        "        transforms.ToTensor(),\n",
        "        transforms.Normalize((0.1307,), (0.3081,))\n",
        "        ])\n",
        "    train_loader = torch.utils.data.DataLoader(datasets.MNIST('../data', train=True, download=True,\n",
        "                       transform=transform),**train_kwargs)\n",
        "    test_loader = torch.utils.data.DataLoader(datasets.MNIST('../data', train=False,\n",
        "                       transform=transform), **test_kwargs)\n",
        "\n",
        "    model = Net().to(device)\n",
        "    optimizer = optim.SGD(model.parameters(), lr=args['lr'], momentum=args['momentum'])\n",
        "\n",
        "    if args['anneal_type'] == 'constant':\n",
        "        lambda1 = lambda epoch: 1\n",
        "    elif args['anneal_type'] == 'linear':\n",
        "        lambda1 = lambda epoch: max(1e-7, 1 -0.1 * epoch)\n",
        "    elif args['anneal_type'] == 'sqrt':\n",
        "        lambda1 = lambda epoch: (epoch + 1.0) ** -0.5\n",
        "    elif args['anneal_type'] == 'exp':\n",
        "        lambda1 = lambda epoch: 0.1 ** epoch\n",
        "    scheduler = optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda1)\n",
        "\n",
        "    train_list, test_list = [], []\n",
        "    for epoch in range(1, args['epochs'] + 1):\n",
        "        '''\n",
        "        if epoch > 1:\n",
        "            for param_group in optimizer.param_groups:\n",
        "                param_group['lr'] *= 0.1\n",
        "        '''\n",
        "        train_acc = train(args, model, device, train_loader, optimizer, epoch)\n",
        "        train_list.append(100.-train_acc)\n",
        "        test_acc = test(model, device, test_loader)\n",
        "        test_list.append(100.-test_acc)\n",
        "        scheduler.step()\n",
        "\n",
        "    return train_list, test_list "
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6tBDX8ECdGP_"
      },
      "source": [
        "The training takes over 20 minutes. Please skip running the cells below for now and come back later if time allows."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Pbs3P3JdrveH",
        "cellView": "form"
      },
      "source": [
        "# @markdown ### Train (Run me)\n",
        "# Training settings\n",
        "args = {'batch_size': 64,\n",
        "        'test_batch_size': 1000,\n",
        "        'epochs': 10,\n",
        "        'lr': 0.01,\n",
        "        'momentum': 0.9,\n",
        "        'net_type': 'Net',\n",
        "        'anneal_type': 'linear',\n",
        "        'no_cuda': False,\n",
        "        'seed': 1,\n",
        "        'log_interval': 100\n",
        "        }\n",
        "\n",
        "lr_anneal = ['constant', 'linear', 'sqrt', 'exp']\n",
        "error_dict = {}\n",
        "\n",
        "for i in range(len(lr_anneal)):\n",
        "    args['anneal_type'] = lr_anneal[i]\n",
        "    train_error, test_error = schedular_eval(args)\n",
        "    error_dict['train' + str(lr_anneal[i])] = train_error\n",
        "    error_dict['test' + str(lr_anneal[i])] = test_error"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "esrdkxkWuQI-",
        "cellView": "form"
      },
      "source": [
        "# @markdown ### Plot (run me)\n",
        "with plt.xkcd():\n",
        "    fig, axs = plt.subplots(1, 2, figsize=(10,4))\n",
        "    axs[0].plot(error_dict['trainconstant'], label='Constant', color='b')\n",
        "    axs[1].plot(error_dict['testconstant'], label='Constant', color='b', linestyle='dashed')\n",
        "    axs[0].plot(error_dict['trainlinear'], label='Linear', color='r')\n",
        "    axs[1].plot(error_dict['testlinear'], label='Linear', color='r', linestyle='dashed')\n",
        "    axs[0].plot(error_dict['trainsqrt'], label='Sqrt', color='g')\n",
        "    axs[1].plot(error_dict['testsqrt'], label='Sqrt', color='g', linestyle='dashed')\n",
        "    axs[0].plot(error_dict['trainexp'], label='Exp', color='orange')\n",
        "    axs[1].plot(error_dict['testexp'], label='Exp', color='orange', linestyle='dashed')\n",
        "    axs[0].set_title('Train')\n",
        "    axs[1].set_title('Test')\n",
        "    axs[0].set_ylabel('Error (%)')\n",
        "    #plt.yscale('log')\n",
        "    axs[0].set_xlabel('Epoch')\n",
        "    axs[1].set_xlabel('Epoch')\n",
        "    axs[0].legend()\n",
        "    axs[1].legend()\n",
        "    plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "joT0AnEvdLXu"
      },
      "source": [
        "Plot the train and test classification error curves for the different annealing strategies by running the cell below."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "JgLxW9XkdL3M"
      },
      "source": [
        "url = \"https://raw.githubusercontent.com/CIS-522/course-content/main/tutorials/W04_Optimization/static/W4_Tutorial2_Exercise1_annealing.png\"\n",
        "img = plt.imread(requests.get(url, stream=True).raw)\n",
        "plt.imshow(img)\n",
        "plt.axis('off')"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "_nyyQml1znfT",
        "cellView": "form"
      },
      "source": [
        "#@markdown Which methods are good? Which are bad? Why?\n",
        "method_performance = '' #@param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "p19KHhZEYv-k"
      },
      "source": [
        "*Estimated time: 20 minutes since start*"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dHSs_dJixX0T"
      },
      "source": [
        "---\n",
        "# Section 2: Adaptive learning rate methods"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "FNSXBg63EFMK",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Adaptive Gradient Descent\n",
        "\n",
        "try: t1;\n",
        "except NameError: t1=time.time()\n",
        "\n",
        "video = YouTubeVideo(id=\"pR7CpHh3oNk\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "video\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "SbRyYxtmSyEw"
      },
      "source": [
        "In the standard SGD formulation, every weight in the network is updated with the same global learning rate $\\eta$. Here, we adapt the learning rate for each weight individually, using information from its gradients.\n",
        "\n",
        "## Section 2.1: Adagrad\n",
        "\n",
        "Adagrad adapts the learning rate of each parameter individually: it shrinks the learning rate for parameters whose gradients have been large, and keeps it relatively large for parameters whose gradients have been small.\n",
        "\n",
        "It uses a different learning rate for every parameter $w_j$ at every time step $t$. (In practice, a time step is a minibatch, with everything averaged over that minibatch.) The update for every parameter $w_j$ at each time step $t$ then becomes\n",
        "\n",
        "$$w_{t+1}=w_t- \\frac{\\eta}{\\sqrt{v_{t+1}+\\epsilon}} \\nabla f(w_t)$$\n",
        "\n",
        "where the equation holds for every parameter $w_j$ separately. Thus, $\\nabla f(w_{t})$ is the partial derivative of the objective function with respect to the parameter $w_j$ at time step $t$, and the learning rate for each parameter is scaled using the sum of the squared gradients for that parameter:\n",
        "\n",
        "$$v_{t+1} = \\sum^t_{\\tau=1} (\\nabla f(w_{\\tau}))^2$$\n",
        "\n",
        "Adagrad effectively selects low learning rates for parameters associated with frequently occurring features, and high learning rates for parameters associated with infrequent features. It is thus well-suited for dealing with sparse data.\n"
      ]
    },
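    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To make the update concrete, here is a minimal NumPy sketch of the Adagrad rule above. It is an illustration of the math, not the `optim.Adagrad` implementation; the toy quadratic objective and all constants are made up for the example. Note how the per-coordinate scaling makes both coordinates shrink at essentially the same rate even though their curvatures differ by a factor of 100."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Minimal Adagrad sketch on a toy quadratic f(w) = 0.5 * sum(c * w**2)\n",
        "import numpy as np\n",
        "\n",
        "c = np.array([10.0, 0.1])  # very different curvature per coordinate\n",
        "w = np.array([1.0, 1.0])\n",
        "eta, eps = 0.5, 1e-8\n",
        "v = np.zeros_like(w)       # running sum of squared gradients\n",
        "\n",
        "for t in range(100):\n",
        "    grad = c * w                        # gradient of the toy quadratic\n",
        "    v += grad ** 2                      # accumulate; v never shrinks\n",
        "    w -= eta / np.sqrt(v + eps) * grad  # per-coordinate effective step\n",
        "\n",
        "print(w)  # both coordinates shrink toward 0"
      ],
      "execution_count": null,
      "outputs": []
    },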
    {
      "cell_type": "code",
      "metadata": {
        "id": "qNqwe7BPcq9R",
        "cellView": "form"
      },
      "source": [
        "#@markdown Does Adagrad have a monotonically decreasing learning rate?\n",
        "monotonically_decreasing = True #@param [\"False\", \"True\"] {type:\"raw\"}\n",
        "\n",
        "#@markdown What problems can arise due to a monotonically decreasing learning rate? \n",
        "problems = '' #@param {type:\"string\"}\n",
        "\n",
        "\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "HnV3yjjbUCJB"
      },
      "source": [
        "## Section 2.2: RMSprop\n",
        "\n",
        "RMSprop seeks to reduce Adagrad's aggressive, monotonically decreasing learning rate. Instead of accumulating all past squared gradients, RMSprop effectively restricts the window of accumulated past gradients by defining the accumulator recursively as a decaying average of past squared gradients.\n",
        "\n",
        "$$w_{t+1}=w_t- \\frac{\\eta}{\\sqrt{v_{t+1}+\\epsilon}} \\nabla f(w_t)$$\n",
        "$$v_{t+1}=\\alpha v_t+(1-\\alpha)(\\nabla f(w_t))^2$$\n",
        "\n",
        "where \n",
        "*   $v$ is the 2nd moment estimate which depends (as a fraction $\\alpha$ similarly to the Momentum term) on the previous average and the current gradient.\n",
        "*   $\\alpha$ is usually set to $0.9$, while a good default value for the learning rate $\\eta$ is $0.001$.\n",
        "\n",
        "We update $v$ via an exponential moving average, which is a standard way of maintaining a running average of a quantity that may change over time. We put larger weights on newer values, as they provide more information; one way to do that is to down-weight old values exponentially. At each step, the old contributions to $v$ are down-weighted by the constant $\\alpha$, which lies between 0 and 1. This dampens old values until they are no longer an important part of the exponential moving average.\n",
        "\n",
        "\n",
        "### Adam\n",
        "\n",
        "In addition to storing an exponentially decaying average of past squared gradients $v_t$ like RMSprop, Adam also keeps an exponentially decaying average of past gradients $m_t$, similar to momentum. Whereas momentum can be seen as a ball running down a slope, Adam behaves like a heavy ball with friction, which thus prefers flat minima in the error surface.\n",
        "\n",
        "We compute the decaying averages of past gradients $m_t$ and past squared gradients $v_t$ respectively as follows,\n",
        "\n",
        "$$m_{t+1}=\\beta m_t+(1-\\beta)\\nabla f(w_t)$$\n",
        "$$v_{t+1}=\\alpha v_t+(1-\\alpha)(\\nabla f(w_t))^2$$\n",
        "\n",
        "where\n",
        "*   $m_t$ is the exponentially decaying average of the first moment (the mean) of the gradients,\n",
        "*   $v_t$ is the exponentially decaying average of the second moment (the uncentered variance) of the gradients.\n",
        "\n",
        "As always, each of these equations uses averages of the gradient (and squared gradient) over the observations in the minibatch $B_t$, and everything is computed separately for each parameter (using vectorized code, of course!).\n",
        "\n",
        "Since we initialize averages with zeros, the estimators are biased towards zero. To correct the bias,\n",
        "\n",
        "$$\\hat{m}_t=\\frac{m_t}{1-\\beta^t}$$\n",
        "$$\\hat{v}_t=\\frac{v_t}{1-\\alpha^t}$$\n",
        "\n",
        "Finally, the Adam update rule is\n",
        "\n",
        "$$w_{t+1}=w_t-\\frac{\\eta}{\\sqrt{\\hat{v}_t+\\epsilon}} \\hat{m}_t$$\n",
        "\n",
        "where the default values for $\\beta$ and $\\alpha$ are $0.9$ and $0.999$, respectively, and $\\epsilon$ is a small constant (e.g., $10^{-8}$) for numerical stability.\n",
        "\n",
        "\n",
        "\n"
      ]
    },
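    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The Adam update above (with bias correction) can be sketched in a few lines of NumPy. This is a toy illustration of the math, not the `optim.Adam` implementation; the quadratic objective and the constants are made up for the example."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Minimal Adam sketch on a toy quadratic f(w) = 0.5 * sum(c * w**2)\n",
        "import numpy as np\n",
        "\n",
        "c = np.array([10.0, 0.1])\n",
        "w = np.array([1.0, 1.0])\n",
        "eta, beta, alpha, eps = 0.1, 0.9, 0.999, 1e-8\n",
        "m = np.zeros_like(w)  # EMA of gradients (1st moment)\n",
        "v = np.zeros_like(w)  # EMA of squared gradients (2nd moment)\n",
        "\n",
        "for t in range(1, 201):\n",
        "    grad = c * w\n",
        "    m = beta * m + (1 - beta) * grad\n",
        "    v = alpha * v + (1 - alpha) * grad ** 2\n",
        "    m_hat = m / (1 - beta ** t)  # correct bias from zero initialization\n",
        "    v_hat = v / (1 - alpha ** t)\n",
        "    w -= eta / np.sqrt(v_hat + eps) * m_hat\n",
        "\n",
        "print(w, 0.5 * np.sum(c * w ** 2))  # final weights and loss"
      ],
      "execution_count": null,
      "outputs": []
    },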
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-LXagOzUEBI6"
      },
      "source": [
        "[Interactive visualization](https://bl.ocks.org/EmilienDupont/raw/aaf429be5705b219aaaf8d691e27ca87/): click anywhere to see the different methods converge from that starting point"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "KgdI1x0MS8-D"
      },
      "source": [
        "## Exercise 2: Learn and compare different adaptive learning rate optimizers\n",
        "\n",
        "How do SGD with a fixed schedule, Adagrad, RMSprop, and Adam differ on train and test error? Which one works best?\n",
        "\n",
        "We compare these optimizers on the MNIST digit classification task."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "NIWmVS4TzfC5"
      },
      "source": [
        "def optimizer_eval(args):\n",
        "    use_cuda = not args['no_cuda'] and torch.cuda.is_available()\n",
        "    torch.manual_seed(args['seed'])\n",
        "    device = torch.device('cuda' if use_cuda else 'cpu')\n",
        "\n",
        "    train_kwargs = {'batch_size': args['batch_size']}\n",
        "    test_kwargs = {'batch_size': args['test_batch_size']}\n",
        "    if use_cuda:\n",
        "        cuda_kwargs = {'num_workers': 1,\n",
        "                       'pin_memory': True,\n",
        "                       'shuffle': True}\n",
        "        train_kwargs.update(cuda_kwargs)\n",
        "        test_kwargs.update(cuda_kwargs)\n",
        "\n",
        "    transform=transforms.Compose([\n",
        "        transforms.ToTensor(),\n",
        "        transforms.Normalize((0.1307,), (0.3081,))\n",
        "        ])\n",
        "    train_loader = torch.utils.data.DataLoader(datasets.MNIST('../data', train=True, download=True,\n",
        "                       transform=transform),**train_kwargs)\n",
        "    test_loader = torch.utils.data.DataLoader(datasets.MNIST('../data', train=False,\n",
        "                       transform=transform), **test_kwargs)\n",
        "\n",
        "    model = Net().to(device)\n",
        "    if args['optimizer'] == 'sgd':\n",
        "        optimizer = optim.SGD(model.parameters(), lr=args['lr'])\n",
        "    elif args['optimizer'] == 'adagrad':\n",
        "        optimizer = optim.Adagrad(model.parameters(), lr=args['lr'])\n",
        "    elif args['optimizer'] == 'rmsprop':\n",
        "        optimizer = optim.RMSprop(model.parameters(), lr=1e-3)\n",
        "    elif args['optimizer'] == 'adam':\n",
        "        optimizer = optim.Adam(model.parameters(), lr=1e-3)\n",
        "\n",
        "    train_list, test_list = [], []\n",
        "    for epoch in range(1, args['epochs'] + 1):\n",
        "        train_acc = train(args, model, device, train_loader, optimizer, epoch)\n",
        "        train_list.append(100.-train_acc)\n",
        "        test_acc = test(model, device, test_loader)\n",
        "        test_list.append(100.-test_acc)\n",
        "\n",
        "    return train_list, test_list "
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "uktQPea8gNBk"
      },
      "source": [
        "The training takes over 20 minutes. Please skip running the cells below for now and come back later if time allows."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "vvX9ezmg0Er3",
        "cellView": "form"
      },
      "source": [
        "# @markdown ### Train (run me)\n",
        "# Training settings\n",
        "args = {'batch_size': 64,\n",
        "        'test_batch_size': 1000,\n",
        "        'epochs': 10,\n",
        "        'lr': 0.01,\n",
        "        'momentum': 0.9,\n",
        "        'net_type': 'Net',\n",
        "        'anneal_type': 'linear',\n",
        "        'optimizer': 'sgd',\n",
        "        'no_cuda': False,\n",
        "        'seed': 1,\n",
        "        'log_interval': 100\n",
        "        }\n",
        "\n",
        "optimizer = ['sgd', 'adagrad', 'rmsprop', 'adam']\n",
        "error_dict = {}\n",
        "\n",
        "for i in range(len(optimizer)):\n",
        "    args['optimizer'] = optimizer[i]\n",
        "    train_error, test_error = optimizer_eval(args)\n",
        "    error_dict['train' + str(optimizer[i])] = train_error\n",
        "    error_dict['test' + str(optimizer[i])] = test_error"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "8JDWUE09040_",
        "cellView": "form"
      },
      "source": [
        "# @markdown ### Plot (run me)\n",
        "with plt.xkcd():\n",
        "    fig, axs = plt.subplots(1, 2, figsize=(10,4))\n",
        "    axs[0].plot(error_dict['trainsgd'], label='SGD', color='b')\n",
        "    axs[1].plot(error_dict['testsgd'], label='SGD', color='b', linestyle='dashed')\n",
        "    axs[0].plot(error_dict['trainadagrad'], label='Adagrad', color='r')\n",
        "    axs[1].plot(error_dict['testadagrad'], label='Adagrad', color='r', linestyle='dashed')\n",
        "    axs[0].plot(error_dict['trainrmsprop'], label='RMSprop', color='g')\n",
        "    axs[1].plot(error_dict['testrmsprop'], label='RMSprop', color='g', linestyle='dashed')\n",
        "    axs[0].plot(error_dict['trainadam'], label='Adam', color='orange')\n",
        "    axs[1].plot(error_dict['testadam'], label='Adam', color='orange', linestyle='dashed')\n",
        "    axs[0].set_title('Train')\n",
        "    axs[1].set_title('Test')\n",
        "    axs[0].set_ylabel('Error (%)')\n",
        "    #plt.yscale('log')\n",
        "    axs[0].set_xlabel('Epoch')\n",
        "    axs[1].set_xlabel('Epoch')\n",
        "    axs[0].legend()\n",
        "    axs[1].legend()\n",
        "    plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Y4KsR565gkk9"
      },
      "source": [
        "Run the cell below to display the train and test classification error curves for the different optimizers."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "VwR9c1cmglEq"
      },
      "source": [
        "url = \"https://raw.githubusercontent.com/CIS-522/course-content/main/tutorials/W04_Optimization/static/W4_Tutorial2_Exercise2_optimizers.png\"\n",
        "img = plt.imread(requests.get(url, stream=True).raw)\n",
        "plt.imshow(img)\n",
        "plt.axis('off')"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "SjgObTND0YCZ",
        "cellView": "form"
      },
      "source": [
        "#@markdown **Student response**: How do the learning-rate methods (fixed schedule, Adagrad, RMSprop, Adam) differ on train and test error? Which works best?\n",
        "lr_best_train = '' #@param {type:\"string\"}\n",
        "lr_best_test = '' #@param {type:\"string\"}\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "aKXWprzKZden"
      },
      "source": [
        "*Estimated time: 40 minutes since start*"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Jy3B_ME5X-f5"
      },
      "source": [
        "---\n",
        "# Section 3:  Natural gradients"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "LB7QFb4ryLYp",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Natural Gradient\n",
        "\n",
        "try: t2;\n",
        "except NameError: t2=time.time()\n",
        "\n",
        "video = YouTubeVideo(id=\"QmM6_qBHuvM\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "GM4IqLEWx-F2"
      },
      "source": [
        "Neural nets, when trained on new data, tend to forget what they have already learned. Natural gradients try to reduce this problem.\n",
        "\n",
        "Instead of fixing the Euclidean distance each parameter update moves in parameter space, we can fix the distance moved in the space of output distributions. That is, instead of allowing the parameter vector to move at most an epsilon distance, we constrain the model's output distribution to stay within an epsilon distance of the distribution from the previous step. We measure the distance between two distributions with the [Kullback-Leibler (KL) divergence](https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence).\n",
        "The natural gradient update rule is\n",
        "\n",
        "$$w_{t+1}=w_t-\\eta F(w_t)^{-1}\\nabla f_i(w_t),$$\n",
        "\n",
        "where $F(w_t)$ is the Fisher information matrix and\n",
        "\n",
        "$$KL(p(y;w)\\,||\\,p(y;w+\\delta w))\\approx \\frac{1}{2} \\delta w^T F \\delta w.$$\n",
        "\n",
        "The Fisher information measures the amount of information that an observable random variable $Y$ (here, the output of the neural net) carries about an unknown parameter (here, $w$) of a distribution that models $Y$. Formally, it is the variance of the score, or the expected value of the observed information. When there are $d$ parameters, the Fisher information is a $d \\times d$ positive semidefinite matrix.\n",
        "\n",
        "The natural gradient changes the input/output function as little as possible as it moves down the gradient. It works great in theory but is hard to implement. If you’d like to learn more, [here](https://wiseodd.github.io/techblog/2018/03/14/natural-gradient/) is a nice formal description of natural gradients, which shows that “the Fisher information matrix defines the local curvature in distribution space, for which KL divergence is the metric” and explains that the second moment, as computed by Adam, approximates the Fisher information matrix. I.e., Adam approximates natural gradients, but instead of inverting a (Fisher) matrix of size (number of weights) × (number of weights), it approximates that matrix with a diagonal matrix.\n"
      ]
    },
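    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To make the diagonal approximation concrete, here is a minimal sketch (illustrative code, not a full natural-gradient implementation): we build a crude diagonal Fisher estimate from squared gradients, much as Adam's second moment does, and use it to precondition a gradient step. The toy linear model, random data, and step size below are all assumptions for illustration.\n",
        "\n",
        "```python\n",
        "import torch\n",
        "import torch.nn.functional as F\n",
        "\n",
        "torch.manual_seed(0)\n",
        "w = torch.randn(5, requires_grad=True)   # toy model parameters\n",
        "x = torch.randn(32, 5)                   # a random mini-batch\n",
        "y = torch.randint(0, 2, (32,)).float()   # random binary targets\n",
        "\n",
        "loss = F.binary_cross_entropy_with_logits(x @ w, y)\n",
        "loss.backward()\n",
        "\n",
        "g = w.grad\n",
        "fisher_diag = g ** 2 + 1e-8    # crude diagonal Fisher estimate\n",
        "eta = 0.1\n",
        "with torch.no_grad():\n",
        "    w -= eta * g / fisher_diag  # precondition by the inverse (diagonal) Fisher\n",
        "```\n",
        "\n",
        "Dividing the gradient coordinate-wise by its own square is the diagonal shortcut that Adam-style methods take instead of inverting the full Fisher matrix."
      ]
    },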
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "exFPHhvDswg-"
      },
      "source": [
        "*Estimated time: 50 minutes since start*"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "gE6jwH4UTTHC"
      },
      "source": [
        "---\n",
        "# Section 4:  Bias in ML: Amplification and class error imbalance"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Dm6hKdmtTVGf",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Bias in ML\n",
        "\n",
        "try: t3;\n",
        "except NameError: t3=time.time()\n",
        "\n",
        "video = YouTubeVideo(id=\"GXlM9QVxE98\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "82KDnqzrTXkB"
      },
      "source": [
        "## Exercise 3: Class error imbalance\n",
        "\n",
        "We add digit images with a colorized (red) background to the MNIST dataset, whose original images have a black background. The resulting dataset consists of black-background and red-background images in a ratio of 83% to 17%, respectively. We report the test accuracy for each of the two types of images, along with the average accuracy.\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "_ahPVhoMmQxR"
      },
      "source": [
        "class colorMNIST(Dataset):\n",
        "\n",
        "    COLOR_DICT = {\n",
        "        'red': [1., 0., 0.],\n",
        "        'green': [0., 1., 0.],\n",
        "        'blue': [0., 0., 1.],\n",
        "    }\n",
        "\n",
        "    def __init__(self, train=True, color='red', ratio=0.2):\n",
        "        self.train = train\n",
        "        self.color = color\n",
        "        self.ratio = ratio\n",
        "\n",
        "        transform = transforms.Compose([\n",
        "            transforms.ToTensor(),\n",
        "            transforms.Lambda(lambda x: x.repeat(3, 1, 1))\n",
        "        ])\n",
        "\n",
        "        self.data = datasets.MNIST('./datasets/mnist/MNIST', train=self.train, \n",
        "                                        download=True, transform=transform)\n",
        "        self.data = self.colorize_dataset()\n",
        "\n",
        "    def colorize_img(self, img):\n",
        "        if self.color in self.COLOR_DICT.keys():\n",
        "            color = self.COLOR_DICT[self.color]\n",
        "        elif self.color == 'rand':\n",
        "            color = torch.rand(3)\n",
        "        else:\n",
        "            raise ValueError('Invalid color.')\n",
        "\n",
        "        zero_tensor = torch.zeros_like(img)\n",
        "        zero_tensor[0, :, :] = color[0]\n",
        "        zero_tensor[1, :, :] = color[1]\n",
        "        zero_tensor[2, :, :] = color[2]\n",
        "\n",
        "        return torch.where(img < 0.05, zero_tensor, img)\n",
        "\n",
        "    def colorize_dataset(self):\n",
        "        last_select_sample = int(len(self.data) * self.ratio)\n",
        "        new_data = []\n",
        "        for i in range(last_select_sample):\n",
        "            img, label = self.data[i]\n",
        "            new_img = self.colorize_img(img)\n",
        "            new_data.append((new_img, label))\n",
        "\n",
        "        new_data.extend(self.data)\n",
        "        return new_data\n",
        "\n",
        "    def __len__(self):\n",
        "        return len(self.data)\n",
        "\n",
        "    def __getitem__(self, index):\n",
        "        img, label = self.data[index]\n",
        "        return img, label\n",
        "\n",
        "class colorNet(nn.Module):\n",
        "    def __init__(self):\n",
        "        super(colorNet, self).__init__()\n",
        "        self.conv1 = nn.Conv2d(3, 32, 3, 1)\n",
        "        self.conv2 = nn.Conv2d(32, 64, 3, 1)\n",
        "        self.dropout1 = nn.Dropout(0.25)\n",
        "        self.dropout2 = nn.Dropout(0.5)\n",
        "        self.fc1 = nn.Linear(9216, 128)\n",
        "        self.fc2 = nn.Linear(128, 10)\n",
        "        self.fc3 = nn.Linear(784, 128)\n",
        "\n",
        "    def forward(self, x):\n",
        "        x = self.conv1(x)\n",
        "        x = F.relu(x)\n",
        "        x = self.conv2(x)\n",
        "        x = F.relu(x)\n",
        "        x = F.max_pool2d(x, 2)\n",
        "        x = self.dropout1(x)\n",
        "        x = torch.flatten(x, 1)\n",
        "        x = self.fc1(x)\n",
        "        #x = self.fc3(x)\n",
        "        x = F.relu(x)\n",
        "        x = self.dropout2(x)\n",
        "        x = self.fc2(x)\n",
        "        output = F.log_softmax(x, dim=1)\n",
        "        return output"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "xrMhM0QbmeSo"
      },
      "source": [
        "Run the cell below to compare an original image with a colorized image."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "-o2hEeFPwwL1"
      },
      "source": [
        "data = colorMNIST().data\n",
        "with plt.xkcd():\n",
        "    fig, axs = plt.subplots(1, 2, figsize=(10,4))\n",
        "    axs[0].imshow(data[int(len(data)*(2/12))+1][0].numpy().transpose(1,2,0))\n",
        "    axs[1].imshow(data[1][0].numpy().transpose(1,2,0))\n",
        "    axs[0].set_title('Original digit')\n",
        "    axs[1].set_title('Colorized digit')\n",
        "    axs[0].axis('off')\n",
        "    axs[1].axis('off')\n",
        "    plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "bW-2qNQJiInH",
        "cellView": "form"
      },
      "source": [
        "# @markdown ### helper functions (Run Me)\n",
        "def train(args, model, device, train_loader, optimizer, epoch):\n",
        "    model.train()\n",
        "    for batch_idx, (data, target) in enumerate(train_loader):\n",
        "        data, target = data.to(device), target.to(device)\n",
        "        optimizer.zero_grad()\n",
        "        output = model(data)\n",
        "        loss = F.nll_loss(output, target)\n",
        "        loss.backward()\n",
        "        optimizer.step()\n",
        "        if batch_idx % args['log_interval'] == 0:\n",
        "            print('Train Epoch: {} [{}/{} ({:.0f}%)]\\tLoss: {:.6f}'.format(\n",
        "                epoch, batch_idx * len(data), len(train_loader.dataset),\n",
        "                100. * batch_idx / len(train_loader), loss.item()))\n",
        "\n",
        "def test(model, device, test_loader):\n",
        "    model.eval()\n",
        "    test_loss = 0\n",
        "    pred_list, target_list = [], []\n",
        "    correct, raw_correct, color_correct = (0, 0, 0)\n",
        "    with torch.no_grad():\n",
        "        for data, target in test_loader:\n",
        "            data, target = data.to(device), target.to(device)\n",
        "            output = model(data)\n",
        "            test_loss += F.nll_loss(output, target, reduction='sum').item()  # sum up batch loss\n",
        "            pred = output.argmax(dim=1, keepdim=True)  # get the index of the max log-probability\n",
        "            pred_list.append(pred)\n",
        "            target_list.append(target.view_as(pred))\n",
        "            #correct += pred.eq(target.view_as(pred)).sum().item()\n",
        "\n",
        "    pred = torch.cat(pred_list, dim=0)\n",
        "    target = torch.cat(target_list, dim=0)\n",
        "    correct = pred.eq(target).sum().item()\n",
        "\n",
        "    last_color_sample = int(len(test_loader.dataset)*(2/12))\n",
        "    color_correct = pred[:last_color_sample].eq(target[:last_color_sample]).sum().item()\n",
        "    raw_correct = pred[last_color_sample:].eq(target[last_color_sample:]).sum().item()\n",
        "    test_loss /= len(test_loader.dataset)\n",
        "\n",
        "    print('\\nTest set: Average loss: {:.4f}, Avg Accuracy: {}/{} ({:.4f}%), Raw Accuracy: {:.4f}%, Color Accuracy: {:.4f}%\\n'.format(\n",
        "        test_loss, correct, len(test_loader.dataset), 100. * correct / len(test_loader.dataset), 100. * raw_correct / (len(test_loader.dataset)-last_color_sample),\n",
        "        100. * color_correct / last_color_sample))\n",
        "    return 100. * correct / len(test_loader.dataset), 100. * raw_correct / (len(test_loader.dataset)-last_color_sample), 100. * color_correct / last_color_sample\n",
        "    \n",
        "def main(args):\n",
        "    use_cuda = not args['no_cuda'] and torch.cuda.is_available()\n",
        "    torch.manual_seed(args['seed'])\n",
        "    device = torch.device('cuda' if use_cuda else 'cpu')\n",
        "\n",
        "    train_kwargs = {'batch_size': args['batch_size']}\n",
        "    test_kwargs = {'batch_size': args['test_batch_size']}\n",
        "    if use_cuda:\n",
        "        cuda_kwargs = {'num_workers': 1,\n",
        "                       'pin_memory': True}\n",
        "        train_kwargs.update(cuda_kwargs)\n",
        "        test_kwargs.update(cuda_kwargs)\n",
        "\n",
        "    train_loader = torch.utils.data.DataLoader(colorMNIST(train=True), shuffle=True, **train_kwargs)\n",
        "    test_loader = torch.utils.data.DataLoader(colorMNIST(train=False), shuffle=False, **test_kwargs)\n",
        "\n",
        "    model = colorNet().to(device)\n",
        "    optimizer = optim.SGD(model.parameters(), lr=args['lr'], momentum=args['momentum'])\n",
        "\n",
        "    acc_list, raw_list, color_list = [], [], []\n",
        "    start_time = time.time()\n",
        "    for epoch in range(1, args['epochs'] + 1):\n",
        "        train(args, model, device, train_loader, optimizer, epoch)\n",
        "        #time_list.append(time.time()-start_time)\n",
        "        acc, raw, color = test(model, device, test_loader)\n",
        "        acc_list.append(acc)\n",
        "        raw_list.append(raw)\n",
        "        color_list.append(color)\n",
        "\n",
        "    return acc_list, raw_list, color_list"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "gDka-2FivCxG",
        "cellView": "form"
      },
      "source": [
        "# @markdown ### Train (run me)\n",
        "# Training settings\n",
        "args = {'batch_size': 64,\n",
        "        'test_batch_size': 1000,\n",
        "        'epochs': 10,\n",
        "        'lr': 0.01,\n",
        "        'momentum': 0.9,\n",
        "        'no_cuda': False,\n",
        "        'seed': 1,\n",
        "        'log_interval': 100\n",
        "        }\n",
        "\n",
        "acc_list, raw_list, color_list = main(args)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "C_4QwL6j4NRu",
        "cellView": "form"
      },
      "source": [
        "# @markdown ### Plot (run me)\n",
        "with plt.xkcd():\n",
        "    plt.plot(np.arange(len(acc_list))+1, acc_list, label='Average')\n",
        "    plt.plot(np.arange(len(raw_list))+1, raw_list, label='Raw Images (83%)')\n",
        "    plt.plot(np.arange(len(color_list))+1, color_list, label='Colorized Images (17%)')\n",
        "    plt.title('Test')\n",
        "    plt.ylabel('Accuracy (%)')\n",
        "    plt.xlabel('Epoch')\n",
        "    plt.legend()\n",
        "    plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0i54ygBnk8Ra"
      },
      "source": [
        "Which class (black background vs. red background) has lower accuracy, and why? What real-world consequences might this mathematical property have?\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "tyrbVYMAmEWZ",
        "cellView": "form"
      },
      "source": [
        "ex3_answer = '' #@param {type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "QLdl_ULf6nF1"
      },
      "source": [
        "## Exercise 4: Amplification\n",
        "\n",
        "We again use the zeros and ones from the MNIST dataset for binary classification. The class sample ratio is 83% : 17% for 0s and 1s, respectively. At test time, we apply a Gaussian blur filter and add Gaussian noise to the images from both classes. We report the test accuracy for each of the two classes, along with the average accuracy.\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "pz45dD3tmkpc"
      },
      "source": [
        "class GaussianNoise(object):\n",
        "    def __init__(self, mean=0., std=1.):\n",
        "        self.std = std\n",
        "        self.mean = mean\n",
        "        \n",
        "    def __call__(self, tensor):\n",
        "        return tensor + torch.randn(tensor.size()) * self.std + self.mean\n",
        "\n",
        "class binaryMNIST(Dataset):\n",
        "\n",
        "    def __init__(self, train=True, size=(5000, 900), ratio=0.2):\n",
        "        self.train = train\n",
        "        self.size= size\n",
        "        self.ratio = ratio\n",
        "\n",
        "        if self.train:\n",
        "            transform=transforms.Compose([\n",
        "                transforms.ToTensor(),\n",
        "                transforms.Normalize((0.1307,), (0.3081,))\n",
        "            ])\n",
        "\n",
        "            self.data = datasets.MNIST('./datasets/mnist/MNIST', train=True, \n",
        "                                        download=True, transform=transform)\n",
        "            mask = []\n",
        "            for i in range(2):\n",
        "                idxes = np.where(self.data.targets == i)[0]\n",
        "                if i == 0:\n",
        "                    mask.append(idxes[:self.size[0]])\n",
        "                else:\n",
        "                    mask.append(idxes[:int(self.size[0]*self.ratio)])\n",
        "        else:\n",
        "            transform=transforms.Compose([\n",
        "                transforms.ToTensor(),\n",
        "                transforms.Normalize((0.1307,), (0.3081,)),\n",
        "                transforms.GaussianBlur(3, sigma=(0.1, 2.0)),\n",
        "                GaussianNoise(0., 0.1)\n",
        "            ])\n",
        "\n",
        "            self.data = datasets.MNIST('./datasets/mnist/MNIST', train=False, \n",
        "                                        download=True, transform=transform)\n",
        "            mask = []\n",
        "            for i in range(2):\n",
        "                idxes = np.where(self.data.targets == i)[0]\n",
        "                if i == 0:\n",
        "                    mask.append(idxes[:self.size[1]])\n",
        "                else:\n",
        "                    mask.append(idxes[:int(self.size[1]*self.ratio)])\n",
        "  \n",
        "        \n",
        "        # Index into the underlying dataset so that the transforms\n",
        "        # (normalization, and blur + noise at test time) are applied.\n",
        "        self.indices = np.concatenate(mask, axis=None)\n",
        "\n",
        "    def __len__(self):\n",
        "        return len(self.indices)\n",
        "\n",
        "    def __getitem__(self, index):\n",
        "        img, label = self.data[int(self.indices[index])]\n",
        "        return img, label\n",
        "\n",
        "class binaryNet(nn.Module):\n",
        "    def __init__(self):\n",
        "        super(binaryNet, self).__init__()\n",
        "        self.dropout2 = nn.Dropout(0.5)\n",
        "        self.fc1 = nn.Linear(9216, 128)\n",
        "        self.fc2 = nn.Linear(128, 2)\n",
        "        self.fc3 = nn.Linear(784, 128)\n",
        "\n",
        "    def forward(self, x):\n",
        "        x = torch.flatten(x, 1)\n",
        "        x = self.fc3(x)\n",
        "        x = F.relu(x)\n",
        "        x = self.dropout2(x)\n",
        "        x = self.fc2(x)\n",
        "        output = F.log_softmax(x, dim=1)\n",
        "        return output"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5xzwkMAVmqMH"
      },
      "source": [
        "Run the cell below to compare a training image with a test image (Gaussian blur and noise applied)."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "92gGWEQk1Ytz"
      },
      "source": [
        "train_data = binaryMNIST(train=True).data\n",
        "test_data = binaryMNIST(train=False).data\n",
        "with plt.xkcd():\n",
        "    fig, axs = plt.subplots(1, 2, figsize=(10,4))\n",
        "    axs[0].imshow(np.squeeze(train_data[1][0].numpy(), axis=0))\n",
        "    axs[1].imshow(np.squeeze(test_data[1][0].numpy(), axis=0))\n",
        "    axs[0].set_title('Original digit')\n",
        "    axs[1].set_title('Blur + Noise digit')\n",
        "    axs[0].axis('off')\n",
        "    axs[1].axis('off')\n",
        "    plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "whu_LJ6p6oej",
        "cellView": "form"
      },
      "source": [
        "# @markdown ### helper functions (Run Me)\n",
        "\n",
        "def test(model, device, test_loader):\n",
        "    model.eval()\n",
        "    test_loss = 0\n",
        "    pred_list, target_list = [], []\n",
        "    correct, zero_correct, one_correct = (0, 0, 0)\n",
        "    with torch.no_grad():\n",
        "        for data, target in test_loader:\n",
        "            data, target = data.to(device), target.to(device)\n",
        "            output = model(data)\n",
        "            test_loss += F.nll_loss(output, target, reduction='sum').item()  # sum up batch loss\n",
        "            pred = output.argmax(dim=1, keepdim=True)  # get the index of the max log-probability\n",
        "            pred_list.append(pred)\n",
        "            target_list.append(target.view_as(pred))\n",
        "            #correct += pred.eq(target.view_as(pred)).sum().item()\n",
        "\n",
        "    pred = torch.cat(pred_list, dim=0)\n",
        "    target = torch.cat(target_list, dim=0)\n",
        "    correct = pred.eq(target).sum().item()\n",
        "\n",
        "    # (target == k) is a boolean mask; count the samples in each class\n",
        "    n_zero = (target == 0).sum().item()\n",
        "    n_one = (target == 1).sum().item()\n",
        "    zero_correct = pred[target==0].eq(target[target==0]).sum().item()\n",
        "    one_correct = pred[target==1].eq(target[target==1]).sum().item()\n",
        "    test_loss /= len(test_loader.dataset)\n",
        "\n",
        "    print('\\nTest set: Average loss: {:.4f}, Avg Accuracy: {}/{} ({:.4f}%), Zero Accuracy: {:.4f}%, One Accuracy: {:.4f}%\\n'.format(\n",
        "        test_loss, correct, len(test_loader.dataset), 100. * correct / len(test_loader.dataset), 100. * zero_correct / n_zero,\n",
        "        100. * one_correct / n_one))\n",
        "    return 100. * correct / len(test_loader.dataset), 100. * zero_correct / n_zero, 100. * one_correct / n_one\n",
        "    \n",
        "def main(args):\n",
        "    use_cuda = not args['no_cuda'] and torch.cuda.is_available()\n",
        "    torch.manual_seed(args['seed'])\n",
        "    device = torch.device('cuda' if use_cuda else 'cpu')\n",
        "\n",
        "    train_kwargs = {'batch_size': args['batch_size']}\n",
        "    test_kwargs = {'batch_size': args['test_batch_size']}\n",
        "    if use_cuda:\n",
        "        cuda_kwargs = {'num_workers': 1,\n",
        "                       'pin_memory': True}\n",
        "        train_kwargs.update(cuda_kwargs)\n",
        "        test_kwargs.update(cuda_kwargs)\n",
        "\n",
        "    train_loader = torch.utils.data.DataLoader(binaryMNIST(train=True), shuffle=True, **train_kwargs)\n",
        "    test_loader = torch.utils.data.DataLoader(binaryMNIST(train=False), shuffle=False, **test_kwargs)\n",
        "\n",
        "    model = binaryNet().to(device)\n",
        "    optimizer = optim.SGD(model.parameters(), lr=args['lr'], momentum=args['momentum'])\n",
        "\n",
        "    acc_list, one_list, zero_list = [], [], []\n",
        "    start_time = time.time()\n",
        "    for epoch in range(1, args['epochs'] + 1):\n",
        "        train(args, model, device, train_loader, optimizer, epoch)\n",
        "        #time_list.append(time.time()-start_time)\n",
        "        acc, zero, one = test(model, device, test_loader)\n",
        "        acc_list.append(acc)\n",
        "        zero_list.append(zero)\n",
        "        one_list.append(one)\n",
        "        \n",
        "    return acc_list, zero_list, one_list"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "bsVmnN2ePhge",
        "cellView": "form"
      },
      "source": [
        "# @markdown ### Train (run me)\n",
        "# Training settings\n",
        "args = {'batch_size': 64,\n",
        "        'test_batch_size': 1000,\n",
        "        'epochs': 10,\n",
        "        'lr': 0.01,\n",
        "        'momentum': 0.9,\n",
        "        'no_cuda': False,\n",
        "        'seed': 1,\n",
        "        'log_interval': 100\n",
        "        }\n",
        "\n",
        "acc_list, zero_list, one_list = main(args)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "PHW3dee0SY-2",
        "cellView": "form"
      },
      "source": [
        "# @markdown ### Plot (run me)\n",
        "with plt.xkcd():\n",
        "    plt.plot(np.arange(len(acc_list))+1, acc_list, label='Average')\n",
        "    plt.plot(np.arange(len(zero_list))+1, zero_list, label='Digit Zero (83%)')\n",
        "    plt.plot(np.arange(len(one_list))+1, one_list, label='Digit One (17%)')\n",
        "    plt.title('Test')\n",
        "    plt.ylabel('Accuracy (%)')\n",
        "    plt.xlabel('Epoch')\n",
        "    plt.legend()\n",
        "    plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "rsnNJeP6mVFp",
        "cellView": "form"
      },
      "source": [
        "#@markdown **Student response**: What is the ratio of predicted majority class 0's to rarer 1's on the blurred images? Why does this outcome occur? What real world consequences might this mathematical property have?\n",
        "ex4_answer = \"\" #@param {type:\"string\"}\n"
      ],
      "execution_count": null,
      "outputs": []
    },
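    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "One common way to mitigate this kind of class error disparity, sketched below under assumed class counts (illustrative code, not the exercise's graded solution), is to reweight the loss inversely to class frequency so that errors on the rare class cost more. PyTorch's `F.nll_loss` accepts a per-class `weight` argument for exactly this purpose.\n",
        "\n",
        "```python\n",
        "import torch\n",
        "import torch.nn.functional as F\n",
        "\n",
        "counts = torch.tensor([830., 170.])              # assumed 83% / 17% class counts\n",
        "weights = counts.sum() / (len(counts) * counts)  # inverse-frequency weights\n",
        "\n",
        "log_probs = torch.log_softmax(torch.randn(8, 2), dim=1)\n",
        "targets = torch.tensor([0, 0, 0, 0, 0, 0, 1, 1])\n",
        "loss = F.nll_loss(log_probs, targets, weight=weights)\n",
        "```\n",
        "\n",
        "With these weights, a mistake on a rare class-1 sample contributes roughly five times as much to the loss as a mistake on a common class-0 sample."
      ]
    },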
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "o81NkU0QZws8"
      },
      "source": [
        "*Estimated time: 90 minutes since start*"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "iZcTTso8J23M"
      },
      "source": [
        "---\n",
        "# Wrap up"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7SM1xlMdJ7xB"
      },
      "source": [
        "## Submit responses"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "7K1yAK8tKDuA",
        "cellView": "form"
      },
      "source": [
        "#@markdown #Run Cell to Show Airtable Form\n",
        "#@markdown ##**Confirm your answers and then click \"Submit\"**\n",
        "\n",
        "import time\n",
        "import numpy as np\n",
        "from IPython.display import IFrame\n",
        "\n",
        "def prefill_form(src, fields: dict):\n",
        "  '''\n",
        "  src: the original src url to embed the form\n",
        "  fields: a dictionary of field:value pairs,\n",
        "  e.g. {\"pennkey\": my_pennkey, \"location\": my_location}\n",
        "  '''\n",
        "  prefills = \"&\".join([\"prefill_%s=%s\"%(key, fields[key]) for key in fields])\n",
        "  src = src + prefills\n",
        "  src = \"+\".join(src.split(\" \"))\n",
        "  return src\n",
        "\n",
        "\n",
        "#autofill time if it is not present\n",
        "try: t0;\n",
        "except NameError: t0 = time.time()\n",
        "try: t1;\n",
        "except NameError: t1 = time.time()\n",
        "try: t2;\n",
        "except NameError: t2 = time.time()\n",
        "try: t3;\n",
        "except NameError: t3 = time.time()\n",
        "try: t4;\n",
        "except NameError: t4 = time.time()\n",
        "\n",
        "#autofill fields if they are not present\n",
        "#a missing pennkey and pod will result in an Airtable warning\n",
        "#which is easily fixed user-side.\n",
        "try: my_pennkey;\n",
        "except NameError: my_pennkey = \"\"\n",
        "try: my_pod;\n",
        "except NameError: my_pod = \"\"\n",
        "try: annealing_difference;\n",
        "except NameError: annealing_difference = \"\"\n",
        "try: annealing_answer;\n",
        "except NameError: annealing_answer = \"\"\n",
        "try: method_performance;\n",
        "except NameError: method_performance = \"\"\n",
        "try: monotonically_decreasing;\n",
        "except NameError: monotonically_decreasing = \"Select\"\n",
        "try: problems;\n",
        "except NameError: problems = \"\"\n",
        "try: lr_best_train;\n",
        "except NameError: lr_best_train = \"\"\n",
        "try: lr_best_test;\n",
        "except NameError: lr_best_test = \"\"\n",
        "try: ex3_answer;\n",
        "except NameError: ex3_answer = \"\"\n",
        "try: ex4_answer;\n",
        "except NameError: ex4_answer = \"\"\n",
        "\n",
        "times = [(t-t0) for t in [t1,t2,t3,t4]]\n",
        "\n",
        "fields = {\n",
        "    \"my_pennkey\": my_pennkey,\n",
        "    \"my_pod\": my_pod,\n",
        "    \"annealing_difference\": annealing_difference,\n",
        "    \"annealing_answer\": annealing_answer,\n",
        "    \"method_performance\": method_performance,\n",
        "    \"monotonically_decreasing\": monotonically_decreasing,\n",
        "    \"problems\": problems,\n",
        "    \"lr_best_train\": lr_best_train,\n",
        "    \"lr_best_test\": lr_best_test,\n",
        "    \"ex3_answer\": ex3_answer,\n",
        "    \"ex4_answer\": ex4_answer,\n",
        "    \"cumulative_times\": times\n",
        "}\n",
        "\n",
        "src = \"https://airtable.com/embed/shrwUN7XmbuUVG6Q2?\"\n",
        "\n",
        "#now instead of the original source url, we do: src = prefill_form(src, fields)\n",
        "display(IFrame(src = prefill_form(src, fields), width = 800, height = 400))\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "pTlLIcI9TlbN"
      },
      "source": [
        "# Feedback\n",
        "\n",
        "*   How could this session have been better?\n",
        "*   How happy are you in your group?\n",
        "*   How do you feel right now?\n",
        "\n",
        "Feel free to use the embedded form below or use this link:\n",
        "<a target=\"_blank\" rel=\"noopener noreferrer\" href=\"https://airtable.com/shrNSJ5ECXhNhsYss\">https://airtable.com/shrNSJ5ECXhNhsYss</a>"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "5K-ILLzoT0XO"
      },
      "source": [
        "# report to Airtable\n",
        "display(IFrame(src=\"https://airtable.com/embed/shrNSJ5ECXhNhsYss?backgroundColor=red\", width = 800, height = 400))"
      ],
      "execution_count": null,
      "outputs": []
    }
  ]
}