{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "name": "W8_Tutorial1",
      "provenance": [],
      "collapsed_sections": [
        "VVCR7EVyFkWS"
      ],
      "machine_shape": "hm",
      "include_colab_link": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.7.8"
    },
    "accelerator": "GPU"
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/CIS-522/course-content/blob/main/tutorials/W08_AutoEncoders_GANs/student/W8_Tutorial1.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ML2DwqwkVwfo"
      },
      "source": [
        "# CIS-522 Week 8 Part 1\n",
        "# AutoEncoders (AEs) and Variational AutoEncoders (VAEs)\n",
        "\n",
        "__Instructor:__ Konrad Kording\n",
        "\n",
        "__Content creators:__ Richard Lange, Arash Ash"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "exeRQO8LnRZB"
      },
      "source": [
        "## Today's agenda\n",
        "In the first tutorial of Week 8, we are going to\n",
        "\n",
        "1. Think about unsupervised learning and get a bird's eye view of why it is useful\n",
        "2. See the connection between AutoEncoding and dimensionality reduction\n",
        "3. Start thinking about neural networks as generative models\n",
        "4. Put on our Bayesian hats and turn AEs into VAEs\n",
        "\n",
        "But before we start, let's do a 15-minute recap of what we have learned in the previous week."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Y_mmmZQ0TIBi"
      },
      "source": [
        "\n",
        "## Recap the experience from last week"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "xRPk6HG-Rj5N",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Week 7 Recap\n",
        "import time\n",
        "try: t0;\n",
        "except NameError: t0=time.time()\n",
        "\n",
        "from IPython.display import YouTubeVideo\n",
        "video = YouTubeVideo(id=\"VHhtye5SwY0\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "uiN7yWdPSiCH"
      },
      "source": [
        "#@markdown What is your Pennkey and pod? (text, not numbers, e.g. bfranklin)\n",
        "my_pennkey = '' #@param {type:\"string\"}\n",
        "my_pod = 'Select' #@param ['Select', 'euclidean-wombat', 'sublime-newt', 'buoyant-unicorn', 'lackadaisical-manatee','indelible-stingray','superfluous-lyrebird','discreet-reindeer','quizzical-goldfish','astute-jellyfish','ubiquitous-cheetah','nonchalant-crocodile','fashionable-lemur','spiffy-eagle','electric-emu','quotidian-lion']\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "q1qjVNRuTji2"
      },
      "source": [
        "Meet with your pod for a few minutes to discuss what you learned, what was clear, and what you hope to learn more about."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "BHR0gq1VThDc",
        "cellView": "form"
      },
      "source": [
        "#@markdown Tell us on Airtable what the upshot of this discussion is for you.\n",
        "w7_upshot = '' #@param {type:\"string\"}\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5x61BA1qPubi"
      },
      "source": [
        "---\n",
        "# Setup"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "EuOTxAJdmhsl"
      },
      "source": [
        "# we need to first upgrade the Colab's TorchVision\n",
        "!pip install --upgrade torchvision"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "uw7q7Z_ZPt66"
      },
      "source": [
        "# imports\n",
        "import matplotlib.pylab as plt\n",
        "from tqdm.notebook import tqdm, trange\n",
        "from math import sqrt\n",
        "\n",
        "import torch\n",
        "import torch.nn as nn\n",
        "import torch.nn.functional as F\n",
        "import torch.optim as optim\n",
        "import torchvision as tv\n",
        "from torch.utils.data import DataLoader\n",
        "\n",
        "DEVICE = 'cuda' if torch.cuda.is_available() else 'cpu'"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "LID483Ou-z53",
        "cellView": "form"
      },
      "source": [
        "# @title Figure Settings\n",
        "%config InlineBackend.figure_format = 'retina'\n",
        "%matplotlib inline \n",
        "\n",
        "fig_w, fig_h = (8, 6)\n",
        "plt.rcParams.update({'figure.figsize': (fig_w, fig_h)})\n",
        "\n",
        "plt.style.use(\"https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle\")"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "rNg1RPFSH1Xc",
        "cellView": "form"
      },
      "source": [
        "#@title Helper functions\n",
        "\n",
        "def image_moments(image_batches, n_batches=None):\n",
        "    \"\"\"\n",
        "    Compute mean and covariance of all pixels from batches of images\n",
        "    \"\"\"\n",
        "    m1, m2 = torch.zeros((), device=DEVICE), torch.zeros((), device=DEVICE)\n",
        "    n = 0\n",
        "    for im in tqdm(image_batches, total=n_batches, leave=False,\n",
        "                   desc='Computing pixel mean and covariance...'):\n",
        "        im = im.to(DEVICE)\n",
        "        b = im.size()[0]\n",
        "        im = im.view(b, -1)\n",
        "        m1 = m1 + im.sum(dim=0)\n",
        "        m2 = m2 + (im.view(b,-1,1) * im.view(b,1,-1)).sum(dim=0)\n",
        "        n += b\n",
        "    m1, m2 = m1/n, m2/n\n",
        "    cov = m2 - m1.view(-1,1)*m1.view(1,-1)\n",
        "    return m1.cpu(), cov.cpu()\n",
        "\n",
        "def pca_encoder_decoder(mu, cov, k):\n",
        "    \"\"\"\n",
        "    Compute encoder and decoder matrices for PCA dimensionality reduction\n",
        "    \"\"\"\n",
        "    mu = mu.view(1,-1)\n",
        "    u, s, v = torch.svd_lowrank(cov, q=k)\n",
        "    W_encode = v / torch.sqrt(s)\n",
        "    W_decode = u * torch.sqrt(s)\n",
        "    \n",
        "    def pca_encode(x):\n",
        "        # Encoder: subtract mean image and project onto top K eigenvectors of\n",
        "        # the data covariance\n",
        "        return (x.view(-1,mu.numel()) - mu) @ W_encode\n",
        "    \n",
        "    def pca_decode(h):\n",
        "        # Decoder: un-project then add back in the mean\n",
        "        return (h @ W_decode.T) + mu\n",
        "    \n",
        "    return pca_encode, pca_decode\n",
        "\n",
        "# Helper for plotting images\n",
        "def plot_torch_image(image, ax=None):\n",
        "    ax = ax if ax is not None else plt.gca()\n",
        "    c, h, w = image.size()\n",
        "    cm = 'gray' if c==1 else None\n",
        "    # Torch images have shape (channels, height, width) but matplotlib expects\n",
        "    # (height, width, channels) or just (height,width) when grayscale\n",
        "    ax.imshow(image.detach().cpu().permute(1,2,0).squeeze(), cmap=cm)\n",
        "    ax.set_xticks([])\n",
        "    ax.set_yticks([])"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "AeuJ4budZOas"
      },
      "source": [
        "---\n",
        "# Section 1: Supervised and unsupervised learning"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "NMCMAipWKMeZ",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Datasets\n",
        "\n",
        "try: t1;\n",
        "except NameError: t1=time.time()\n",
        "\n",
        "video = YouTubeVideo(id=\"Vw9MLfb4bi4\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Ev8xgKVBiwCB",
        "cellView": "form"
      },
      "source": [
        "%%capture\n",
        "#@title Download a few standard image datasets while the above video plays\n",
        "# See https://pytorch.org/docs/stable/torchvision/datasets.html\n",
        "\n",
        "# MNIST contains handwritten digits 0-9, in grayscale images of size (1,28,28)\n",
        "mnist = tv.datasets.MNIST('./mnist/', train=True, transform=tv.transforms.ToTensor(), download=True)\n",
        "mnist_val = tv.datasets.MNIST('./mnist/', train=False, transform=tv.transforms.ToTensor(), download=True)\n",
        "# CIFAR10 contains 10 object classes in color images of size (3,32,32)\n",
        "cifar10 = tv.datasets.CIFAR10('./cifar10/', train=True, transform=tv.transforms.ToTensor(), download=True)\n",
        "cifar10_val = tv.datasets.CIFAR10('./cifar10/', train=False, transform=tv.transforms.ToTensor(), download=True)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DsczwVZsKzEE"
      },
      "source": [
        "Unsupervised and semi-supervised learning are broad concepts that can be applied in many different domains. In machine learning research, however, computer vision is by far the most common domain for studying these things. Using image datasets will also let us build on what you learned last week. But keep in mind that the techniques you learn this week are quite general!\n",
        "\n",
        "It's always a good idea to visualize your data. Let's look at some samples of images in each of the example datasets we've downloaded:"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DICj-nphMuQo"
      },
      "source": [
        "### Visualize MNIST examples"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "kQTwy7Y1MxqW"
      },
      "source": [
        "minval, maxval = float('inf'), float('-inf')\n",
        "plt.figure(figsize=(10,4))\n",
        "for i in range(10):\n",
        "    idx = torch.randint(len(mnist), size=())\n",
        "    image, label_idx = mnist[idx]\n",
        "    plt.subplot(2,5,i+1)\n",
        "    plot_torch_image(image)\n",
        "    plt.title(f\"'{label_idx}'\")\n",
        "    minval, maxval = min(minval, image.min().item()), max(maxval, image.max().item())\n",
        "plt.show()\n",
        "\n",
        "print(f\"MNIST contains {len(mnist)} examples each of size {image.size()} with values ranging in [{minval},{maxval}]\")"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "DMSlLFGWN2Ha"
      },
      "source": [
        "### Visualize CIFAR10 examples"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "sKW3GfHQPeFf"
      },
      "source": [
        "minval, maxval = float('inf'), float('-inf')\n",
        "plt.figure(figsize=(10,4))\n",
        "for i in range(10):\n",
        "    idx = torch.randint(len(cifar10), size=())\n",
        "    image, label_idx = cifar10[idx]\n",
        "    plt.subplot(2,5,i+1)\n",
        "    plot_torch_image(image)\n",
        "    plt.title(f\"'{cifar10.classes[label_idx]}'\")\n",
        "    minval, maxval = min(minval, image.min().item()), max(maxval, image.max().item())\n",
        "plt.show()\n",
        "\n",
        "print(f\"CIFAR10 contains {len(cifar10)} examples each of size {image.size()} with values ranging in [{minval},{maxval}]\")"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Pa0bqPRVR2vN"
      },
      "source": [
        "The goal of today is to make sense of these images in an _unsupervised_ way, that is without the labels."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "sTr705q3aZRG"
      },
      "source": [
        "### Select a dataset\n",
        "\n",
        "We've built today's tutorial to be flexible. It should work more-or-less out of the box with both MNIST and CIFAR10 (and other image datasets). MNIST is in many ways simpler, and with it the results will likely look better and training will run a bit faster. But we are leaving it up to you to pick which one you want to experiment with!\n",
        "\n",
        "We encourage pods to coordinate so that some members use MNIST and others use CIFAR10."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "4aiHOutIa7E_"
      },
      "source": [
        "# MNIST is selected by default; comment this block out and use the CIFAR block below to switch\n",
        "my_dataset = mnist\n",
        "my_dataset_name = \"MNIST\"\n",
        "my_dataset_size = (1, 28, 28)\n",
        "my_dataset_dim = 28*28\n",
        "my_valset = mnist_val\n",
        "\n",
        "# Uncomment this to select CIFAR\n",
        "# my_dataset = cifar10\n",
        "# my_dataset_name = \"CIFAR\"\n",
        "# my_dataset_size = (3, 32, 32)\n",
        "# my_dataset_dim = 3*32*32\n",
        "# my_valset = cifar10_val"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Z4Gl5z8zlRWH"
      },
      "source": [
        "---\n",
        "# Section 2: AutoEncoders\n",
        "*Estimated time to here: 25min*\n",
        "## Conceptual introduction to AutoEncoders"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "DQouSlpSaooX",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Linear Autoencoders\n",
        "\n",
        "try: t2;\n",
        "except NameError: t2=time.time()\n",
        "\n",
        "video = YouTubeVideo(id=\"QwsHAKDN_vw\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "QkephF68qxjP"
      },
      "source": [
        "## Build a linear AutoEncoder"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "GWBr4YYTbgGE"
      },
      "source": [
        "Now we'll create our first autoencoder. It will reduce images down to $K$ dimensions. The architecture will be quite simple: the input will be linearly mapped to a single hidden layer with $K$ units, which will then be linearly mapped back to an output that is the same size as the input:\n",
        "$$\\mathbf{x} \\longrightarrow \\mathbf{h} \\longrightarrow \\mathbf{x'}$$\n",
        "\n",
        "The loss function we'll use will simply be mean squared error (MSE) quantifying how well the reconstruction ($\\mathbf{x'}$) matches the original image ($\\mathbf{x}$):\n",
        "$$\\text{MSE Loss} = \\frac{1}{N}\\sum_{i=1}^{N} ||\\mathbf{x}_i - \\mathbf{x'}_i||^2_2$$\n",
        "\n",
        "If all goes well, then the AutoEncoder will learn, **end to end**, a good \"encoding\" or \"compression\" of inputs ($\\mathbf{x \\longrightarrow h}$) as well as a good \"decoding\" ($\\mathbf{h \\longrightarrow x'}$)."
      ]
    },
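    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick, illustrative sanity check (using random stand-in tensors rather than real data): PyTorch's `nn.MSELoss` with its default `reduction='mean'` averages the squared error over *every* element, so it agrees with the formula above up to a constant normalization factor."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Compare nn.MSELoss against the squared error computed by hand\n",
        "x = torch.rand(4, 1, 28, 28)\n",
        "x_prime = torch.rand(4, 1, 28, 28)\n",
        "manual_mse = ((x - x_prime) ** 2).mean()\n",
        "print(torch.isclose(nn.MSELoss()(x_prime, x), manual_mse))"
      ],
      "execution_count": null,
      "outputs": []
    },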
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Rlb6MaPJk0kB"
      },
      "source": [
        "The first choice to make is the dimensionality of $\\mathbf{h}$. We'll see more on this below, but for MNIST, 5 to 20 dimensions is plenty. For CIFAR, we need more like 50 to 100 dimensions.\n",
        "\n",
        "Coordinate with your pod to try a variety of values for $K$ in each dataset so you can compare results."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "zYZNBb2olR4t"
      },
      "source": [
        "# Pick your own K\n",
        "K = ..."
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ipsndMbNwAWT"
      },
      "source": [
        "*Example solution:*  \n",
        "\n",
        "```python\n",
        "K = 20\n",
        "```"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Kl3x376LldRq"
      },
      "source": [
        "### Exercise 1\n",
        "### Fill in the missing parts of the `LinearAutoEncoder` class and training loop\n",
        "\n",
        "1. The `LinearAutoEncoder` has two stages: an `encoder`, which linearly maps from inputs to a hidden layer of size `K` (with no nonlinearity), and a `decoder`, which maps back from `K` up to the number of pixels in each image (`my_dataset_dim`).\n",
        "2. The training loop will minimize MSE loss, as written above."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ejGv1nxRmHCp"
      },
      "source": [
        "class LinearAutoEncoder(nn.Module):\n",
        "    def __init__(self, K):\n",
        "        ####################################################################\n",
        "        # Fill in all missing code below (...),\n",
        "        # then remove or comment the line below to test your class\n",
        "        raise NotImplementedError(\"Please complete the LinearAutoEncoder class!\")\n",
        "        #################################################################### \n",
        "        super(LinearAutoEncoder, self).__init__()\n",
        "        self.enc_lin = ... # your code here\n",
        "        self.dec_lin = ... # your code here\n",
        "    \n",
        "    def encode(self, x):\n",
        "        h = ... # your code here\n",
        "        return h\n",
        "    \n",
        "    def decode(self, h):\n",
        "        x_prime = ... # your code here\n",
        "        return x_prime\n",
        "\n",
        "    def forward(self, x):\n",
        "        flat_x = x.view(x.size()[0], -1)\n",
        "        h = self.encode(flat_x)\n",
        "        return self.decode(h).view(x.size())\n",
        "\n",
        "def train_autoencoder(autoencoder, dataset, epochs=20, batch_size=250):\n",
        "    autoencoder.to(DEVICE)\n",
        "    optim = torch.optim.Adam(autoencoder.parameters(), lr=1e-3, weight_decay=1e-5)\n",
        "    loss_fn = nn.MSELoss()\n",
        "    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True,\n",
        "                        pin_memory=True, num_workers=2)\n",
        "    mse_loss = torch.zeros(epochs*len(dataset)//batch_size, device=DEVICE)\n",
        "    i = 0\n",
        "    for epoch in trange(epochs, desc='Epoch'):\n",
        "        for im_batch, _ in loader:\n",
        "            im_batch = im_batch.to(DEVICE)\n",
        "            optim.zero_grad()\n",
        "            ####################################################################\n",
        "            # Fill in all missing code below (...),\n",
        "            # then remove or comment the line below to test your function\n",
        "            raise NotImplementedError(\"Please complete the train_autoencoder function!\")\n",
        "            #################################################################### \n",
        "            loss = ... # your code here\n",
        "            loss.backward()\n",
        "            optim.step()\n",
        "\n",
        "            mse_loss[i] = loss.detach()\n",
        "            i += 1\n",
        "    # After training completes, make sure the model is on CPU so we can easily\n",
        "    # do more visualizations and demos.\n",
        "    autoencoder.to('cpu')\n",
        "    return mse_loss.cpu()\n",
        "\n",
        "# Uncomment to test your code\n",
        "# lin_ae = LinearAutoEncoder(K)\n",
        "# lin_losses = train_autoencoder(lin_ae, my_dataset)\n",
        "\n",
        "# plt.figure()\n",
        "# plt.plot(lin_losses)\n",
        "# plt.ylim([0, 2*torch.as_tensor(lin_losses).median()])\n",
        "# plt.xlabel('Training batch')\n",
        "# plt.ylabel('MSE Loss')\n",
        "# plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "C_nAXyitwqKD"
      },
      "source": [
        "[*Click for solution*](https://github.com/CIS-522/course-content/blob/main/tutorials/W08_AutoEncoders_GANs/solutions/W8_Tutorial1_Solution_Ex01.py)\n",
        "\n",
        "*Example output:*  \n",
        "\n",
        "<img alt='Solution hint 1' align='left' width=600 height=400 src=https://raw.githubusercontent.com/CIS-522/course-content/main/tutorials/W08_AutoEncoders_GANs/static/W8_Tutorial1_Solution_Ex01.png />"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "UxSA-0UgPNc1"
      },
      "source": [
        "One way to think about AutoEncoders is that they automatically discover good dimensionality-reduction of the data. Another easy and common technique for dimensionality reduction is to project data onto the top $K$ **principal components** (Principal Component Analysis or PCA). For comparison, let's also do PCA."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "1At-bwcmPzwx"
      },
      "source": [
        "# PCA requires finding the top K eigenvectors of the data covariance. Start by\n",
        "# finding the mean and covariance of the pixels in our dataset\n",
        "loader = DataLoader(my_dataset, batch_size=32, pin_memory=True)\n",
        "mu, cov = image_moments((im for im, _ in loader), n_batches=len(my_dataset)//32)\n",
        "pca_encode, pca_decode = pca_encoder_decoder(mu, cov, K)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "M7DCDLnIg-Mk"
      },
      "source": [
        "Let's visualize some of the reconstructions ($\\mathbf{x'}$) side-by-side with the input images ($\\mathbf{x}$)."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "36iI09hNhN2n"
      },
      "source": [
        "n_plot = 7\n",
        "plt.figure(figsize=(10,4.5))\n",
        "for i in range(n_plot):\n",
        "    idx = torch.randint(len(my_dataset), size=())\n",
        "    image, _ = my_dataset[idx]\n",
        "    # Get reconstructed image from autoencoder\n",
        "    with torch.no_grad():\n",
        "        reconstruction = lin_ae(image.unsqueeze(0)).reshape(image.size())\n",
        "    \n",
        "    # Get reconstruction from PCA dimensionality reduction\n",
        "    h_pca = pca_encode(image)\n",
        "    recon_pca = pca_decode(h_pca).reshape(image.size())\n",
        "    \n",
        "    plt.subplot(3,n_plot,i+1)\n",
        "    plot_torch_image(image)\n",
        "    if i == 0:\n",
        "        plt.ylabel('Original\\nImage')\n",
        "    \n",
        "    plt.subplot(3,n_plot,i+1+n_plot)\n",
        "    plot_torch_image(reconstruction)\n",
        "    if i == 0:\n",
        "        plt.ylabel(f'Lin AE\\n(K={K})')\n",
        "    \n",
        "    plt.subplot(3,n_plot,i+1+2*n_plot)\n",
        "    plot_torch_image(recon_pca)\n",
        "    if i == 0:\n",
        "        plt.ylabel(f'PCA\\n(K={K})')\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "9m5arq_hqdLJ"
      },
      "source": [
        "**Student response**\n",
        "\n",
        "Compare the PCA-based reconstructions to those from the linear autoencoder. Is one better than the other? Are they equally good? Equally bad?"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "9iqm4O6VnK6i",
        "cellView": "form"
      },
      "source": [
        "linear_ae_vs_pca = \"\" #@param{type:'string'}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Z80iE-ssiwsW"
      },
      "source": [
        "If you're interested, Appendix C includes a plot of explained-variance as a function of $K$, as well as some discussion of why fraction of explained variance using PCA is a rough and not very good guide to choosing $K$ for a given dataset."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "yu1RomPmrAYm"
      },
      "source": [
        "## Building a nonlinear convolutional autoencoder\n",
        "*Estimated time to here: 50min*"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "SgKS9x0vS_oR",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Convolutional Autoencoders\n",
        "\n",
        "video = YouTubeVideo(id=\"mzHY6rW_4Eo\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ozE6CZkXv6jM"
      },
      "source": [
        "The `nn.Linear` layer by default has a \"bias\" term: a separate learnable offset parameter for each output unit. Just like the PCA encoder \"centered\" the data by subtracting off the average image (`mu`) before encoding and added it back in during decoding, a bias term in the decoder can effectively account for the first moment of the data (i.e. the average of all images in the training set). Convolution layers also have bias parameters, but the bias is applied per filter rather than per pixel location. If we're generating RGB images, then `Conv2d` will learn only 3 biases: one each for R, G, and B.\n",
        "\n",
        "For some conceptual continuity with both PCA and the `nn.Linear` layers above, the next block defines a custom layer for adding a learnable per-pixel offset. This custom layer will be used twice: as the first stage of the encoder and as the final stage of the decoder. Ideally, this means that the rest of the neural net can focus on fitting more interesting fine-grained structure."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "raWUBjnUwAs6"
      },
      "source": [
        "class BiasLayer(nn.Module):\n",
        "    def __init__(self, shape):\n",
        "        super(BiasLayer, self).__init__()\n",
        "        init_bias = torch.zeros(shape)\n",
        "        self.bias = nn.Parameter(init_bias, requires_grad=True)\n",
        "    \n",
        "    def forward(self, x):\n",
        "        return x + self.bias"
      ],
      "execution_count": null,
      "outputs": []
    },
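    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a concrete check of the bias discussion above (illustrative only, and assuming `my_dataset_size` was set in the dataset-selection cell), compare parameter counts: a `Conv2d` producing 3 output channels holds exactly 3 bias parameters regardless of its input's spatial size, while a `BiasLayer` spanning the full image shape holds one learnable offset per pixel and channel."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Conv2d learns one bias per output channel (filter)...\n",
        "rgb_conv = nn.Conv2d(in_channels=3, out_channels=3, kernel_size=5)\n",
        "print(rgb_conv.bias.shape)  # torch.Size([3])\n",
        "\n",
        "# ...while BiasLayer learns one offset per entry of the image tensor\n",
        "bias_layer = BiasLayer(my_dataset_size)\n",
        "n_params = sum(p.numel() for p in bias_layer.parameters())\n",
        "print(f'BiasLayer over shape {my_dataset_size} has {n_params} parameters')"
      ],
      "execution_count": null,
      "outputs": []
    },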
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "KwhsR5WWxfdN"
      },
      "source": [
        "With that out of the way, we will next define a **nonlinear** and **convolutional** autoencoder. Here's a quick tour of the architecture:\n",
        "\n",
        "1. The **encoder** once again maps from images to $\\mathbf{h}\\in\\mathbb{R}^K$. This will use a `BiasLayer` followed by two convolutional layers (`nn.Conv2D`), followed by flattening and linearly projecting down to $K$ dimensions. The convolutional layers will have `ReLU` nonlinearities on their outputs. \n",
        "1. The **decoder** inverts this process, taking in vectors of length $K$ and outputting images. Roughly speaking, its architecture is a \"mirror image\" of the encoder: the first decoder layer is linear, followed by two **deconvolution** layers (`nn.ConvTranspose2d`). The `ConvTranspose2d` layers will have `ReLU` nonlinearities on their _inputs_. This \"mirror image\" between the encoder and decoder is a useful and near-ubiquitous convention. The idea is that the decoder can then learn to approximately invert the encoder, but it is not a strict requirement (and it does not guarantee the decoder will be an exact inverse of the encoder!).\n",
        "\n",
        "Below is a schematic of the architecture for MNIST. Notice that the width and height dimensions of the image planes reduce after each `nn.Conv2d` and increase after each `nn.ConvTranspose2d`. With CIFAR10, the architecture is the same but the exact sizes will differ a bit.\n",
        "\n",
        "<img src=\"https://raw.githubusercontent.com/CIS-522/course-content/main/tutorials/W08_AutoEncoders_GANs/static/conv_sizes.png\" />\n",
        "\n",
        "We will not go into detail about `ConvTranspose2d` here. For now, just know that it acts a bit like, but not exactly, an inverse to `Conv2d`. The following code demonstrates this change in sizes:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "_j3LZPFJ2yh_"
      },
      "source": [
        "dummy_image = torch.zeros(my_dataset_size).unsqueeze(0)\n",
        "channels = my_dataset_size[0]\n",
        "dummy_conv = nn.Conv2d(in_channels=channels, out_channels=channels, kernel_size=5)\n",
        "dummy_conv_transpose = nn.ConvTranspose2d(in_channels=channels, out_channels=channels, kernel_size=5)\n",
        "\n",
        "print(f'Size of image is {dummy_image.size()}')\n",
        "print(f'Size of Conv2D(image) {dummy_conv(dummy_image).size()}')\n",
        "print(f'Size of ConvTranspose2D(image) {dummy_conv_transpose(dummy_image).size()}')\n",
        "print(f'Size of ConvTranspose2D(Conv2D(image)) {dummy_conv_transpose(dummy_conv(dummy_image)).size()}')"
      ],
      "execution_count": null,
      "outputs": []
    },
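    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "One more shape check worth doing before the next exercise (a throwaway computation, assuming MNIST's 28x28 images): with `padding=0` and `kernel_size=5`, each `Conv2d` removes 4 pixels from both width and height, so two stacked layers map 28x28 down to 20x20."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "# Two stacked 5x5 convolutions shrink a 28x28 image to 20x20\n",
        "dummy = torch.zeros(1, 1, 28, 28)\n",
        "conv5 = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=5)\n",
        "print(conv5(conv5(dummy)).shape)  # torch.Size([1, 1, 20, 20])"
      ],
      "execution_count": null,
      "outputs": []
    },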
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "VfY8DpGx28pt"
      },
      "source": [
        "### Exercise 2\n",
        "### Fill in code for the `ConvAutoEncoder` module\n",
        "*Estimated time to here: 65min*"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "mjH_LETh7hgn"
      },
      "source": [
        "class ConvAutoEncoder(nn.Module):\n",
        "    def __init__(self, K, num_filters=32, filter_size=5):\n",
        "        super(ConvAutoEncoder, self).__init__()\n",
        "        \n",
        "        # With padding=0, each Conv2d layer cuts filter_size // 2 pixels\n",
        "        # off each side of the image. Double that to get the number of\n",
        "        # pixels lost in width and height per Conv2d layer, or added back\n",
        "        # in per ConvTranspose2d layer.\n",
        "        filter_reduction = 2 * (filter_size // 2)\n",
        "\n",
        "        # After passing input through two Conv2d layers, the shape will be\n",
        "        # 'shape_after_conv'. This is also the shape that will go into the first\n",
        "        # deconvolution layer in the decoder\n",
        "        self.shape_after_conv = (num_filters,\n",
        "                                 my_dataset_size[1]-2*filter_reduction,\n",
        "                                 my_dataset_size[2]-2*filter_reduction)\n",
        "        flat_size_after_conv = self.shape_after_conv[0] \\\n",
        "            * self.shape_after_conv[1] \\\n",
        "            * self.shape_after_conv[2]\n",
        "        ####################################################################\n",
        "        # Fill in all missing code below (...),\n",
        "        # then remove or comment the line below to test your class\n",
        "        raise NotImplementedError(\"Please complete the ConvAutoEncoder class!\")\n",
        "        #################################################################### \n",
        "        # Your code here\n",
        "        ... # Create encoder layers (BiasLayer, Conv2d, Conv2d, Flatten, Linear)\n",
        "        ... # Create decoder layers (Linear, Unflatten(-1, self.shape_after_conv), ConvTranspose2d, ConvTranspose2d, BiasLayer)\n",
        "\n",
        "    def encode(self, x):\n",
        "        ... # Your code here: encode batch of images (don't forget ReLUs!)\n",
        "        return h\n",
        "    \n",
        "    def decode(self, h):\n",
        "        ... # Your code here: decode batch of h vectors (don't forget ReLUs!)\n",
        "        return x_prime\n",
        "\n",
        "    def forward(self, x):\n",
        "        return self.decode(self.encode(x))\n",
        "\n",
        "# Uncomment to test your solution\n",
        "# conv_ae = ConvAutoEncoder(K=K)\n",
        "# assert conv_ae.encode(my_dataset[0][0].unsqueeze(0)).numel() == K, \\\n",
        "#     \"Encoder output size should be K!\"\n",
        "# conv_losses = train_autoencoder(conv_ae, my_dataset)\n",
        "# plt.figure()\n",
        "# plt.plot(lin_losses)\n",
        "# plt.plot(conv_losses)\n",
        "# plt.legend(['Lin AE', 'Conv AE'])\n",
        "# plt.xlabel('Training batch')\n",
        "# plt.ylabel('MSE Loss')\n",
        "# plt.ylim([0,2*max(torch.as_tensor(conv_losses).median(), torch.as_tensor(lin_losses).median())])\n",
        "# plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "JYqxWucCxVEE"
      },
      "source": [
        "[*Click for solution*](https://github.com/CIS-522/course-content/blob/main/tutorials/W08_AutoEncoders_GANs/solutions/W8_Tutorial1_Solution_Ex02.py)\n",
        "\n",
        "*Example output:*  \n",
        "\n",
        "<img alt='Solution hint 2' align='left' width=600 height=400 src=https://raw.githubusercontent.com/CIS-522/course-content/main/tutorials/W08_AutoEncoders_GANs/static/W8_Tutorial1_Solution_Ex02.png />"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "n48WL4b8yxUm"
      },
      "source": [
        "You should see that the `ConvAutoEncoder` achieved lower MSE loss than the linear one. If not, you may need to retrain it (or run another few training epochs from where it left off). We make fewer guarantees on this working with CIFAR10, but it should definitely work with MNIST.\n",
        "\n",
        "Now let's visually compare the reconstructed images from the linear and nonlinear autoencoders. Keep in mind that both have the same dimensionality for $\\mathbf{h}$!"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "bmC-u5zEbpmF"
      },
      "source": [
        "n_plot = 7\n",
        "plt.figure(figsize=(10,4.5))\n",
        "for i in range(n_plot):\n",
        "    idx = torch.randint(len(my_dataset), size=())\n",
        "    image, _ = my_dataset[idx]\n",
        "    with torch.no_grad():\n",
        "        # Get reconstructed image from linear autoencoder\n",
        "        lin_recon = lin_ae(image.unsqueeze(0))[0]\n",
        "    \n",
        "        # Get reconstruction from deep (nonlinear) autoencoder\n",
        "        nonlin_recon = conv_ae(image.unsqueeze(0))[0]\n",
        "    \n",
        "    plt.subplot(3,n_plot,i+1)\n",
        "    plot_torch_image(image)\n",
        "    if i == 0:\n",
        "        plt.ylabel('Original\\nImage')\n",
        "    \n",
        "    plt.subplot(3,n_plot,i+1+n_plot)\n",
        "    plot_torch_image(lin_recon)\n",
        "    if i == 0:\n",
        "        plt.ylabel(f'Lin AE\\n(K={K})')\n",
        "    \n",
        "    plt.subplot(3,n_plot,i+1+2*n_plot)\n",
        "    plot_torch_image(nonlin_recon)\n",
        "    if i == 0:\n",
        "        plt.ylabel(f'NonLin AE\\n(K={K})')\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "bSpvDpLXrLD5"
      },
      "source": [
        "## Inspecting the hidden representations\n",
        "*Estimated time to here: 80min*"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "k7OGAEnI0J8o",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Latent Space\n",
        "\n",
        "video = YouTubeVideo(id=\"HcvTrvCntBY\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "TaAkIZpZ9nAG"
      },
      "source": [
        "Let's start by plotting points in the hidden space ($\\mathbf{h}$), colored by class of the image (which, of course, the autoencoder didn't know about during training!)"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "eSDlEhzC95rc"
      },
      "source": [
        "h_vectors = torch.zeros(len(my_valset), K, device=DEVICE)\n",
        "labels = torch.zeros(len(my_valset), dtype=torch.int32)\n",
        "loader = DataLoader(my_valset, batch_size=200, pin_memory=True)\n",
        "conv_ae.to(DEVICE)\n",
        "i = 0\n",
        "for im, la in loader:\n",
        "    b = im.size()[0]\n",
        "    h_vectors[i:i+b, :] = conv_ae.encode(im.to(DEVICE))\n",
        "    labels[i:i+b] = la\n",
        "    i += b\n",
        "conv_ae.to('cpu')\n",
        "h_vectors = h_vectors.detach().cpu()\n",
        "_, _, h_pcs = torch.pca_lowrank(h_vectors, q=2)\n",
        "h_xy = h_vectors @ h_pcs\n",
        "\n",
        "with plt.xkcd():\n",
        "    plt.figure(figsize=(7,6))\n",
        "    plt.scatter(h_xy[:,0], h_xy[:,1], c=labels, cmap='hsv')\n",
        "    plt.title('2D projection of h, colored by class')\n",
        "    plt.colorbar()\n",
        "    plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2vhNusJZ01KR"
      },
      "source": [
        "To explore the hidden representations, $\\mathbf{h}$, we're going to pick two random images from the dataset and interpolate them 3 different ways. Let's introduce some notation for this: we'll use a variable $t \\in [0,1]$ to gradually transition from image $\\mathbf{x}_1$ at $t=0$ to image $\\mathbf{x}_2$ at $t=1$. Using $\\mathbf{x}(t)$ to denote the interpolated output, the three methods will be\n",
        "\n",
        "1. interpolate the raw pixels, so $$\\mathbf{x}(t) = (1-t) \\cdot \\mathbf{x}_1 + t \\cdot \\mathbf{x}_2$$\n",
        "2. interpolate their encodings from the **linear** AE, so $$\\mathbf{x}(t) = \\text{linear_decoder}((1-t) \\cdot \\text{linear_encoder}(\\mathbf{x}_1) + t \\cdot  \\text{linear_encoder}(\\mathbf{x}_2))$$\n",
        "3. interpolate their encodings from the **nonlinear** AE, so $$\\mathbf{x}(t) = \\text{conv_decoder}((1-t) \\cdot \\text{conv_encoder}(\\mathbf{x}_1) + t \\cdot  \\text{conv_encoder}(\\mathbf{x}_2))$$\n",
        "\n",
        "Note: this demo will likely look better using MNIST than using CIFAR. Check with other members of your pod. If you're using CIFAR for this notebook, consider having someone using MNIST share their screen. \n",
        "\n",
        "What do you notice about the \"interpolated\" images, especially around $t \\approx 1/2$? How many distinct classes do you see in the bottom row?\n",
        "Re-run the cell below a few times to look at multiple examples.\n",
        "\n",
        "**Discuss with your pod and describe what is happening here.**"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "S_NGi5P11Dj5"
      },
      "source": [
        "idx1 = torch.randint(len(my_dataset), size=())\n",
        "idx2 = torch.randint(len(my_dataset), size=())\n",
        "x1, _ = my_dataset[idx1]\n",
        "x2, _ = my_dataset[idx2]\n",
        "n_interp = 11\n",
        "\n",
        "with torch.no_grad():\n",
        "    h1_lin = lin_ae.encode(x1.reshape(1,-1))\n",
        "    h2_lin = lin_ae.encode(x2.reshape(1,-1))\n",
        "    h1_conv = conv_ae.encode(x1.unsqueeze(0))\n",
        "    h2_conv = conv_ae.encode(x2.unsqueeze(0))\n",
        "\n",
        "plt.figure(figsize=(14, 4.5))\n",
        "for i in range(n_interp):\n",
        "    t = i / (n_interp-1)\n",
        "    pixel_interp = (1-t)*x1 + t*x2\n",
        "    plt.subplot(3,n_interp,i+1)\n",
        "    plot_torch_image(pixel_interp)\n",
        "    if i == 0:\n",
        "        plt.ylabel('Raw\\nPixels')\n",
        "    plt.title(f't={i}/{n_interp-1}')\n",
        "    \n",
        "    with torch.no_grad():\n",
        "        lin_ae_interp = lin_ae.decode((1-t)*h1_lin + t*h2_lin)\n",
        "    plt.subplot(3,n_interp,i+1+n_interp)\n",
        "    plot_torch_image(lin_ae_interp.reshape(my_dataset_size))\n",
        "    if i == 0:\n",
        "        plt.ylabel('Lin AE')\n",
        "    \n",
        "    with torch.no_grad():\n",
        "        conv_ae_interp = conv_ae.decode((1-t)*h1_conv + t*h2_conv)[0]\n",
        "    plt.subplot(3,n_interp,i+1+2*n_interp)\n",
        "    plot_torch_image(conv_ae_interp)\n",
        "    if i == 0:\n",
        "        plt.ylabel('NonLin AE')\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "2C97SzoFI9ay"
      },
      "source": [
        "\n",
        "__Summarize your observations here:__"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Ac8JQlYzKVSD",
        "cellView": "form"
      },
      "source": [
        "interpolation_observations = \"At first I was surprised to see... But this makes sense because...\" #@param{type:\"string\"}"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "RP_0Hcyflj3w"
      },
      "source": [
        "---\n",
        "# Section 3: Generative models and density networks\n",
        "*Estimated time to here: 95min*"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "rjpZAX0mrUg_",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Generating with Gaussians\n",
        "\n",
        "try: t3;\n",
        "except NameError: t3=time.time()\n",
        "\n",
        "video = YouTubeVideo(id=\"h96JaT5Jyi4\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-7PBmLooLeds"
      },
      "source": [
        "## Generating novel images from the decoder\n",
        "\n",
        "If we isolate the decoder part of the AutoEncoder, what we have is a neural network that takes as input a vector of size $K$ and produces as output an image that looks something like our training data. Recall that in our earlier notation, we had an input $\\mathbf{x}$ that was mapped to a low-dimensional hidden representation $\\mathbf{h}$ which was then decoded into a reconstruction of the input, $\\mathbf{x'}$:\n",
        "$$\\mathbf{x} \\overset{\\text{encode}}{\\longrightarrow} \\mathbf{h} \\overset{\\text{decode}}{\\longrightarrow} \\mathbf{x'}\\, .$$\n",
        "Partly as a matter of convention, and partly to distinguish where we are going next from the previous section, we're going to introduce a new variable, $\\mathbf{z} \\in \\mathbb{R}^K$, which will take the place of $\\mathbf{h}$. The key difference is that while $\\mathbf{h}$ is produced by the encoder for a particular $\\mathbf{x}$, $\\mathbf{z}$ will be drawn out of thin air from a prior of our choosing:\n",
        "$$\\mathbf{z} \\sim p(\\mathbf{z})\\\\ \\mathbf{z} \\overset{\\text{decode}}{\\longrightarrow} \\mathbf{x}\\, .$$\n",
        "(Note that it is also conventional to drop the \"prime\" on $\\mathbf{x}$ when it is no longer being thought of as a \"reconstruction\")."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "dc0GemsWQYds"
      },
      "source": [
        "### Exercise 3\n",
        "### Sample $\\mathbf{z}$ from a standard normal and visualize the images produced"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "SYBmyD1JQjoE"
      },
      "source": [
        "def generate_images(autoencoder, K, n_images=1):\n",
        "    \"\"\"Generate n_images 'new' images from the decoder part of the given\n",
        "    autoencoder.\n",
        "\n",
        "    returns (n_images, channels, height, width) tensor of images\n",
        "    \"\"\"\n",
        "    # Concatenate tuples to get (n_images, channels, height, width)\n",
        "    output_shape = (n_images,) + my_dataset_size\n",
        "    with torch.no_grad():\n",
        "        ####################################################################\n",
        "        # Fill in all missing code below (...),\n",
        "        # then remove or comment the line below to test your function\n",
        "        raise NotImplementedError(\"Please complete the generate_images function!\")\n",
        "        #################################################################### \n",
        "        ... # Your code here: sample z, pass through autoencoder.decode(), and reshape output.\n",
        "\n",
        "# Uncomment to run it\n",
        "# images = generate_images(conv_ae, K, n_images=25)\n",
        "# plt.figure(figsize=(5,5))\n",
        "# for i in range(25):\n",
        "#     plt.subplot(5,5,i+1)\n",
        "#     plot_torch_image(images[i])\n",
        "# plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "3yIa-T6kx94P"
      },
      "source": [
        "[*Click for solution*](https://github.com/CIS-522/course-content/blob/main/tutorials/W08_AutoEncoders_GANs/solutions/W8_Tutorial1_Solution_Ex03.py)\n",
        "\n",
        "*Example output:*  \n",
        "\n",
        "<img alt='Solution hint 3' align='left' width=400 height=400 src=https://raw.githubusercontent.com/CIS-522/course-content/main/tutorials/W08_AutoEncoders_GANs/static/W8_Tutorial1_Solution_Ex03.png />"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "s-kMt6-ul1iM"
      },
      "source": [
        "## Formalizing the problem: density estimation with maximum likelihood\n",
        "*Estimated time to here: 110min*"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "1al9VdfYRkt9",
        "cellView": "form"
      },
      "source": [
        "#@title Video: Density Networks\n",
        "\n",
        "video = YouTubeVideo(id=\"rx3IlM4qnvw\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "MqQIy5H7Ud61"
      },
      "source": [
        "Note: we've moved the technical details of \"formalizing the problem\" to Appendix A.1 at the end of this notebook. Those who want more of the theoretical/mathematical backstory are encouraged to read it. Those who just want to build a VAE, carry on!"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "pRp1gFniPT7B"
      },
      "source": [
        "---\n",
        "# Section 4: Variational Auto-Encoders (VAEs)"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "S8jinOmeTT3v",
        "cellView": "form"
      },
      "source": [
        "#@title Video: VAE Samples\n",
        "\n",
        "try: t4;\n",
        "except NameError: t4=time.time()\n",
        "\n",
        "video = YouTubeVideo(id=\"RgOF3XJL5vw\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "IgROsfdxDbuX"
      },
      "source": [
        "## Components of a VAE\n",
        "## Recognition models and density networks\n",
        "\n",
        "Variational AutoEncoders (VAEs) are a lot like the classic AutoEncoders (AEs) you just saw, but where we explicitly think about probability distributions. In the language of VAEs, the __encoder__ is replaced with a __recognition model__, and the __decoder__ is replaced with a __density network__.\n",
        "\n",
        "Where in a classic autoencoder the encoder maps from images to a single hidden vector,\n",
        "$$\\mathbf{x} \\overset{\\text{AE}}{\\longrightarrow} \\mathbf{h} \\, , $$ in a VAE we would say that a recognition model maps from inputs to entire __distributions__ over hidden vectors,\n",
        "$$\\mathbf{x} \\overset{\\text{VAE}}{\\longrightarrow} q(\\mathbf{z}) \\, ,$$\n",
        "which we will then sample from.\n",
        "We'll say more in a moment about what kind of distribution $q(\\mathbf{z})$ is.\n",
        "Part of what makes VAEs work is that the loss function will require good reconstructions of the input not just for a single $\\mathbf{z}$, but _on average_ from samples of $\\mathbf{z} \\sim q(\\mathbf{z})$.\n",
        "\n",
        "In the classic autoencoder, we had a decoder which maps from hidden vectors to reconstructions of the input:\n",
        "$$\\mathbf{h} \\overset{\\text{AE}}{\\longrightarrow} \\mathbf{x'} \\, .$$\n",
        "In a density network, reconstructions are expressed in terms of a distribution:\n",
        "$$\\mathbf{z} \\overset{\\text{VAE}}{\\longrightarrow} p(\\mathbf{x}|\\mathbf{z};\\mathbf{w}) $$\n",
        "where, as above, $p(\\mathbf{x}|\\mathbf{z};\\mathbf{w})$ is defined by mapping $\\mathbf{z}$ through a density network then treating the resulting $f(\\mathbf{z};\\mathbf{w})$ as the mean of a (Gaussian) distribution over $\\mathbf{x}$."
      ]
    },
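    {
      "cell_type": "markdown",
      "metadata": {
        "id": "density_net_sketch_md"
      },
      "source": [
        "In code, the density-network view of the decoder amounts to treating the decoder's output as a mean and adding Gaussian pixel noise to it. Here is a minimal sketch, where `decoder_mean` is a hypothetical stand-in for $f(\\mathbf{z};\\mathbf{w})$, not the decoder you trained above:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "density_net_sketch_code"
      },
      "source": [
        "decoder_mean = nn.Linear(4, 16)  # hypothetical f(z;w): K=4 latents -> 16 'pixels'\n",
        "sig_x_demo = 0.1                 # pixel noise scale (sigma_x)\n",
        "z = torch.randn(1, 4)            # one latent vector\n",
        "# Sample x ~ N(f(z;w), sig_x^2 * I) by adding scaled Gaussian noise to the mean\n",
        "x_sample = decoder_mean(z) + sig_x_demo * torch.randn(1, 16)\n",
        "print(x_sample.size())           # torch.Size([1, 16])"
      ],
      "execution_count": null,
      "outputs": []
    },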
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "svLx5HwJKhdZ"
      },
      "source": [
        "### Exercise 4\n",
        "### Sampling from $q(\\mathbf{z})$\n",
        "\n",
        "How can a neural network (the __recognition model__) output an entire probability distribution $$\\mathbf{x} \\longrightarrow q(\\mathbf{z}) \\, ?$$\n",
        "One idea would be to make the weights of the neural network stochastic, so that every time the network is run, a different $\\mathbf{z}$ is produced. (In fact, this is quite common in [Bayesian Neural Networks](https://medium.com/neuralspace/bayesian-neural-network-series-post-1-need-for-bayesian-networks-e209e66b70b2), but this isn't what people use in VAEs.)\n",
        "\n",
        "Instead, we will start by committing to a particular _family_ of distributions. We'll then have the recognition model output the _parameters_ of $q$, which we'll call $\\phi$. A common choice, which we will use throughout, is the family of isotropic multivariate Gaussians$^\\dagger$:\n",
        "$$q(\\mathbf{z};\\phi) = \\mathcal{N}(\\mathbf{z};\\boldsymbol{\\mu},\\sigma^2\\mathbf{I}_K) = \\prod_{k=1}^K \\mathcal{N}(z_k; \\mu_k, \\sigma^2)$$\n",
        "where the $K+1$ parameters are$^*$\n",
        "$$\\phi = \\lbrace{\\mu_1, \\mu_2, \\ldots, \\mu_K, \\log(\\sigma)}\\rbrace \\, .$$\n",
        "By defining the last entry of $\\phi$ as the _logarithm_ of $\\sigma$, the last entry can be any real number while enforcing the requirement that $\\sigma > 0$.\n",
        "\n",
        "A recognition model is a neural network that takes $\\mathbf{x}$ as input and produces $\\phi$ as output. The purpose of the following exercise is not to write a recognition model (that will come later), but to clarify the relationship between $\\phi$ and $q(\\mathbf{z})$. You will write a function, `rsample`, which takes as input a batch of $\\phi$s and outputs a set of samples of $\\mathbf{z}$ drawn from $q(\\mathbf{z};\\phi)$."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "0ReQnZaDT2i4"
      },
      "source": [
        "def rsample(phi, n_samples):\n",
        "    \"\"\"Sample z ~ q(z;phi)\n",
        "    Output z is size [b,n_samples,K] given phi with shape [b,K+1]. The first K\n",
        "    entries of each row of phi are the mean of q, and phi[:,-1] is the log\n",
        "    standard deviation\n",
        "    \"\"\"\n",
        "    b, kplus1 = phi.size()\n",
        "    k = kplus1-1\n",
        "    mu, sig = phi[:, :-1], phi[:,-1].exp()\n",
        "    ####################################################################\n",
        "    # Fill in all missing code below (...),\n",
        "    # then remove or comment the line below to test your function\n",
        "    raise NotImplementedError(\"Please complete the rsample function!\")\n",
        "    ####################################################################\n",
        "    ... # your code here!"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "s4w5ksfjyO9e"
      },
      "source": [
        "[*Click for solution*](https://github.com/CIS-522/course-content/blob/main/tutorials/W08_AutoEncoders_GANs/solutions/W8_Tutorial1_Solution_Ex04.py)"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "5ebTNwLKUFHY"
      },
      "source": [
        "# Testing rsample()\n",
        "phi = torch.randn(4, 3, device=DEVICE)\n",
        "zs = rsample(phi, 100)\n",
        "assert zs.size() == (4, 100, 2), \"rsample size is incorrect!\"\n",
        "assert zs.device == phi.device, \"rsample device doesn't match phi device!\"\n",
        "zs = zs.cpu()\n",
        "\n",
        "with plt.xkcd():\n",
        "    plt.figure(figsize=(12,3))\n",
        "    for i in range(4):\n",
        "        plt.subplot(1,4,i+1)\n",
        "        plt.scatter(zs[i,:,0], zs[i,:,1], marker='.')\n",
        "        th = torch.linspace(0, 6.28318, 100)\n",
        "        x, y = torch.cos(th), torch.sin(th)\n",
        "        # Draw 2-sigma contours\n",
        "        plt.plot(2*x*phi[i,2].exp().item()+phi[i,0].item(), 2*y*phi[i,2].exp().item()+phi[i,1].item())\n",
        "        # plt.title(f'mu={phi[i,0].item():.2f},{phi[i,1].item():.2f}, sig={phi[i,2].exp().item():.3f}')\n",
        "        plt.xlim(-5,5)\n",
        "        plt.ylim(-5,5)\n",
        "        plt.grid()\n",
        "        plt.axis('equal')\n",
        "    plt.suptitle('If rsample() is correct, then most but not all points should lie in the circles')\n",
        "    plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "gTE_b3YOyVyz"
      },
      "source": [
        "*Example output:*  \n",
        "\n",
        "<img alt='Solution hint 4' align='left' width=900 height=225 src=https://raw.githubusercontent.com/CIS-522/course-content/main/tutorials/W08_AutoEncoders_GANs/static/W8_Tutorial1_Solution_Ex04.png />"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "nKZgt0ebP5Pe"
      },
      "source": [
        "---\n",
        "\n",
        "$^\\dagger$ PyTorch has a `MultivariateNormal` class which handles multivariate Gaussian distributions with arbitrary covariance matrices. It is not very beginner-friendly, though, so we will write our own functions to work with $\\phi$. Doing so teaches you some implementation details, and it is not very hard as long as we stick to an isotropic ($\\sigma$) or diagonal ($\\lbrace{\\sigma_1, \\ldots, \\sigma_K}\\rbrace$) covariance.\n",
        "\n",
        "$^*$ Another common parameterization is to use a separate $\\sigma$ for each dimension of $\\mathbf{z}$, in which case $\\phi$ would instead contain $2K$ parameters:\n",
        "$$\\phi = \\lbrace{\\mu_1, \\mu_2, \\ldots, \\mu_K, \\log(\\sigma_1), \\ldots, \\log(\\sigma_K)}\\rbrace \\, .$$"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "LlQfA-aiK_LS"
      },
      "source": [
        "## VAE training: maximize the Evidence Lower BOund (ELBO)\n",
        "*Estimated time to here: 125min*\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "r64ZiPHZc7tb",
        "cellView": "form"
      },
      "source": [
        "#@title Video: ELBO\n",
        "\n",
        "video = YouTubeVideo(id=\"-99NskgKDo0\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "EloDKD0_BqO1"
      },
      "source": [
        "A full derivation and further explanation of the ELBO can be found in Appendix A.2 at the end of this notebook. In the following few sections, we provide implementations of `log_p_x` and `kl_q_p` for you, since the technical details of each can be somewhat opaque."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "uyBiFvRac-fE"
      },
      "source": [
        "<!-- To actually _train_ a VAE, we decompose the ELBO into two terms,\n",
        "$$\\text{ELBO}(\\mathbf{x}_i,\\phi,\\mathbf{w}) = \\color{blue}{\\mathbb{E}_{q(\\mathbf{z};\\phi)}\\left[\\log p(\\mathbf{x}_i|\\mathbf{z};\\mathbf{w}) \\right]} - \\color{green}{KL(q(\\mathbf{z};\\phi)||p(\\mathbf{z}))} \\, ,$$\n",
        "each of which we will be able to compute relatively easily:\n",
        "\n",
        "1. a <font color=\"blue\">**reconstruction term**</font>, which is maximized if $q(\\mathbf{z};\\phi)$ places higher probability on values of $\\mathbf{z}$ that do a good job of reconstructing $\\mathbf{x}_i$ after being passed through the density network. Since there is an expectation ($\\mathbb{E}_{q(\\mathbf{z};\\phi)}[\\ldots]$), this term wants _all_ samples from $\\mathbf{z} \\sim q(\\mathbf{z};\\phi)$ to produce good reconstructions of a given $\\mathbf{x}_i$. This term should remind you of the MSE loss we used to train the original AutoEncoders.\n",
        "2. a <font color=\"green\">**regularization term**</font> which penalizes $q(\\mathbf{z};\\phi)$ for deviating too far from the prior, $p(\\mathbf{z})$. -->\n",
        "\n",
        "First, we'll implement the $\\color{blue}{\\mathbb{E}_{q(\\mathbf{z};\\phi)}\\left[\\log p(\\mathbf{x}_i|\\mathbf{z};\\mathbf{w}) \\right]}$ term in PyTorch in a function called `log_p_x`.\n",
        "Earlier, we introduced the density network with $p(\\mathbf{x}|\\mathbf{z};\\mathbf{w}) = \\mathcal{N}\\left(f(\\mathbf{z};\\mathbf{w}), \\sigma^2_x\\mathbf{I}_P\\right)$. The $\\log$ of this is\n",
        "\\begin{align}\\log p(\\mathbf{x}_i|\\mathbf{z};\\mathbf{w}) &= -\\frac{1}{2} \\frac{||\\mathbf{x}_i - f(\\mathbf{z};\\mathbf{w})||_2^2}{\\sigma_\\mathbf{x}^2} - P \\log(\\sigma_\\mathbf{x}) \\\\\n",
        "&= -\\sum_{j=1}^P \\left(\\frac{(x_{ij} - f(\\mathbf{z};\\mathbf{w})_j)^2}{2\\sigma_\\mathbf{x}^2} + \\log \\sigma_\\mathbf{x}\\right)\\end{align}\n",
        "where $j$ indexes individual pixels in the image. (We have dropped the additive constant $-\\frac{P}{2}\\log(2\\pi)$ from the Gaussian log density, since it does not depend on the parameters being learned.)"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "xWDsqABzgZ36"
      },
      "source": [
        "def log_p_x(x, mu_xs, sig_x):\n",
        "    \"\"\"Given [batch, ...] input x and [batch, n, ...] reconstructions, compute\n",
        "    pixel-wise log Gaussian probability\n",
        "\n",
        "    Sum over pixel dimensions, but mean over batch and samples.\n",
        "    \"\"\"\n",
        "    b, n = mu_xs.size()[:2]\n",
        "    # Flatten out pixels and add a singleton dimension [1] so that x will be\n",
        "    # implicitly expanded when combined with mu_xs\n",
        "    x = x.reshape(b, 1, -1)\n",
        "    _, _, p = x.size()\n",
        "    squared_error = (x - mu_xs.view(b, n, -1))**2 / (2*sig_x**2)\n",
        "\n",
        "    # Size of squared_error is [b,n,p]. log prob is by definition sum over [p].\n",
        "    # Expected value requires mean over [n]. Handling different size batches\n",
        "    # requires mean over [b].\n",
        "    return -(squared_error + torch.log(sig_x)).sum(dim=2).mean(dim=(0,1))"
      ],
      "execution_count": null,
      "outputs": []
    },
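    {
      "cell_type": "markdown",
      "metadata": {
        "id": "log_p_x_check_md"
      },
      "source": [
        "A quick sanity check of `log_p_x` (a sketch, not part of the exercises): with a single reconstruction per image ($n=1$), it should agree with `torch.distributions.Normal` up to the additive constant $-\\frac{P}{2}\\log(2\\pi)$, which `log_p_x` drops because it does not affect gradients."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "log_p_x_check_code"
      },
      "source": [
        "import math\n",
        "x_chk = torch.randn(3, 2, 4, 4)      # batch of 3 fake 2x4x4 'images'\n",
        "mu_chk = torch.randn(3, 1, 2, 4, 4)  # n=1 reconstruction per image\n",
        "sig_chk = torch.tensor(0.5)\n",
        "P_chk = 2 * 4 * 4\n",
        "ours = log_p_x(x_chk, mu_chk, sig_chk)\n",
        "ref = torch.distributions.Normal(mu_chk.view(3, -1), sig_chk).log_prob(x_chk.view(3, -1)).sum(dim=1).mean()\n",
        "# The two should differ only by the constant 0.5 * P * log(2*pi)\n",
        "print(torch.isclose(ours, ref + 0.5 * P_chk * math.log(2 * math.pi), atol=1e-3))"
      ],
      "execution_count": null,
      "outputs": []
    },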
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "FIvFgXZNiZjY"
      },
      "source": [
        "Next, we will implement the $\\color{green}{KL(q(\\mathbf{z};\\phi)||p(\\mathbf{z}))}$ term in a function called `kl_q_p`. While we could plug in the exact formula for $KL$ between two Gaussians, a more general (but more variable) approach is to write\n",
        "$$KL(q(\\mathbf{z};\\phi)||p(\\mathbf{z})) = \\mathbb{E}_{q(\\mathbf{z};\\phi)}\\left[\\log q(\\mathbf{z};\\phi) - \\log p(\\mathbf{z})\\right]$$\n",
        "and approximate this expectation with samples of $\\mathbf{z} \\sim q(\\mathbf{z};\\phi)$ just like we did for the reconstruction term."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "p0wOSNHuiYb4"
      },
      "source": [
        "def kl_q_p(zs, phi):\n",
        "    \"\"\"Given [b,n,k] samples of z drawn from q, compute estimate of KL(q||p).\n",
        "    phi must be size [b,k+1]\n",
        "\n",
        "    This uses mu_p = 0 and sigma_p = 1, which simplifies the log(p(zs)) term to\n",
        "    just -1/2*(zs**2)\n",
        "    \"\"\"\n",
        "    b, n, k = zs.size()\n",
        "    mu_q, log_sig_q = phi[:,:-1], phi[:,-1]\n",
        "    log_p = -0.5*(zs**2)\n",
        "    log_q = -0.5*(zs - mu_q.view(b,1,k))**2 / log_sig_q.exp().view(b,1,1)**2 - log_sig_q.view(b,1,-1)\n",
        "    # Size of log_q and log_p is [b,n,k]. Sum along [k] but mean along [b,n]\n",
        "    return (log_q - log_p).sum(dim=2).mean(dim=(0,1))"
      ],
      "execution_count": null,
      "outputs": []
    },
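    {
      "cell_type": "markdown",
      "metadata": {
        "id": "kl_q_p_check_md"
      },
      "source": [
        "Once your `rsample` from Exercise 4 passes its tests, you can sanity-check `kl_q_p` too (a sketch): with many samples, the Monte Carlo estimate should approach the closed-form KL between $q = \\mathcal{N}(\\boldsymbol{\\mu}, \\sigma^2\\mathbf{I}_K)$ and the standard normal prior,\n",
        "$$KL(q||p) = \\frac{1}{2}\\left(||\\boldsymbol{\\mu}||^2 + K\\sigma^2 - K\\right) - K\\log\\sigma \\, .$$"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "kl_q_p_check_code"
      },
      "source": [
        "phi_chk = torch.tensor([[0.5, -1.0, 0.2]])  # mu=(0.5,-1.0), log(sigma)=0.2\n",
        "zs_chk = rsample(phi_chk, 100000)\n",
        "mu_chk, log_sig_chk, K_chk = phi_chk[0, :2], phi_chk[0, 2], 2\n",
        "kl_closed = 0.5 * ((mu_chk**2).sum() + K_chk * log_sig_chk.exp()**2 - K_chk) - K_chk * log_sig_chk\n",
        "# The two numbers should be close (the Monte Carlo estimate is noisy)\n",
        "print(f'Monte Carlo: {kl_q_p(zs_chk, phi_chk).item():.3f}, closed form: {kl_closed.item():.3f}')"
      ],
      "execution_count": null,
      "outputs": []
    },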
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "TRDnZaE8kq7W"
      },
      "source": [
        "Finally, we will implement the ELBO in a function called, well, `elbo`. Recall from the video that $$\\text{ELBO}(\\mathbf{x}_i,\\phi,\\mathbf{w}) = \\color{blue}{\\mathbb{E}_{q(\\mathbf{z};\\phi)}\\left[\\log p(\\mathbf{x}_i|\\mathbf{z};\\mathbf{w}) \\right]} - \\color{green}{KL(q(\\mathbf{z};\\phi)||p(\\mathbf{z}))} \\, \\,$$\n",
        "and that we have two functions for this: `log_p_x` for the first term and `kl_q_p` for the second term. At a high level, the `elbo` function simply computes each of these terms and takes their difference!\n",
        "\n",
        "In a bit more detail, `elbo` will have five inputs:\n",
        "\n",
        "* `x`, which is a _batch_ of input images of size `[batch,channels,height,width]`\n",
        "* `phi`, which as before is a batch of _parameters_ of $q(\\mathbf{z};\\phi)$ with size `[batch,k+1]`\n",
        "* `density_net`, which takes in $\\mathbf{z}$s and outputs reconstructed $\\mathbf{x}$s. (For those who dove into Appendix A.1 it really outputs the mean of a distribution over each $\\mathbf{x}$. This is the $f(\\mathbf{z};\\mathbf{w})$ in the mathematical notation in the Appendix.)\n",
        "* `sig_x`, which is the amount of \"pixel noise\" in the generative model ($\\sigma_\\mathbf{x}$ in the appendix). Intuitively, larger `sig_x` means that the model can be sloppier in its reconstructions, since errors are attributed to _noise_.\n",
        "* `n`: the number of samples of `z` that will be sampled per input image."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Ruq0cjNwk9C1"
      },
      "source": [
        "def elbo(x, phi, density_net, sig_x, n):\n",
        "    # Start by drawing n samples of z from q(z;phi)\n",
        "    zs = rsample(phi, n)\n",
        "    # Density net expects just [b,k] inputs, so we'll collapse together batch \n",
        "    # and samples dimensions to get [b*n,k] samples of z, then expand back out\n",
        "    # into separate [b,n,p] dimensions in the result\n",
        "    b = x.size()[0]\n",
        "    mu_xs = density_net(zs.view(b*n, -1)).view(b,n,-1)\n",
        "    # Compute reconstruction and regularization terms. ELBO = diff. between them\n",
        "    return log_p_x(x, mu_xs, sig_x) - kl_q_p(zs, phi)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "zBlUF6zxCuvx"
      },
      "source": [
        "Take a moment to check in with your pod and ask if you have any questions about these functions.\n",
        "\n",
        "*Estimated time to here: 140min*\n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "LnOSPFb5oNHS"
      },
      "source": [
        "### See the ELBO in action\n",
        "\n",
        "What's cool about the ELBO is that it is a single objective which solves two problems at once:\n",
        "\n",
        "1. When we maximize the ELBO with respect to $\\phi$, we are making $q(\\mathbf{z};\\phi)$ closer to $p(\\mathbf{z}|\\mathbf{x};\\mathbf{w})$. This approximately solves the \"needle in a haystack\" problem of finding which $\\mathbf{z}$s are \"relevant\" for each $\\mathbf{x}_i$. This is the probabilistic equivalent of the _encoder_ from before, which simply mapped from $\\mathbf{x}$ to $\\mathbf{h}$, except now we have an entire distribution $q(\\mathbf{z};\\phi)$.\n",
        "2. When we maximize the ELBO with respect to $\\mathbf{w}$, we are improving the generative model, making $p(\\mathbf{x};\\mathbf{w})$ closer to the distribution of training examples of $\\mathbf{x}$. In other words, we are getting better at _generating new $\\mathbf{x}$ from $\\mathbf{z}$_.\n",
        "\n",
        "(Further details and explanations can be found in Appendix A.2)\n",
        "\n",
        "To see the ELBO in action, we'll use it to infer $\\mathbf{z}$ for a single $\\mathbf{x}$ using the decoder part of the convolutional autoencoder you built in the first part. This section may also need to be run multiple times, or with larger `steps` (especially if you chose a large $K$)."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "fKgahgnzoeoO"
      },
      "source": [
        "# Pick a random image from the validation set\n",
        "idx = torch.randint(len(my_valset), size=())\n",
        "x_i, _ = my_valset[idx]\n",
        "\n",
        "# Solve for q(z;phi) by maximizing the ELBO by gradient ascent\n",
        "steps = 4000\n",
        "phi = torch.zeros(1, K+1, requires_grad=True, device=DEVICE)\n",
        "# sig_x controls amount of 'pixel noise'.\n",
        "# Lower --> more 'strict' reconstructions, so q(z) will be narrower.\n",
        "# Higher --> more 'lax' reconstructions, so q(z) will be wider.\n",
        "sig_x = torch.tensor(0.5, device=DEVICE)\n",
        "optim = torch.optim.Adam([phi], lr=0.001)\n",
        "elbo_vals = torch.zeros(steps, device=DEVICE)\n",
        "x_i = x_i.to(DEVICE).unsqueeze(0)\n",
        "conv_ae.to(DEVICE)\n",
        "for iter in trange(steps, leave=False):\n",
        "    optim.zero_grad()\n",
        "    loss = -elbo(x_i, phi, density_net=conv_ae.decode, sig_x=sig_x, n=16)\n",
        "    loss.backward()\n",
        "    optim.step()\n",
        "    elbo_vals[iter] = -loss.detach()\n",
        "x_i = x_i.to('cpu')\n",
        "conv_ae.to('cpu')\n",
        "phi = phi.detach().cpu().flatten()\n",
        "\n",
        "mu_q = phi[:-1]\n",
        "sig_q = torch.exp(phi[-1])*torch.ones(K)\n",
        "\n",
        "# For comparison, what would our AE encoder have produced?\n",
        "h = conv_ae.encode(x_i).detach().flatten()\n",
        "\n",
        "# Plot\n",
        "with plt.xkcd():\n",
        "    plt.figure(figsize=(10,4.5))\n",
        "    plt.subplot(1,2,1)\n",
        "    plt.plot(elbo_vals.cpu())\n",
        "    plt.xlabel('Iterations')\n",
        "    plt.ylabel('ELBO')\n",
        "    plt.title('ELBO gradient ascent w.r.t. $\\phi$')\n",
        "    plt.subplot(1,2,2)\n",
        "    plt.plot(torch.arange(1,K+1), h, marker='.', color='k', linestyle='-')\n",
        "    plt.errorbar(torch.arange(1,K+1), mu_q, sig_q, marker='.', linestyle='-', color='b')\n",
        "    plt.legend(['h', 'q(z;$\\phi$)'])\n",
        "    plt.xlabel('hidden dimension (k)')\n",
        "    plt.ylabel('z_k')\n",
        "    plt.title('q(z) gets close to h')\n",
        "    plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "otcFWoujtmDP"
      },
      "source": [
        "Hopefully you see that we successfully maximized the ELBO by ascending its gradient with respect to $\\phi$, and that the resulting distribution $q(\\mathbf{z};\\phi)$ is close to the vector $\\mathbf{h}$ that our original autoencoder produces for the same image.\n",
        "\n",
        "Remember that in a VAE, we are thinking in terms of _distributions_. Rather than a single $\\mathbf{h}$, we have an entire distribution of $\\mathbf{z}$s. We should be able to sample $\\mathbf{z} \\sim q(\\mathbf{z};\\phi)$ and get decent reconstructions of $\\mathbf{x}_i$ for all of them. Let's take a look:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "5LlIucKSs8sE"
      },
      "source": [
        "plt.figure(figsize=(10,3))\n",
        "plt.subplot(1,7,1)\n",
        "plot_torch_image(x_i[0])\n",
        "plt.title('$x_i$')\n",
        "\n",
        "orig_reconstruction = conv_ae.decode(h.view(1,K))[0].detach()\n",
        "plt.subplot(1,7,2)\n",
        "plot_torch_image(orig_reconstruction)\n",
        "plt.title(\"h --> $x'$\")\n",
        "\n",
        "zs = rsample(phi.view(1,-1), 5)[0]\n",
        "q_reconstructions = conv_ae.decode(zs).detach()\n",
        "for i in range(5):\n",
        "    plt.subplot(1,7,3+i)\n",
        "    plot_torch_image(q_reconstructions[i])\n",
        "    plt.title(f\"$z_{i+1}$ --> $x'$\")"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "wEmc1SLvvWnf"
      },
      "source": [
        "You should see that the reconstructions from each sampled $\\mathbf{z}$ vary slightly from each other, but that all are plausible reconstructions of the original $\\mathbf{x}_i$. This shows that maximizing the ELBO by gradient ascent on $\\phi$ gives us the ability to reconstruct $\\mathbf{x}$ a few different ways based on a __distribution__ of $\\mathbf{z}$s.\n",
        "\n",
        "(Your results may vary, depending on the dataset, $K$, number of optimization steps, etc)."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "9N1Uz_OxEyEP"
      },
      "source": [
        "## Build a VAE\n",
        "\n",
        "You now have all the ingredients you need to build a VAE! The architecture in this example will be nearly identical to the `ConvAutoEncoder` from earlier, with a few key differences reflecting the fact that we're now thinking _distributionally_:\n",
        "\n",
        "* The encoder is now a \"recognition model\" that outputs $\\phi$ for each input rather than $\\mathbf{h}$. All this means in the code is that the final linear layer which previously projected down to `K` dimensions will now project down to `K+1` dimensions (the size of $\\phi$).\n",
        "* The model will store `sig_x` as an extra parameter for the density network. In fact, we'll make `sig_x` learned from data as well.\n",
        "* We'll train to maximize the ELBO"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "XPytk4mmMXor",
        "cellView": "form"
      },
      "source": [
        "#@title Video: VAEs: the big picture\n",
        "\n",
        "video = YouTubeVideo(id=\"QPPCjiN7UIk\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "pRJiI7l6xmHY"
      },
      "source": [
        "### Exercise 5 (Homework)\n",
        "### Write a VAE with otherwise the same architecture as `ConvAutoEncoder`\n",
        "\n",
        "Chances are, you are out of time with your pod at this point. Skip to the \"wrapping up\" section and return to this exercise as homework. It is a culmination of many ideas throughout this notebook and will likely take some time!\n",
        "\n",
        "Note that we're actually not using `elbo()` from above, but rewriting it to be a member function of the `ConvVAE` class. This is simply to reduce the amount of reshaping you need to worry about, but they're functionally the same."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "lUEOIel7PJZh"
      },
      "source": [
        "class ConvVAE(nn.Module):\n",
        "    def __init__(self, K, num_filters=32, filter_size=5):\n",
        "        super(ConvVAE, self).__init__()\n",
        "        \n",
        "        # With padding=0, the number of pixels cut off from each image dimension\n",
        "        # is filter_size // 2. Double it to get the amount of pixels lost in\n",
        "        # width and height per Conv2D layer, or added back in per \n",
        "        # ConvTranspose2D layer.\n",
        "        filter_reduction = 2 * (filter_size // 2)\n",
        "\n",
        "        # After passing input through two Conv2d layers, the shape will be\n",
        "        # 'shape_after_conv'. This is also the shape that will go into the first\n",
        "        # deconvolution layer in the decoder\n",
        "        self.shape_after_conv = (num_filters,\n",
        "                                 my_dataset_size[1]-2*filter_reduction,\n",
        "                                 my_dataset_size[2]-2*filter_reduction)\n",
        "        flat_size_after_conv = self.shape_after_conv[0] \\\n",
        "            * self.shape_after_conv[1] \\\n",
        "            * self.shape_after_conv[2]\n",
        "        ####################################################################\n",
        "        # Fill in all missing code below (...),\n",
        "        # then remove or comment the line below to test your class\n",
        "        raise NotImplementedError(\"Please complete the ConvVAE class!\")\n",
        "        ####################################################################\n",
        "        # Define the recognition model (encoder or q) part      \n",
        "        ... # YOUR CODE HERE (BiasLayer, nn.Conv2d x2, nn.Flatten, nn.Linear)\n",
        "\n",
        "        # Define the generative model (decoder or p) part\n",
        "        ... # YOUR CODE HERE (nn.Linear, nn.Unflatten(-1, self.shape_after_conv), nn.ConvTranspose2d x2, BiasLayer)\n",
        "\n",
        "        # Define a special extra parameter to learn scalar sig_x for all pixels.\n",
        "        self.log_sig_x = nn.Parameter(torch.zeros(()))\n",
        "    \n",
        "    def infer(self, x):\n",
        "        \"\"\"Map (batch of) x to (batch of) phi which can then be passed to\n",
        "        rsample to get z\n",
        "        \"\"\"\n",
        "        ... # YOUR CODE HERE. Analogous to conv_ae.encode(). Output should be size [b,k+1]\n",
        "\n",
        "    def generate(self, zs):\n",
        "        \"\"\"Map [b,n,k] sized samples of z to [b,n,p] sized images\n",
        "        \"\"\"\n",
        "        b, n, k = zs.size()\n",
        "        ... # YOUR CODE HERE. Analogous to conv_ae.decode(). Hint: requires zs.reshape() or zs.view() since nn.Linear expects (?, k) size inputs\n",
        "    \n",
        "    def forward(self, x):\n",
        "        # VAE.forward() is not used for training, but we'll treat it like a\n",
        "        # classic autoencoder by taking a single sample of z ~ q\n",
        "        phi = self.infer(x)\n",
        "        zs = rsample(phi, 1)\n",
        "        return self.generate(zs).view(x.size())\n",
        "\n",
        "    def elbo(self, x, n=1):\n",
        "        \"\"\"Run input end to end through the VAE and compute the ELBO using n\n",
        "        samples of z\n",
        "        \"\"\"\n",
        "        phi = self.infer(x)\n",
        "        zs = rsample(phi, n)\n",
        "        mu_xs = self.generate(zs)\n",
        "        return log_p_x(x, mu_xs, self.log_sig_x.exp()) - kl_q_p(zs, phi)\n",
        "\n",
        "def train_vae(vae, dataset, epochs=10, n_samples=16):\n",
        "    opt = torch.optim.Adam(vae.parameters(), lr=0.001, weight_decay=1e-6)\n",
        "    elbo_vals = []\n",
        "    vae.to(DEVICE)\n",
        "    vae.train()\n",
        "    loader = DataLoader(dataset, batch_size=100, shuffle=True, pin_memory=True)\n",
        "    for epoch in trange(epochs, desc='Epochs'):\n",
        "        for im, _ in tqdm(loader, total=len(dataset)//100, desc='Batches', leave=False):\n",
        "            im = im.to(DEVICE)\n",
        "            opt.zero_grad()\n",
        "            ####################################################################\n",
        "            # Fill in all missing code below (...),\n",
        "            # then remove or comment the line below to test your function\n",
        "            raise NotImplementedError(\"Please complete the train_vae function!\")\n",
        "            ####################################################################\n",
        "            loss = ... # YOUR CODE HERE (hint: use vae.elbo())\n",
        "            loss.backward()\n",
        "            opt.step()\n",
        "\n",
        "            elbo_vals.append(-loss.item())\n",
        "    vae.to('cpu')\n",
        "    vae.eval()\n",
        "    return elbo_vals\n",
        "\n",
        "# Uncomment to train\n",
        "# vae = ConvVAE(K=K)\n",
        "# elbo_vals = train_vae(vae, my_dataset, n_samples=10)\n",
        "\n",
        "# print(f'Learned sigma_x is {torch.exp(vae.log_sig_x)}')\n",
        "\n",
        "# plt.figure()\n",
        "# plt.plot(elbo_vals)\n",
        "# plt.xlabel('Batch #')\n",
        "# plt.ylabel('ELBO')\n",
        "# plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "s_FLnOxbzgIN"
      },
      "source": [
        "[*Click for solution*](https://github.com/CIS-522/course-content/blob/main/tutorials/W08_AutoEncoders_GANs/solutions/W8_Tutorial1_Solution_Ex05.py)\n",
        "\n",
        "*Example output:*  \n",
        "\n",
        "```python\n",
        "Learned sigma_x is 0.2584533393383026\n",
        "```\n",
        "\n",
        "<img alt='Solution hint 5' align='left' width=600 height=400 src=https://raw.githubusercontent.com/CIS-522/course-content/main/tutorials/W08_AutoEncoders_GANs/static/W8_Tutorial1_Solution_Ex05.png />"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "_b-uEhPf3LSb"
      },
      "source": [
        "## We can generate new images!\n",
        "\n",
        "Remember that earlier we tried sampling $\\mathbf{z} \\sim p(\\mathbf{z})$ and passing those through the `conv_ae.decode` function, and the results were ugly. The original autoencoder was never designed to work purely as a generative model.\n",
        "\n",
        "Although we didn't go into details of the derivation (see Appendix A), VAEs and the ELBO objective come from applying the logic of generative models and maximum likelihood learning to autoencoders. So: do generated images now look like plausible \"new images\" or samples from the distribution of training images?\n",
        "\n",
        "You can re-run this cell multiple times to see more examples."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "iXRy9D6e3Vu3"
      },
      "source": [
        "zs = torch.randn(1, 25, K)\n",
        "vae_images = vae.generate(zs).reshape((25,) + my_dataset_size)\n",
        "\n",
        "plt.figure(figsize=(5,5))\n",
        "for i in range(25):\n",
        "    plt.subplot(5,5,i+1)\n",
        "    plot_torch_image(vae_images[i])\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "myaU0gjj46N7"
      },
      "source": [
        "These won't be perfect, but hopefully they look a lot more like plausible \"new\" images!\n",
        "\n",
        "Common troubleshooting:\n",
        "\n",
        "- did it train? Did it get stuck? (One common way it gets stuck is by always outputting the same image – the average of the training set. Sometimes just restarting training fixes this.)\n",
        "- is $K$ too big or too small for this dataset?\n",
        "- is $p$ expressive enough? May need more hidden features or layers\n",
        "- is $q$ expressive enough? Maybe try diagonal covariance rather than isotropic. Or use [IWAE](https://arxiv.org/abs/1509.00519) loss instead.\n",
        "- Despite all the extra math, do all the $q(\\mathbf{z})$s end up being _calibrated_ to the chosen prior $p(\\mathbf{z})$? [Lots](http://approximateinference.org/2016/accepted/HoffmanJohnson2016.pdf) [of](https://arxiv.org/abs/1702.08658) [research](http://arxiv.org/abs/1711.00464) on [this](http://arxiv.org/abs/1806.06514)."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "Uvn9BzjXV823"
      },
      "source": [
        "#@title Video: Wrapping up and interesting VAE examples\n",
        "\n",
        "video = YouTubeVideo(id=\"IDTq8muSySQ\", width=854, height=480, fs=1)\n",
        "print(\"Video available at https://youtube.com/watch?v=\" + video.id)\n",
        "\n",
        "video"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "P5-HZSWcCbr3"
      },
      "source": [
        "---\n",
        "# Submit responses"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "FCJJf7OFk8SU",
        "cellView": "form"
      },
      "source": [
        "#@markdown #Run Cell to Show Airtable Form\n",
        "#@markdown ##**Confirm your answers and then click \"Submit\"**\n",
        "\n",
        "import time\n",
        "import numpy as np\n",
        "import urllib.parse\n",
        "from IPython.display import IFrame\n",
        "\n",
        "def prefill_form(src, fields: dict):\n",
        "  '''\n",
        "  src: the original src url to embed the form\n",
        "  fields: a dictionary of field:value pairs,\n",
        "  e.g. {\"pennkey\": my_pennkey, \"location\": my_location}\n",
        "  '''\n",
        "  prefill_fields = {}\n",
        "  for key in fields:\n",
        "      new_key = 'prefill_' + key\n",
        "      prefill_fields[new_key] = fields[key]\n",
        "  prefills = urllib.parse.urlencode(prefill_fields)\n",
        "  src = src + prefills\n",
        "  return src\n",
        "\n",
        "\n",
        "#autofill time if it is not present\n",
        "try: t0;\n",
        "except NameError: t0 = time.time()\n",
        "try: t1;\n",
        "except NameError: t1 = time.time()\n",
        "try: t2;\n",
        "except NameError: t2 = time.time()\n",
        "try: t3;\n",
        "except NameError: t3 = time.time()\n",
        "try: t4;\n",
        "except NameError: t4 = time.time()\n",
        "try: t5;\n",
        "except NameError: t5 = time.time()\n",
        "\n",
        "#autofill fields if they are not present\n",
        "#a missing pennkey and pod will result in an Airtable warning\n",
        "#which is easily fixed user-side.\n",
        "try: my_pennkey;\n",
        "except NameError: my_pennkey = \"\"\n",
        "try: my_pod;\n",
        "except NameError: my_pod = \"Select\"\n",
        "try: w7_upshot;\n",
        "except NameError: w7_upshot = \"\"\n",
        "try: linear_ae_vs_pca;\n",
        "except NameError: linear_ae_vs_pca = \"\"\n",
        "try: interpolation_observations;\n",
        "except NameError: interpolation_observations = \"\"\n",
        "\n",
        "\n",
        "times = [(t-t0) for t in [t1,t2,t3,t4,t5]]\n",
        "\n",
        "fields = {\"pennkey\": my_pennkey,\n",
        "          \"pod\": my_pod,\n",
        "          \"w7_upshot\": w7_upshot,\n",
        "          \"linear_ae_vs_pca\": linear_ae_vs_pca,\n",
        "          \"interpolation_observations\": interpolation_observations,\n",
        "          \"cumulative_times\": times}\n",
        "\n",
        "src = \"https://airtable.com/embed/shrs3mSM7IIIbgc8p?\"\n",
        "\n",
        "#now instead of the original source url, we do: src = prefill_form(src, fields)\n",
        "display(IFrame(src = prefill_form(src, fields), width = 800, height = 400))\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "HKn5d3CCC05w"
      },
      "source": [
        "## Feedback\n",
        "How could this session have been better? How happy are you in your group? How do you feel right now?\n",
        "\n",
        "Feel free to use the embedded form below or use this link:\n",
        "<a target=\"_blank\" rel=\"noopener noreferrer\" href=\"https://airtable.com/shrNSJ5ECXhNhsYss\">https://airtable.com/shrNSJ5ECXhNhsYss</a>"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "HIvhG6VZ8zez"
      },
      "source": [
        "display(IFrame(src=\"https://airtable.com/embed/shrNSJ5ECXhNhsYss?backgroundColor=red\", width = 800, height = 400))"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "3yqyggnJTFsv"
      },
      "source": [
        "---\n",
        "# Appendices"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "_eIXWPrUa68q"
      },
      "source": [
        "## Appendix A: Formalizing the problem and deriving the ELBO\n",
        "\n",
        "### Part A.1: maximum likelihood with neural networks\n",
        "\n",
        "Let's state clearly the goal of learning a density network:\n",
        "\n",
        "__Given:__\n",
        "\n",
        "1. A latent space $\\mathbf{z} \\in \\mathbb{R}^K$ with prior $p(\\mathbf{z})$\n",
        "2. Data points $\\{\\mathbf{x}_1, \\mathbf{x}_2, \\ldots, \\mathbf{x}_N\\}$ that live in $\\mathbb{R}^P$, drawn (iid) from some unknown distribution $p_{true}(\\mathbf{x})$: for instance, a set of training images sampled from the world, where $P$ is the number of pixels $\\times$ color channels per image (e.g. $P=1\\times 28 \\times 28 = 784$ for MNIST, and $P=3\\times 32 \\times 32 = 3072$ for CIFAR).\n",
        "3. A differentiable function (a neural network, say) with parameters $\\mathbf{w}$ that maps from $\\mathbb{R}^K$ to $\\mathbb{R}^P$, $$\\mathbf{x} = f(\\mathbf{z};\\mathbf{w}) \\, .^\\dagger$$ The decoder part of the AutoEncoder is an example of such an $f$.\n",
        "4. A \"noise model\" on $\\mathbf{x}$. Often this is simply chosen to be independent Gaussian pixel noise with $f(\\mathbf{z};\\mathbf{w})$ as the mean: $$p(\\mathbf{x}|\\mathbf{z};\\mathbf{w}) = \\mathcal{N}\\left(f(\\mathbf{z};\\mathbf{w}), \\sigma^2_x\\mathbf{I}_P\\right)$$\n",
        "where $\\mathbf{I}_P$ is the $P\\times P$ identity matrix and is used to express the assumption that noise is independent across pixels.$^*$\n",
        "\n",
        "Given all of this, a density network defines a distribution on $\\mathbf{x}$:\n",
        "$$p(\\mathbf{x};\\mathbf{w}) \\equiv \\int p(\\mathbf{z}) p(\\mathbf{x}|\\mathbf{z};\\mathbf{w}) \\, {\\rm d}\\mathbf{z} \\, .$$\n",
        "\n",
        "Now, we can succinctly state the __goal__ of learning a density network: minimize the KL divergence $KL(p_{true}(\\mathbf{x})\\,||\\,p(\\mathbf{x};\\mathbf{w}))$ or, equivalently, maximize the likelihood\n",
        "$$\\mathbf{w}^* = \\arg\\max_\\mathbf{w} \\sum_{i=1}^N \\log p(\\mathbf{x}_i;\\mathbf{w}) \\, .$$\n",
        "\n",
        "But this is hard to do and requires approximations, which leads us to the ELBO.\n",
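        "\n",
        "To get a feel for *why* it is hard: a naive Monte Carlo estimate of the marginal likelihood,\n",
        "$$p(\\mathbf{x}_i;\\mathbf{w}) \\approx \\frac{1}{S}\\sum_{s=1}^S p(\\mathbf{x}_i|\\mathbf{z}_s;\\mathbf{w}), \\qquad \\mathbf{z}_s \\sim p(\\mathbf{z}) \\, ,$$\n",
        "is unbiased but practically useless: almost all $\\mathbf{z}_s$ drawn from the prior assign vanishingly small probability to any particular $\\mathbf{x}_i$, so the estimate is dominated by rare \"lucky\" samples and has enormous variance. This is exactly the \"needle in a haystack\" problem that $q(\\mathbf{z})$ is introduced to solve.\n",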
        "\n",
        "---\n",
        "\n",
        "### Part A.2: ELBO Derivation\n",
        "\n",
        "The goal of training a density net is to maximize the (log) likelihood of points in the training set, or\n",
        "$$\\sum_{i=1}^N \\log p(\\mathbf{x}_i;\\mathbf{w}) \\, .$$\n",
        "\n",
        "We already know all about minimizing sums of losses (e.g. minibatching and SGD), so let's focus on the loss for a single data point, $\\log p(\\mathbf{x}_i;\\mathbf{w})$. With a bit of algebraic sleight of hand, we can pull out a $\\mathbf{z}$ and start turning this into something more tractable$^\\#$:\n",
        "$$\\log p(\\mathbf{x}_i) = \\log \\left[ p(\\mathbf{x}_i) \\frac{p(\\mathbf{z}|\\mathbf{x}_i)}{p(\\mathbf{z}|\\mathbf{x}_i)} \\right]\\qquad\\text{for all $\\mathbf{z}$}$$\n",
        "This step introduced a $\\mathbf{z}$ out of nowhere. Since we are effectively multiplying and dividing by $1$, this holds no matter what $\\mathbf{z}$ we plug in (we'll just ignore the possibility that $p(\\mathbf{z}|\\mathbf{x}_i)=0$). Next, we'll introduce a brand new auxiliary distribution, $q(\\mathbf{z})$, and integrate against it:\n",
        "   $$\\ldots = \\int q(\\mathbf{z}) \\log \\left[ p(\\mathbf{x}_i) \\frac{p(\\mathbf{z}|\\mathbf{x}_i)}{p(\\mathbf{z}|\\mathbf{x}_i)} \\right] {\\rm d} \\mathbf{z}$$\n",
        "This step initially seems odd: do we really get to pick _any_ $q(\\mathbf{z})$ here? It works because we took an expression that is true _for all $\\mathbf{z}$_, i.e. a constant function of $\\mathbf{z}$, and integrating a constant against any probability distribution simply returns that constant.\n",
        "\n",
        "Next, we'll sneak $q$ inside the $\\log$ using the same trick that got $p(\\mathbf{z}|\\mathbf{x}_i)$ in there, and we'll combine $p(\\mathbf{z}|\\mathbf{x}_i)p(\\mathbf{x}_i)$ into a single $p(\\mathbf{x}_i,\\mathbf{z})$:\n",
        "   $$\\ldots = \\int q(\\mathbf{z}) \\log \\left[ \\frac{p(\\mathbf{x}_i,\\mathbf{z})}{p(\\mathbf{z}|\\mathbf{x}_i)}\\frac{q(\\mathbf{z})}{q(\\mathbf{z})} \\right] {\\rm d} \\mathbf{z}$$\n",
        "If you are beginning to think that this looks a bit like KL-divergence, then you would be right! Let's complete the transformation by pulling out $KL$:\n",
        "$$\\log p(\\mathbf{x}_i) = KL\\left(q(\\mathbf{z})||p(\\mathbf{z}|\\mathbf{x}_i)\\right) + \\underbrace{\\int q(\\mathbf{z}) \\log \\left[ \\frac{p(\\mathbf{x}_i,\\mathbf{z})}{q(\\mathbf{z})} \\right] {\\rm d} \\mathbf{z}}_\\text{ELBO}$$\n",
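        "\n",
        "(To see the last step explicitly, split the $\\log$ of the product into a sum of two integrals,\n",
        "$$\\int q(\\mathbf{z}) \\log \\left[ \\frac{q(\\mathbf{z})}{p(\\mathbf{z}|\\mathbf{x}_i)} \\right] {\\rm d} \\mathbf{z} + \\int q(\\mathbf{z}) \\log \\left[ \\frac{p(\\mathbf{x}_i,\\mathbf{z})}{q(\\mathbf{z})} \\right] {\\rm d} \\mathbf{z} \\, ,$$\n",
        "and note that the first integral is, by definition, $KL(q(\\mathbf{z})||p(\\mathbf{z}|\\mathbf{x}_i))$.)\n",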
        "\n",
        "Re-arranging, notice that\n",
        "$$\\text{ELBO} = \\log p(\\mathbf{x}_i) - KL\\left(q(\\mathbf{z})||p(\\mathbf{z}|\\mathbf{x}_i)\\right) \\, .$$\n",
        "We can get a lot of intuition out of this one expression. First, maximizing the ELBO has two effects, since there are two terms on the right hand side. The first effect is to make $KL(q(z)||p(z|x))$ smaller, which means that by maximizing the ELBO, $q$ becomes a better approximation to the true posterior distribution over $z$. If $q$ were a _perfect fit_, that is if $q(z) = p(z|x)$, then maximizing the ELBO would be _equivalent_ to maximizing the thing we set out to maximize: the log likelihood of the data. Put another way: once $q$ is a good approximation to the posterior $p(z|x)$, then maximizing the ELBO makes $p(x|z)$ a better _generative_ model.\n",
        "\n",
        "The ELBO gets its name because it is a **Lower BOund** on the **Evidence**. \"Evidence\" is another term for the marginal likelihood of the data, so the log-evidence is $\\log p(\\mathbf{x}_i)$. We know that it is a lower bound because $KL$ is always non-negative. This one formula is the work-horse of nearly all variational inference:\n",
        "$$\\color{red}{\\log p(\\mathbf{x}_i) \\geq \\underbrace{\\int q(\\mathbf{z}) \\log \\left[ \\frac{p(\\mathbf{x}_i,\\mathbf{z})}{q(\\mathbf{z})} \\right] {\\rm d} \\mathbf{z}}_\\text{ELBO}}$$\n",
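        "\n",
        "One more factorization connects this to the code we wrote earlier: substituting $p(\\mathbf{x}_i,\\mathbf{z}) = p(\\mathbf{x}_i|\\mathbf{z})p(\\mathbf{z})$ inside the $\\log$ and splitting the integral gives\n",
        "$$\\text{ELBO} = \\underbrace{\\mathbb{E}_{q(\\mathbf{z})}\\left[\\log p(\\mathbf{x}_i|\\mathbf{z})\\right]}_\\text{reconstruction} - \\underbrace{KL(q(\\mathbf{z})||p(\\mathbf{z}))}_\\text{regularization} \\, ,$$\n",
        "which is exactly the `log_p_x` minus `kl_q_p` difference computed by the `elbo` function.\n",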
        "\n",
        "<!-- After all of that algebraic moving-around, you may be surprised to find that we've made some progress towards an expression that we can actually work with. It turns out that the ELBO makes many other nice and intuitive computational tricks available to us. Since the ELBO is a lower-bound on the log likelihood of the data, we _maximize_ it or, equivalently, do gradient _descent_ on the _negative_ ELBO. -->\n",
        "\n",
        "Note that other texts often derive the ELBO in fewer steps using Jensen's inequality, but arguably some of the clarity and cleverness of the whole idea is lost when done that way. Here, we see behind the curtain a bit more: we see that $q(\\mathbf{z})$ was introduced as a totally arbitrary distribution, but that we do \"better\" by making it agree more closely with $p(\\mathbf{z}|\\mathbf{x};\\mathbf{w})$. We also see where the \"Lower BOund\" comes from: the ELBO is always a lower bound on $\\log p(\\mathbf{x}_i; \\mathbf{w})$ precisely because $KL(q(\\mathbf{z})||p(\\mathbf{z}|\\mathbf{x}_i;\\mathbf{w}))$ is always non-negative. In fact, this tells us that the ELBO is **equal** to the log likelihood of the data (the bound is \"tight\") when $q(\\mathbf{z})$ is equal to the correct posterior on $\\mathbf{z}$, i.e. when $q(\\mathbf{z}) = p(\\mathbf{z}|\\mathbf{x}_i)$. This means we can think about the problem of creating \"tighter bounds\" as a problem of making $q(\\mathbf{z})$ closer to $p(\\mathbf{z}|\\mathbf{x}_i)$.\n",
        "\n",
        "---\n",
        "\n",
        "$^\\dagger$ A notational convention is that $p(a|b)$ is used when $a$ and $b$ are both random variables and $a$ is \"conditioned on\" $b$, while $p(a;c)$ is used when $c$ is not a random variable, but a parameter controlling the shape of the distribution. This is sometimes written $p_c(a)$.\n",
        "\n",
        "$^*$ In practice, we don't explicitly use $\\mathbf{I}_P$. The assumption that pixel noise is independent instead appears as a sum of log-likelihoods per pixel.\n",
        "\n",
        "$^\\#$ these derivations drop the \"$\\mathbf{w}$\" from all $p$s just to reduce clutter."
      ]
    },
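    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The \"tightness\" claim can be checked numerically on a toy model where everything is Gaussian and the exact posterior is known in closed form. This is an illustrative sketch (the toy model, variable names, and sample count are arbitrary choices, not part of the derivation): with $z \\sim \\mathcal{N}(0,1)$ and $x|z \\sim \\mathcal{N}(z,1)$, the evidence is $p(x) = \\mathcal{N}(0, 2)$ and the posterior is $p(z|x) = \\mathcal{N}(x/2, 1/2)$. A Monte Carlo ELBO with an arbitrary $q$ falls below $\\log p(x)$, while $q$ equal to the posterior matches $\\log p(x)$."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "import torch\n",
        "from torch.distributions import Normal\n",
        "\n",
        "torch.manual_seed(0)\n",
        "\n",
        "# Toy model: z ~ N(0, 1), x | z ~ N(z, 1).\n",
        "x = torch.tensor(1.5)\n",
        "prior = Normal(0.0, 1.0)\n",
        "log_evidence = Normal(0.0, 2.0 ** 0.5).log_prob(x)   # p(x) = N(0, 2)\n",
        "posterior = Normal(x / 2, 0.5 ** 0.5)                # exact p(z | x)\n",
        "\n",
        "def elbo(q, n=100000):\n",
        "    # Monte Carlo estimate of E_q[log p(x|z) + log p(z) - log q(z)]\n",
        "    z = q.sample((n,))\n",
        "    return (Normal(z, 1.0).log_prob(x) + prior.log_prob(z) - q.log_prob(z)).mean()\n",
        "\n",
        "print(elbo(Normal(0.0, 1.0)).item())  # loose: strictly below log p(x)\n",
        "print(elbo(posterior).item())         # tight: equals log p(x)\n",
        "print(log_evidence.item())"
      ],
      "execution_count": null,
      "outputs": []
    },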
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "jDXSWpZoqlxb"
      },
      "source": [
        "## Appendix B: Tricks\n",
        "\n",
        "A few \"tricks\" behind VAEs had a particularly big impact on the field, and in this lesson we haven't delved into all of their interesting and non-obvious details.\n",
        "\n",
        "1. The first \"trick\" was to recognize that the ELBO contains a \"reconstruction\" term and a \"regularization\" term, and that this naturally maps onto a kind of regularized auto-encoder. The original VAE paper is \"Auto-Encoding Variational Bayes\" because Variational Bayes and the ELBO had existed previously, but this was the first time it was built end-to-end with neural networks.\n",
        "\n",
        "2. The second \"trick\" we introduced above without talking about it is the __reparameterization trick__. When we computed the ELBO, we simply called\n",
        "\n",
        "        zs = rsample(phi, n)\n",
        "        elbo = log_p_x(...) - kl_q_p(...)\n",
        "\n",
        "    where each `...` depended on the sampled `zs`. Something sneaky is happening inside `rsample` which allows us to use Monte Carlo estimates of $\\mathbb{E}_{q(\\mathbf{z};\\phi)}[\\ldots]$ (as in the ELBO) but then take gradients of all of it with respect to $\\phi$. Inside the `torch.distributions` package, you'll find both `sample` and `rsample` functions. The key difference is that you can take gradients through `rsample` but not through `sample`. Magic! (The \"r\" in \"rsample\" stands for \"reparameterized sample\".)\n",
        "\n",
        "    For some extra history, before this simple trick came along, people generally thought of taking gradients with respect to $\\phi$ in this kind of example as a hard problem. Keyword to search: score function estimators. Or see [here](http://blog.shakirm.com/2015/10/machine-learning-trick-of-the-day-4-reparameterisation-tricks/) and [here](http://blog.shakirm.com/2015/11/machine-learning-trick-of-the-day-5-log-derivative-trick/) for some blog posts on the topic.\n",
        "\n",
        "3. The third \"trick\" was when we stopped taking gradients of $\\phi$ directly and instead used a __recognition model__ to map from $\\mathbf{x}$ to $\\phi$. This trick is also called \"amortized inference\" since the recognition model learns _on average_ to produce an estimate of $\\phi$ for each $\\mathbf{x}$."
      ]
    },
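    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The difference between `sample` and `rsample` is easy to see directly in `torch.distributions`. A minimal sketch (the loss here is just a stand-in for a Monte Carlo ELBO term, not the one used elsewhere in this notebook):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "import torch\n",
        "from torch.distributions import Normal\n",
        "\n",
        "# Variational parameters phi = (mu, log_sigma), which we want gradients for.\n",
        "mu = torch.tensor(0.5, requires_grad=True)\n",
        "log_sigma = torch.tensor(0.0, requires_grad=True)\n",
        "q = Normal(mu, log_sigma.exp())\n",
        "\n",
        "# rsample draws z = mu + sigma * eps with eps ~ N(0, 1), so the sample is a\n",
        "# differentiable function of phi and gradients flow through it.\n",
        "z = q.rsample((1000,))\n",
        "loss = (z ** 2).mean()   # stand-in for a Monte Carlo ELBO term\n",
        "loss.backward()\n",
        "print(mu.grad, log_sigma.grad)          # both populated\n",
        "\n",
        "# sample() is detached from the graph; backward() through it would fail.\n",
        "print(q.sample((1000,)).requires_grad)  # False"
      ],
      "execution_count": null,
      "outputs": []
    },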
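    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A recognition model can be sketched as a small network that maps each $\\mathbf{x}$ to its own $\\phi = (\\mu, \\log\\sigma)$ in a single forward pass. This is a hypothetical minimal architecture (the layer sizes and names are our own choices, not the ones used elsewhere in this notebook):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "import torch\n",
        "import torch.nn as nn\n",
        "\n",
        "class RecognitionModel(nn.Module):\n",
        "    # Maps x to variational parameters phi = (mu, log_sigma) in one pass,\n",
        "    # instead of optimizing a separate phi for every data point.\n",
        "    def __init__(self, x_dim=784, h_dim=128, z_dim=15):\n",
        "        super().__init__()\n",
        "        self.hidden = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())\n",
        "        self.mu = nn.Linear(h_dim, z_dim)\n",
        "        self.log_sigma = nn.Linear(h_dim, z_dim)\n",
        "\n",
        "    def forward(self, x):\n",
        "        h = self.hidden(x)\n",
        "        return self.mu(h), self.log_sigma(h)\n",
        "\n",
        "enc = RecognitionModel()\n",
        "x = torch.randn(32, 784)                          # fake batch standing in for images\n",
        "mu, log_sigma = enc(x)                            # per-example phi, amortized\n",
        "z = mu + log_sigma.exp() * torch.randn_like(mu)   # reparameterized sample\n",
        "print(mu.shape, z.shape)"
      ],
      "execution_count": null,
      "outputs": []
    },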
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ILEgdYHaeRK-"
      },
      "source": [
        "## Appendix C: fraction of variance, eigenspectra, and selecting $K$\n",
        "\n",
        "**tl;dr** one heuristic for choosing $K$ is to look at how much of the total variance in the data lies in the first $K$ principal components. But despite comparable variance-vs-K curves, you'll find that MNIST digits are legible with $K \\approx 15$ while CIFAR images are unrecognizable until about $K \\approx 80$. Reality is more complicated than percent variance explained!"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "4g9HMk2Qw3_g"
      },
      "source": [
        "Let's pause to think about our choice of the dimensionality of the bottleneck layer, $\\mathbf{h}\\in\\mathbb{R}^K$. Dimensionality reduction by PCA works by projecting images onto the eigenvectors of the pixel covariance that have the largest eigenvalues. These eigenvalues, in turn, represent the amount of variance in the data along each of those vectors. So, we can visualize a rough, *linear* estimate of the intrinsic dimensionality of our dataset by looking at its *eigenspectrum*.\n",
        "\n",
        "Imagine we have a 3-dimensional dataset, and the eigenvalues of the covariance are, in descending order, $[3, 2, 1]$. Then the total variance would be $3+2+1=6$, and we would say that the first principal component accounts for $3/6=50\\%$ of the total variance, while the first $K=2$ principal components account for a total of $(3+2)/6=83.3\\%$ of the total variance.\n",
        "\n",
        "MNIST is $28\\times 28 = 784$-dimensional, and CIFAR is $32\\times 32\\times 3=3072$-dimensional. The following plot estimates the _fraction_ of that variance distributed across the first $K$ modes."
      ]
    },
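    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The three-eigenvalue example takes only a couple of lines to check in code:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {},
      "source": [
        "import torch\n",
        "\n",
        "# Eigenvalues of the toy 3-dimensional covariance, in descending order.\n",
        "eigvals = torch.tensor([3., 2., 1.])\n",
        "frac = torch.cumsum(eigvals, dim=0) / eigvals.sum()\n",
        "print(frac)  # tensor([0.5000, 0.8333, 1.0000])"
      ],
      "execution_count": null,
      "outputs": []
    },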
    {
      "cell_type": "code",
      "metadata": {
        "id": "8llUJ9kkfKdX"
      },
      "source": [
        "def plot_fraction_variance_eigenspectrum(cov, max_K=200, annotate_K=None, ax=None):\n",
        "    ax = plt.gca() if ax is None else ax\n",
        "    _, s, _ = torch.svd_lowrank(cov, q=max_K)\n",
        "    rank = torch.arange(max_K+1)\n",
        "    total_variance = cov.diag().sum()\n",
        "    frac_variance = torch.cat([torch.zeros((1,)), torch.cumsum(s, dim=0) / total_variance])\n",
        "    line = ax.plot(rank[:max_K], frac_variance[:max_K])\n",
        "    if annotate_K is not None:\n",
        "        y = frac_variance[annotate_K]\n",
        "        ax.plot([annotate_K, annotate_K], [0, y], '-r')\n",
        "        ax.plot([0, annotate_K], [y, y], '-r')\n",
        "        ax.text(x=annotate_K+.5, y=y-.03, s=f'{100*y:.1f}% at K={annotate_K}')\n",
        "    return line\n",
        "\n",
        "plot_fraction_variance_eigenspectrum(cov, annotate_K=K)\n",
        "plt.ylim([0., 1.])\n",
        "plt.grid()\n",
        "plt.xlabel('number of hidden dimensions (K)')\n",
        "plt.ylabel('fraction of variance accounted for')\n",
        "plt.title(f'% variance in top K dimensions for {my_dataset_name}')\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Xbxp67wujm_N"
      },
      "source": [
        "**Discussion prompts:**\n",
        "\n",
        "1. Compare the curves between MNIST and CIFAR. Can you say which dataset is \"higher dimensional\" from this?\n",
        "2. How recognizable are reconstructed images using a $K$ that accounts for about 50% of the variance? 80%? 99%?\n",
        "3. Estimate the minimum $K$ needed for recognizable (linear) reconstructions. How does it differ between MNIST and CIFAR?"
      ]
    }
  ]
}