{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "name": "W8 Homework",
      "provenance": [],
      "include_colab_link": true
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "accelerator": "GPU"
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/CIS-522/course-content/blob/main/tutorials/W08_AutoEncoders_GANs/W8_Homework.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "TkhEz6Y17Q9c"
      },
      "source": [
        "# CIS-522 Week 8 Homework\n",
        "\n",
        "\n",
        "**Instructor:** Konrad Kording\n",
        "\n",
        "**Content Creators:** Richard Lange, Arash Ash"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "7TEunJDE7wC7",
        "cellView": "form"
      },
      "source": [
        "#@markdown What is your Pennkey and pod? (text, not numbers, e.g. bfranklin)\n",
        "my_pennkey = 'bfranklin' #@param {type:\"string\"}\n",
        "my_pod = 'euclidean-wombat' #@param ['Select', 'euclidean-wombat', 'sublime-newt', 'buoyant-unicorn', 'lackadaisical-manatee','indelible-stingray','superfluous-lyrebird','discreet-reindeer','quizzical-goldfish','ubiquitous-cheetah','nonchalant-crocodile','fashionable-lemur','spiffy-eagle','electric-emu','quotidian-lion','astute-jellyfish', 'quantum-herring']\n",
        "\n",
        "# start timing\n",
        "import time\n",
        "try:t0;\n",
        "except NameError: t0 = time.time()\n"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6DQZZg1Fhj5y"
      },
      "source": [
        "We **strongly** recommend that you keep a separate document offline with your answers, and paste them in when you're ready to submit. Colab may reset and clear your notebook after a period of inactivity."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "8Q001COb70H5"
      },
      "source": [
        "---\n",
        "#Setup"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "4czoj8Ll725j"
      },
      "source": [
        "from IPython.display import IFrame"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "iSAYQpu475c9"
      },
      "source": [
        "# Part 1: build a VAE!"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "R_PIzQDB7_0L"
      },
      "source": [
        "Picking up where tutorial 1 left off: complete W8T1 Exercise 5. You may not be able to run multiple colab notebooks at once, so after completing the tutorial notebook, record some information for this assignment:\n",
        "\n",
        "1. Take a screenshot of the generated images. At the end of this homework, you'll upload your screenshot to airtable.\n",
        "2. Make a note of any changes you made to the architecture - you're encouraged (but not required) to experiment!\n",
        "3. Make a note of the value of the ELBO at the end of training.\n",
        "3. Make a note of `vae.sig_x` at the end of training."
      ]
    },
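    {
      "cell_type": "markdown",
      "metadata": {
        "id": "elbo-reminder-md"
      },
      "source": [
        "As an informal reminder (see Tutorial 1 for the exact form used in the code), the ELBO being maximized is\n",
        "\n",
        "$$\\mathcal{L}(x) = \\mathbb{E}_{q(z|x)}\\left[\\log p(x|z)\\right] - \\mathrm{KL}\\left(q(z|x)\\,\\|\\,p(z)\\right),$$\n",
        "\n",
        "i.e. the expected reconstruction log-likelihood minus the KL divergence from the approximate posterior to the prior. Assuming the tutorial's Gaussian decoder, `vae.sig_x` is the learned standard deviation of $p(x|z)$, which trades off reconstruction sharpness against the KL term."
      ]
    },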
    {
      "cell_type": "code",
      "metadata": {
        "id": "-ldUhAXuFpb7",
        "cellView": "form"
      },
      "source": [
        "#@markdown Which dataset did you train your VAE on?\n",
        "vae_dataset = \"other\" #@param [\"MNIST\", \"CIFAR\", \"other\"]\n",
        "#@markdown What dimensionality did you use for z (what is K)?\n",
        "vae_k_hidden = 20 #@param\n",
        "#@markdown How many convolutional filters did you use in each layer?\n",
        "vae_num_filters = 32 #@param\n",
        "#@markdown What value did the ELBO converge to?\n",
        "vae_elbo = 0.0 #@param\n",
        "#@markdown After training, what was the value of `vae.sig_x`?\n",
        "vae_sig_x = 1.0 #@param\n",
        "#@markdown Did you experiment with the architecture in any other way? If so, what did you change?\n",
        "vae_changes = \"\" #@param{type:\"string\"}\n",
        "\n",
        "try:t1;\n",
        "except NameError: t1 = time.time()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "9g1wQq118Aac"
      },
      "source": [
        "# Part 2: Know-a-pod"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "fwtrWtDy8HzK"
      },
      "source": [
        " Discuss with two other members of your pod. What is an accomplishment that they are proud of, and why? (~100 words each)"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "63bbOM9hJkVC",
        "cellView": "form"
      },
      "source": [
        "know_a_pod_1 = \"\" #@param{type:\"string\"}\n",
        "know_a_pod_2 = \"\" #@param{type:\"string\"}\n",
        "\n",
        "try:t2;\n",
        "except NameError: t2 = time.time()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Sx1UQGNA8Wxc"
      },
      "source": [
        "# Part 3: choose your own adventure\n",
        "\n",
        "Here you have a choice: either do Part 3a (if you lean towards theory and feel like reading a paper) or do Part 3b (if you lean towards engineering and want to do some data wrangling in Kaggle)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "D4C-XyXe8rsU"
      },
      "source": [
        "## Part 3a: the bleeding edge of research\n",
        "\n",
        "You can choose between 3a and 3b.\n",
        "\n",
        "The field of \"deep generative models\" quite an active area of research! VAEs and GANs were both invented around 2014, and since then a steady stream of improvements and extensions have been developed, including thinkgs like WGAN and CycleGAN, which you've already seen.\n",
        "\n",
        "To get a sense of where the field is now, pick one of the following papers from the past few years. These papers can be quite dense, so it's not crucial that you understand every single step. With a bit of practice, you'll be able to read these papers and see at a glance what it's about at a high level. Don't spend more than an hour or two on this. \n",
        "\n",
        "Pick a paper from the list below and be prepared to answer the following high-level questions about it:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "3SgY97Lb8tOW",
        "cellView": "form"
      },
      "source": [
        "#@markdown Which paper did you choose?\n",
        "paper_choice = \"Lastname et al (YEAR). Title.\" #@param{type:\"string\"}\n",
        "#@markdown In your own words, what is the problem that is addressed by this paper?\n",
        "paper_problem_addressed = \"(skip if doing 3b)\" #@param{type:\"string\"}\n",
        "#@markdown In your own words, what solution do the authors propose?\n",
        "paper_proposed_solution = \"(skip if doing 3b)\" #@param{type:\"string\"}\n",
        "#@markdown In your own words, how do they evaluate their solution to demonstrate that it works?\n",
        "paper_how_evaluated = \"(skip if doing 3b)\" #@param{type:\"string\"}\n",
        "#@markdown Any other thoughts? (What part still doesn't make sense? Was something particularly surprising? How might you extend it?)\n",
        "paper_miscellaneous = \"(skip if doing 3b)\" #@param{type:\"string\"}\n",
        "\n",
        "try:t3;\n",
        "except NameError: t3 = time.time()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "VJIpH42OSe4c"
      },
      "source": [
        "__List of paper suggestions:__\n",
        "\n",
        "* Eslami et al (2018). \"Neural scene representation and rendering\" https://science.sciencemag.org/content/360/6394/1204.full\n",
        "* Brock et al (2018). \"Large Scale GAN Training for High Fidelity Natural Image Synthesis\" https://arxiv.org/abs/1809.11096\n",
        "* Gulrajani et al (2017). \"Improved Training of Wasserstein GANs\" https://arxiv.org/abs/1704.00028\n",
        "* Zhao et al (2017). \"InfoVAE: Information Maximizing Variational Autoencoders\" http://arxiv.org/abs/1706.02262\n",
        "* van den Oord et al (2017). \"Neural Discrete Representation Learning\" https://arxiv.org/abs/1711.00937\n",
        "* Higgins et al (2017). \"beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework\" https://openreview.net/forum?id=Sy2fzU9gl\n",
        "* Larsen et al (2016) \"Autoencoding beyond pixels using a learned similarity metric\" http://proceedings.mlr.press/v48/larsen16.html\n",
        "* Chen et al (2016) \"InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets\" https://arxiv.org/abs/1606.03657\n",
        "* Gregor et al (2015) \"DRAW: A Recurrent Neural Network For Image Generation\" http://proceedings.mlr.press/v37/gregor15.html\n",
        "* Dziugaite et al (2015) \"Training generative neural networks via Maximum Mean Discrepancy optimization\" https://arxiv.org/abs/1505.03906\n",
        "* Find one of your own! Try, for instance, searching Google Scholar within citing articles of the [original VAE paper](https://scholar.google.com/scholar?cites=10486756931164834716) or the [original GAN paper](https://scholar.google.com/scholar?cites=8618380841735941249)."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "BisrF3RmXVeT"
      },
      "source": [
        "## Part 3b: Continue Kaggle\n",
        "\n",
        "You can choose between 3a and 3b.\n",
        "\n",
        "If you enjoyed the week 3 Kaggle competition for solving the scrambled animal faces, you would definitely enjoy combining it with GANs here.\n",
        "\n",
        "In the [week 3 homework](https://colab.research.google.com/github/CIS-522/course-content/blob/main/tutorials/W3_MLPs/W3_Homework.ipynb), you used a fully-connected MLP to classify animal faces from scrambled pixels. In weeks 6-7, you saw that convolutional networks are far superior to MLPs on images since they detect spatial features better. Of course, convolutional networks are useless on the scrambled images. In this exercise, you will use a CycleGAN (or another method of your choosing) to un-scramble images, then use a convolutional network to classify the animal face images. If all goes well, you should see improvement in performance compared to your week 3 MLP!\n",
        "\n",
        " **Policy**:  Same as before, no methods are off the table, so feel free to unscramble images in any way imaginable, so we'll have a fun discussion later on!\n",
        "\n",
        "**Submission**: After unscrambling, You could train with any ConvNet (even pre-trained) and submit to Kaggle with the week3 submission code.\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "CaZVd3xTpSjW"
      },
      "source": [
        "# imports\n",
        "import os\n",
        "import csv\n",
        "import json\n",
        "import random\n",
        "import pathlib\n",
        "\n",
        "import torch\n",
        "import numpy as np\n",
        "import pandas as pd\n",
        "import matplotlib.pyplot as plt\n",
        "\n",
        "import torch.nn as nn\n",
        "import torch.optim as optim\n",
        "import torch.nn.functional as F\n",
        "from torch.utils.data import DataLoader, TensorDataset\n",
        "import torchvision.transforms as transforms\n",
        "from torchvision.datasets import ImageFolder\n",
        "\n",
        "dev = 'cuda' if torch.cuda.is_available() else 'cpu'"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "doUolX5z7iK0"
      },
      "source": [
        "### Kaggle Setup (same as Week 3)\n",
        "\n",
        "If you already set up your Kaggle account in Week 3, you can skip to uploading your `kaggle.json` file to this notebook.\n",
        "\n",
        "To use the Kaggle API, sign up for a Kaggle account at https://www.kaggle.com. Then go to the 'Account' tab of your user profile (`https://www.kaggle.com/<username>/account`) and select 'Create API Token'. This will trigger the download of `kaggle.json`, a file containing your API credentials. Place this file in Colab Files on the left menu.\n",
        "\n",
        "Then join the competition and accept the rules at https://www.kaggle.com/c/permuted-animal-faces/rules.\n",
        "Now we can handle the submisions here using the Kaggle API:\n",
        "\n",
        "![image.png]()"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "lfDMhvNwzTxx"
      },
      "source": [
        "# registering your API key\r\n",
        "!mkdir ~/.kaggle\r\n",
        "!cp kaggle.json ~/.kaggle/\r\n",
        "!chmod 600 ~/.kaggle/kaggle.json"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "cellView": "form",
        "id": "3azbU-7H7Szn"
      },
      "source": [
        "# @title Install Kaggle API in this notebook (run me)\n",
        "# %%capture\n",
        "!pip install --upgrade --force-reinstall --no-deps kaggle==1.5.10\n",
        "!kaggle -v"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "6D0MvzgrzioP",
        "cellView": "form"
      },
      "source": [
        "#@title Download training and test data for permuted animal faces\r\n",
        "!kaggle competitions download -c permuted-animal-faces\r\n",
        "!unzip permuted-animal-faces.zip"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "YqpESaTlE9U5",
        "cellView": "form"
      },
      "source": [
        "# @title Download the original (not permuted) animal faces dataset\n",
        "%%capture\n",
        "!rm -r AnimalFaces32x32/\n",
        "!git clone https://github.com/arashash/AnimalFaces32x32\n",
        "!rm -r afhq/\n",
        "!unzip ./AnimalFaces32x32/afhq_32x32.zip \n",
        "\n",
        "def get_data_loaders(batch_size):\n",
        "  # define the transform done only during training\n",
        "  augmentation_transforms = [transforms.RandomRotation(10),\n",
        "                             transforms.RandomHorizontalFlip()]\n",
        "\n",
        "  # define the transform done in training and testing (after augmentation)\n",
        "  preprocessing_transforms = [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]\n",
        "  \n",
        "  # compose them together\n",
        "  train_transform = transforms.Compose(augmentation_transforms + preprocessing_transforms)\n",
        "  test_transform = transforms.Compose(preprocessing_transforms)\n",
        "\n",
        "  # using pathlib to be compatible with all OS's\n",
        "  data_path = pathlib.Path('.')/'afhq'\n",
        "\n",
        "  # define the dataset objects (they can load one by one)\n",
        "  img_train_dataset = ImageFolder(data_path/'train', transform=train_transform)\n",
        "  img_test_dataset = ImageFolder(data_path/'val', transform=test_transform)\n",
        "\n",
        "  # define the dataloader objects (they can load batch by batch)\n",
        "  img_train_loader = DataLoader(img_train_dataset, batch_size=batch_size, pin_memory=True,\n",
        "                                shuffle=True, num_workers=0, drop_last=True)\n",
        "  # num_workers can be set to higher if running on Colab Pro TPUs to speed up,\n",
        "  # with more than one worker, it will do multithreading to queue batches\n",
        "  img_test_loader = DataLoader(img_test_dataset, batch_size=batch_size,\n",
        "                          shuffle=False, num_workers=0)\n",
        "  \n",
        "  return img_train_loader, img_test_loader"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "N_AkNbIqGp2R"
      },
      "source": [
        "X_animal_faces, _ = get_data_loaders(batch_size=128)\n",
        "\n",
        "im_batch, label_batch = next(iter(X_animal_faces))\n",
        "\n",
        "print(f\"X_animal_faces is a DataLoader producing images of size {im_batch.size()} and range [{im_batch.min()},{im_batch.max()}]\")\n",
        "\n",
        "plt.figure(figsize=(10,4))\n",
        "for i in range(10):\n",
        "    plt.subplot(2,5,i+1)\n",
        "    plt.imshow(im_batch[i].reshape(3,32,32).permute(1,2,0)/2+0.5)\n",
        "    plt.xticks([])\n",
        "    plt.yticks([])\n",
        "    plt.title(f'y={label_batch[i]}')\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "l7cbuL-r8lOe"
      },
      "source": [
        "## Create your model\n",
        "\n",
        "Note: the two-step skeleton below is just a suggestion. Using a CycleGAN, as in [Week 8 Tutorial 2](https://colab.research.google.com/github/CIS-522/course-content/blob/main/tutorials/W08_AutoEncoders_GANs/W8_Tutorial2.ipynb#scrollTo=RKXyKSd98uCG), is encouraged. This is VAE and GAN week, after all!\n",
        "\n",
        "First, load the Kaggle training data, which is pairs of permuted images and their labels:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "fE1ZZKe8-EdG"
      },
      "source": [
        "df = pd.read_csv('train.csv')\n",
        "df.head()\n",
        "\n",
        "data = np.array(df)\n",
        "X_train = torch.tensor(data[:, 1:]).float()/255\n",
        "y_train = torch.tensor(data[:, 0]).long()\n",
        "\n",
        "kaggle_dataset = TensorDataset(X_train, y_train)\n",
        "kaggle_loader = DataLoader(kaggle_dataset, batch_size=128, shuffle=True)\n",
        "\n",
        "im_batch, label_batch = next(iter(kaggle_loader))\n",
        "\n",
        "print(f\"kaggle_loader is a DataLoader producing images of size {im_batch.size()} and range [{im_batch.min()},{im_batch.max()}]\")\n",
        "\n",
        "plt.figure(figsize=(10,4))\n",
        "for i in range(10):\n",
        "    plt.subplot(2,5,i+1)\n",
        "    plt.imshow(im_batch[i].reshape(3,32,32).permute(1,2,0))\n",
        "    plt.xticks([])\n",
        "    plt.yticks([])\n",
        "    plt.title(f'y={label_batch[i]}')\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Rm4GAE8kAH5J"
      },
      "source": [
        "You'll need to write your own CycleGAN model, loss function, and training loop! Much of this can in principle be copied from tutorial code in earlier weeks. The more you try to write on your own from memory or from the Pytorch documentation, the more you'll learn.\n",
        "\n",
        "Suggestion: train a CycleGAN to un-scramble `X_train` from 3072-dimensional vectors to $3\\times 32 \\times 32$ images."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "qFvLLat1ASl7"
      },
      "source": [
        "# YOUR CODE HERE (e.g. train a CycleGAN to unscramble images)\n",
        "\n",
        "def unscramble_pixels(X_scrambled):\n",
        "    X_unscrambled = X_scrambled # YOUR CODE HERE (apply your trained unscrambler)\n",
        "    return X_unscrambled"
      ],
      "execution_count": null,
      "outputs": []
    },
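    {
      "cell_type": "markdown",
      "metadata": {
        "id": "unscramble-baseline-md"
      },
      "source": [
        "If you want a starting point before committing to a full CycleGAN, here is a minimal baseline sketch. It rests on an unverified assumption: that the scrambling is a single fixed pixel permutation shared by every image, in which case the permutation can be estimated by rank-matching the mean scrambled image against the mean unscrambled image. This is only an illustration, not the assigned method."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "unscramble-baseline-code"
      },
      "source": [
        "# Baseline sketch (assumes one fixed permutation across the whole dataset)\n",
        "def estimate_permutation(mean_scrambled, mean_clean):\n",
        "    # Both inputs are flat tensors of length 3*32*32 = 3072.\n",
        "    # Match pixel positions by the rank of their mean intensity: the k-th\n",
        "    # dimmest scrambled pixel is paired with the k-th dimmest clean pixel.\n",
        "    order_s = torch.argsort(mean_scrambled)\n",
        "    order_c = torch.argsort(mean_clean)\n",
        "    perm = torch.empty_like(order_s)\n",
        "    perm[order_s] = order_c  # scrambled index -> estimated clean index\n",
        "    return perm\n",
        "\n",
        "# Hypothetical usage: given perm, unscramble a batch of flat images with\n",
        "# X_unscrambled = torch.zeros_like(X_scrambled); X_unscrambled[:, perm] = X_scrambled"
      ],
      "execution_count": null,
      "outputs": []
    },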
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "JM-enTgoOsGi"
      },
      "source": [
        "Use this block to visualize the unscrambled images. Did it seem to work? Do they look more like actual animal faces? "
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "EgvJ0-M2OpiR"
      },
      "source": [
        "im_batch, label_batch = next(iter(kaggle_loader))\n",
        "im_unscrambled = unscramble_pixels(im_batch)\n",
        "\n",
        "plt.figure(figsize=(10,4))\n",
        "for i in range(10):\n",
        "    plt.subplot(2,5,i+1)\n",
        "    plt.imshow(im_unscrambled[i].reshape(3,32,32).permute(1,2,0))\n",
        "    plt.xticks([])\n",
        "    plt.yticks([])\n",
        "    plt.title(f'y={label_batch[i]}')\n",
        "plt.suptitle('Images after unscrambling')\n",
        "plt.show()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "WhKbBow1Aqta"
      },
      "source": [
        "Suggestion 2: Train a convnet (or other model of your choice)"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Eo_bOaR-AxmS"
      },
      "source": [
        "class MyModel(nn.Module):\n",
        "    def __init__(self):\n",
        "        super(MyModel, self).__init__()\n",
        "\n",
        "        # YOUR CODE HERE - define whatever model you like\n",
        "        self.dummy_layer = nn.Linear(3072, 3)\n",
        "    \n",
        "    def forward(self, x):\n",
        "        # YOUR CODE HERE - run your model forward\n",
        "        return self.dummy_layer(x.view(-1, 3072))\n",
        "\n",
        "net = MyModel()\n",
        "net.train()\n",
        "net.to(dev)\n",
        "\n",
        "CE_loss = nn.CrossEntropyLoss()\n",
        "opt = optim.Adam(net.parameters(), lr=0.001)\n",
        "losses = []\n",
        "for epoch in range(10):\n",
        "    for im_batch, label_batch in kaggle_loader:\n",
        "        opt.zero_grad()\n",
        "        im_unscrambled = unscramble_pixels(im_batch)\n",
        "        loss = CE_loss(net(im_unscrambled.to(dev)), label_batch.to(dev))\n",
        "        loss.backward()\n",
        "        opt.step()\n",
        "        losses.append(loss.item())\n",
        "\n",
        "plt.plot(losses)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "1Yovf8OM8VM1"
      },
      "source": [
        "### Submit your model to Kaggle\n",
        "\n",
        "Note that you can submit maximum 20 times a day. Participants will need to wait until the next UTC day after submitting the maximum number of daily submissions.\n",
        "\n",
        "And finally you can hand select 2 submissions among your previous ones to be used for the final ranking."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "G9Vzhqtr7c6Y"
      },
      "source": [
        "# Predict test set\r\n",
        "df = pd.read_csv('test.csv')\r\n",
        "data = np.array(df)\r\n",
        "X_test = torch.tensor(data).float()/255\r\n",
        "n_test = X_test.size()[0]\r\n",
        "X_test_unscrambled = unscramble_pixels(X_test).reshape(n_test, 3, 32, 32)\r\n",
        "\r\n",
        "net.eval()\r\n",
        "y_pred = net(X_test_unscrambled.to(dev))\r\n",
        "labels_pred = torch.argmax(y_pred, dim=1)"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "rAOeoZH48OuJ"
      },
      "source": [
        "# create the submission file\r\n",
        "header = ['ImageId', 'Label']\r\n",
        "with open('submission.csv', 'w', newline='') as file:\r\n",
        "    writer = csv.writer(file)\r\n",
        "    writer.writerow(header)\r\n",
        "    \r\n",
        "    for i in range(len(labels_pred)):\r\n",
        "      writer.writerow([i+1] + [labels_pred[i].item()])"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "XEojRbYH9adp"
      },
      "source": [
        "# submit\r\n",
        "!kaggle competitions submit permuted-animal-faces -f submission.csv -m \"Week 8 homework, custom model\""
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7_09MZ0PCFYv"
      },
      "source": [
        "In addition to submitting to and climbing up the Kaggle leaderboards, tell us here a bit about the architecture you used and design choices you made. Bullet points are fine, but give enough detail that a classmate could read this and replicate your model."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "DrVY1NGm7VL-"
      },
      "source": [
        "kaggle_description = '''\n",
        "Tell us a bit about your model here. (Did you use a CycleGAN? How many layers? Was it Convolutional? What about the classifier?)\n",
        "'''\n",
        "\n",
        "try: t4;\n",
        "except NameError: t4 = time.time()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "4H1sYq198IPq"
      },
      "source": [
        "# Part 4: Deep Learning, Deeper Thinking\n",
        "<img width=\"500\" src=\"https://raw.githubusercontent.com/CIS-522/course-content/main/tutorials/W08_AutoEncoders_GANs/static/cake_clipart.png\" />\n",
        "\n",
        "In this section, we will take a deeper dive into generative models and their implications for society. We will explore how these models may be used and misused, and what that means for engineers, policymakers, and end-users.\n",
        "\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ki6ZAa8pYI7L"
      },
      "source": [
        "## Part 4.1 Starstruck by Generative Models\n",
        "\n",
        "Let's begin by reading [this OpenAI blog post](https://openai.com/blog/generative-models/) as a primer on generative models.\n",
        "\n",
        "Then, read [This Quanta article on how GANs are changing other science domains](https://www.quantamagazine.org/how-artificial-intelligence-is-changing-science-20190311/), particularly astronomy.\n",
        "\n",
        "Recall the [cake analogy](https://medium.com/syncedreview/yann-lecun-cake-analogy-2-0-a361da560dae) from T1.\n",
        "\n",
        "**In 200-500 words, please answer the following:**\n",
        "* Why might we (engineers, scientists, the public, policymakers) want to have good generative models? What might be a (set of) goals that we aspire to achieve through the development of generative models? \n",
        "* How do GANs and VAEs relate to LeCun's cake analogy? Have we realized the cake or only a portion of it - and how might GANs move us closer (or further) from this goal? Is the GAN/VAE-flavor of unsupervised learning the kind of learning we're interested in, or is it simply a distraction?\n",
        "* What are some societal risks involved with the use of generative models in science? What about deployment to the general public?"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Z-s6R6fhhZF9"
      },
      "source": [
        "generative_initial_reaction = '''\n",
        "Your response here (200-500 words).\n",
        "'''\n",
        "\n",
        "try:t5;\n",
        "except NameError: t5 = time.time()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5VXjke88V1Qc"
      },
      "source": [
        "## Part 4.2 (Deep)Fake it Till You Make It\n",
        "\n",
        "\n",
        "Now, read the following:\n",
        "* [This Guardian article on DeepFakes and How to Spot Them](https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them)\n",
        "\n",
        "\n",
        "Then, test yourself: \n",
        "* [Can you spot the Deepfake?](https://detectfakes.media.mit.edu/) try >10 examples and remember how many you got right. \n",
        "\n",
        "\n",
        "**In 300-500 words, answer the following questions:**\n",
        "* How did you do? Were you able to detect the DeepFakes?\n",
        "* As researchers and engineers, one of our roles is to communicate effectively and interface with journalists and the public. Do you think this article did a good job of presenting the risks of Deepfakes and generative models? If yes, describe one area that you felt the article did a good job explaining the concept to a public audience. If no, describe an area for improvement.\n",
        "* How would you weigh the costs and benefits of generative models for use in science, engineering, and the public? Do the benefits outweigh the risks? If so, why? If not, what should we do about it?"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "M6xsDZa_ba6W"
      },
      "source": [
        "generative_models_costs_and_benefits = '''\n",
        "Your Response Here (300-500 words).\n",
        "'''"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "SQf259AWbtMR"
      },
      "source": [
        "**In 200-300 words, answer the following questions. Then post to your slack channel AND reply to another pod member's post with a response:**\n",
        "* What are our responsibilities as researchers and engineers to inform the public of the risks associated with these technologies? Should regulations be put in place (if so, what kind, if not, why not?), and how might we better inform the public of the risks? If engineers are part of the problem, are they also part of the solution?\n",
        "* How can we channel our efforts towards generative models for social good? Can you think of any other positive uses for generative models?\n",
        "* **Don't forget to post to slack and respond to a pod member's post. Do you agree with their response? Did you find any points particularly interesting? Did you learn something new from their point of view?**"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "V6cEx80VlxYw"
      },
      "source": [
        "researcher_responsibilities = '''\n",
        "Your Response Here (200-300 words).\n",
        "'''\n",
        "\n",
        "try:t6;\n",
        "except NameError: t6 = time.time()"
      ],
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Itbr5IuF85qz"
      },
      "source": [
        "---\n",
        "# Submit to Airtable\n",
        "**Don't forget to contribute to the conversation in your pod slack channel.** You can do so by copying and pasting some or all of the answers to the above questions, or by commenting and responding to other people's posts. "
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "T9H-l0T68_XT",
        "cellView": "form"
      },
      "source": [
        "#@markdown #Run Cell to Show Airtable Form\n",
        "#@markdown ##**Confirm your answers and then click \"Submit\"**\n",
        "\n",
        "def prefill_form(src, fields: dict):\n",
        "  '''\n",
        "  src: the original src url to embed the form\n",
        "  fields: a dictionary of field:value pairs,\n",
        "  e.g. {\"pennkey\": my_pennkey, \"location\": my_location}\n",
        "  '''\n",
        "  prefills = \"&\".join([\"prefill_%s=%s\"%(key, fields[key]) for key in fields])\n",
        "  src = src + prefills\n",
        "  src = \"+\".join(src.split(\" \"))\n",
        "  return src\n",
        "\n",
        "#autofill fields if they are not present\n",
        "#a missing pennkey and pod will result in an Airtable warning\n",
        "#which is easily fixed user-side.\n",
        "try: my_pennkey;\n",
        "except NameError: my_pennkey = \"\"\n",
        "try: my_pod;\n",
        "except NameError: my_pod = \"Select\"\n",
        "\n",
        "times = [(t-t0) for t in [t1,t2,t3,t4,t5,t6]]\n",
        "\n",
        "fields = {\"pennkey\": my_pennkey,\n",
        "          \"pod\": my_pod,\n",
        "          \"vae_dataset\": vae_dataset,\n",
        "          \"vae_k_hidden\": vae_k_hidden,\n",
        "          \"vae_num_filters\": vae_num_filters,\n",
        "          \"vae_elbo\": vae_elbo,\n",
        "          \"vae_sig_x\": vae_sig_x,\n",
        "          \"vae_changes\": vae_changes,\n",
        "          \"know_a_pod_1\": know_a_pod_1,\n",
        "          \"know_a_pod_2\": know_a_pod_2,\n",
        "          \"paper_choice\": paper_choice,\n",
        "          \"paper_problem_addressed\": paper_problem_addressed,\n",
        "          \"paper_proposed_solution\": paper_proposed_solution,\n",
        "          \"paper_how_evaluated\": paper_how_evaluated,\n",
        "          \"paper_miscellaneous\": paper_miscellaneous,\n",
        "          \"kaggle_description\": kaggle_description,\n",
        "          \"deepfakes_initial_reaction\": generative_initial_reaction,\n",
        "          \"generative_models_costs_and_benefits\": generative_models_costs_and_benefits,\n",
        "          \"researcher_responsibilities\": researcher_responsibilities,\n",
        "          \"cumulative_times\": times}\n",
        "\n",
        "src = \"https://airtable.com/embed/shrqbfoUrHoAg5Mza?\"\n",
        "\n",
        "#now instead of the original source url, we do: src = prefill_form(src, fields)\n",
        "display(IFrame(src = prefill_form(src, fields), width = 800, height = 400))"
      ],
      "execution_count": null,
      "outputs": []
    }
  ]
}