{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "kR-4eNdK6lYS"
      },
      "source": [
        "Deep Learning\n",
        "=============\n",
        "\n",
        "Assignment 3\n",
        "------------\n",
        "\n",
        "Previously in `2_fullyconnected.ipynb`, you trained a logistic regression and a neural network model.\n",
        "\n",
        "The goal of this assignment is to explore regularization techniques."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "both",
        "id": "JLpLa8Jt7Vu4"
      },
      "outputs": [],
      "source": [
        "# These are all the modules we'll be using later. Make sure you can import them\n",
        "# before proceeding further.\n",
        "from __future__ import print_function\n",
        "import numpy as np\n",
        "import tensorflow as tf\n",
        "from six.moves import cPickle as pickle"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "1HrCK6e17WzV"
      },
      "source": [
        "First reload the data we generated in `1_notmnist.ipynb`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "both",
        "id": "y3-cj1bpmuxc"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Training set (200000, 28, 28) (200000,)\n",
            "Validation set (10000, 28, 28) (10000,)\n",
            "Test set (18724, 28, 28) (18724,)\n"
          ]
        }
      ],
      "source": [
        "pickle_file = 'notMNIST.pickle'\n",
        "\n",
        "with open(pickle_file, 'rb') as f:\n",
        "  save = pickle.load(f)\n",
        "  train_dataset = save['train_dataset']\n",
        "  train_labels = save['train_labels']\n",
        "  valid_dataset = save['valid_dataset']\n",
        "  valid_labels = save['valid_labels']\n",
        "  test_dataset = save['test_dataset']\n",
        "  test_labels = save['test_labels']\n",
        "  del save  # hint to help gc free up memory\n",
        "  print('Training set', train_dataset.shape, train_labels.shape)\n",
        "  print('Validation set', valid_dataset.shape, valid_labels.shape)\n",
        "  print('Test set', test_dataset.shape, test_labels.shape)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "L7aHrm6nGDMB"
      },
      "source": [
        "Reformat into a shape that's more adapted to the models we're going to train:\n",
        "- data as a flat matrix,\n",
        "- labels as float 1-hot encodings."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "both",
        "id": "IRSyYiIIGIzS"
      },
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "Training set (200000, 784) (200000, 10)\n",
            "Validation set (10000, 784) (10000, 10)\n",
            "Test set (18724, 784) (18724, 10)\n"
          ]
        }
      ],
      "source": [
        "image_size = 28\n",
        "num_labels = 10\n",
        "\n",
        "def reformat(dataset, labels):\n",
        "  dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)\n",
        "  # Map 1 to [0.0, 1.0, 0.0 ...], 2 to [0.0, 0.0, 1.0 ...]\n",
        "  labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)\n",
        "  return dataset, labels\n",
        "train_dataset, train_labels = reformat(train_dataset, train_labels)\n",
        "valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)\n",
        "test_dataset, test_labels = reformat(test_dataset, test_labels)\n",
        "print('Training set', train_dataset.shape, train_labels.shape)\n",
        "print('Validation set', valid_dataset.shape, valid_labels.shape)\n",
        "print('Test set', test_dataset.shape, test_labels.shape)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "both",
        "id": "RajPLaL_ZW6w"
      },
      "outputs": [],
      "source": [
        "def accuracy(predictions, labels):\n",
        "  return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))\n",
        "          / predictions.shape[0])"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "sgLbUAQ1CW-1"
      },
      "source": [
        "---\n",
        "Problem 1\n",
        "---------\n",
        "\n",
        "Introduce and tune L2 regularization for both logistic and neural network models. Remember that L2 amounts to adding a penalty on the norm of the weights to the loss. In TensorFlow, you can compute the L2 loss for a tensor `t` using `nn.l2_loss(t)`. The right amount of regularization should improve your validation / test accuracy.\n",
        "\n",
        "---"
      ]
    },
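    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "One possible starting point, not a reference solution: a minimal sketch of wiring the L2 penalty into the logistic regression graph from `2_fullyconnected.ipynb`. The coefficient `beta` is a hypothetical value to tune against the validation set."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Sketch: logistic regression with an L2 penalty on the weights.\n",
        "# beta is a hypothetical starting value; tune it on the validation set.\n",
        "batch_size = 128\n",
        "beta = 1e-3\n",
        "\n",
        "graph = tf.Graph()\n",
        "with graph.as_default():\n",
        "  tf_train_dataset = tf.placeholder(tf.float32,\n",
        "                                    shape=(batch_size, image_size * image_size))\n",
        "  tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))\n",
        "\n",
        "  weights = tf.Variable(tf.truncated_normal([image_size * image_size, num_labels]))\n",
        "  biases = tf.Variable(tf.zeros([num_labels]))\n",
        "\n",
        "  logits = tf.matmul(tf_train_dataset, weights) + biases\n",
        "  # Cross-entropy plus the L2 penalty; tf.nn.l2_loss(t) is sum(t ** 2) / 2.\n",
        "  loss = tf.reduce_mean(\n",
        "      tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels,\n",
        "                                              logits=logits))\n",
        "  loss += beta * tf.nn.l2_loss(weights)\n",
        "\n",
        "  optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)"
      ]
    },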
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "na8xX2yHZzNF"
      },
      "source": [
        "---\n",
        "Problem 2\n",
        "---------\n",
        "Let's demonstrate an extreme case of overfitting. Restrict your training data to just a few batches. What happens?\n",
        "\n",
        "---"
      ]
    },
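    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "One hedged way to set this up: keep the usual minibatch loop but wrap the offset around after only a few batches, so the model sees the same handful of examples over and over. `num_batches` and `num_steps` are hypothetical choices."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Sketch: cycle the minibatch offset over only num_batches batches so the\n",
        "# model repeatedly trains on the same few examples (num_batches and\n",
        "# num_steps are hypothetical choices; the training step itself is elided).\n",
        "num_batches = 3\n",
        "num_steps = 3001\n",
        "\n",
        "for step in range(num_steps):\n",
        "  # Wrap around after num_batches minibatches instead of the full set.\n",
        "  offset = (step % num_batches) * batch_size\n",
        "  batch_data = train_dataset[offset:(offset + batch_size), :]\n",
        "  batch_labels = train_labels[offset:(offset + batch_size), :]\n",
        "  # ...run the usual session.run training step here. Expect training\n",
        "  # accuracy near 100% while validation accuracy stalls: overfitting."
      ]
    },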
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ww3SCBUdlkRc"
      },
      "source": [
        "---\n",
        "Problem 3\n",
        "---------\n",
        "Introduce Dropout on the hidden layer of the neural network. Remember: Dropout should only be introduced during training, not evaluation, otherwise your evaluation results would be stochastic as well. TensorFlow provides `nn.dropout()` for that, but you have to make sure it's only inserted during training.\n",
        "\n",
        "What happens to our extreme overfitting case?\n",
        "\n",
        "---"
      ]
    },
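    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A minimal sketch of one way to do this for a one-hidden-layer network: apply `tf.nn.dropout` on the training path only, while the evaluation path reuses the same weights without dropout. The hidden size and the keep probability of 0.5 are hypothetical choices."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Sketch: dropout on the hidden layer, training path only (the evaluation\n",
        "# path reuses the same weights without dropout). num_hidden and the keep\n",
        "# probability 0.5 are hypothetical choices.\n",
        "num_hidden = 1024\n",
        "\n",
        "graph = tf.Graph()\n",
        "with graph.as_default():\n",
        "  tf_train_dataset = tf.placeholder(tf.float32,\n",
        "                                    shape=(batch_size, image_size * image_size))\n",
        "  tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))\n",
        "  tf_valid_dataset = tf.constant(valid_dataset)\n",
        "\n",
        "  w1 = tf.Variable(tf.truncated_normal([image_size * image_size, num_hidden]))\n",
        "  b1 = tf.Variable(tf.zeros([num_hidden]))\n",
        "  w2 = tf.Variable(tf.truncated_normal([num_hidden, num_labels]))\n",
        "  b2 = tf.Variable(tf.zeros([num_labels]))\n",
        "\n",
        "  # Training path: dropout after the ReLU. tf.nn.dropout scales the kept\n",
        "  # activations by 1 / keep_prob, so no rescaling is needed at eval time.\n",
        "  hidden = tf.nn.relu(tf.matmul(tf_train_dataset, w1) + b1)\n",
        "  hidden = tf.nn.dropout(hidden, keep_prob=0.5)\n",
        "  logits = tf.matmul(hidden, w2) + b2\n",
        "  loss = tf.reduce_mean(\n",
        "      tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels,\n",
        "                                              logits=logits))\n",
        "  optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)\n",
        "\n",
        "  # Evaluation path: same weights, no dropout.\n",
        "  valid_hidden = tf.nn.relu(tf.matmul(tf_valid_dataset, w1) + b1)\n",
        "  valid_prediction = tf.nn.softmax(tf.matmul(valid_hidden, w2) + b2)"
      ]
    },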
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-b1hTz3VWZjw"
      },
      "source": [
        "---\n",
        "Problem 4\n",
        "---------\n",
        "\n",
        "Try to get the best performance you can using a multi-layer model! The best reported test accuracy using a deep network is [97.1%](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html?showComment=1391023266211#c8758720086795711595).\n",
        "\n",
        "One avenue you can explore is to add multiple layers.\n",
        "\n",
        "Another one is to use learning rate decay:\n",
        "\n",
        "    global_step = tf.Variable(0)  # count the number of steps taken.\n",
        "    learning_rate = tf.train.exponential_decay(0.5, global_step, ...)\n",
        "    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)\n",
        " \n",
        " ---\n"
      ]
    }
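    ,
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A hedged completion of the decay snippet above, assuming the `graph` and `loss` from the previous sketch are in scope. The `decay_steps` and `decay_rate` values are hypothetical starting points, not tuned hyperparameters."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Sketch: exponential learning rate decay. decay_steps and decay_rate are\n",
        "# hypothetical starting points; assumes graph and loss from the previous\n",
        "# sketch are in scope.\n",
        "with graph.as_default():\n",
        "  global_step = tf.Variable(0, trainable=False)  # counts training steps\n",
        "  learning_rate = tf.train.exponential_decay(\n",
        "      0.5, global_step, decay_steps=1000, decay_rate=0.96, staircase=True)\n",
        "  optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(\n",
        "      loss, global_step=global_step)"
      ]
    }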
  ],
  "metadata": {
    "colab": {
      "name": "3_regularization.ipynb",
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
