{"nbformat":4,"nbformat_minor":0,"metadata":{"accelerator":"GPU","colab":{"name":"fully_connected_networks.ipynb","provenance":[{"file_id":"1CuDDjoA-tGsvlcreYt0LXXcrutgWn-nN","timestamp":1603106521037}],"collapsed_sections":[]},"kernelspec":{"display_name":"Python 3","name":"python3"},"language_info":{"codemirror_mode":{"name":"ipython","version":3},"file_extension":".py","mimetype":"text/x-python","name":"python","nbconvert_exporter":"python","pygments_lexer":"ipython3","version":"3.7.1"}},"cells":[{"cell_type":"markdown","metadata":{"id":"DDJwQPZcupab"},"source":["# EECS 498-007/598-005 Assignment 3-1: Fully-Connected Neural Networks and Dropout\n","\n","Before we start, please put your name and UMID in the following format\n","\n",": Firstname LASTNAME, #00000000   //   e.g.) Justin JOHNSON, #12345678"]},{"cell_type":"markdown","metadata":{"id":"PwUfQWdbuvSa"},"source":["**Your Answer:**   \n","Hello WORLD, #XXXXXXXX"]},{"cell_type":"markdown","metadata":{"id":"ZeH0OvuEe1CN","tags":["pdf-title"]},"source":["# Fully-connected neural networks\n","In the previous homework you implemented a fully-connected two-layer neural network on CIFAR-10. The implementation was simple but not very modular since the loss and gradient were computed in a single monolithic function. This is manageable for a simple two-layer network, but would become impractical as we move to bigger models. Ideally we want to build networks using a more modular design so that we can implement different layer types in isolation and then snap them together into models with different architectures."]},{"cell_type":"markdown","metadata":{"id":"3Qiu9_4pe1CP","tags":["pdf-ignore"]},"source":["In this exercise we will implement fully-connected networks using a more modular approach. For each layer we will implement a `forward` and a `backward` function. 
The `forward` function will receive inputs, weights, and other parameters and will return both an output and a `cache` object storing data needed for the backward pass, like this:\n","\n","```python\n","def forward(x, w):\n","  \"\"\" Receive inputs x and weights w \"\"\"\n","  # Do some computations ...\n","  z = # ... some intermediate value\n","  # Do some more computations ...\n","  out = # the output\n","   \n","  cache = (x, w, z, out) # Values we need to compute gradients\n","   \n","  return out, cache\n","```\n","\n","The backward pass will receive upstream derivatives and the `cache` object, and will return gradients with respect to the inputs and weights, like this:\n","\n","```python\n","def backward(dout, cache):\n","  \"\"\"\n","  Receive dout (derivative of loss with respect to outputs) and cache,\n","  and compute derivative with respect to inputs.\n","  \"\"\"\n","  # Unpack cache values\n","  x, w, z, out = cache\n","  \n","  # Use values in cache to compute derivatives\n","  dx = # Derivative of loss with respect to x\n","  dw = # Derivative of loss with respect to w\n","  \n","  return dx, dw\n","```\n","\n","After implementing a bunch of layers this way, we will be able to easily combine them to build classifiers with different architectures.\n","\n","In addition to implementing fully-connected networks of arbitrary depth, we will also explore different update rules for optimization, and introduce **Dropout** as a regularizer.\n","  "]},{"cell_type":"markdown","metadata":{"id":"ubB_0e-UAOVK"},"source":["## Install starter code\n","We will continue using the utility functions that we've used for Assignments 1 and 2: [`coutils` package](https://github.com/deepvision-class/starter-code). 
Run this cell to download and install it.\n"]},{"cell_type":"code","metadata":{"id":"ASkY27ZtA7Is"},"source":["!pip install git+https://github.com/deepvision-class/starter-code"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"MzqbYcKdz6ew"},"source":["## Setup code\n","Run some setup code for this notebook: Import some useful packages and increase the default figure size."]},{"cell_type":"code","metadata":{"id":"HzRdJ3uhe1CR","tags":["pdf-ignore"]},"source":["import math\n","import torch\n","import coutils\n","from coutils import fix_random_seed, rel_error, compute_numeric_gradient, Solver\n","import matplotlib.pyplot as plt\n","\n","# for plotting\n","plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\n","plt.rcParams['image.interpolation'] = 'nearest'\n","plt.rcParams['image.cmap'] = 'gray'\n","\n","# data type and device for torch.tensor\n","to_float = {'dtype': torch.float, 'device': 'cpu'}\n","to_float_cuda = {'dtype': torch.float, 'device': 'cuda'}\n","to_double = {'dtype': torch.double, 'device': 'cpu'}\n","to_double_cuda = {'dtype': torch.double, 'device': 'cuda'}\n","to_long = {'dtype': torch.long, 'device': 'cpu'}\n","to_long_cuda = {'dtype': torch.long, 'device': 'cuda'}"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"xK-SJIqLDRaa"},"source":["## Load CIFAR-10 data\n","Here we provide a function to load up our favorite CIFAR-10 data so we can use it to train a classifier on a real dataset."]},{"cell_type":"code","metadata":{"id":"npVJoyXGX2ql"},"source":["def get_CIFAR10_data(validation_ratio=0.05, cuda=False, reshape_to_2d=False,\n","                     visualize=False):\n","  \"\"\"\n","  Load the CIFAR-10 dataset from disk and perform preprocessing to prepare\n","  it for the linear classifier. These are the same steps as we used for the\n","  SVM, but condensed to a single function.  
\n","  \"\"\"\n","  X_train, y_train, X_test, y_test = coutils.data.cifar10()\n","\n","  # Move all the data to the GPU\n","  if cuda:\n","    X_train = X_train.cuda()\n","    y_train = y_train.cuda()\n","    X_test = X_test.cuda()\n","    y_test = y_test.cuda()\n","\n","  # 0. Visualize some examples from the dataset.\n","  class_names = [\n","      'plane', 'car', 'bird', 'cat', 'deer',\n","      'dog', 'frog', 'horse', 'ship', 'truck'\n","  ]\n","  if visualize:\n","    img = coutils.utils.visualize_dataset(X_train, y_train, 12, class_names)\n","    plt.imshow(img)\n","    plt.axis('off')\n","    plt.show()\n","\n","  # 1. Normalize the data: subtract the mean RGB (zero mean)\n","  mean_image = X_train.mean(dim=0, keepdim=True).mean(dim=2, keepdim=True).mean(dim=3, keepdim=True)\n","  X_train -= mean_image\n","  X_test -= mean_image\n","\n","  # 2. Reshape the image data into rows\n","  if reshape_to_2d:\n","    X_train = X_train.reshape(X_train.shape[0], -1)\n","    X_test = X_test.reshape(X_test.shape[0], -1)\n","\n","  # 3. 
Take the validation set from the training set\n","  # Note: It should not be taken from the test set\n","  # For random permutation, you can use torch.randperm or torch.randint\n","  # But, for this homework, we use slicing instead.\n","  num_training = int( X_train.shape[0] * (1.0 - validation_ratio) )\n","  num_validation = X_train.shape[0] - num_training\n","\n","  # return the dataset\n","  data_dict = {}\n","  data_dict['X_val'] = X_train[num_training:num_training + num_validation]\n","  data_dict['y_val'] = y_train[num_training:num_training + num_validation]\n","  data_dict['X_train'] = X_train[0:num_training]\n","  data_dict['y_train'] = y_train[0:num_training]\n","\n","  data_dict['X_test'] = X_test\n","  data_dict['y_test'] = y_test\n","  return data_dict"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"vBwH_5l6DCoz"},"source":["# Invoke the above function to get our data.\n","data_dict = get_CIFAR10_data(visualize=True)\n","print('Train data shape: ', data_dict['X_train'].shape)\n","print('Train labels shape: ', data_dict['y_train'].shape)\n","print('Validation data shape: ', data_dict['X_val'].shape)\n","print('Validation labels shape: ', data_dict['y_val'].shape)\n","print('Test data shape: ', data_dict['X_test'].shape)\n","print('Test labels shape: ', data_dict['y_test'].shape)"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"JB7Eu3qJ9xnm"},"source":["# Linear layer"]},{"cell_type":"markdown","metadata":{"id":"bRdnxsvZunFu"},"source":["For each layer we implement, we will define a class with two static methods `forward` and `backward`.\n","\n","For now the `forward` and `backward` methods are stubs. 
We will actually implement them in the following cells."]},{"cell_type":"code","metadata":{"id":"6ZCEVswF96aq"},"source":["class Linear(object):\n","\n","  @staticmethod\n","  def forward(x, w, b):\n","    raise NotImplementedError\n","\n","  @staticmethod\n","  def backward(dout, cache):\n","    raise NotImplementedError"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"0NNv3l-ne1Cb"},"source":["## Linear layer: forward\n","Implement the `Linear.forward` function. Once you are done you can test your implementation by running the next cell:"]},{"cell_type":"code","metadata":{"id":"Vc-qQp11axc0"},"source":["def linear_forward(x, w, b):\n","  \"\"\"\n","  Computes the forward pass for a linear (fully-connected) layer.\n","  The input x has shape (N, d_1, ..., d_k) and contains a minibatch of N\n","  examples, where each example x[i] has shape (d_1, ..., d_k). We will\n","  reshape each input into a vector of dimension D = d_1 * ... * d_k, and\n","  then transform it to an output vector of dimension M.\n","  Inputs:\n","  - x: A tensor containing input data, of shape (N, d_1, ..., d_k)\n","  - w: A tensor of weights, of shape (D, M)\n","  - b: A tensor of biases, of shape (M,)\n","  Returns a tuple of:\n","  - out: output, of shape (N, M)\n","  - cache: (x, w, b)\n","  \"\"\"\n","  out = None\n","  #############################################################################\n","  # TODO: Implement the linear forward pass. Store the result in out. You     #\n","  # will need to reshape the input into rows.                                 
#\n","  #############################################################################\n","  # Replace \"pass\" statement with your code\n","  pass\n","  #############################################################################\n","  #                              END OF YOUR CODE                             #\n","  #############################################################################\n","  cache = (x, w, b)\n","  return out, cache\n","\n","Linear.forward = linear_forward"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"z8_B1QwbuHeq"},"source":["Test the `Linear.forward` function. You should see errors less than `1e-8`."]},{"cell_type":"code","metadata":{"id":"sjq2Sq4Ze1Cc"},"source":["# Test the Linear.forward function\n","num_inputs = 2\n","input_shape = torch.tensor((4, 5, 6))\n","output_dim = 3\n","\n","input_size = num_inputs * torch.prod(input_shape)\n","weight_size = output_dim * torch.prod(input_shape)\n","\n","x = torch.linspace(-0.1, 0.5, steps=input_size, **to_double_cuda).reshape(num_inputs, *input_shape)\n","w = torch.linspace(-0.2, 0.3, steps=weight_size, **to_double_cuda).reshape(torch.prod(input_shape), output_dim)\n","b = torch.linspace(-0.3, 0.1, steps=output_dim, **to_double_cuda)\n","\n","out, _ = Linear.forward(x, w, b)\n","correct_out = torch.tensor([[ 1.49834967,  1.70660132,  1.91485297],\n","                            [ 3.25553199,  3.5141327,   3.77273342]],\n","                            **to_double_cuda)\n","\n","print('Testing Linear.forward function:')\n","print('difference: ', rel_error(out, correct_out))"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"4mxIDo46e1Cf"},"source":["## Linear layer: backward\n","Now implement the `Linear.backward` function and test your implementation using numeric gradient checking."]},{"cell_type":"code","metadata":{"id":"7emXOqm7fO7T"},"source":["def linear_backward(dout, cache):\n","  \"\"\"\n","  Computes the backward pass 
for a linear layer.\n","  Inputs:\n","  - dout: Upstream derivative, of shape (N, M)\n","  - cache: Tuple of:\n","    - x: Input data, of shape (N, d_1, ..., d_k)\n","    - w: Weights, of shape (D, M)\n","    - b: Biases, of shape (M,)\n","  Returns a tuple of:\n","  - dx: Gradient with respect to x, of shape (N, d_1, ..., d_k)\n","  - dw: Gradient with respect to w, of shape (D, M)\n","  - db: Gradient with respect to b, of shape (M,)\n","  \"\"\"\n","  x, w, b = cache\n","  dx, dw, db = None, None, None\n","  #############################################################################\n","  # TODO: Implement the linear backward pass.                                 #\n","  #############################################################################\n","  # Replace \"pass\" statement with your code\n","  pass\n","  #############################################################################\n","  #                              END OF YOUR CODE                             #\n","  #############################################################################\n","  return dx, dw, db\n","\n","Linear.backward = linear_backward"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"aoJQSun1vJCa"},"source":["Run the following to test your implementation of `Linear.backward`. 
You should see errors less than `1e-8`."]},{"cell_type":"code","metadata":{"id":"ts85gmote1Cg"},"source":["# Test the Linear.backward function\n","fix_random_seed(0)\n","x = torch.randn(10, 2, 3, **to_double_cuda)\n","w = torch.randn(6, 5, **to_double_cuda)\n","b = torch.randn(5, **to_double_cuda)\n","dout = torch.randn(10, 5, **to_double_cuda)\n","\n","dx_num = compute_numeric_gradient(lambda x: Linear.forward(x, w, b)[0], x, dout)\n","dw_num = compute_numeric_gradient(lambda w: Linear.forward(x, w, b)[0], w, dout)\n","db_num = compute_numeric_gradient(lambda b: Linear.forward(x, w, b)[0], b, dout)\n","\n","_, cache = Linear.forward(x, w, b)\n","dx, dw, db = Linear.backward(dout, cache)\n","\n","# The error should be around e-10 or less\n","print('Testing Linear.backward function:')\n","print('dx error: ', rel_error(dx_num, dx))\n","print('dw error: ', rel_error(dw_num, dw))\n","print('db error: ', rel_error(db_num, db))"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"bdIqQzqiJQE6"},"source":["# ReLU activation"]},{"cell_type":"markdown","metadata":{"id":"YdX98A_qvTRt"},"source":["We will now implement the ReLU nonlinearity. 
As above, we will define a class with two empty static methods, and implement them in upcoming cells."]},{"cell_type":"code","metadata":{"id":"WZ4d9xi5JZ4h"},"source":["class ReLU(object):\n","\n","  @staticmethod\n","  def forward(x):\n","    raise NotImplementedError\n","\n","  @staticmethod\n","  def backward(dout, cache):\n","    raise NotImplementedError"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"n2DyqL4Ae1Cl"},"source":["## ReLU activation: forward\n","Implement the forward pass for the ReLU activation function in the `ReLU.forward` function.\n","\n","You should not change the input tensor with an in-place operation."]},{"cell_type":"code","metadata":{"id":"suy8VIfUxSTZ"},"source":["def relu_forward(x):\n","  \"\"\"\n","  Computes the forward pass for a layer of rectified linear units (ReLUs).\n","  Input:\n","  - x: Input; a tensor of any shape\n","  Returns a tuple of:\n","  - out: Output, a tensor of the same shape as x\n","  - cache: x\n","  \"\"\"\n","  out = None\n","  #############################################################################\n","  # TODO: Implement the ReLU forward pass.                                    #\n","  # You should not change the input tensor with an in-place operation.        #\n","  #############################################################################\n","  # Replace \"pass\" statement with your code\n","  pass\n","  #############################################################################\n","  #                              END OF YOUR CODE                             #\n","  #############################################################################\n","  cache = x\n","  return out, cache\n","\n","ReLU.forward = relu_forward"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"9XGN5pd0wyrg"},"source":["Run the following to test your implementation of the ReLU forward pass. 
Your errors should be less than `1e-7`."]},{"cell_type":"code","metadata":{"id":"QblpieUJe1Cm"},"source":["# Test the relu_forward function\n","x = torch.linspace(-0.5, 0.5, steps=12, **to_double_cuda).reshape(3, 4)\n","\n","out, _ = ReLU.forward(x)\n","correct_out = torch.tensor([[ 0.,          0.,          0.,          0.,        ],\n","                            [ 0.,          0.,          0.04545455,  0.13636364,],\n","                            [ 0.22727273,  0.31818182,  0.40909091,  0.5,       ]],\n","                            **to_double_cuda)\n","\n","# Compare your output with ours. The error should be on the order of e-8\n","print('Testing ReLU.forward function:')\n","print('difference: ', rel_error(out, correct_out))"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"3bSInb7xe1Cq"},"source":["## ReLU activation: backward\n","Now implement the backward pass for the ReLU activation function.\n","\n","Again, you should not change the input tensor with an in-place operation."]},{"cell_type":"code","metadata":{"id":"IWffiu_dyDhr"},"source":["def relu_backward(dout, cache):\n","  \"\"\"\n","  Computes the backward pass for a layer of rectified linear units (ReLUs).\n","  Input:\n","  - dout: Upstream derivatives, of any shape\n","  - cache: Input x, of same shape as dout\n","  Returns:\n","  - dx: Gradient with respect to x\n","  \"\"\"\n","  dx, x = None, cache\n","  #############################################################################\n","  # TODO: Implement the ReLU backward pass.                                   #\n","  # You should not change the input tensor with an in-place operation.        
#\n","  #############################################################################\n","  # Replace \"pass\" statement with your code\n","  pass\n","  #############################################################################\n","  #                              END OF YOUR CODE                             #\n","  #############################################################################\n","  return dx\n","\n","ReLU.backward = relu_backward"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"KKRsZPVHxZuz"},"source":["Run the following to test your implementation of `ReLU.backward`. Your errors should be less than `1e-10`."]},{"cell_type":"code","metadata":{"id":"odiV48zBe1Cr"},"source":["fix_random_seed(0)\n","x = torch.randn(10, 10, **to_double_cuda)\n","dout = torch.randn(*x.shape, **to_double_cuda)\n","\n","dx_num = compute_numeric_gradient(lambda x: ReLU.forward(x)[0], x, dout)\n","\n","_, cache = ReLU.forward(x)\n","dx = ReLU.backward(dout, cache)\n","\n","# The error should be on the order of e-12\n","print('Testing ReLU.backward function:')\n","print('dx error: ', rel_error(dx_num, dx))"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"eVTMuUOZe1Cv"},"source":["# \"Sandwich\" layers\n","There are some common patterns of layers that are frequently used in neural nets. For example, linear layers are frequently followed by a ReLU nonlinearity. To make these common patterns easy, we define a convenience layer.\n","\n","This also shows how our layer abstraction allows us to implement new layers by composing existing layer implementations. 
This is a powerful mechanism for structuring deep learning code in a modular fashion.\n","\n","For now take a look at the `forward` and `backward` functions in `Linear_ReLU`, and run the following to numerically gradient check the backward pass:"]},{"cell_type":"code","metadata":{"id":"L1a6zFSQMOOo"},"source":["class Linear_ReLU(object):\n","\n","  @staticmethod\n","  def forward(x, w, b):\n","    \"\"\"\n","    Convenience layer that performs a linear transform followed by a ReLU.\n","\n","    Inputs:\n","    - x: Input to the linear layer\n","    - w, b: Weights for the linear layer\n","    Returns a tuple of:\n","    - out: Output from the ReLU\n","    - cache: Object to give to the backward pass\n","    \"\"\"\n","    a, fc_cache = Linear.forward(x, w, b)\n","    out, relu_cache = ReLU.forward(a)\n","    cache = (fc_cache, relu_cache)\n","    return out, cache\n","\n","  @staticmethod\n","  def backward(dout, cache):\n","    \"\"\"\n","    Backward pass for the linear-relu convenience layer\n","    \"\"\"\n","    fc_cache, relu_cache = cache\n","    da = ReLU.backward(dout, relu_cache)\n","    dx, dw, db = Linear.backward(da, fc_cache)\n","    return dx, dw, db"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"PnnnLp_pyN0W"},"source":["Run the following to test the implementation of the `Linear_ReLU` layer using numeric gradient checking. 
You should see errors less than `1e-8`."]},{"cell_type":"code","metadata":{"id":"-gaY5YfAe1Cw"},"source":["fix_random_seed(0)\n","x = torch.randn(2, 3, 4, **to_double_cuda)\n","w = torch.randn(12, 10, **to_double_cuda)\n","b = torch.randn(10, **to_double_cuda)\n","dout = torch.randn(2, 10, **to_double_cuda)\n","\n","out, cache = Linear_ReLU.forward(x, w, b)\n","dx, dw, db = Linear_ReLU.backward(dout, cache)\n","\n","dx_num = compute_numeric_gradient(lambda x: Linear_ReLU.forward(x, w, b)[0], x, dout)\n","dw_num = compute_numeric_gradient(lambda w: Linear_ReLU.forward(x, w, b)[0], w, dout)\n","db_num = compute_numeric_gradient(lambda b: Linear_ReLU.forward(x, w, b)[0], b, dout)\n","\n","# Relative error should be around e-8 or less\n","print('Testing Linear_ReLU.forward and Linear_ReLU.backward:')\n","print('dx error: ', rel_error(dx_num, dx))\n","print('dw error: ', rel_error(dw_num, dw))\n","print('db error: ', rel_error(db_num, db))"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"rAGgiyP5e1C0"},"source":["# Loss layers: Softmax and SVM\n","You implemented these loss functions in the last assignment, so we'll give them to you for free here. 
You should still make sure you understand how they work by looking at the implementations.\n","\n","You can make sure that the implementations are correct by running the following:"]},{"cell_type":"code","metadata":{"id":"m-OG6d7RNOxj"},"source":["def svm_loss(x, y):\n","  \"\"\"\n","  Computes the loss and gradient for multiclass SVM classification.\n","  Inputs:\n","  - x: Input data, of shape (N, C) where x[i, j] is the score for the jth\n","    class for the ith input.\n","  - y: Vector of labels, of shape (N,) where y[i] is the label for x[i] and\n","    0 <= y[i] < C\n","  Returns a tuple of:\n","  - loss: Scalar giving the loss\n","  - dx: Gradient of the loss with respect to x\n","  \"\"\"\n","  N = x.shape[0]\n","  correct_class_scores = x[torch.arange(N), y]\n","  margins = (x - correct_class_scores[:, None] + 1.0).clamp(min=0.)\n","  margins[torch.arange(N), y] = 0.\n","  loss = margins.sum() / N\n","  num_pos = (margins > 0).sum(dim=1)\n","  dx = torch.zeros_like(x)\n","  dx[margins > 0] = 1.\n","  dx[torch.arange(N), y] -= num_pos.to(dx.dtype)\n","  dx /= N\n","  return loss, dx\n","\n","\n","def softmax_loss(x, y):\n","  \"\"\"\n","  Computes the loss and gradient for softmax classification.\n","  Inputs:\n","  - x: Input data, of shape (N, C) where x[i, j] is the score for the jth\n","    class for the ith input.\n","  - y: Vector of labels, of shape (N,) where y[i] is the label for x[i] and\n","    0 <= y[i] < C\n","  Returns a tuple of:\n","  - loss: Scalar giving the loss\n","  - dx: Gradient of the loss with respect to x\n","  \"\"\"\n","  shifted_logits = x - x.max(dim=1, keepdim=True).values\n","  Z = shifted_logits.exp().sum(dim=1, keepdim=True)\n","  log_probs = shifted_logits - Z.log()\n","  probs = log_probs.exp()\n","  N = x.shape[0]\n","  loss = (-1.0 / N) * log_probs[torch.arange(N), y].sum()\n","  dx = probs.clone()\n","  dx[torch.arange(N), y] -= 1\n","  dx /= N\n","  return loss, 
dx"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"5JzILk6Q0FpO"},"source":["Run the following to perform numeric gradient checking on the two loss functions. You should see errors less than `1e-7`."]},{"cell_type":"code","metadata":{"id":"BU9xp64De1C1"},"source":["fix_random_seed(0)\n","num_classes, num_inputs = 10, 50\n","x = 0.001 * torch.randn(num_inputs, num_classes, **to_double_cuda)\n","y = torch.randint(num_classes, size=(num_inputs,), **to_long_cuda)\n","\n","dx_num = compute_numeric_gradient(lambda x: svm_loss(x, y)[0], x)\n","loss, dx = svm_loss(x, y)\n","\n","# Test svm_loss function. Loss should be around 9 and dx error should be around the order of e-9\n","print('Testing svm_loss:')\n","print('loss: ', loss.item())\n","print('dx error: ', rel_error(dx_num, dx))\n","\n","dx_num = compute_numeric_gradient(lambda x: softmax_loss(x, y)[0], x)\n","loss, dx = softmax_loss(x, y)\n","\n","# Test softmax_loss function. Loss should be close to 2.3 and dx error should be around e-8\n","print('\\nTesting softmax_loss:')\n","print('loss: ', loss.item())\n","print('dx error: ', rel_error(dx_num, dx))"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"qq7-cyfQe1C4"},"source":["# Two-layer network\n","In the previous assignment you implemented a two-layer neural network in a single monolithic class. Now that you have implemented modular versions of the necessary layers, you will reimplement the two-layer network using these modular implementations.\n","\n","Complete the implementation of the `TwoLayerNet` class. This class will serve as a model for the other networks you will implement in this assignment, so read through it to make sure you understand the API. 
You can run the cell below to test your implementation."]},{"cell_type":"code","metadata":{"id":"If6ZpLCfOLRL"},"source":["class TwoLayerNet(object):\n","  \"\"\"\n","  A two-layer fully-connected neural network with ReLU nonlinearity and\n","  softmax loss that uses a modular layer design. We assume an input dimension\n","  of D, a hidden dimension of H, and perform classification over C classes.\n","  The architecture should be linear - relu - linear - softmax.\n","  Note that this class does not implement gradient descent; instead, it\n","  will interact with a separate Solver object that is responsible for running\n","  optimization.\n","\n","  The learnable parameters of the model are stored in the dictionary\n","  self.params that maps parameter names to PyTorch tensors.\n","  \"\"\"\n","\n","  def __init__(self, input_dim=3*32*32, hidden_dim=100, num_classes=10,\n","         weight_scale=1e-3, reg=0.0, dtype=torch.float32, device='cpu'):\n","    \"\"\"\n","    Initialize a new network.\n","    Inputs:\n","    - input_dim: An integer giving the size of the input\n","    - hidden_dim: An integer giving the size of the hidden layer\n","    - num_classes: An integer giving the number of classes to classify\n","    - weight_scale: Scalar giving the standard deviation for random\n","      initialization of the weights.\n","    - reg: Scalar giving L2 regularization strength.\n","    - dtype: A torch data type object; all computations will be performed using\n","      this datatype. float is faster but less accurate, so you should use\n","      double for numeric gradient checking.\n","    - device: device to use for computation. 'cpu' or 'cuda'\n","    \"\"\"\n","    self.params = {}\n","    self.reg = reg\n","\n","    ###########################################################################\n","    # TODO: Initialize the weights and biases of the two-layer net. 
Weights   #\n","    # should be initialized from a Gaussian centered at 0.0 with              #\n","    # standard deviation equal to weight_scale, and biases should be          #\n","    # initialized to zero. All weights and biases should be stored in the     #\n","    # dictionary self.params, with first layer weights                        #\n","    # and biases using the keys 'W1' and 'b1' and second layer                #\n","    # weights and biases using the keys 'W2' and 'b2'.                        #\n","    ###########################################################################\n","    # Replace \"pass\" statement with your code\n","    pass\n","    ###########################################################################\n","    #                            END OF YOUR CODE                             #\n","    ###########################################################################\n","\n","\n","  def loss(self, X, y=None):\n","    \"\"\"\n","    Compute loss and gradient for a minibatch of data.\n","\n","    Inputs:\n","    - X: Tensor of input data of shape (N, d_1, ..., d_k)\n","    - y: int64 Tensor of labels, of shape (N,). 
y[i] gives the label for X[i].\n","\n","    Returns:\n","    If y is None, then run a test-time forward pass of the model and return:\n","    - scores: Tensor of shape (N, C) giving classification scores, where\n","      scores[i, c] is the classification score for X[i] and class c.\n","    If y is not None, then run a training-time forward and backward pass and\n","    return a tuple of:\n","    - loss: Scalar value giving the loss\n","    - grads: Dictionary with the same keys as self.params, mapping parameter\n","      names to gradients of the loss with respect to those parameters.\n","    \"\"\"\n","    scores = None\n","    ###########################################################################\n","    # TODO: Implement the forward pass for the two-layer net, computing the   #\n","    # class scores for X and storing them in the scores variable.             #\n","    ###########################################################################\n","    # Replace \"pass\" statement with your code\n","    pass\n","    ###########################################################################\n","    #                            END OF YOUR CODE                             #\n","    ###########################################################################\n","\n","    # If y is None then we are in test mode so just return scores\n","    if y is None:\n","      return scores\n","\n","    loss, grads = 0, {}\n","    ###########################################################################\n","    # TODO: Implement the backward pass for the two-layer net. Store the loss #\n","    # in the loss variable and gradients in the grads dictionary. Compute data#\n","    # loss using softmax, and make sure that grads[k] holds the gradients for #\n","    # self.params[k]. Don't forget to add L2 regularization!                  
#\n","    #                                                                         #\n","    # NOTE: To ensure that your implementation matches ours and you pass the  #\n","    # automated tests, make sure that your L2 regularization does not include #\n","    # a factor of 0.5.                                                        #\n","    ###########################################################################\n","    # Replace \"pass\" statement with your code\n","    pass\n","    ###########################################################################\n","    #                            END OF YOUR CODE                             #\n","    ###########################################################################\n","\n","    return loss, grads"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"bIdqEy-l0-Uu"},"source":["Once you have finished implementing the forward and backward passes of your two-layer net, run the following to test your implementation:"]},{"cell_type":"code","metadata":{"id":"d3JOcfyze1C5"},"source":["fix_random_seed(0)\n","N, D, H, C = 3, 5, 50, 7\n","X = torch.randn(N, D, **to_double_cuda)\n","y = torch.randint(C, size=(N,), **to_long_cuda)\n","\n","std = 1e-3\n","model = TwoLayerNet(input_dim=D, hidden_dim=H, num_classes=C, weight_scale=std,\n","                    **to_double_cuda)\n","\n","print('Testing initialization ... ')\n","W1_std = torch.abs(model.params['W1'].std() - std)\n","b1 = model.params['b1']\n","W2_std = torch.abs(model.params['W2'].std() - std)\n","b2 = model.params['b2']\n","assert W1_std < std / 10, 'First layer weights do not seem right'\n","assert torch.all(b1 == 0), 'First layer biases do not seem right'\n","assert W2_std < std / 10, 'Second layer weights do not seem right'\n","assert torch.all(b2 == 0), 'Second layer biases do not seem right'\n","\n","print('Testing test-time forward pass ... 
')\n","model.params['W1'] = torch.linspace(-0.7, 0.3, steps=D*H, **to_double_cuda).reshape(D, H)\n","model.params['b1'] = torch.linspace(-0.1, 0.9, steps=H, **to_double_cuda)\n","model.params['W2'] = torch.linspace(-0.3, 0.4, steps=H*C, **to_double_cuda).reshape(H, C)\n","model.params['b2'] = torch.linspace(-0.9, 0.1, steps=C, **to_double_cuda)\n","X = torch.linspace(-5.5, 4.5, steps=N*D, **to_double_cuda).reshape(D, N).t()\n","scores = model.loss(X)\n","correct_scores = torch.tensor(\n","  [[11.53165108,  12.2917344,   13.05181771,  13.81190102,  14.57198434, 15.33206765,  16.09215096],\n","   [12.05769098,  12.74614105,  13.43459113,  14.1230412,   14.81149128, 15.49994135,  16.18839143],\n","   [12.58373087,  13.20054771,  13.81736455,  14.43418138,  15.05099822, 15.66781506,  16.2846319 ]],\n","    **to_double_cuda)\n","scores_diff = torch.abs(scores - correct_scores).sum()\n","assert scores_diff < 1e-6, 'Problem with test-time forward pass'\n","\n","print('Testing training loss (no regularization)')\n","y = torch.tensor([0, 5, 1])\n","loss, grads = model.loss(X, y)\n","correct_loss = 3.4702243556\n","assert abs(loss - correct_loss) < 1e-10, 'Problem with training-time loss'\n","\n","model.reg = 1.0\n","loss, grads = model.loss(X, y)\n","correct_loss = 49.719461034881775\n","assert abs(loss - correct_loss) < 1e-10, 'Problem with regularization loss'\n","\n","# Errors should be around e-6 or less\n","for reg in [0.0, 0.7]:\n","  print('Running numeric gradient check with reg = ', reg)\n","  model.reg = reg\n","  loss, grads = model.loss(X, y)\n","\n","  for name in sorted(grads):\n","    f = lambda _: model.loss(X, y)[0]\n","    grad_num = compute_numeric_gradient(f, model.params[name])\n","    print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"q1Odj9XQe1C9"},"source":["# Solver\n","In the previous assignment, the logic for training models was coupled to the 
models themselves. Following a more modular design, for this assignment we have split the logic for training models into a separate class.\n","\n","Read through `help(Solver)` to familiarize yourself with the API. After doing so, use a `Solver` instance to train a `TwoLayerNet` that achieves at least `50%` accuracy on the validation set."]},{"cell_type":"code","metadata":{"id":"lZ-8wKffRoDu"},"source":["help(Solver)"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"6unJrOule1C_"},"source":["fix_random_seed(0)\n","model = TwoLayerNet(dtype=torch.float, device='cuda')\n","solver = None\n","\n","##############################################################################\n","# TODO: Use a Solver instance to train a TwoLayerNet that achieves at least  #\n","# 50% accuracy on the validation set.                                        #\n","##############################################################################\n","# Replace \"pass\" statement with your code\n","pass\n","##############################################################################\n","#                             END OF YOUR CODE                               #\n","##############################################################################"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"gSSy7LTde1DE"},"source":["# Run this cell to visualize training loss and train / val accuracy\n","\n","plt.subplot(2, 1, 1)\n","plt.title('Training loss')\n","plt.plot(solver.loss_history, 'o')\n","plt.xlabel('Iteration')\n","\n","plt.subplot(2, 1, 2)\n","plt.title('Accuracy')\n","plt.plot(solver.train_acc_history, '-o', label='train')\n","plt.plot(solver.val_acc_history, '-o', label='val')\n","plt.plot([0.5] * len(solver.val_acc_history), 'k--')\n","plt.xlabel('Epoch')\n","plt.legend(loc='lower right')\n","plt.gcf().set_size_inches(15, 
12)\n","plt.show()"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"eNyFLT1We1DI"},"source":["# Multilayer network\n","Next you will implement a fully-connected network with an arbitrary number of hidden layers.\n","\n","Read through the `FullyConnectedNet` class.\n","\n","Implement the initialization, the forward pass, and the backward pass. For the moment don't worry about implementing dropout; we will add this feature soon."]},{"cell_type":"code","metadata":{"id":"7p-goSyucyZH"},"source":["class FullyConnectedNet(object):\n","  \"\"\"\n","  A fully-connected neural network with an arbitrary number of hidden layers,\n","  ReLU nonlinearities, and a softmax loss function.\n","  For a network with L layers, the architecture will be:\n","\n","  {linear - relu - [dropout]} x (L - 1) - linear - softmax\n","\n","  where dropout is optional, and the {...} block is repeated L - 1 times.\n","\n","  Similar to the TwoLayerNet above, learnable parameters are stored in the\n","  self.params dictionary and will be learned using the Solver class.\n","  \"\"\"\n","\n","  def __init__(self, hidden_dims, input_dim=3*32*32, num_classes=10,\n","               dropout=0.0, reg=0.0, weight_scale=1e-2, seed=None,\n","               dtype=torch.float, device='cpu'):\n","    \"\"\"\n","    Initialize a new FullyConnectedNet.\n","\n","    Inputs:\n","    - hidden_dims: A list of integers giving the size of each hidden layer.\n","    - input_dim: An integer giving the size of the input.\n","    - num_classes: An integer giving the number of classes to classify.\n","    - dropout: Scalar between 0 and 1 giving the drop probability for networks\n","      with dropout. 
If dropout=0 then the network should not use dropout.\n","    - reg: Scalar giving L2 regularization strength.\n","    - weight_scale: Scalar giving the standard deviation for random\n","      initialization of the weights.\n","    - seed: If not None, then pass this random seed to the dropout layers. This\n","      will make the dropout layers deterministic so we can gradient check the\n","      model.\n","    - dtype: A torch data type object; all computations will be performed using\n","      this datatype. float is faster but less accurate, so you should use\n","      double for numeric gradient checking.\n","    - device: device to use for computation. 'cpu' or 'cuda'\n","    \"\"\"\n","    self.use_dropout = dropout != 0\n","    self.reg = reg\n","    self.num_layers = 1 + len(hidden_dims)\n","    self.dtype = dtype\n","    self.params = {}\n","\n","    ############################################################################\n","    # TODO: Initialize the parameters of the network, storing all values in    #\n","    # the self.params dictionary. Store weights and biases for the first layer #\n","    # in W1 and b1; for the second layer use W2 and b2, etc. Weights should be #\n","    # initialized from a normal distribution centered at 0 with standard       #\n","    # deviation equal to weight_scale. Biases should be initialized to zero.   #\n","    ############################################################################\n","    # Replace \"pass\" statement with your code\n","    pass\n","    ############################################################################\n","    #                             END OF YOUR CODE                             #\n","    ############################################################################\n","\n","    # When using dropout we need to pass a dropout_param dictionary to each\n","    # dropout layer so that the layer knows the dropout probability and the mode\n","    # (train / test). 
You can pass the same dropout_param to each dropout layer.\n","    self.dropout_param = {}\n","    if self.use_dropout:\n","      self.dropout_param = {'mode': 'train', 'p': dropout}\n","      if seed is not None:\n","        self.dropout_param['seed'] = seed\n","\n","\n","  def loss(self, X, y=None):\n","    \"\"\"\n","    Compute loss and gradient for the fully-connected net.\n","    Input / output: Same as TwoLayerNet above.\n","    \"\"\"\n","    X = X.to(self.dtype)\n","    mode = 'test' if y is None else 'train'\n","\n","    # Set train/test mode for batchnorm params and dropout param since they\n","    # behave differently during training and testing.\n","    if self.use_dropout:\n","      self.dropout_param['mode'] = mode\n","    scores = None\n","    ############################################################################\n","    # TODO: Implement the forward pass for the fully-connected net, computing  #\n","    # the class scores for X and storing them in the scores variable.          #\n","    #                                                                          #\n","    # When using dropout, you'll need to pass self.dropout_param to each       #\n","    # dropout forward pass.                                                    #\n","    ############################################################################\n","    # Replace \"pass\" statement with your code\n","    pass\n","    ############################################################################\n","    #                             END OF YOUR CODE                             #\n","    ############################################################################\n","\n","    # If test mode return early\n","    if mode == 'test':\n","      return scores\n","\n","    loss, grads = 0.0, {}\n","    ############################################################################\n","    # TODO: Implement the backward pass for the fully-connected net. 
Store the #\n","    # loss in the loss variable and gradients in the grads dictionary. Compute #\n","    # data loss using softmax, and make sure that grads[k] holds the gradients #\n","    # for self.params[k]. Don't forget to add L2 regularization!               #\n","    # NOTE: To ensure that your implementation matches ours and you pass the   #\n","    # automated tests, make sure that your L2 regularization includes a factor #\n","    # of 0.5 to simplify the expression for the gradient.                      #\n","    ############################################################################\n","    # Replace \"pass\" statement with your code\n","    pass\n","    ############################################################################\n","    #                             END OF YOUR CODE                             #\n","    ############################################################################\n","\n","    return loss, grads"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"3abR1_qhe1DK"},"source":["## Initial loss and gradient check\n","\n","As a sanity check, run the following to check the initial loss and to gradient check the network both with and without regularization. 
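As a rough rule of thumb (an editorial aside, not part of the automated checks): with small random weights the class scores are near zero, so the softmax distribution is close to uniform and the unregularized initial loss should be close to log(C).\n",
"\n",
"```python\n",
"import math\n",
"\n",
"# Near-uniform scores over C classes give a softmax loss of about log(C)\n",
"print(math.log(10))  # roughly 2.3026 for the C = 10 classes used below\n",
"```\n",
"\n",
"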
Do the initial losses seem reasonable?\n","\n","For gradient checking, you should expect to see errors less than `1e-6`, except for the check on `W1` and `W2` with `reg=0` where your errors should be less than `1e-5`."]},{"cell_type":"code","metadata":{"id":"1waPtKRDe1DL"},"source":["fix_random_seed(0)\n","N, D, H1, H2, C = 2, 15, 20, 30, 10\n","X = torch.randn(N, D, **to_double_cuda)\n","y = torch.randint(C, size=(N,), **to_long_cuda)\n","\n","for reg in [0, 3.14]:\n","  print('Running check with reg = ', reg)\n","  model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,\n","                            reg=reg, weight_scale=5e-2, **to_double_cuda)\n","\n","  loss, grads = model.loss(X, y)\n","  print('Initial loss: ', loss.item())\n","\n","  for name in sorted(grads):\n","    f = lambda _: model.loss(X, y)[0]\n","    grad_num = compute_numeric_gradient(f, model.params[name])\n","    print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"-q6aWzNfe1DQ"},"source":["As another sanity check, make sure you can overfit a small dataset of 50 images. First we will try a three-layer network with 100 units in each hidden layer. 
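One handy way to confirm success (a sketch only; the list below stands in for `solver.train_acc_history`, which the `Solver` records during training) is to check whether training accuracy ever reaches 1.0:\n",
"\n",
"```python\n",
"# Stand-in for solver.train_acc_history after solver.train()\n",
"train_acc_history = [0.30, 0.72, 0.96, 1.00]  # illustrative values only\n",
"overfit = max(train_acc_history) >= 1.0\n",
"print(overfit)  # True once the model has memorized the small dataset\n",
"```\n",
"\n",
"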
In the following cell, tweak the **learning rate** and **weight initialization scale** to overfit and achieve 100% training accuracy within 20 epochs."]},{"cell_type":"code","metadata":{"id":"2NccCDJ3e1DR"},"source":["# TODO: Use a three-layer Net to overfit 50 training examples by \n","# tweaking just the learning rate and initialization scale.\n","fix_random_seed(0)\n","\n","num_train = 50\n","small_data = {\n","  'X_train': data_dict['X_train'][:num_train],\n","  'y_train': data_dict['y_train'][:num_train],\n","  'X_val': data_dict['X_val'],\n","  'y_val': data_dict['y_val'],\n","}\n","\n","weight_scale = 1e-2   # Experiment with this!\n","learning_rate = 1e-4  # Experiment with this!\n","############################################################################\n","# TODO: Change weight_scale and learning_rate so your model achieves 100%  #\n","# training accuracy within 20 epochs.                                      #\n","############################################################################\n","# Replace \"pass\" statement with your code\n","pass\n","############################################################################\n","#                             END OF YOUR CODE                             #\n","############################################################################\n","model = FullyConnectedNet([100, 100],\n","              weight_scale=weight_scale, **to_float_cuda)\n","solver = Solver(model, small_data,\n","                print_every=10, num_epochs=20, batch_size=25,\n","                optim_config={\n","                  'learning_rate': learning_rate,\n","                },\n","                device='cuda',\n","         )\n","solver.train()\n","\n","plt.plot(solver.loss_history, 'o')\n","plt.title('Training loss history')\n","plt.xlabel('Iteration')\n","plt.ylabel('Training loss')\n","plt.show()"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"tskjw8VUe1DV"},"source":["Now try to use a 
five-layer network with 100 units on each layer to overfit 50 training examples. Again, you will have to adjust the learning rate and weight initialization scale, but you should be able to achieve 100% training accuracy within 20 epochs."]},{"cell_type":"code","metadata":{"id":"D5mAWrrPe1Dc"},"source":["# TODO: Use a five-layer Net to overfit 50 training examples by \n","# tweaking just the learning rate and initialization scale.\n","fix_random_seed(0)\n","\n","num_train = 50\n","small_data = {\n","  'X_train': data_dict['X_train'][:num_train],\n","  'y_train': data_dict['y_train'][:num_train],\n","  'X_val': data_dict['X_val'],\n","  'y_val': data_dict['y_val'],\n","}\n","\n","learning_rate = 2e-3  # Experiment with this!\n","weight_scale = 1e-5   # Experiment with this!\n","############################################################################\n","# TODO: Change weight_scale and learning_rate so your model achieves 100%  #\n","# training accuracy within 20 epochs.                                      #\n","############################################################################\n","# Replace \"pass\" statement with your code\n","pass\n","############################################################################\n","#                             END OF YOUR CODE                             #\n","############################################################################\n","model = FullyConnectedNet([100, 100, 100, 100],\n","                weight_scale=weight_scale, **to_float_cuda)\n","solver = Solver(model, small_data,\n","                print_every=10, num_epochs=20, batch_size=25,\n","                optim_config={\n","                  'learning_rate': learning_rate,\n","                },\n","                device='cuda',\n","         )\n","solver.train()\n","\n","plt.plot(solver.loss_history, 'o')\n","plt.title('Training loss history')\n","plt.xlabel('Iteration')\n","plt.ylabel('Training 
loss')\n","plt.show()"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"T4eWrnY7e1Di"},"source":["# Update rules\n","So far we have used vanilla stochastic gradient descent (SGD) as our update rule. More sophisticated update rules can make it easier to train deep networks. We will implement a few of the most commonly used update rules and compare them to vanilla SGD."]},{"cell_type":"markdown","metadata":{"id":"zBDJqbeVe1Dn"},"source":["## SGD+Momentum\n","Stochastic gradient descent with momentum is a widely used update rule that tends to make deep networks converge faster than vanilla stochastic gradient descent. See the Momentum Update section at http://cs231n.github.io/neural-networks-3/#sgd for more information.\n","\n","We will implement various first-order update rules that are commonly used\n","for training neural networks. Each update rule accepts current weights and the\n","gradient of the loss with respect to those weights and produces the next set of\n","weights. Each update rule has the same interface:\n","```python\n","def update(w, dw, config=None):\n","Inputs:\n","  - w: A tensor giving the current weights.\n","  - dw: A tensor of the same shape as w giving the gradient of the\n","    loss with respect to w.\n","  - config: A dictionary containing hyperparameter values such as learning\n","    rate, momentum, etc. 
If the update rule requires caching values over many\n","    iterations, then config will also hold these cached values.\n","Returns:\n","  - next_w: The next point after the update.\n","  - config: The config dictionary to be passed to the next iteration of the\n","    update rule.\n","NOTE: For most update rules, the default learning rate will probably not\n","perform well; however the default values of the other hyperparameters should\n","work well for a variety of different problems.\n","For efficiency, update rules may perform in-place updates, mutating w and\n","setting next_w equal to w.\n","```\n","We provide the implementation of the SGD update rule for your reference:"]},{"cell_type":"code","metadata":{"id":"NT42gX8IBU6s"},"source":["def sgd(w, dw, config=None):\n","    \"\"\"\n","    Performs vanilla stochastic gradient descent.\n","    config format:\n","    - learning_rate: Scalar learning rate.\n","    \"\"\"\n","    if config is None: config = {}\n","    config.setdefault('learning_rate', 1e-2)\n","\n","    w -= config['learning_rate'] * dw\n","    return w, config"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"lVbUveEWBXoI"},"source":["Now implement the SGD+Momentum update rule using the same interface:"]},{"cell_type":"code","metadata":{"id":"ym9RjlaRXakL"},"source":["def sgd_momentum(w, dw, config=None):\n","  \"\"\"\n","  Performs stochastic gradient descent with momentum.\n","  config format:\n","  - learning_rate: Scalar learning rate.\n","  - momentum: Scalar between 0 and 1 giving the momentum value.\n","    Setting momentum = 0 reduces to sgd.\n","  - velocity: A tensor of the same shape as w and dw used to store a\n","    moving average of the gradients.\n","  \"\"\"\n","  if config is None: config = {}\n","  config.setdefault('learning_rate', 1e-2)\n","  config.setdefault('momentum', 0.9)\n","  v = config.get('velocity', torch.zeros_like(w))\n","\n","  next_w = None\n","  
#############################################################################\n","  # TODO: Implement the momentum update formula. Store the updated value in   #\n","  # the next_w variable. You should also use and update the velocity v.       #\n","  #############################################################################\n","  # Replace \"pass\" statement with your code\n","  pass\n","  #############################################################################\n","  #                              END OF YOUR CODE                             #\n","  #############################################################################\n","  config['velocity'] = v\n","\n","  return next_w, config"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"9KREnzX_Bgl4"},"source":["Run the following to check your implementation of SGD+Momentum. You should see errors less than `1e-7`."]},{"cell_type":"code","metadata":{"id":"RbQrkNo_e1Dp"},"source":["fix_random_seed(0)\n","\n","N, D = 4, 5\n","w = torch.linspace(-0.4, 0.6, steps=N*D, **to_double_cuda).reshape(N, D)\n","dw = torch.linspace(-0.6, 0.4, steps=N*D, **to_double_cuda).reshape(N, D)\n","v = torch.linspace(0.6, 0.9, steps=N*D, **to_double_cuda).reshape(N, D)\n","\n","config = {'learning_rate': 1e-3, 'velocity': v}\n","next_w, _ = sgd_momentum(w, dw, config=config)\n","\n","expected_next_w = torch.tensor([\n","  [ 0.1406,      0.20738947,  0.27417895,  0.34096842,  0.40775789],\n","  [ 0.47454737,  0.54133684,  0.60812632,  0.67491579,  0.74170526],\n","  [ 0.80849474,  0.87528421,  0.94207368,  1.00886316,  1.07565263],\n","  [ 1.14244211,  1.20923158,  1.27602105,  1.34281053,  1.4096    ]],\n","   **to_double_cuda)\n","expected_velocity = torch.tensor([\n","  [ 0.5406,      0.55475789,  0.56891579, 0.58307368,  0.59723158],\n","  [ 0.61138947,  0.62554737,  0.63970526,  0.65386316,  0.66802105],\n","  [ 0.68217895,  0.69633684,  0.71049474,  0.72465263,  0.73881053],\n","  [ 0.75296842,  
0.76712632,  0.78128421,  0.79544211,  0.8096    ]],\n","   **to_double_cuda)\n","\n","# Should see relative errors around e-8 or less\n","print('next_w error: ', rel_error(next_w, expected_next_w))\n","print('velocity error: ', rel_error(expected_velocity, config['velocity']))"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"7QQj73zje1D2"},"source":["Once you have done so, run the following to train a six-layer network with both SGD and SGD+momentum. You should see the SGD+momentum update rule converge faster."]},{"cell_type":"code","metadata":{"id":"qXdMNC9Ve1D4"},"source":["num_train = 4000\n","small_data = {\n","  'X_train': data_dict['X_train'][:num_train],\n","  'y_train': data_dict['y_train'][:num_train],\n","  'X_val': data_dict['X_val'],\n","  'y_val': data_dict['y_val'],\n","}\n","\n","solvers = {}\n","\n","for update_rule_name, update_rule_fn in [('sgd', sgd), ('sgd_momentum', sgd_momentum)]:\n","  print('running with ', update_rule_name)\n","  model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2,\n","                            **to_float_cuda)\n","\n","  solver = Solver(model, small_data,\n","                  num_epochs=5, batch_size=100,\n","                  update_rule=update_rule_fn,\n","                  optim_config={\n","                    'learning_rate': 5e-2,\n","                  },\n","                  print_every=1000,\n","                  verbose=True,\n","                  device='cuda')\n","  solvers[update_rule_name] = solver\n","  solver.train()\n","  print()\n","  \n","plt.subplot(3, 1, 1)\n","plt.title('Training loss')\n","plt.xlabel('Iteration')\n","for update_rule, solver in solvers.items():\n","  plt.plot(solver.loss_history, 'o', label=\"loss_%s\" % update_rule)\n","plt.legend(loc='upper center', ncol=4)\n","  \n","plt.subplot(3, 1, 2)\n","plt.title('Training accuracy')\n","plt.xlabel('Epoch')\n","for update_rule, solver in solvers.items():\n","  
plt.plot(solver.train_acc_history, '-o', label=\"train_acc_%s\" % update_rule)\n","plt.legend(loc='upper center', ncol=4)\n","\n","  \n","plt.subplot(3, 1, 3)\n","plt.title('Validation accuracy')\n","plt.xlabel('Epoch')\n","for update_rule, solver in solvers.items():\n","  plt.plot(solver.val_acc_history, '-o', label=\"val_acc_%s\" % update_rule)\n","plt.legend(loc='upper center', ncol=4)\n","\n","plt.gcf().set_size_inches(15, 15)\n","plt.show()"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"wYtKqDdEe1D-"},"source":["## RMSProp\n","RMSProp [1] is an update rule that sets per-parameter learning rates by using a running average of the second moments of gradients.\n","\n","Implement the RMSProp update rule in the `rmsprop` function below:\n","\n","[1] Tijmen Tieleman and Geoffrey Hinton. \"Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude.\" COURSERA: Neural Networks for Machine Learning 4 (2012)."]},{"cell_type":"code","metadata":{"id":"Zy_OhbVEYVqz"},"source":["def rmsprop(w, dw, config=None):\n","  \"\"\"\n","  Uses the RMSProp update rule, which uses a moving average of squared\n","  gradient values to set adaptive per-parameter learning rates.\n","  config format:\n","  - learning_rate: Scalar learning rate.\n","  - decay_rate: Scalar between 0 and 1 giving the decay rate for the squared\n","    gradient cache.\n","  - epsilon: Small scalar used for smoothing to avoid dividing by zero.\n","  - cache: Moving average of second moments of gradients.\n","  \"\"\"\n","  if config is None: config = {}\n","  config.setdefault('learning_rate', 1e-2)\n","  config.setdefault('decay_rate', 0.99)\n","  config.setdefault('epsilon', 1e-8)\n","  config.setdefault('cache', torch.zeros_like(w))\n","\n","  next_w = None\n","  ###########################################################################\n","  # TODO: Implement the RMSprop update formula, storing the next value of w #\n","  # in the next_w variable. 
Don't forget to update cache value stored in    #\n","  # config['cache'].                                                        #\n","  ###########################################################################\n","  # Replace \"pass\" statement with your code\n","  pass\n","  ###########################################################################\n","  #                             END OF YOUR CODE                            #\n","  ###########################################################################\n","\n","  return next_w, config"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"izr7eqIxDrZs"},"source":["Run the following to test your RMSProp implementation. You should see errors less than `1e-6`."]},{"cell_type":"code","metadata":{"id":"RBBpJhJie1D_"},"source":["# Test RMSProp implementation\n","fix_random_seed(0)\n","\n","N, D = 4, 5\n","w = torch.linspace(-0.4, 0.6, steps=N*D, **to_double_cuda).reshape(N, D)\n","dw = torch.linspace(-0.6, 0.4, steps=N*D, **to_double_cuda).reshape(N, D)\n","cache = torch.linspace(0.6, 0.9, steps=N*D, **to_double_cuda).reshape(N, D)\n","\n","config = {'learning_rate': 1e-2, 'cache': cache}\n","next_w, _ = rmsprop(w, dw, config=config)\n","\n","expected_next_w = torch.tensor([\n","  [-0.39223849, -0.34037513, -0.28849239, -0.23659121, -0.18467247],\n","  [-0.132737,   -0.08078555, -0.02881884,  0.02316247,  0.07515774],\n","  [ 0.12716641,  0.17918792,  0.23122175,  0.28326742,  0.33532447],\n","  [ 0.38739248,  0.43947102,  0.49155973,  0.54365823,  0.59576619]],\n","   **to_double_cuda)\n","expected_cache = torch.tensor([\n","  [ 0.5976,      0.6126277,   0.6277108,   0.64284931,  0.65804321],\n","  [ 0.67329252,  0.68859723,  0.70395734,  0.71937285,  0.73484377],\n","  [ 0.75037008,  0.7659518,   0.78158892,  0.79728144,  0.81302936],\n","  [ 0.82883269,  0.84469141,  0.86060554,  0.87657507,  0.8926    ]],\n","   **to_double_cuda)\n","\n","print('next_w error: ', 
rel_error(expected_next_w, next_w))\n","print('cache error: ', rel_error(expected_cache, config['cache']))"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"bMdq7WRFDiJw"},"source":["## Adam\n","Adam [2] extends RMSprop with a first-order gradient cache similar to momentum, and a bias correction mechanism to prevent large steps at the start of optimization. Adam is one of the most commonly used update rules in practice for training deep neural networks.\n","\n","Implement the Adam update rule in the `adam` function below:\n","\n","**NOTE:** Please implement the _complete_ Adam update rule (with the bias correction mechanism), not the first simplified version mentioned in the course notes. \n","\n","[2] Diederik Kingma and Jimmy Ba, \"Adam: A Method for Stochastic Optimization\", ICLR 2015."]},{"cell_type":"code","metadata":{"id":"GBXOxGn-DK1E"},"source":["def adam(w, dw, config=None):\n","  \"\"\"\n","  Uses the Adam update rule, which incorporates moving averages of both the\n","  gradient and its square and a bias correction term.\n","  config format:\n","  - learning_rate: Scalar learning rate.\n","  - beta1: Decay rate for moving average of first moment of gradient.\n","  - beta2: Decay rate for moving average of second moment of gradient.\n","  - epsilon: Small scalar used for smoothing to avoid dividing by zero.\n","  - m: Moving average of gradient.\n","  - v: Moving average of squared gradient.\n","  - t: Iteration number.\n","  \"\"\"\n","  if config is None: config = {}\n","  config.setdefault('learning_rate', 1e-3)\n","  config.setdefault('beta1', 0.9)\n","  config.setdefault('beta2', 0.999)\n","  config.setdefault('epsilon', 1e-8)\n","  config.setdefault('m', torch.zeros_like(w))\n","  config.setdefault('v', torch.zeros_like(w))\n","  config.setdefault('t', 0)\n","\n","  next_w = None\n","  #############################################################################\n","  # TODO: Implement the Adam update formula, 
storing the next value of w in   #\n","  # the next_w variable. Don't forget to update the m, v, and t variables     #\n","  # stored in config.                                                         #\n","  #                                                                           #\n","  # NOTE: In order to match the reference output, please modify t _before_    #\n","  # using it in any calculations.                                             #\n","  #############################################################################\n","  # Replace \"pass\" statement with your code\n","  pass\n","  #############################################################################\n","  #                              END OF YOUR CODE                             #\n","  #############################################################################\n","\n","  return next_w, config"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"VIEwAIYCEOnO"},"source":["Run the following to test your Adam implementation. 
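For intuition on why `t` matters (an illustrative aside, not one of the graded checks): both moment estimates start at zero, and dividing by `1 - beta ** t` undoes that initialization bias. At `t = 1` the corrected estimates recover the raw gradient statistics exactly:\n",
"\n",
"```python\n",
"beta1, beta2, dw = 0.9, 0.999, 0.5\n",
"m = (1 - beta1) * dw       # first moment after one step, starting from 0\n",
"v = (1 - beta2) * dw ** 2  # second moment after one step\n",
"t = 1                      # incremented before use, as the NOTE above says\n",
"m_hat = m / (1 - beta1 ** t)\n",
"v_hat = v / (1 - beta2 ** t)\n",
"print(m_hat, v_hat)  # recovers dw and dw ** 2 (up to float rounding)\n",
"```\n",
"\n",
"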
You should see error less than `1e-6` for `next_w`, and errors less than `1e-8` for `v` and `m`:"]},{"cell_type":"code","metadata":{"id":"ovUXV51Le1EE"},"source":["# Test Adam implementation\n","fix_random_seed(0)\n","\n","N, D = 4, 5\n","w = torch.linspace(-0.4, 0.6, steps=N*D, **to_double_cuda).reshape(N, D)\n","dw = torch.linspace(-0.6, 0.4, steps=N*D, **to_double_cuda).reshape(N, D)\n","m = torch.linspace(0.6, 0.9, steps=N*D, **to_double_cuda).reshape(N, D)\n","v = torch.linspace(0.7, 0.5, steps=N*D, **to_double_cuda).reshape(N, D)\n","\n","config = {'learning_rate': 1e-2, 'm': m, 'v': v, 't': 5}\n","next_w, _ = adam(w, dw, config=config)\n","\n","expected_next_w = torch.tensor([\n","  [-0.40094747, -0.34836187, -0.29577703, -0.24319299, -0.19060977],\n","  [-0.1380274,  -0.08544591, -0.03286534,  0.01971428,  0.0722929],\n","  [ 0.1248705,   0.17744702,  0.23002243,  0.28259667,  0.33516969],\n","  [ 0.38774145,  0.44031188,  0.49288093,  0.54544852,  0.59801459]],\n","   **to_double_cuda)\n","expected_v = torch.tensor([\n","  [ 0.69966,     0.68908382,  0.67851319,  0.66794809,  0.65738853,],\n","  [ 0.64683452,  0.63628604,  0.6257431,   0.61520571,  0.60467385,],\n","  [ 0.59414753,  0.58362676,  0.57311152,  0.56260183,  0.55209767,],\n","  [ 0.54159906,  0.53110598,  0.52061845,  0.51013645,  0.49966,   ]],\n","   **to_double_cuda)\n","expected_m = torch.tensor([\n","  [ 0.48,        0.49947368,  0.51894737,  0.53842105,  0.55789474],\n","  [ 0.57736842,  0.59684211,  0.61631579,  0.63578947,  0.65526316],\n","  [ 0.67473684,  0.69421053,  0.71368421,  0.73315789,  0.75263158],\n","  [ 0.77210526,  0.79157895,  0.81105263,  0.83052632,  0.85      ]],\n","   **to_double_cuda)\n","\n","# You should see relative errors around e-7 or less\n","print('next_w error: ', rel_error(expected_next_w, next_w))\n","print('v error: ', rel_error(expected_v, config['v']))\n","print('m error: ', rel_error(expected_m, 
config['m']))"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"1T_qzgxte1EI"},"source":["Once you have debugged your RMSProp and Adam implementations, run the following to train a pair of deep networks using these new update rules:"]},{"cell_type":"code","metadata":{"id":"6TFopQgre1EJ"},"source":["for update_rule_name, update_rule_fn, learning_rate in [('adam', adam, 1e-3), ('rmsprop', rmsprop, 1e-4)]:\n","  print('running with ', update_rule_name)\n","  model = FullyConnectedNet([100, 100, 100, 100, 100], weight_scale=5e-2, device='cuda')\n","\n","  solver = Solver(model, small_data,\n","                  num_epochs=5, batch_size=100,\n","                  update_rule=update_rule_fn,\n","                  optim_config={\n","                    'learning_rate': learning_rate\n","                  },\n","                  print_every=1000,\n","                  verbose=True, device='cuda')\n","  solvers[update_rule_name] = solver\n","  solver.train()\n","  print()\n","\n","plt.subplot(3, 1, 1)\n","plt.title('Training loss')\n","plt.xlabel('Iteration')\n","for update_rule, solver in list(solvers.items()):\n","  plt.plot(solver.loss_history, 'o', label=update_rule)\n","plt.legend(loc='upper center', ncol=4)\n","  \n","plt.subplot(3, 1, 2)\n","plt.title('Training accuracy')\n","plt.xlabel('Epoch')\n","for update_rule, solver in list(solvers.items()):\n","  plt.plot(solver.train_acc_history, '-o', label=update_rule)\n","plt.legend(loc='upper center', ncol=4)\n","\n","plt.subplot(3, 1, 3)\n","plt.title('Validation accuracy')\n","plt.xlabel('Epoch')\n","for update_rule, solver in list(solvers.items()):\n","  plt.plot(solver.val_acc_history, '-o', label=update_rule)\n","plt.legend(loc='upper center', ncol=4)\n","\n","plt.gcf().set_size_inches(15, 15)\n","plt.show()"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"C2_BL-2TwxKR","tags":["pdf-title"]},"source":["# Dropout\n","Dropout [1] is a technique for 
regularizing neural networks by randomly setting some output activations to zero during the forward pass. In this exercise you will implement a dropout layer and modify your fully-connected network to optionally use dropout.\n","\n","[1] [Geoffrey E. Hinton et al., \"Improving neural networks by preventing co-adaptation of feature detectors\", arXiv 2012](https://arxiv.org/abs/1207.0580)"]},{"cell_type":"code","metadata":{"id":"NlSCDbeLZEhq"},"source":["class Dropout(object):\n","\n","  @staticmethod\n","  def forward(x, dropout_param):\n","    raise NotImplementedError\n","\n","  @staticmethod\n","  def backward(dout, cache):\n","    raise NotImplementedError"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"s68cb0QBwxKj"},"source":["## Dropout: forward\n","Implement the forward pass for dropout. Since dropout behaves differently during training and testing, make sure to implement the operation for both modes.\n","\n","Once you have done so, run the cell below to test your implementation."]},{"cell_type":"code","metadata":{"id":"Vt_V_ICfY38-"},"source":["def dropout_forward(x, dropout_param):\n","  \"\"\"\n","  Performs the forward pass for (inverted) dropout.\n","  Inputs:\n","  - x: Input data: tensor of any shape\n","  - dropout_param: A dictionary with the following keys:\n","    - p: Dropout parameter. We *drop* each neuron output with probability p.\n","    - mode: 'test' or 'train'. If the mode is train, then perform dropout;\n","    if the mode is test, then just return the input.\n","    - seed: Seed for the random number generator. Passing seed makes this\n","    function deterministic, which is needed for gradient checking but not\n","    in real networks.\n","  Outputs:\n","  - out: Tensor of the same shape as x.\n","  - cache: tuple (dropout_param, mask). 
In training mode, mask is the dropout\n","    mask that was used to multiply the input; in test mode, mask is None.\n","  NOTE: Please implement **inverted** dropout, not the vanilla version of dropout.\n","  See http://cs231n.github.io/neural-networks-2/#reg for more details.\n","  NOTE 2: Keep in mind that p is the probability of **dropping** a neuron\n","  output; this might be contrary to some sources, where it is referred to\n","  as the probability of keeping a neuron output.\n","  \"\"\"\n","  p, mode = dropout_param['p'], dropout_param['mode']\n","  if 'seed' in dropout_param:\n","    torch.manual_seed(dropout_param['seed'])\n","\n","  mask = None\n","  out = None\n","\n","  if mode == 'train':\n","    ###########################################################################\n","    # TODO: Implement training phase forward pass for inverted dropout.       #\n","    # Store the dropout mask in the mask variable.                            #\n","    ###########################################################################\n","    # Replace \"pass\" statement with your code\n","    pass\n","    ###########################################################################\n","    #                             END OF YOUR CODE                            #\n","    ###########################################################################\n","  elif mode == 'test':\n","    ###########################################################################\n","    # TODO: Implement the test phase forward pass for inverted dropout.       
#\n","    ###########################################################################\n","    # Replace \"pass\" statement with your code\n","    pass\n","    ###########################################################################\n","    #                             END OF YOUR CODE                            #\n","    ###########################################################################\n","\n","  cache = (dropout_param, mask)\n","\n","  return out, cache\n","\n","Dropout.forward = dropout_forward"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"CFns3S76Gxy_"},"source":["Run the following to test your dropout implementation. The mean of the output should be approximately the same during training and testing. During training the fraction of outputs set to zero should be approximately equal to the drop probability `p`, and during testing no outputs should be set to zero."]},{"cell_type":"code","metadata":{"id":"vFAmI9VxwxKk"},"source":["fix_random_seed(0)\n","x = torch.randn(500, 500, **to_double_cuda) + 10\n","\n","for p in [0.25, 0.4, 0.7]:\n","  out, _ = Dropout.forward(x, {'mode': 'train', 'p': p})\n","  out_test, _ = Dropout.forward(x, {'mode': 'test', 'p': p})\n","\n","  print('Running tests with p = ', p)\n","  print('Mean of input: ', x.mean().item())\n","  print('Mean of train-time output: ', out.mean().item())\n","  print('Mean of test-time output: ', out_test.mean().item())\n","  print('Fraction of train-time output set to zero: ', (out == 0).type(torch.float32).mean().item())\n","  print('Fraction of test-time output set to zero: ', (out_test == 0).type(torch.float32).mean().item())\n","  print()"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"dt2BpwxswxKn"},"source":["## Dropout: backward\n","Implement the backward pass for dropout. 
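In inverted dropout, the backward pass reuses the mask from the forward pass: gradients flow only through the kept units, with the same `1/(1-p)` scaling. A minimal standalone sketch of this idea (illustrative names, independent of the functions above):\n","\n","```python\n","import torch\n","\n","p = 0.3                                    # probability of *dropping* a unit\n","x = torch.randn(4, 5)\n","mask = (torch.rand_like(x) > p) / (1 - p)  # zero w.p. p, else 1/(1-p)\n","out = x * mask                             # train-time forward\n","\n","dout = torch.randn_like(x)\n","dx = dout * mask                           # backward: same mask, same scaling\n","```\n","\n","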
After doing so, run the following cell to numerically gradient-check your implementation."]},{"cell_type":"code","metadata":{"id":"YEFzMPC7ZuUt"},"source":["def dropout_backward(dout, cache):\n","  \"\"\"\n","  Perform the backward pass for (inverted) dropout.\n","  Inputs:\n","  - dout: Upstream derivatives, of any shape\n","  - cache: (dropout_param, mask) from Dropout.forward.\n","  Outputs:\n","  - dx: Gradient with respect to x, of the same shape as dout.\n","  \"\"\"\n","  dropout_param, mask = cache\n","  mode = dropout_param['mode']\n","\n","  dx = None\n","  if mode == 'train':\n","    ###########################################################################\n","    # TODO: Implement training phase backward pass for inverted dropout       #\n","    ###########################################################################\n","    # Replace \"pass\" statement with your code\n","    pass\n","    ###########################################################################\n","    #                            END OF YOUR CODE                             #\n","    ###########################################################################\n","  elif mode == 'test':\n","    dx = dout\n","  return dx\n","\n","Dropout.backward = dropout_backward"],"execution_count":null,"outputs":[]},{"cell_type":"code","metadata":{"id":"3uctLwyIwxKo"},"source":["fix_random_seed(0)\n","x = torch.randn(10, 10, **to_double_cuda) + 10\n","dout = torch.randn_like(x)\n","\n","dropout_param = {'mode': 'train', 'p': 0.2, 'seed': 0}\n","out, cache = Dropout.forward(x, dropout_param)\n","dx = Dropout.backward(dout, cache)\n","dx_num = compute_numeric_gradient(lambda xx: Dropout.forward(xx, dropout_param)[0], x, dout)\n","\n","# Error should be around e-10 or less\n","print('dx relative error: ', rel_error(dx, dx_num))"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"OLzMLx-iwxKs"},"source":["# Fully-connected nets with dropout\n","Modify [your implementation](#scrollTo=7p-goSyucyZH) of `FullyConnectedNet` to use dropout. 
Specifically, if the constructor of the network receives a value that is not 0 for the `dropout` parameter, then the net should add a dropout layer immediately after every ReLU nonlinearity.\n","\n","After doing so, run the following to numerically gradient-check your implementation. You should see errors less than `1e-5`, and different dropout rates should result in different error values."]},{"cell_type":"code","metadata":{"id":"18ugsX0iwxKu"},"source":["fix_random_seed(0)\n","N, D, H1, H2, C = 2, 15, 20, 30, 10\n","X = torch.randn(N, D, **to_double_cuda)\n","y = torch.randint(C, size=(N,), **to_long_cuda)\n","\n","for dropout in [0, 0.25, 0.5]:\n","  print('Running check with dropout = ', dropout)\n","  model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,\n","                            weight_scale=5e-2, dropout=dropout,\n","                            seed=0, **to_double_cuda)\n","\n","  loss, grads = model.loss(X, y)\n","  print('Initial loss: ', loss.item())\n","  \n","  # Relative errors should be around e-5 or less.\n","  for name in sorted(grads):\n","    f = lambda _: model.loss(X, y)[0]\n","    grad_num = compute_numeric_gradient(f, model.params[name])\n","    print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))\n","  print()"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"dmhrgg5hwxKy"},"source":["## Regularization experiment\n","To get a sense of the way that dropout can regularize a neural network, we will train three different two-layer networks:\n","\n","1. Hidden size 256, dropout = 0\n","2. Hidden size 512, dropout = 0\n","3. 
Hidden size 512, dropout = 0.5\n","\n","We will then visualize the training and validation accuracies of these three networks."]},{"cell_type":"code","metadata":{"id":"N6srh4BLwxKz"},"source":["# Train three networks: two without dropout and one with dropout\n","fix_random_seed(0)\n","num_train = 20000\n","small_data = {\n","  'X_train': data_dict['X_train'][:num_train],\n","  'y_train': data_dict['y_train'][:num_train],\n","  'X_val': data_dict['X_val'],\n","  'y_val': data_dict['y_val'],\n","}\n","\n","solvers = {}\n","dropout_choices = [0, 0, 0.5]\n","width_choices = [256, 512, 512]\n","for dropout, width in zip(dropout_choices, width_choices):\n","  model = FullyConnectedNet([width], dropout=dropout, **to_float_cuda)\n","  print('Training a model with dropout=%.2f and width=%d' % (dropout, width))\n","\n","  solver = Solver(model, small_data,\n","                  num_epochs=100, batch_size=512,\n","                  update_rule=adam,\n","                  optim_config={\n","                    'learning_rate': 5e-3,\n","                  },\n","                  print_every=100000, print_acc_every=10,\n","                  verbose=True, device='cuda')\n","  solver.train()\n","  solvers[(dropout, width)] = solver\n","  print()"],"execution_count":null,"outputs":[]},{"cell_type":"markdown","metadata":{"id":"mjdKh3apRtX4"},"source":["If everything worked as expected, you should see that the network with dropout has lower training accuracies than the networks without dropout, but that it achieves higher validation accuracies.\n","\n","You should also see that a network with width 512 and dropout 0.5 achieves higher validation accuracies than a network with width 256 and no dropout. 
This demonstrates that reducing the model size is not generally an effective regularization strategy -- it's often better to use a larger model with explicit regularization."]},{"cell_type":"code","metadata":{"id":"aCDhFCR0wxK2"},"source":["plt.subplot(3, 1, 1)\n","for (dropout, width), solver in solvers.items():\n","  train_acc = solver.train_acc_history\n","  label = 'dropout=%.2f, width=%d' % (dropout, width)\n","  plt.plot(train_acc, 'o', label=label)\n","plt.title('Train accuracy')\n","plt.xlabel('Epoch')\n","plt.ylabel('Accuracy')\n","plt.legend(ncol=2, loc='lower right')\n","  \n","plt.subplot(3, 1, 2)\n","for (dropout, width), solver in solvers.items():\n","  val_acc = solver.val_acc_history\n","  label = 'dropout=%.2f, width=%d' % (dropout, width)\n","  plt.plot(val_acc, 'o', label=label)\n","plt.ylim(0.4, 0.52)\n","plt.title('Val accuracy')\n","plt.xlabel('Epoch')\n","plt.ylabel('Accuracy')\n","plt.legend(ncol=2, loc='lower right')\n","\n","plt.gcf().set_size_inches(15, 15)\n","plt.show()"],"execution_count":null,"outputs":[]}]}