{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "tags": [
     "pdf-title"
    ]
   },
   "source": [
     "# What's this?\n",
     "In this assignment you have written a lot of code to implement entire neural network functionality from scratch. Dropout, batch normalization, and 2D convolutions are some of the workhorses of deep learning in computer vision. You have also worked hard to make your code efficient and vectorized.\n",
     "For the last part of this assignment, though, we're going to leave behind your beautiful codebase and instead migrate to one of two popular deep learning frameworks: in this instance, PyTorch (or TensorFlow, if you choose to use that notebook)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "tags": [
     "pdf-ignore"
    ]
   },
   "source": [
     "### What is PyTorch?\n",
     "PyTorch is a system for executing dynamic computational graphs over Tensor objects that behave similarly to numpy ndarrays. It comes with a powerful automatic differentiation engine that removes the need for manual back-propagation.\n",
     "### Why?\n",
     "* Our code will now run on GPUs! Much faster training. When using a framework like PyTorch or TensorFlow you can harness the power of the GPU for your own custom neural network architectures without having to write CUDA code directly (which is beyond the scope of this class).\n",
     "* We want you to be ready to use one of these frameworks for your projects so you can experiment more efficiently than if you were writing every feature you want to use by hand.\n",
     "* We want you to stand on the shoulders of giants! TensorFlow and PyTorch are both excellent frameworks that will make your life a lot easier, and now that you understand their guts, you are free to use them :)\n",
     "* We want you to be exposed to the sort of deep learning code you might run into in academia or industry.\n",
     "\n",
     "### PyTorch versions\n",
     "This notebook assumes that you are using **PyTorch 1.0**. In some previous versions (e.g. before 0.4), Tensors had to be wrapped in Variable objects to be used in autograd; however Variables have now been deprecated. In addition, 1.0 separates a Tensor's datatype from its device, and uses numeric-type factories to construct Tensors rather than calling the Tensor constructor directly."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "tags": [
     "pdf-ignore"
    ]
   },
   "source": [
    "## How will I learn PyTorch?\n",
    "\n",
     "Justin Johnson has made an excellent [tutorial](https://github.com/jcjohnson/pytorch-examples) for PyTorch.\n",
     "\n",
     "You can also find the detailed [API doc](http://pytorch.org/docs/stable/index.html) here. If you have other questions that are not addressed by the API docs, the [PyTorch forum](https://discuss.pytorch.org/) is a much better place to ask than StackOverflow.\n",
     "\n",
     "\n",
     "# Table of Contents\n",
     "\n",
     "This assignment has 5 parts. You will learn PyTorch on **three different levels of abstraction**, which will help you understand it better and prepare you for the final project.\n",
     "1. Part I, Preparation: we will use the CIFAR-10 dataset.\n",
     "2. Part II, Barebones PyTorch: **Abstraction level 1**, we will work directly with the lowest-level PyTorch Tensors.\n",
     "3. Part III, PyTorch Module API: **Abstraction level 2**, we will use `nn.Module` to define arbitrary neural network architectures.\n",
     "4. Part IV, PyTorch Sequential API: **Abstraction level 3**, we will use `nn.Sequential` to define a linear feed-forward network very conveniently.\n",
     "5. Part V, CIFAR-10 open-ended challenge: please implement your own network to get as high an accuracy as possible on CIFAR-10. You can experiment with any layers, optimizers, hyperparameters, or other advanced features.\n",
     "\n",
     "Here is a table of comparison:\n",
    "\n",
    "| API           | Flexibility | Convenience |\n",
    "|---------------|-------------|-------------|\n",
    "| Barebone      | High        | Low         |\n",
    "| `nn.Module`     | High        | Medium      |\n",
    "| `nn.Sequential` | Low         | High        |"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# Part I. Preparation\n",
     "First, we load the CIFAR-10 dataset. This might take a couple minutes the first time you do it, but the files should stay cached after that.\n",
     "\n",
     "In previous parts of the assignment we had to write our own code to download the CIFAR-10 dataset, preprocess it, and iterate through it in minibatches; PyTorch provides convenient tools to automate this process for us."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 50,
   "metadata": {
    "tags": [
     "pdf-ignore"
    ]
   },
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.optim as optim\n",
    "from torch.utils.data import DataLoader\n",
    "from torch.utils.data import sampler\n",
    "\n",
    "import torchvision.datasets as dset\n",
    "import torchvision.transforms as T\n",
    "\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "%matplotlib inline\n",
    "plt.rcParams['figure.figsize'] = (12.0, 9.0) # set default size of plots\n",
    "plt.rcParams['image.interpolation'] = 'nearest'\n",
    "plt.rcParams['image.cmap'] = 'gray'\n",
    "\n",
     "# for auto-reloading external modules\n",
    "# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n",
    "%load_ext autoreload\n",
    "%autoreload 2"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 51,
   "metadata": {
    "tags": [
     "pdf-ignore"
    ]
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Files already downloaded and verified\n",
      "Files already downloaded and verified\n",
      "Files already downloaded and verified\n"
     ]
    }
   ],
   "source": [
    "NUM_TRAIN = 49000\n",
    "\n",
    "# The torchvision.transforms package provides tools for preprocessing data\n",
    "# and for performing data augmentation; here we set up a transform to\n",
    "# preprocess the data by subtracting the mean RGB value and dividing by the\n",
    "# standard deviation of each RGB value; we've hardcoded the mean and std.\n",
    "transform = T.Compose([\n",
    "                T.ToTensor(),\n",
    "                T.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))\n",
    "            ])\n",
    "\n",
    "# We set up a Dataset object for each split (train / val / test); Datasets load\n",
    "# training examples one at a time, so we wrap each Dataset in a DataLoader which\n",
    "# iterates through the Dataset and forms minibatches. We divide the CIFAR-10\n",
    "# training set into train and val sets by passing a Sampler object to the\n",
    "# DataLoader telling how it should sample from the underlying Dataset.\n",
    "cifar10_train = dset.CIFAR10('./cs231n/datasets', train=True, download=True,\n",
    "                             transform=transform)\n",
    "loader_train = DataLoader(cifar10_train, batch_size=64, \n",
    "                          sampler=sampler.SubsetRandomSampler(range(NUM_TRAIN)))\n",
    "\n",
    "cifar10_val = dset.CIFAR10('./cs231n/datasets', train=True, download=True,\n",
    "                           transform=transform)\n",
    "loader_val = DataLoader(cifar10_val, batch_size=64, \n",
    "                        sampler=sampler.SubsetRandomSampler(range(NUM_TRAIN, 50000)))\n",
    "\n",
    "cifar10_test = dset.CIFAR10('./cs231n/datasets', train=False, download=True, \n",
    "                            transform=transform)\n",
    "loader_test = DataLoader(cifar10_test, batch_size=64)"
   ]
  },
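  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The same Dataset / Sampler / DataLoader pattern works for any data, not just CIFAR-10. As a minimal sketch on synthetic data (the shapes, sizes, and split points here are made up purely for illustration):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "from torch.utils.data import DataLoader, TensorDataset, sampler\n",
    "\n",
    "# Hypothetical stand-in for CIFAR-10: 100 fake 3x32x32 images with labels.\n",
    "images = torch.randn(100, 3, 32, 32)\n",
    "labels = torch.randint(0, 10, (100,))\n",
    "dataset = TensorDataset(images, labels)\n",
    "\n",
    "# Same train/val split pattern as above: one SubsetRandomSampler per split.\n",
    "loader_train = DataLoader(dataset, batch_size=16,\n",
    "                          sampler=sampler.SubsetRandomSampler(range(80)))\n",
    "loader_val = DataLoader(dataset, batch_size=16,\n",
    "                        sampler=sampler.SubsetRandomSampler(range(80, 100)))\n",
    "\n",
    "x, y = next(iter(loader_train))\n",
    "print(x.shape, y.shape)  # torch.Size([16, 3, 32, 32]) torch.Size([16])\n",
    "```"
   ]
  },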
  {
   "cell_type": "markdown",
   "metadata": {
    "tags": [
     "pdf-ignore"
    ]
   },
   "source": [
     "You have an option to **use GPU by setting the flag to True below**. It is not necessary to use a GPU for this assignment. Note that if your computer does not have CUDA enabled, `torch.cuda.is_available()` will return False and this notebook will fall back to CPU mode.\n",
     "\n",
     "The global variables `dtype` and `device` will control the data types throughout this assignment."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 52,
   "metadata": {
    "tags": [
     "pdf-ignore-input"
    ]
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "using device: cuda\n"
     ]
    }
   ],
   "source": [
    "USE_GPU = True\n",
    "\n",
    "dtype = torch.float32 # we will be using float throughout this tutorial\n",
    "\n",
    "if USE_GPU and torch.cuda.is_available():\n",
    "    device = torch.device('cuda')\n",
    "else:\n",
    "    device = torch.device('cpu')\n",
    "\n",
    "# Constant to control how frequently we print train loss\n",
    "print_every = 100\n",
    "\n",
    "print('using device:', device)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# Part II. Barebones PyTorch\n",
     "\n",
     "PyTorch ships with high-level APIs to help us define model architectures conveniently, which we will cover in Part III of this tutorial. In this section, we will start with the barebones PyTorch elements to understand the autograd engine better. After this exercise, you will come to appreciate the high-level model API more.\n",
     "\n",
     "We will start with a simple fully-connected ReLU network with two layers and no biases for CIFAR classification.\n",
     "This implementation computes the forward pass using operations on PyTorch Tensors, and uses PyTorch autograd to compute gradients. It is important that you understand every line, because you will write a harder version after the example.\n",
     "\n",
     "When we create a PyTorch Tensor with `requires_grad=True`, operations involving that Tensor will not just compute values; they will also build up a computational graph in the background, allowing us to easily backpropagate through the graph to compute gradients of some Tensors with respect to a downstream loss. Concretely, if x is a Tensor with `x.requires_grad == True`, then after backpropagation `x.grad` will be another Tensor holding the gradient of x with respect to the final scalar loss."
   ]
  },
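  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a tiny illustration of this mechanism (a made-up scalar loss, not part of the assignment):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "x = torch.tensor([1.0, 2.0, 3.0])\n",
    "w = torch.tensor([4.0, 5.0, 6.0], requires_grad=True)\n",
    "\n",
    "loss = (x * w).sum()   # builds a computational graph because w requires grad\n",
    "loss.backward()        # backpropagate to every Tensor with requires_grad=True\n",
    "\n",
    "print(w.grad)  # d(loss)/dw = x, i.e. tensor([1., 2., 3.])\n",
    "```"
   ]
  },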
  {
   "cell_type": "markdown",
   "metadata": {
    "tags": [
     "pdf-ignore"
    ]
   },
   "source": [
     "### PyTorch Tensors: Flatten Function\n",
     "A PyTorch Tensor is conceptually similar to a numpy array: it is an n-dimensional grid of numbers, and like numpy, PyTorch provides many functions to efficiently operate on Tensors. As a simple example, we provide a `flatten` function below which reshapes image data for use in a fully-connected neural network.\n",
     "Recall that image data is typically stored in a Tensor of shape N x C x H x W, where:\n",
     "* N is the number of datapoints\n",
     "* C is the number of channels\n",
     "* H is the height of the intermediate feature map in pixels\n",
     "* W is the width of the intermediate feature map in pixels\n",
     "\n",
     "This is the right way to represent the data when we are doing something like a 2D convolution, which needs spatial understanding of where the intermediate features are relative to each other. When we use fully-connected affine layers to process the image, however, we want each datapoint to be represented by a single vector -- it is no longer useful to segregate the different channels, rows, and columns of the data. So, we use a \"flatten\" operation to collapse the `C x H x W` values per representation into a single long vector. The flatten function below first reads in the values of N, C, H, and W from a given batch of data, and then returns a \"view\" of that data. \"View\" is analogous to numpy's \"reshape\" method: it reshapes x's dimensions to be N x ??, where ?? is allowed to be anything (in this case it will be C x H x W, but we don't need to specify that explicitly)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 53,
   "metadata": {
    "tags": [
     "pdf-ignore-input"
    ]
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Before flattening:  tensor([[[[ 0,  1],\n",
      "          [ 2,  3],\n",
      "          [ 4,  5]]],\n",
      "\n",
      "\n",
      "        [[[ 6,  7],\n",
      "          [ 8,  9],\n",
      "          [10, 11]]]])\n",
      "After flattening:  tensor([[ 0,  1,  2,  3,  4,  5],\n",
      "        [ 6,  7,  8,  9, 10, 11]])\n"
     ]
    }
   ],
   "source": [
    "def flatten(x):\n",
    "    N = x.shape[0] # read in N, C, H, W\n",
    "    return x.view(N, -1)  # \"flatten\" the C * H * W values into a single vector per image\n",
    "\n",
    "def test_flatten():\n",
    "    x = torch.arange(12).view(2, 1, 3, 2)\n",
    "    print('Before flattening: ', x)\n",
    "    print('After flattening: ', flatten(x))\n",
    "\n",
    "test_flatten()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "tags": [
     "pdf-ignore"
    ]
   },
   "source": [
    "### Barebones PyTorch: Two-Layer Network\n",
    "\n",
     "Here we define a function `two_layer_fc` which performs the forward pass of a two-layer fully-connected ReLU network on a batch of image data. After defining the forward pass we check that it doesn't crash and that it produces outputs of the right shape by running zeros through the network.\n",
     "\n",
     "You don't have to write any code here, but it's important that you read and understand the implementation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 54,
   "metadata": {
    "tags": [
     "pdf-ignore-input"
    ]
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "torch.Size([64, 10])\n"
     ]
    }
   ],
   "source": [
    "import torch.nn.functional as F  # useful stateless functions\n",
    "\n",
    "def two_layer_fc(x, params):\n",
    "    \"\"\"\n",
     "    A fully-connected neural network; the architecture is:\n",
     "    fully-connected layer -> ReLU -> fully-connected layer.\n",
    "    Note that this function only defines the forward pass; \n",
    "    PyTorch will take care of the backward pass for us.\n",
    "    \n",
    "    The input to the network will be a minibatch of data, of shape\n",
    "    (N, d1, ..., dM) where d1 * ... * dM = D. The hidden layer will have H units,\n",
    "    and the output layer will produce scores for C classes.\n",
    "    \n",
    "    Inputs:\n",
    "    - x: A PyTorch Tensor of shape (N, d1, ..., dM) giving a minibatch of\n",
    "      input data.\n",
    "    - params: A list [w1, w2] of PyTorch Tensors giving weights for the network;\n",
    "      w1 has shape (D, H) and w2 has shape (H, C).\n",
    "    \n",
    "    Returns:\n",
    "    - scores: A PyTorch Tensor of shape (N, C) giving classification scores for\n",
    "      the input data x.\n",
    "    \"\"\"\n",
    "    # first we flatten the image\n",
    "    x = flatten(x)  # shape: [batch_size, C x H x W]\n",
    "    \n",
    "    w1, w2 = params\n",
    "    \n",
    "    # Forward pass: compute predicted y using operations on Tensors. Since w1 and\n",
    "    # w2 have requires_grad=True, operations involving these Tensors will cause\n",
    "    # PyTorch to build a computational graph, allowing automatic computation of\n",
    "    # gradients. Since we are no longer implementing the backward pass by hand we\n",
    "    # don't need to keep references to intermediate values.\n",
    "    # you can also use `.clamp(min=0)`, equivalent to F.relu()\n",
    "    x = F.relu(x.mm(w1))\n",
    "    x = x.mm(w2)\n",
    "    return x\n",
    "    \n",
    "\n",
    "def two_layer_fc_test():\n",
    "    hidden_layer_size = 42\n",
    "    x = torch.zeros((64, 50), dtype=dtype)  # minibatch size 64, feature dimension 50\n",
    "    w1 = torch.zeros((50, hidden_layer_size), dtype=dtype)\n",
    "    w2 = torch.zeros((hidden_layer_size, 10), dtype=dtype)\n",
    "    scores = two_layer_fc(x, [w1, w2])\n",
    "    print(scores.size())  # you should see [64, 10]\n",
    "\n",
    "two_layer_fc_test()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Barebones PyTorch: Three-Layer ConvNet\n",
    "\n",
     "Here you will complete the implementation of the function `three_layer_convnet`, which will perform the forward pass of a three-layer convolutional network. Like above, we can immediately test our implementation by passing zeros through the network. The network should have the following architecture:\n",
     "\n",
     "1. A convolutional layer (with bias) with `channel_1` filters, each with shape `KW1 x KH1`, and zero-padding of two\n",
     "2. ReLU nonlinearity\n",
     "3. A convolutional layer (with bias) with `channel_2` filters, each with shape `KW2 x KH2`, and zero-padding of one\n",
     "4. ReLU nonlinearity\n",
     "5. Fully-connected layer with bias, producing scores for C classes.\n",
     "\n",
     "Note that we have **no softmax activation** here after our fully-connected layer: this is because PyTorch's cross-entropy loss performs a softmax activation for you, and by bundling that step in it makes computation more efficient.\n",
     "\n",
     "**HINT**: For convolutions: http://pytorch.org/docs/stable/nn.html#torch.nn.functional.conv2d; pay attention to the shapes of convolutional filters!"
   ]
  },
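  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see how the shapes work out before you implement the forward pass, here is a quick sketch (the filter counts 6 and 9 are arbitrary) verifying that the paddings above preserve the 32x32 spatial size:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import torch.nn.functional as F\n",
    "\n",
    "x = torch.zeros(64, 3, 32, 32)\n",
    "w1 = torch.zeros(6, 3, 5, 5)       # (out_channels, in_channels, KH, KW)\n",
    "out1 = F.conv2d(x, w1, padding=2)  # (32 + 2*2 - 5) + 1 = 32\n",
    "print(out1.shape)  # torch.Size([64, 6, 32, 32])\n",
    "\n",
    "w2 = torch.zeros(9, 6, 3, 3)\n",
    "out2 = F.conv2d(out1, w2, padding=1)  # (32 + 2*1 - 3) + 1 = 32\n",
    "print(out2.shape)  # torch.Size([64, 9, 32, 32])\n",
    "```"
   ]
  },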
  {
   "cell_type": "code",
   "execution_count": 55,
   "metadata": {},
   "outputs": [],
   "source": [
    "def three_layer_convnet(x, params):\n",
    "    \"\"\"\n",
    "    Performs the forward pass of a three-layer convolutional network with the\n",
    "    architecture defined above.\n",
    "\n",
    "    Inputs:\n",
    "    - x: A PyTorch Tensor of shape (N, 3, H, W) giving a minibatch of images\n",
    "    - params: A list of PyTorch Tensors giving the weights and biases for the\n",
    "      network; should contain the following:\n",
    "      - conv_w1: PyTorch Tensor of shape (channel_1, 3, KH1, KW1) giving weights\n",
    "        for the first convolutional layer\n",
    "      - conv_b1: PyTorch Tensor of shape (channel_1,) giving biases for the first\n",
    "        convolutional layer\n",
    "      - conv_w2: PyTorch Tensor of shape (channel_2, channel_1, KH2, KW2) giving\n",
    "        weights for the second convolutional layer\n",
    "      - conv_b2: PyTorch Tensor of shape (channel_2,) giving biases for the second\n",
    "        convolutional layer\n",
    "      - fc_w: PyTorch Tensor giving weights for the fully-connected layer. Can you\n",
    "        figure out what the shape should be?\n",
    "      - fc_b: PyTorch Tensor giving biases for the fully-connected layer. Can you\n",
    "        figure out what the shape should be?\n",
    "    \n",
    "    Returns:\n",
    "    - scores: PyTorch Tensor of shape (N, C) giving classification scores for x\n",
    "    \"\"\"\n",
    "    conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b = params\n",
    "    scores = None\n",
    "    ################################################################################\n",
    "    # TODO: Implement the forward pass for the three-layer ConvNet.                #\n",
    "    ################################################################################\n",
    "    # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
    "\n",
     "    scores = F.relu(F.conv2d(x, conv_w1, conv_b1, padding=2))\n",
     "    scores = F.relu(F.conv2d(scores, conv_w2, conv_b2, padding=1))\n",
     "    scores = F.linear(flatten(scores), fc_w.t(), fc_b)\n",
    "    \n",
    "\n",
    "    # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
    "    ################################################################################\n",
    "    #                                 END OF YOUR CODE                             #\n",
    "    ################################################################################\n",
    "    return scores"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "After defining the forward pass of the ConvNet above, run the following cell to test your implementation.\n",
     "\n",
     "When you run this function, scores should have shape (64, 10)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 56,
   "metadata": {
    "tags": [
     "pdf-ignore-input"
    ]
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "torch.Size([64, 10])\n"
     ]
    }
   ],
   "source": [
    "def three_layer_convnet_test():\n",
    "    x = torch.zeros((64, 3, 32, 32), dtype=dtype)  # minibatch size 64, image size [3, 32, 32]\n",
    "\n",
    "    conv_w1 = torch.zeros((6, 3, 5, 5), dtype=dtype)  # [out_channel, in_channel, kernel_H, kernel_W]\n",
    "    conv_b1 = torch.zeros((6,))  # out_channel\n",
    "    conv_w2 = torch.zeros((9, 6, 3, 3), dtype=dtype)  # [out_channel, in_channel, kernel_H, kernel_W]\n",
    "    conv_b2 = torch.zeros((9,))  # out_channel\n",
    "\n",
    "    # you must calculate the shape of the tensor after two conv layers, before the fully-connected layer\n",
    "    fc_w = torch.zeros((9 * 32 * 32, 10))\n",
    "    fc_b = torch.zeros(10)\n",
    "\n",
    "    scores = three_layer_convnet(x, [conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b])\n",
    "    print(scores.size())  # you should see [64, 10]\n",
    "three_layer_convnet_test()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Barebones PyTorch: Initialization\n",
     "Let's write a couple of utility methods to initialize the weight matrices for our models.\n",
     "- `random_weight(shape)` initializes a weight tensor with the Kaiming normalization method.\n",
     "- `zero_weight(shape)` initializes a weight tensor with all zeros. Useful for instantiating bias parameters.\n",
     "\n",
     "The `random_weight` function uses the Kaiming normal initialization method, described in:\n",
    "\n",
    "He et al, *Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification*, ICCV 2015, https://arxiv.org/abs/1502.01852"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 57,
   "metadata": {
    "tags": [
     "pdf-ignore-input"
    ]
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[ 0.0079,  0.4542,  0.8575,  0.4253,  0.3410],\n",
       "        [ 0.0303,  0.7374, -1.3735, -0.4506, -0.1054],\n",
       "        [-0.6689, -1.1883, -0.3662, -0.4428,  0.8126]], device='cuda:0',\n",
       "       requires_grad=True)"
      ]
     },
     "execution_count": 57,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "def random_weight(shape):\n",
    "    \"\"\"\n",
    "    Create random Tensors for weights; setting requires_grad=True means that we\n",
    "    want to compute gradients for these Tensors during the backward pass.\n",
    "    We use Kaiming normalization: sqrt(2 / fan_in)\n",
    "    \"\"\"\n",
    "    if len(shape) == 2:  # FC weight\n",
    "        fan_in = shape[0]\n",
    "    else:\n",
    "        fan_in = np.prod(shape[1:]) # conv weight [out_channel, in_channel, kH, kW]\n",
    "    # randn is standard normal distribution generator. \n",
    "    w = torch.randn(shape, device=device, dtype=dtype) * np.sqrt(2. / fan_in)\n",
    "    w.requires_grad = True\n",
    "    return w\n",
    "\n",
    "def zero_weight(shape):\n",
    "    return torch.zeros(shape, device=device, dtype=dtype, requires_grad=True)\n",
    "\n",
    "# create a weight of shape [3 x 5]\n",
    "# you should see the type `torch.cuda.FloatTensor` if you use GPU. \n",
    "# Otherwise it should be `torch.FloatTensor`\n",
    "random_weight((3, 5))"
   ]
  },
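  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check on the scaling used in `random_weight` (the shape here is arbitrary, and we use CPU tensors so it runs anywhere): with Kaiming initialization, the empirical std of a large weight tensor should be close to sqrt(2 / fan_in).\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "import torch\n",
    "\n",
    "fan_in = 3 * 5 * 5  # a conv filter of shape (512, 3, 5, 5) has fan_in = 75\n",
    "w = torch.randn(512, 3, 5, 5) * np.sqrt(2. / fan_in)\n",
    "print(float(w.std()))  # roughly sqrt(2 / 75), i.e. about 0.163\n",
    "```"
   ]
  },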
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Barebones PyTorch: Check Accuracy\n",
     "When training the model we will use the following function to check the accuracy of our model on the training or validation sets.\n",
     "\n",
     "When checking accuracy we don't need to compute any gradients; as a result we don't need PyTorch to build a computational graph for us when we compute scores. To prevent a graph from being built we scope our computation under a `torch.no_grad()` context manager."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 58,
   "metadata": {
    "tags": [
     "pdf-ignore-input"
    ]
   },
   "outputs": [],
   "source": [
    "def check_accuracy_part2(loader, model_fn, params):\n",
    "    \"\"\"\n",
    "    Check the accuracy of a classification model.\n",
    "    \n",
    "    Inputs:\n",
    "    - loader: A DataLoader for the data split we want to check\n",
    "    - model_fn: A function that performs the forward pass of the model,\n",
    "      with the signature scores = model_fn(x, params)\n",
    "    - params: List of PyTorch Tensors giving parameters of the model\n",
    "    \n",
    "    Returns: Nothing, but prints the accuracy of the model\n",
    "    \"\"\"\n",
    "    split = 'val' if loader.dataset.train else 'test'\n",
    "    print('Checking accuracy on the %s set' % split)\n",
    "    num_correct, num_samples = 0, 0\n",
    "    with torch.no_grad():\n",
    "        for x, y in loader:\n",
    "            x = x.to(device=device, dtype=dtype)  # move to device, e.g. GPU\n",
    "            y = y.to(device=device, dtype=torch.int64)\n",
    "            scores = model_fn(x, params)\n",
    "            _, preds = scores.max(1)\n",
    "            num_correct += (preds == y).sum()\n",
    "            num_samples += preds.size(0)\n",
    "        acc = float(num_correct) / num_samples\n",
    "        print('Got %d / %d correct (%.2f%%)' % (num_correct, num_samples, 100 * acc))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### BareBones PyTorch: Training Loop\n",
     "We can now set up a basic training loop to train our network. We will train the model using stochastic gradient descent without momentum, and use `F.cross_entropy` to compute the loss; you can [read about it here](http://pytorch.org/docs/stable/nn.html#cross-entropy).\n",
     "\n",
     "The training loop takes as input the neural network function, a list of initialized parameters (`[w1, w2]` in our example), and the learning rate."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 59,
   "metadata": {
    "tags": [
     "pdf-ignore-input"
    ]
   },
   "outputs": [],
   "source": [
    "def train_part2(model_fn, params, learning_rate):\n",
    "    \"\"\"\n",
    "    Train a model on CIFAR-10.\n",
    "    \n",
    "    Inputs:\n",
    "    - model_fn: A Python function that performs the forward pass of the model.\n",
    "      It should have the signature scores = model_fn(x, params) where x is a\n",
    "      PyTorch Tensor of image data, params is a list of PyTorch Tensors giving\n",
    "      model weights, and scores is a PyTorch Tensor of shape (N, C) giving\n",
    "      scores for the elements in x.\n",
    "    - params: List of PyTorch Tensors giving weights for the model\n",
    "    - learning_rate: Python scalar giving the learning rate to use for SGD\n",
    "    \n",
    "    Returns: Nothing\n",
    "    \"\"\"\n",
    "    for t, (x, y) in enumerate(loader_train):\n",
    "        # Move the data to the proper device (GPU or CPU)\n",
    "        x = x.to(device=device, dtype=dtype)\n",
    "        y = y.to(device=device, dtype=torch.long)\n",
    "\n",
    "        # Forward pass: compute scores and loss\n",
    "        scores = model_fn(x, params)\n",
    "        loss = F.cross_entropy(scores, y)\n",
    "\n",
    "        # Backward pass: PyTorch figures out which Tensors in the computational\n",
    "        # graph has requires_grad=True and uses backpropagation to compute the\n",
    "        # gradient of the loss with respect to these Tensors, and stores the\n",
    "        # gradients in the .grad attribute of each Tensor.\n",
    "        loss.backward()\n",
    "\n",
    "        # Update parameters. We don't want to backpropagate through the\n",
    "        # parameter updates, so we scope the updates under a torch.no_grad()\n",
    "        # context manager to prevent a computational graph from being built.\n",
    "        with torch.no_grad():\n",
    "            for w in params:\n",
    "                w -= learning_rate * w.grad\n",
    "\n",
    "                # Manually zero the gradients after running the backward pass\n",
    "                w.grad.zero_()\n",
    "\n",
    "        if t % print_every == 0:\n",
    "            print('Iteration %d, loss = %.4f' % (t, loss.item()))\n",
    "            check_accuracy_part2(loader_val, model_fn, params)\n",
    "            print()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### BareBones PyTorch: Train a Two-Layer Network\n",
     "Now we are ready to run the training loop. We need to explicitly allocate tensors for the fully-connected weights, `w1` and `w2`.\n",
     "\n",
     "Each minibatch of CIFAR has 64 examples, so the tensor shape is `[64, 3, 32, 32]`.\n",
     "After flattening, `x` has shape `[64, 3 * 32 * 32]`. This will be the size of the first dimension of `w1`.\n",
     "The second dimension of `w1` is the hidden layer size, which will also be the first dimension of `w2`.\n",
     "Finally, the output of the network is a 10-dimensional vector that represents the probability distribution over 10 classes.\n",
     "\n",
     "You don't need to tune any hyperparameters, but you should see accuracies above 40% after training for one epoch."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 60,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Iteration 0, loss = 3.0948\n",
      "Checking accuracy on the val set\n",
      "Got 140 / 1000 correct (14.00%)\n",
      "\n",
      "Iteration 100, loss = 2.4246\n",
      "Checking accuracy on the val set\n",
      "Got 313 / 1000 correct (31.30%)\n",
      "\n",
      "Iteration 200, loss = 1.8758\n",
      "Checking accuracy on the val set\n",
      "Got 382 / 1000 correct (38.20%)\n",
      "\n",
      "Iteration 300, loss = 2.1613\n",
      "Checking accuracy on the val set\n",
      "Got 379 / 1000 correct (37.90%)\n",
      "\n",
      "Iteration 400, loss = 1.8938\n",
      "Checking accuracy on the val set\n",
      "Got 400 / 1000 correct (40.00%)\n",
      "\n",
      "Iteration 500, loss = 1.7793\n",
      "Checking accuracy on the val set\n",
      "Got 395 / 1000 correct (39.50%)\n",
      "\n",
      "Iteration 600, loss = 1.5302\n",
      "Checking accuracy on the val set\n",
      "Got 448 / 1000 correct (44.80%)\n",
      "\n",
      "Iteration 700, loss = 1.7227\n",
      "Checking accuracy on the val set\n",
      "Got 428 / 1000 correct (42.80%)\n",
      "\n"
     ]
    }
   ],
   "source": [
    "hidden_layer_size = 4000\n",
    "learning_rate = 1e-2\n",
    "\n",
    "w1 = random_weight((3 * 32 * 32, hidden_layer_size))\n",
    "w2 = random_weight((hidden_layer_size, 10))\n",
    "\n",
    "train_part2(two_layer_fc, [w1, w2], learning_rate)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### BareBones PyTorch: Training a ConvNet\n",
    "\n",
     "Below you should use the functions defined above to train a three-layer convolutional network on CIFAR. The network should have the following architecture:\n",
     "1. Convolutional layer (with bias) with 32 5x5 filters, with zero-padding of 2\n",
     "2. ReLU\n",
     "3. Convolutional layer (with bias) with 16 3x3 filters, with zero-padding of 1\n",
     "4. ReLU\n",
     "5. Fully-connected layer (with bias) to compute scores for 10 classes\n",
     "\n",
     "You should initialize your weight matrices using the `random_weight` function defined above, and you should initialize your bias vectors using the `zero_weight` function above.\n",
     "\n",
     "You don't need to tune any hyperparameters, but if everything works correctly you should achieve an accuracy above 42% after one epoch."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 61,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Iteration 0, loss = 3.1593\n",
      "Checking accuracy on the val set\n",
      "Got 118 / 1000 correct (11.80%)\n",
      "\n",
      "Iteration 100, loss = 1.8919\n",
      "Checking accuracy on the val set\n",
      "Got 357 / 1000 correct (35.70%)\n",
      "\n",
      "Iteration 200, loss = 1.6665\n",
      "Checking accuracy on the val set\n",
      "Got 411 / 1000 correct (41.10%)\n",
      "\n",
      "Iteration 300, loss = 1.9329\n",
      "Checking accuracy on the val set\n",
      "Got 420 / 1000 correct (42.00%)\n",
      "\n",
      "Iteration 400, loss = 1.7750\n",
      "Checking accuracy on the val set\n",
      "Got 448 / 1000 correct (44.80%)\n",
      "\n",
      "Iteration 500, loss = 1.3747\n",
      "Checking accuracy on the val set\n",
      "Got 457 / 1000 correct (45.70%)\n",
      "\n",
      "Iteration 600, loss = 1.4980\n",
      "Checking accuracy on the val set\n",
      "Got 461 / 1000 correct (46.10%)\n",
      "\n",
      "Iteration 700, loss = 1.5360\n",
      "Checking accuracy on the val set\n",
      "Got 463 / 1000 correct (46.30%)\n",
      "\n"
     ]
    }
   ],
   "source": [
    "learning_rate = 3e-3\n",
    "\n",
    "channel_1 = 32\n",
    "channel_2 = 16\n",
    "\n",
    "conv_w1 = None\n",
    "conv_b1 = None\n",
    "conv_w2 = None\n",
    "conv_b2 = None\n",
    "fc_w = None\n",
    "fc_b = None\n",
    "\n",
    "################################################################################\n",
    "# TODO: Initialize the parameters of a three-layer ConvNet.                    #\n",
    "################################################################################\n",
    "# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
    "\n",
     "conv_w1 = random_weight((channel_1, 3, 5, 5))\n",
     "conv_b1 = zero_weight(channel_1)\n",
     "conv_w2 = random_weight((channel_2, channel_1, 3, 3))\n",
     "conv_b2 = zero_weight(channel_2)\n",
     "fc_w = random_weight((channel_2 * 32 * 32, 10))\n",
     "fc_b = zero_weight(10)\n",
    "\n",
    "# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
    "################################################################################\n",
    "#                                 END OF YOUR CODE                             #\n",
    "################################################################################\n",
    "\n",
    "params = [conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b]\n",
    "train_part2(three_layer_convnet, params, learning_rate)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# Part III. PyTorch Module API\n",
     "\n",
     "Barebones PyTorch requires that we track all the parameter tensors by hand. This is fine for small networks with a few tensors, but it would be extremely inconvenient and error-prone to track tens or hundreds of tensors in larger networks.\n",
     "\n",
     "PyTorch provides the `nn.Module` API for you to define arbitrary network architectures while tracking every learnable parameter for you. In Part II, we implemented SGD ourselves. PyTorch also provides the `torch.optim` package, which implements all the common optimizers, such as RMSProp, Adagrad, and Adam. It even supports approximate second-order methods like L-BFGS! You can refer to the [doc](http://pytorch.org/docs/master/optim.html) for the exact specifications of each optimizer.\n",
     "\n",
     "To use the Module API, follow the steps below:\n",
     "1. Subclass `nn.Module`. Give your network class an intuitive name like `TwoLayerFC`.\n",
     "2. In the constructor `__init__()`, define all the layers you need as class attributes. Layer objects like `nn.Linear` and `nn.Conv2d` are themselves `nn.Module` subclasses and contain learnable parameters, so you don't have to instantiate the raw tensors yourself. `nn.Module` will track these internal parameters for you. Refer to the [doc](http://pytorch.org/docs/master/nn.html) to learn more about the built-in layers.\n",
     "\n",
     "   **Warning**: don't forget to call `super().__init__()` first!\n",
     "3. In the `forward()` method, define the *connectivity* of your network. You should use the attributes defined in `__init__` as function calls that take Tensors as input and output the \"transformed\" Tensors. Do *not* create any new layers with learnable parameters in `forward()`! All of them must be declared upfront in `__init__`.\n",
     "\n",
     "After you define your Module subclass, you can instantiate it as an object and call it just like the NN forward function in Part II.\n",
     "\n",
     "### Module API: Two-Layer Network\n",
     "Here is a concrete example of a two-layer fully-connected network:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 62,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "torch.Size([64, 10])\n"
     ]
    }
   ],
   "source": [
    "class TwoLayerFC(nn.Module):\n",
    "    def __init__(self, input_size, hidden_size, num_classes):\n",
    "        super().__init__()\n",
    "        # assign layer objects to class attributes\n",
    "        self.fc1 = nn.Linear(input_size, hidden_size)\n",
    "        # nn.init package contains convenient initialization methods\n",
    "        # http://pytorch.org/docs/master/nn.html#torch-nn-init \n",
    "        nn.init.kaiming_normal_(self.fc1.weight)\n",
    "        self.fc2 = nn.Linear(hidden_size, num_classes)\n",
    "        nn.init.kaiming_normal_(self.fc2.weight)\n",
    "    \n",
    "    def forward(self, x):\n",
    "        # forward always defines connectivity\n",
    "        x = flatten(x)\n",
    "        scores = self.fc2(F.relu(self.fc1(x)))\n",
    "        return scores\n",
    "\n",
    "def test_TwoLayerFC():\n",
    "    input_size = 50\n",
    "    x = torch.zeros((64, input_size), dtype=dtype)  # minibatch size 64, feature dimension 50\n",
    "    model = TwoLayerFC(input_size, 42, 10)\n",
    "    scores = model(x)\n",
    "    print(scores.size())  # you should see [64, 10]\n",
    "test_TwoLayerFC()"
   ]
  },
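  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `torch.optim` package mentioned above replaces the manual parameter updates from Part II. A minimal sketch of its workflow, on a throwaway one-layer model with fake data (not part of the assignment):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "import torch.optim as optim\n",
    "\n",
    "model = nn.Linear(50, 10)\n",
    "optimizer = optim.SGD(model.parameters(), lr=1e-2)\n",
    "\n",
    "x = torch.randn(64, 50)\n",
    "y = torch.randint(0, 10, (64,))\n",
    "\n",
    "optimizer.zero_grad()                # clear old gradients\n",
    "loss = F.cross_entropy(model(x), y)\n",
    "loss.backward()                      # compute gradients via autograd\n",
    "optimizer.step()                     # update every tracked parameter\n",
    "```"
   ]
  },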
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Module API: Three-Layer ConvNet\n",
     "Now it's your turn to implement a three-layer ConvNet followed by a fully-connected layer. The network architecture should be the same as in Part II:\n",
     "1. Convolutional layer with `channel_1` 5x5 filters, with zero-padding of 2\n",
     "2. ReLU\n",
     "3. Convolutional layer with `channel_2` 3x3 filters, with zero-padding of 1\n",
     "4. ReLU\n",
     "5. Fully-connected layer to `num_classes` classes\n",
     "\n",
     "You should initialize the weight matrices of the model using the Kaiming normal initialization method.\n",
     "\n",
     "**HINT**: http://pytorch.org/docs/stable/nn.html\n",
     "\n",
     "After you implement the three-layer ConvNet, the `test_ThreeLayerConvNet` function will run your implementation; it should print `(64, 10)` for the shape of the output scores."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 63,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "torch.Size([64, 10])\n"
     ]
    }
   ],
   "source": [
    "class ThreeLayerConvNet(nn.Module):\n",
    "    def __init__(self, in_channel, channel_1, channel_2, num_classes):\n",
    "        super().__init__()\n",
    "        ########################################################################\n",
    "        # TODO: Set up the layers you need for a three-layer ConvNet with the  #\n",
    "        # architecture defined above.                                          #\n",
    "        ########################################################################\n",
    "        # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
    "\n",
    "        self.conv1 = nn.Conv2d(in_channel, channel_1, 5, padding=2)\n",
    "        nn.init.kaiming_normal_(self.conv1.weight)\n",
    "        self.conv2 = nn.Conv2d(channel_1, channel_2, 3, padding=1)\n",
    "        nn.init.kaiming_normal_(self.conv2.weight)\n",
    "        self.fc = nn.Linear(channel_2 * 32 * 32, num_classes)\n",
    "        nn.init.kaiming_normal_(self.fc.weight)\n",
    "        self.relu = nn.ReLU(inplace=True)\n",
    "\n",
    "        # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
    "        ########################################################################\n",
    "        #                          END OF YOUR CODE                            #       \n",
    "        ########################################################################\n",
    "\n",
    "    def forward(self, x):\n",
    "        scores = None\n",
    "        ########################################################################\n",
    "        # TODO: Implement the forward function for a 3-layer ConvNet. You      #\n",
    "        # should use the layers you defined in __init__ and specify the        #\n",
    "        # connectivity of those layers in forward()                            #\n",
    "        ########################################################################\n",
    "        # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
    "\n",
    "        scores = self.relu(self.conv1(x))\n",
    "        scores = self.relu(self.conv2(scores))\n",
    "        scores = self.fc(flatten(scores))\n",
    "\n",
    "        # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
    "        ########################################################################\n",
    "        #                             END OF YOUR CODE                         #\n",
    "        ########################################################################\n",
    "        return scores\n",
    "\n",
    "\n",
    "def test_ThreeLayerConvNet():\n",
    "    x = torch.zeros((64, 3, 32, 32), dtype=dtype)  # minibatch size 64, image size [3, 32, 32]\n",
    "    model = ThreeLayerConvNet(in_channel=3, channel_1=12, channel_2=8, num_classes=10)\n",
    "    scores = model(x)\n",
    "    print(scores.size())  # you should see [64, 10]\n",
    "test_ThreeLayerConvNet()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Module API: Check Accuracy\n",
    "Given the validation or test set, we can check the classification accuracy of a neural network.\n",
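    "Concretely, the accuracy is the fraction of correctly predicted labels, $\\mathrm{acc} = \\frac{1}{N}\\sum_{i=1}^{N}\\mathbf{1}[\\hat{y}_i = y_i]$, where $\\hat{y}_i = \\arg\\max_c s_{i,c}$ is the class with the highest score for example $i$.\n",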
    "This version is slightly different from the one in Part II. You don't manually pass in the parameters anymore."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 83,
   "metadata": {},
   "outputs": [],
   "source": [
    "def check_accuracy_part34(loader, model):\n",
    "    if loader.dataset.train:\n",
    "        print('Checking accuracy on validation set')\n",
    "    else:\n",
    "        print('Checking accuracy on test set')   \n",
    "    num_correct = 0\n",
    "    num_samples = 0\n",
    "    model.eval()  # set model to evaluation mode\n",
    "    with torch.no_grad():\n",
    "        for x, y in loader:\n",
    "            x = x.to(device=device, dtype=dtype)  # move to device, e.g. GPU\n",
    "            y = y.to(device=device, dtype=torch.long)\n",
    "            scores = model(x)\n",
    "            _, preds = scores.max(1)\n",
    "            num_correct += (preds == y).sum()\n",
    "            num_samples += preds.size(0)\n",
    "        acc = float(num_correct) / num_samples\n",
    "        print('Got %d / %d correct (%.2f)' % (num_correct, num_samples, 100 * acc))\n",
    "        return acc"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Module API: Training Loop\n",
    "We also use a slightly different training loop. Rather than updating the weights of the model ourselves, we use an Optimizer object from the `torch.optim` package, which abstracts the notion of an optimization algorithm and provides implementations of most of the algorithms commonly used to optimize neural networks."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 84,
   "metadata": {},
   "outputs": [],
   "source": [
    "def train_part34(model, optimizer, epochs=1):\n",
    "    \"\"\"\n",
    "    Train a model on CIFAR-10 using the PyTorch Module API.\n",
    "    \n",
    "    Inputs:\n",
    "    - model: A PyTorch Module giving the model to train.\n",
    "    - optimizer: An Optimizer object we will use to train the model\n",
    "    - epochs: (Optional) A Python integer giving the number of epochs to train for\n",
    "    \n",
    "    Returns: Nothing, but prints model accuracies during training.\n",
    "    \"\"\"\n",
    "    model = model.to(device=device)  # move the model parameters to CPU/GPU\n",
    "    acc = []\n",
    "    loss_value = []\n",
    "    for e in range(epochs):\n",
    "        print('='*50)\n",
    "        print('Epochs %d' % e)\n",
    "        for t, (x, y) in enumerate(loader_train):\n",
    "            model.train()  # put model to training mode\n",
    "            x = x.to(device=device, dtype=dtype)  # move to device, e.g. GPU\n",
    "            y = y.to(device=device, dtype=torch.long)\n",
    "\n",
    "            scores = model(x)\n",
    "            loss = F.cross_entropy(scores, y)\n",
    "\n",
    "            # Zero out all of the gradients for the variables which the optimizer\n",
    "            # will update.\n",
    "            optimizer.zero_grad()\n",
    "\n",
    "            # This is the backwards pass: compute the gradient of the loss with\n",
    "        # respect to each parameter of the model.\n",
    "            loss.backward()\n",
    "\n",
    "            # Actually update the parameters of the model using the gradients\n",
    "            # computed by the backwards pass.\n",
    "            optimizer.step()\n",
    "    \n",
    "\n",
    "            if t % print_every == 0:\n",
    "                print('Iteration %d, loss = %.4f' % (t, loss.item()))\n",
    "                acc.append(check_accuracy_part34(loader_val, model))\n",
    "                loss_value.append(loss.item())\n",
    "                print()\n",
    "    return (loss_value, acc)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Module API: Train a Two-Layer Network\n",
    "Now we are ready to run the training loop. In contrast to Part II, we don't explicitly allocate parameter tensors anymore.\n",
    "Simply pass the input size, hidden layer size, and number of classes (i.e. output size) to the constructor of `TwoLayerFC`.\n",
    "You also need to define an optimizer that tracks all the learnable parameters inside `TwoLayerFC`.\n",
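    "\n",
    "For reference: vanilla SGD, the optimizer used below, updates each learnable parameter $\\theta$ as $\\theta \\leftarrow \\theta - \\eta \\, \\nabla_\\theta L$, where $\\eta$ is the learning rate passed as `lr`; `optimizer.step()` applies this rule to every tensor returned by `model.parameters()`.\n",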
    "You don't need to tune any hyperparameters, but you should see model accuracy above 40% after one epoch of training."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 85,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "==================================================\n",
      "Epochs 0\n",
      "Iteration 0, loss = 3.5298\n",
      "Checking accuracy on validation set\n",
      "Got 142 / 1000 correct (14.20)\n",
      "\n",
      "Iteration 100, loss = 2.2770\n",
      "Checking accuracy on validation set\n",
      "Got 349 / 1000 correct (34.90)\n",
      "\n",
      "Iteration 200, loss = 2.1204\n",
      "Checking accuracy on validation set\n",
      "Got 380 / 1000 correct (38.00)\n",
      "\n",
      "Iteration 300, loss = 2.5554\n",
      "Checking accuracy on validation set\n",
      "Got 401 / 1000 correct (40.10)\n",
      "\n",
      "Iteration 400, loss = 2.2101\n",
      "Checking accuracy on validation set\n",
      "Got 377 / 1000 correct (37.70)\n",
      "\n",
      "Iteration 500, loss = 1.5127\n",
      "Checking accuracy on validation set\n",
      "Got 400 / 1000 correct (40.00)\n",
      "\n",
      "Iteration 600, loss = 1.5269\n",
      "Checking accuracy on validation set\n",
      "Got 412 / 1000 correct (41.20)\n",
      "\n",
      "Iteration 700, loss = 1.7488\n",
      "Checking accuracy on validation set\n",
      "Got 428 / 1000 correct (42.80)\n",
      "\n",
      "==================================================\n",
      "Epochs 1\n",
      "Iteration 0, loss = 1.7918\n",
      "Checking accuracy on validation set\n",
      "Got 437 / 1000 correct (43.70)\n",
      "\n",
      "Iteration 100, loss = 1.4506\n",
      "Checking accuracy on validation set\n",
      "Got 485 / 1000 correct (48.50)\n",
      "\n",
      "Iteration 200, loss = 1.5467\n",
      "Checking accuracy on validation set\n",
      "Got 471 / 1000 correct (47.10)\n",
      "\n",
      "Iteration 300, loss = 1.1653\n",
      "Checking accuracy on validation set\n",
      "Got 485 / 1000 correct (48.50)\n",
      "\n",
      "Iteration 400, loss = 1.4481\n",
      "Checking accuracy on validation set\n",
      "Got 485 / 1000 correct (48.50)\n",
      "\n",
      "Iteration 500, loss = 1.2515\n",
      "Checking accuracy on validation set\n",
      "Got 484 / 1000 correct (48.40)\n",
      "\n",
      "Iteration 600, loss = 1.6440\n",
      "Checking accuracy on validation set\n",
      "Got 471 / 1000 correct (47.10)\n",
      "\n",
      "Iteration 700, loss = 1.3400\n",
      "Checking accuracy on validation set\n",
      "Got 464 / 1000 correct (46.40)\n",
      "\n"
     ]
    }
   ],
   "source": [
    "hidden_layer_size = 4000\n",
    "learning_rate = 1e-2\n",
    "model = TwoLayerFC(3 * 32 * 32, hidden_layer_size, 10)\n",
    "optimizer = optim.SGD(model.parameters(), lr=learning_rate)\n",
    "\n",
    "(x, y) = train_part34(model, optimizer, epochs=2)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 87,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAtcAAAImCAYAAACYQKbhAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4xLjMsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+AADFEAAAgAElEQVR4nOzdd3yV5f3/8dcnCYSVhBUIJGHL3omIqGjVKuIWcFOtdfGzblu1aquttra1ztbt1z0ZTsA9wIpoWAl7j4RMRibZ1++PHCxiQCAn5z7n5P18PPLgnHPfuc87RzDvXLnOdZlzDhERERERabgIrwOIiIiIiIQLlWsRERERET9RuRYRERER8ROVaxERERERP1G5FhERERHxE5VrERERERE/UbkWEWkizOxSM/t6P8dnm9klgcwkIhJuVK5FRALMzDaa2Yle59ibc+4U59yLP3eemTkz6xOITCIioUblWkREAsbMorzOICLSmFSuRUSCiJldYWZrzWy7mb1nZl19j5uZPWRmeWZWaGbpZjbYd2y8mS03s2IzyzKzW37mOR4wsx1mtsHMTtnj8S/N7HLf7T5m9pXvuQrM7E3f43N8py8xsxIzO29/uX3HnJldY2ZrgDVm9h8z+9demd43sxsa/gqKiHhL5VpEJEiY2fHA34BzgS7AJuAN3+GTgLFAX6AtcB6wzXfsOeAq51wMMBj4fD9PcwSwCugI/AN4zsysnvP+AnwMtAOSgMcAnHNjfceHOefaOOfe/Jncu53le+6BwIvABWYW4fu6OwInAK/vJ7eISEhQuRYRCR4XAf/nnFvonKsAbgeONLMeQBUQA/QHzDm3wjmX7fu8KmCgmcU653Y45xbu5zk2Oeeecc7VUFdyuwCd6zmvCugOdHXOlTvn9vlGyJ/JvdvfnHPbnXO7nHPfAYXUFWqA84EvnXO5+3kOEZGQoHItIhI8ulI36guAc66EutHpROfc58C/gf8AuWb2tJnF+k6dAIwHNvmmchy5n+fI2eP6Zb6bbeo57/eAAd+Z2TIzu+xQcu9xzpa9PudF4GLf7YuBl/dzfRGRkKFyLSISPLZSN1oMgJm1BjoAWQDOuUedcynAIOqmh/zO9/j3zrkzgU7AO8BbDQ3inMtxzl3hnOsKXAU8vp8VQvabe/cl9/qcV4AzzWwYMMCXW0Qk5Klci4h4o5mZtdjjIwp4Dfi1mQ03s2jgr8B859xGMzvczI4ws2ZAKVAO1JhZczO7yMzinHNVQBFQ09BwZjbJzJJ8d3dQV453XzcX6LXH6fvMva/rO+cyge+pG7Ge7pzb1dDMIiLBQOVaRMQbs4Bde3zc7Zz7DLgLmA5kA72pm48MEAs8Q13R3UTdtIsHfMcmAxvNrAi4mv9Nt2iIw4H5ZlYCvAdc75zb4Dt2N/Cime00s3N/Jvf+vAgMQVNCRCSMmHN7/6ZORESk8ZnZWOqmh/RwztV6nUdExB80ci0iIgHnm95yPfCsirWIhBOVaxERCSgzGwDspG4ZwIc9jiMi4leaFiIiIiIi4icauRYRERER8ROVaxERERERP4nyOoC/dOzY0fXo0cPrGCIiIiIS5hYsWFDgnIuv71jYlOsePXqQlpbmdQwRERERCXNmtmlfxzQtRERERETET1SuRURERET8ROVaRERERMRPVK5FRERERPxE5VpERERExE9UrkVERERE/ETluoHKKqvJKy73OoaIiIiIBIGAl2sza2Fm35nZEjNbZmb31HPOpWaWb2aLfR+XBzrngaipdRz/wFf848NVXkcRERERkSDgxch1BXC8c24YMBwYZ2aj6znvTefccN/Hs4GNeGAiI4zj+sUzKyOb0opqr+OIiIiIiMcCXq5dnRLf3Wa+DxfoHP4yKTWJssoaZmZkex1FRERERDzmyZxrM4s0s8VAHvCJc25+PadNMLN0M5tmZsn7uM6VZpZmZmn5+fmNmnlfRnZr
R6/41kxLy/Tk+UVEREQkeHhSrp1zNc654UASMMrMBu91yvtAD+fcUOBT4MV9XOdp51yqcy41Pj6+cUPvg5kxMSWJ7zZuZ2NBqScZRERERCQ4eLpaiHNuJ/AlMG6vx7c55yp8d58BUgIc7aBMGJlEhMG0BRq9FhEREWnKvFgtJN7M2vputwROBFbudU6XPe6eAawIXMKD1zm2BWP7xjN9YSY1tSE7fVxEREREGsiLkesuwBdmlg58T92c6w/M7M9mdobvnOt8y/QtAa4DLvUg50GZlJJMdmE5/11b4HUUEREREfFIVKCf0DmXDoyo5/E/7nH7duD2QOZqqBMHdqJtq2ZMXZDJ2L7ezP8WEREREW9ph0Y/iY6K5MxhXfloWQ6FZVVexxERERERD6hc+9Gk1GQqq2t5L32r11FERERExAMq1340qGss/RNimJa2xesoIiIiIuIBlWs/MjMmpSazJLOQ1bnFXscRERERkQBTufazs4Z3JSrCmKrRaxEREZEmR+Xazzq0ieaEAZ14e1EWVTW1XscRERERkQBSuW4Ek1KSKSip5MtV+V5HEREREZEAUrluBMf1i6djm2hNDRERERFpYlSuG0FUZATnjEzk85V5FJRUeB1HRERERAJE5bqRTEpJorrW8c6iLK+jiIiIiEiAqFw3ksM6xzAsuS3TFmTinPM6joiIiIgEgMp1I5qUksTKnGKWZhV5HUVEREREAkDluhGdPqwr0VERTF2gNzaKiIiINAUq140ormUzTh6UwLuLt1JeVeN1HBERERFpZCrXjWxSahKFu6r4dEWu11FEREREpJGpXDeyMb070jWuBVPTMr2OIiIiIiKNTOW6kUVGGBNSkpi7Jp+cwnKv44iIiIhII1K5DoCJKUnUOpi+UKPXIiIiIuFM5ToAundozaie7bXmtYiIiEiYU7kOkEkpSWwoKGXBph1eRxERERGRRqJyHSDjh3ShVfNIvbFRREREJIypXAdI6+goTh3ShQ/St1JWWe11HBERERFpBCrXATQpNZnSyhpmZ+R4HUVEREREGoHKdQAd3qMdPTq00nboIiIiImFK5TqAzIyJKUl8u347m7eVeR1HRERERPxM5TrAzhmZhBlM05rXIiIiImFH5TrAurZtydF9OjJ9QSa1tVrzWkRERCScqFx7YGJKElk7dzFv/Tavo4iIiIiIH6lce+DkQQnEtIhiapre2CgiIiISTlSuPdCiWSRnDOvK7KU5FJVXeR1HRERERPxE5dojk1KTqaiu5YMl2V5HERERERE/Ubn2yLCkOA7r1EZrXouIiIiEEZVrj5gZk1KTWLR5J2vzir2OIyIiIiJ+oHLtobNGJBIZYUxdoDWvRURERMJBwMu1mbUws+/MbImZLTOze+o5J9rM3jSztWY238x6BDpnIHSKacEv+sUzY2EW1TW1XscRERERkQbyYuS6AjjeOTcMGA6MM7PRe53zG2CHc64P8BDw9wBnDJiJKcnkF1cwZ02+11FEREREpIECXq5dnRLf3Wa+j723KjwTeNF3expwgplZgCIG1PH9O9G+dXOmpmlqiIiIiEio82TOtZlFmtliIA/4xDk3f69TEoEtAM65aqAQ6FDPda40szQzS8vPD82R3+ZREZw1PJFPV+SyvbTS6zgiIiIi0gCelGvnXI1zbjiQBIwys8F7nVLfKPXeo9s45552zqU651Lj4+MbI2pATEpNoqrG8e7iLK+jiIiIiEgDeLpaiHNuJ/AlMG6vQ5lAMoCZRQFxwPaAhgugAV1iGZwYq6khIiIiIiHOi9VC4s2sre92S+BEYOVep70HXOK7PRH43Dn3k5HrcDIpJZnl2UUs21rodRQREREROURejFx3Ab4ws3Tge+rmXH9gZn82szN85zwHdDCztcBNwG0e5AyoM4d3pXlkhEavRUREREJYVKCf0DmXDoyo5/E/7nG7HJgUyFxea9uqOb8c2Jl3F2fxh/EDaB6l/X1EREREQo0aXBCZmJrE
jrIqPluR63UUERERETkEKtdBZOxh8XSOjdZ26CIiIiIhSuU6iERGGOeMTOLLVXnkFZV7HUdEREREDpLKdZCZlJJErYMZi7TmtYiIiEioUbkOMr3i25DSvR1T07YQ5qsPioiIiIQdlesgNCkliXX5pSzastPrKCIiIiJyEFSug9CpQ7vQopnWvBYREREJNSrXQSimRTPGD+7CB0u2squyxus4IiIiInKAVK6D1MTUJIorqvloWY7XUURERETkAKlcB6nRPTuQ1K4lUxds8TqKiIiIiBwglesgFRFhTExJ4pt128jcUeZ1HBERERE5ACrXQWzCyCScg+kLtOa1iIiISChQuQ5iye1bMaZ3B6Yt3EJtrda8FhEREQl2KtdBblJqElu272L+hu1eRxERERGRn6FyHeTGDepCTHSU3tgoIiIiEgJUroNcy+aRnDasC7MzciipqPY6joiIiIjsh8p1CJiYksyuqhpmpm/1OoqIiIiI7IfKdQgY2a0tveJbazt0ERERkSCnch0CzIxJKcmkbdrB+vwSr+OIiIiIyD6oXIeIc0YmEmEwbYFGr0VERESClcp1iOgc24Jj+8YzY2EWNVrzWkRERCQoqVyHkEmpyeQUlTN3Tb7XUURERESkHirXIeSEAZ1o26oZUzU1RERERCQoqVyHkOioSM4ansgny3LZWVbpdRwRERER2YvKdYiZmJJEZU0t7y3RmtciIiIiwUblOsQMToxjQJdYrXktIiIiEoRUrkPQpJQkMrIKWZlT5HUUEREREdmDynUIOmtEIs0iTaPXIiIiIkFG5ToEtW/dnBP6d+adRVlU1dR6HUdEREREfFSuQ9Sk1CS2lVby+co8r6OIiIiIiI/KdYg6tm888THRmhoiIiIiEkRUrkNUVGQE54xI5ItVeeQXV3gdJ6xl7dzFlFcW8OHSbK+jiIiISJBTuQ5hk1KTqKl1vLMoy+soYWtNbjETHv+G2UtzuPqVhfzh7Qx2VdZ4HUtERESCVMDLtZklm9kXZrbCzJaZ2fX1nHOcmRWa2WLfxx8DnTMU9OkUw/DktkxdsAXnnNdxws7CzTuY9NQ8apzj/d8ezVXH9uK1+Zs58z9fsyqn2Ot4IiIiEoS8GLmuBm52zg0ARgPXmNnAes6b65wb7vv4c2Ajho5JqUmszi0hPbPQ6yhh5ctVeVz0zHziWjZj+tVjGJIUx+2nDOCly0axvbSKM/79NS9/u0k/1IiIiMiPBLxcO+eynXMLfbeLgRVAYqBzhIvTh3UlOiqCqQu2eB0lbLy7OIvLX0yjZ8fWTL36SLp1aPXDsbF94/nwhmMY3asDd72zlKtfWcDOskoP04qIiEgw8XTOtZn1AEYA8+s5fKSZLTGz2WY2KKDBQkhsi2aMG5zAe4u3Ul6lucAN9cJ/N3DDm4tJ6d6ON64aTaeYFj85p2ObaJ6/9HDuPHUAn6/M45RH5jJ//TYP0oqIiEiw8axcm1kbYDpwg3Nu7328FwLdnXPDgMeAd/ZxjSvNLM3M0vLz8xs3cBCblJJMUXk1Hy/P9TpKyHLO8eDHq7j7/eWcOKAzL142itgWzfZ5fkSEcfkxvZg+ZQzRURFc8My3PPTJaqq1qY+IiEiT5km5NrNm1BXrV51zM/Y+7pwrcs6V+G7PApqZWcd6znvaOZfqnEuNj49v9NzBakzvDiS2bcnUNE0NORQ1tY4731nKo5+v5dzUJJ64aCQtmkUe0OcOTWrLB9cdw1kjEnnkszVc+Mx8snbuauTEIiIiEqy8WC3EgOeAFc65B/dxToLvPMxsFHU59Xv3fYiIMCaMTOTrtQVsVbE7KBXVNVz3+iJenb+Zq4/tzd8nDCUq8uD+WbSJjuLBc4fz0HnDWLa1kPGPzNWa2CIiIk2UFyPXRwGTgeP3WGpvvJldbWZX+86ZCCw1syXAo8D5Tssy7NfElGScgxkLtWPjgSqpqOayF75nZkY2d4wfwG2n9Mf3M90hOXtEEjOvO4buHVpx9SsL
uePtDM2DFxERaWKsIZ3VzHoDmc65CjM7DhgKvOSc2+mnfAcsNTXVpaWlBfppg8p5T80jt6icL245rkElsSnYVlLBr1/4nmVbi/j7hKFMTEny27Urq2v518ereGrOevp2bsNjF4ykX0KM364vIiIi3jKzBc651PqONXTkejpQY2Z9qJvq0RN4rYHXlEM0KTWZjdvK+H7jDq+jBLXMHWVMemoeq3KKeeriFL8Wa4DmURHcPl5rYouIiDRFDS3Xtc65auBs4GHn3I1Al4bHkkMxfkgCrZtH6o2N+7E6t5iJT8wjv7iCVy4/ghMHdm605xrbN57Z1x/DEVoTW0REpMloaLmuMrMLgEuAD3yP7Xv9MmlUrZpHcerQLszMyKa0otrrOEFn4eYdTHqybjvzt646ksN7tG/054yPieaFSw/njvF1a2KPf2Qu323Y3ujPKyIiIt5oaLn+NXAkcJ9zboOZ9QReaXgsOVSTUpMpq6xhVoZWq9jT7u3M27ZqxowpYxjQJTZgzx0RYVwxtm5N7OZREZz/9Dwe/lRrYouIiISjBpVr59xy59x1zrnXzawdEOOcu99P2eQQpHZvV7dt9wKtGrLbntuZT7t6DMntW/38JzWCPdfEfvjTujWxtXSiiIhIeGlQuTazL80s1szaA0uA582s3rWrJTDMjIkpSXy3YTubtpV6HcdzL/x3A9e/8b/tzONjoj3Ns/ea2Kc8MpcPl+Z4mklERET8p6HTQuJ8W5efAzzvnEsBTmx4LGmIc0YmEmEwrQmPXu+5nflJA39+O/NA+/Ga2Au48x2tiS0iIhIOGlquo8ysC3Au/3tDo3isS1xLjj4snukLMqmpbXrLv9XUOu7wbWd+Xmoyjx/EduaB1MM3TeWqsb145dvNnPHvr1mVU+x1LBEREWmAhpbrPwMfAeucc9+bWS9gTcNjSUNNSklia2E536wr8DpKQFVU13Dt6wt5bf5mphzXm/snDDno7cwDqb41sV/RmtgiIiIhq6FvaJzqnBvqnJviu7/eOTfBP9GkIX45sDOxLaKYmtZ0pobs3s58VkYOd546gFvHNWw780Dac03sO99ZypRXFmpNbBERkRDU0Dc0JpnZ22aWZ2a5ZjbdzPy73Z0ckhbNIjlzeCIfLcuhcFeV13Ea3baSCi54+lu+Xb+df00axuXH9PI60kHbc03sz1bmak1sERGRENTQ35c/D7wHdAUSgfd9j0kQmJSaREV1Le8v2ep1lEaVuaOMSU/OY3VuMU9PTmGCn7czD6T61sR+5NM1TXLuvIiISChqaLmOd84975yr9n28AMT7IZf4wZDEOPp1jgnrNa93b2deUFK3nfkJAxpvO/NA+mFN7OGJPPTpai545lutiS0iIhICGlquC8zsYjOL9H1cDGzzRzBpODNjUmoSS7bs5OqXF/Dekq1htS36gk1125nXOsebAdrOPJDaREfx4HnDefDcYSzL0prYIiIiocAasiqBmXUD/k3dFugO+Aa4zjm32T/xDlxqaqpLS0sL9NMGvV2VNfzjo5V8kJ5NfnEF0VERHNs3nlOHduH4/p2ICaK1nw/Gl6vymPLKQjrHRvPyb47wbNfFQNlYUMp1bywiPbOQi0d3485TBwbl8oIiIiJNgZktcM6l1nvM30t+mdkNzrmH/XrRA6ByvX81tY4Fm3YwKyOb2UuzyS2qoHlUBGMPi2f8kAROHNg5qDZZ2Z93F2dx81tL6JcQwwu/HuX5rouBUlldywMfr+LpOevp1zmGxy4cQd/OMV7HEhERaXICXa43O+e6+fWiB0Dl+sDV1joWbt7BrIwcZi/NJruwnGaRxjGHxTN+SBd+OaAzca2Cs2g//98N3PP+co7o2Z5nLkkNmR8I/Omr1fnc/NZiisur+ePpA7lwVLeQWXJQREQkHAS6XG9xziX79aIHQOX60NTWOhZn7mRWejazl+aQtXMXzSKNo/p0ZPzgLpw0qDNtWzX3OmbdduafrOaxz9dy0sDOPHrB
iCY9LSK/uIKbpy5hzup8xg1K4P4JQ4Liv5OIiEhToJFrOSDOOZZkFjI7I5uZGdlk7thFVIRxZO8OnDqkCycNSqB968AXuJpax13vLuW1+Zs5LzWZ+84eHNS7LgZKba3jua838I+PVhLfJppHLhgRdm/qFBERCUZ+L9dmVkzdGxh/cgho6ZyLOuiLNpDKtX8551iaVcSspdnMyshm07YyIiOMI3t14JQhCZw8KIGObRp/rnNFdQ03vrmYWRk5TDmuN78/uZ+mQOwlPXMn172+iOzCcmZedwx9OrXxOpKIiEhYC+jItVdUrhuPc47l2UXMyshmVkYOGwpKiTA4omcHxg9J4OTBCXSKaeH35y2pqObKl9L4Zt027jx1QEjuuhgoecXlnPzQHLp1aM30q4/UyL6IiEgjUrkWv3HOsTKn+IepI+vySzGDw3u059QhXRg3OIHOsQ0v2ttKKrj0+e9Znl3EPyYMDeldFwNlZno217y2kFtO6stvjz/M6zgiIiJhS+VaGoVzjjV5JcxMr1veb3VuCWaQ2r0dpwzuwilDEugS1/Kgr5u5o4xfPfcdWTt38fhFI8Nm18VAuPb1RXy4NJt3rjmKQV3jvI4jIiISllSuJSDW5hUzKyOHWRnZrMwpBmBkt7aMH9KFU4Z0IbHtzxft1bnFTH5uPrsqa3ju0sP1Br2DtKO0kpMenkOH1s1597dHER3VdFdUERERaSwq1xJw6/JL+HBpDjPTs1meXQTA8OS2jB+SwCmDu9S7o+KCTTu47IXviY6K4KXfjKJ/QmygY4eFz1fmctkLaVzzi9787uT+XscREREJOyrX4qmNBaXMWprN7IwcMrIKARiaFMf4IV0YP7gL3Tq04otVeUx5ZQEJsS2axHbmje3WaelMXbCFaVPGMLJbO6/jiIiIhBWVawkam7eVMdu3vN+SzLqi3T8hhrV5JU1uO/PGVFxexbiH5xIdFcHM646hZXNNDxEREfEXlWsJSpk7ypidkcOHy3LoHBvN3ycMJaYJbmfeWL5ZW8CFz87n10f14E+nD/I6joiISNjYX7kO+GYvIrsltWvFFWN7ccVYrV/dGMb06cilY3rw/H838suBnRnTu6PXkURERMKedpoQCWO3jutPz46t+d3UdEoqqr2OIyIiEvZUrkXCWMvmkTwwaRjZhbu494PlXscREREJeyrXImEupXs7rjq2N298v4UvVuZ5HUdERCSsqVyLNAE3nHgY/RNiuHV6OjvLKr2OIyIiErZUrkWagOioSP517jC2l1byp/eWeR1HREQkbAW8XJtZspl9YWYrzGyZmV1fzzlmZo+a2VozSzezkYHOKRJuBnWN4/oTDuPdxVuZlZHtdRwREZGw5MXIdTVws3NuADAauMbMBu51zinAYb6PK4EnAhtRJDxNOa43w5LiuOPtDPKLK7yOIyIiEnYCXq6dc9nOuYW+28XACiBxr9POBF5ydb4F2ppZlwBHFQk7UZER/OvcYZRW1nD7jAzCZRMpERGRYOHpnGsz6wGMAObvdSgR2LLH/Ux+WsAxsyvNLM3M0vLz8xsrpkhY6dMpht+f3I9PV+QyY2GW13FERETCimfl2szaANOBG5xzRXsfrudTfjLE5px72jmX6pxLjY+Pb4yYImHpsqN6Mqpne+5+bxlbd+7yOo6IiEjY8KRcm1kz6or1q865GfWckgkk73E/CdgaiGwiTUFEhPHAxGHUOMfvp6VreoiIiIifeLFaiAHPASuccw/u47T3gF/5Vg0ZDRQ657S8gYgfdevQijtOHcDXawt4Zf5mr+OIiIiEhSgPnvMoYDKQYWaLfY/9AegG4Jx7EpgFjAfWAmXArz3IKRL2LhzVjY+W5fLXmSsYe1hHundo7XUkERGRkGbh8uvg1NRUl5aW5nUMkZCTU1jOSQ99Rb+EGN648kgiI+p7y4OIiIjsZmYLnHOp9R3TDo0iTVxCXAvuOXMQ32/cwXNfr/c6joiISEhTuRYRzhqeyMmD
OvPAR6tZnVvsdRwREZGQpXItIpgZ9509hDYtorj5rSVU1dR6HUlERCQkqVyLCAAd20Tz17MHk5FVyONfrPM6joiISEhSuRaRH4wb3IWzRyTy2OdryMgs9DqOiIhIyFG5FpEfufv0QXRo05ybpy6mvKrG6zgiIiIhReVaRH4krlUz/j5hKKtzS3jo09VexxEREQkpKtci8hPH9evEBaO68fSc9SzYtN3rOCIiIiFD5VpE6nXHqQNIateSm95aQllltddxREREQoLKtYjUq010FP+cOIzN28u4f/ZKr+OIiIiEBJVrEdmn0b06cNlRPXlp3ia+XlPgdRwREZGgF+V1ABEJbr87uR9frMrj99OW8OGNY4lt0czrSBIkyqtqyC+uoKCkgoKSSgpKKiirrOHsEYm0b93c63giIp5QuRaR/WrRLJIHzx3OhCe+4S/vL+efk4Z5HUka0e7CnF9SQUHx/0pzQUnFj4t0cQXFFfXPxX/qq3U8dN5wjurTMcDpRUS8p3ItIj9reHJb/t9xvXns87WcPCiBEwd29jqSHIRdlTV15dhXmOv+rL80l+yjMMe1bEbHNs3p2CaaQV1j6dgmmviY6B8eq7sdTX5xBTe9tZiLn5vPVWN7c9Mv+9I8SjMQRaTpMOec1xn8IjU11aWlpXkdQyRsVVbXcuZ//kt+cQUf3zhWv/b3WFllNQXFlXVFee+SvFdxLq2sfzOgtq2a0bHNTwtyfJtoOsb877EOraMPqiCXVVbzlw9W8Pp3mxmaFMcj54+gZ8fW/vrSRUQ8Z2YLnHOp9R5TuRaRA7Uiu4gz/v01Jw1K4D8XjvQ6TpOyZMtOnpm7noysQgr2U5jb/VCYo+noG1n+UWn2FeeDLcyH4sOl2dw6PYOqmlruOWMQE1OSMLNGfU4RkUDYX7nWtBAROWADusRyw4l9+edHqxg3aCunD+vqdaSw5pzj67UFPPHlOr5Zt43YFlEc26/TDyPL8b4Cvbs0d2jTnGaRwTMFY9zgLgxLbsuNby7md9PS+Wp1PvedPYS4lnpTrIiEL41ci8hBqa6pZdJT89hQUMrHN4ylU2wLryOFnZpax+yl2Tz51TqWZhXROTaa3xzdkwtGdSMmBFdrqal1PPnVOh78ZDUJsTNgHGIAACAASURBVC14+PzhHN6jvdexREQOmaaFiIhfrc8vYfyjcxnTuyPPXZKqX/X7SXlVDdMXZvLMnPVs3FZGr46tuerYXpw1IpHoqEiv4zXYos07uP6NxWTuKOPa4w/j2uP7EBVEI+0iIgdK5VpE/O75/27gnveX848JQzn38GSv44S0ovIqXv12M899vYGCkgqGJcUx5bje/HJgApER4fWDS3F5FX96bxkzFmaR0r0dD583nOT2rbyOJSJyUFSuRcTvamsdFz77LUuzivjwhmNIaqeCdLDyisv5v6838uq3myiuqOaYwzoy5djeHNm7Q9j/NuDdxVnc+fZSAO47ZwhnaP6+iIQQlWsRaRRbtpdxyiNzGZIYx6uXH0FEmI2yNpaNBaU8NWc90xdmUl1TyylDujDl2N4MTozzOlpAbdlexnVvLGLR5p1MTEni7jMG0SZa77MXkeCn1UJEpFEkt2/FXacN4NbpGbw0byOXHtXT60hBbWlWIU98tY7ZGdlERUYwMSWJK4/pRY8mugZ0cvtWTL3qSB79bA3//mItaRu388j5IxiW3NbraCIih0wj1yLSIM45Lnvhe+at38as646hV3wbryMFFecc89Zt44mv1jF3TQEx0VFcfGR3fn1UDzrFaKWV3eav38aNby4mr7iCm0/qx1Vje+k3ISIStDQtREQaVW5ROSc9NIde8a2ZdvWYsHsT3qGoqXV8vCyHJ75aR3pmIfEx0Vx2VE8uGt2N2BBcTi8QCsuquP3tdGZl5DCmdwcePHc4CXH6AUQk0Cqqa7j3gxUU7qriD+MH6N9hPVSuRaTRvbs4i+vfWMyt4/oz5bjeXsfxTEV1DW8vzOLpOetZ
X1BK9w6tuGpsb84ZmUiLZqG/nF5jc84xNS2TP723jOhmEfx9wlBOHpTgdSyRJiOvuJyrX17Aws07aR4VQXRkBLeN788Fh3fTb5P2oHItIo3OOcc1ry3k0+V5vHftUfRPiPU6UkAVl1fx2vy65fTyiisYnBjLlGP7MG5w+C2nFwjr8ku4/o1FLM0q4qIjunHnqQNp2Vw/nIg0pqVZhVz5Uhrbyyp58NzhDOwSyx/ezuCbddsY1aM9f5swhN6a+geoXItIgGwvreSkh76iU0wL3rnmKJpHhf8GIfnFFbzwzQZemreJ4vJqjurTgauP7c3RfTqG/XJ6ja2yupZ/fbyKp+asp0+nNjx6/ggGdm1aP7SJBMrsjGxuemsJ7Vo14+lfpf6wepFzjqkLMrlv5gp2VdZw3Ql9uHJs7ybx//f9UbkWkYD5ZHkuV7yUxnXH9+Gmk/p5HafRbN5WxtNz1zE1LZPKmlrGDUrg6mN7a6WLRvD1mgJuemsxO8uquO2U/vz6qB76wUXET5xzPPrZWh76dDUjurXlqckp9b7ZOq+4nHveX87M9Gz6J8Rw/4ShDG/C/79TuRaRgLr5rSW8sziLGVPGhF3ZXLa1kCe/Ws/M9K1ERhgTRiZx5dheWiWlkW0rqeDW6el8uiKP4/rF88CkYXRsE+11LJGQtquyhlumLWFmejbnjEzkr2cP+dn3hnyyPJe73llKbnE5vx7Tk5tP6kvrJrg+vcq1iARU4a4qxj08h1bNI5l53TEh/0Y+5xzzN2zniS/X8dXqfFo3j+Ti0d257OiedI7Vu+gDxTnHK99u4t6ZK4hp0YwHJg3luH6dvI4lEpKyC3dxxUtpLNtaxO2n9OeKY3od8G+Eisur+MeHq3j5200ktm3JfWcPbnL/FlWuRSTg5q7JZ/Jz33H50T2587SBXsc5JLW1jk9W5PLEl+tYvGUnHVo357Kje3LxEd2Ja6Xl9LyyKqeY615fxKrcYn5zdE9+P64f0VGh/QOcSCAt2ryDK19ewK7KGh45fzgnDOh8SNdJ27id22ZksDavhLOGd+Wu0wbSoYn8RknlWkQ8cdc7S3ll/ibeuGI0R/Tq4HWcA1ZZXcs7i7N46qt1rMsvJbl9S64c25tJKUkhPwofLsqrarh/9kpe+GYjA7rE8tgFw+nTKcbrWA2SV1ROemYha/NLOGlgZ001kkbx9qJMbp2eQUJsC569JJW+nRv276aiuobHv1jH41+upU10FH88fSBnDU8M+/dFqFyLiCdKK6oZ/+hcap3jw+vHBv28vJKKat74bjPPzt1ATlE5A7rEMuW43owfnEBUZNN+Z3yw+mxFLr+blk5ZZTV/PG0QF4xKDolv6gUlFWRkFpKeWUhGViEZWTvJLar44Xib6CgeOm84vxx4aCOKInurqXX886NVPPnVOkb3as8TF6XQrnVzv11/dW4xt05PZ9HmnYztG899Zw0muX0rv10/2ARduTaz/wNOA/Kcc4PrOX4c8C6wwffQDOfcn/d3TZVrkeD0/cbtnPvUPC4Y1Y2/nj3koD63ttZRWVNLRVUtFTU1dX9W11JZXUtFdc0et+vu/3C7quZ/n1dd67v90/Mrfrhd93jmjjKKy6sZ3as9Vx/bm2P7xodEUWvq8orKuXnqEuauKeDkQZ25/5yhfi0NDbWjtNJXoAtJz9xJRmYhWwvLATCDXh1bMzSpLYMT4xiaFEe7Vs258c3FZGQVcuOJfbn2+D7avEMapKSimhveWMSnK/K46Ihu3H3GIJo1woBBTW3d+yL+8eFKah3ccnI/Lh3TIyzX+g/Gcj0WKAFe2k+5vsU5d9qBXlPlWiR4/XXWCp6es55TBifgHHVlt7quLP+vBO9Rjn1lt7KmtsHPbQbRURFER0USHRVRt+OY7/4Pt5tF0jwygvatm3H+qG6M7NbOD1+1BFJtreO5rzfwj49W0qF1NA+eN4wxvTsGPEdhWRVLt+4ekd5JemYhmTt2/XC8R4dWDElqy9DE
OIYkxTGoaywxLX46f7+8qoY/zMhgxqIsThrYmX+dO6ze80R+zuZtZVz+0vesyy/l7tMHMvnIHo3+nFk7d3Hn2xl8sSqfYUlx3D9hKAO6hNca9UFXrgHMrAfwgcq1SPgrr6rht68tZG1eia/Q7q/oRtA8MpLoZhF7nPPT86PrOb95ZITv8/53vagI0+hzE7I0q5DrXl/Ehm2lTDm2Nzf+sm+jjNBB3YoJS7OKyMjaSUZWERmZO9m4reyH48ntWzI0sS1DkuIYmhjHoMQ44loeeEF2zvH8fzdy36wV9OzYmqcnp2gethyUb9dvY8orC6h18PhFIzmqT+B+4HTO8X56Nve8t4zCXVVcfWxvfnt8n7B530qoluvpQCawlbqivaye864ErgTo1q1byqZNmxoxsYiIhIKyymr+/P5y3vh+C8OS2/Lo+cPp3qF1g65ZWlHNsq1FddM7MneSnlXI+vzSH44ntm3JEN9o9NCkOAZ3jfPb1JRv1hZwzWsLqa51PHr+CH7Rv2kteSaH5rX5m/nju0vp3qEVz15yOD07NuzfwKHaUVrJvTNXMH1hJr06tuZv5wwJqTe470solutYoNY5V2Jm44FHnHOH7e96GrkWEZE9zcrI5rbp6dTUOv5y1mDOHnFgKxjsqqxheXbRDyU6w7eCx+5vlwmxLRiSFPdDmR6SGNfoG9ps2V7GVS8vYEVOEbec1I//d1xv/UZG6lVdU8u9M1fwwjcbObZvPI9dOILYIJhSNHdNPn94O4Mt23dx4RHduO2U/kGR61CFXLmu59yNQKpzrmBf56hci4jI3rbu3MUNby7muw3bOXN4V/5y1uAffUMvr6phZU5xXZH2rdyxJq+Emtq6740d20Qz1Fegd//ZyaONg3ZV1nDr9HTeW7KV8UMS+OfEYUG/Ao8EVmFZFde8tpCv1xZw+dE9uX38gKB6M2FZZTUPfbKa577eQMc20fz5zMGMG5zgdaxDEnLl2swSgFznnDOzUcA0oLvbT1iVaxERqU9NreOJL9fy0Kdr6BLXgl8f1ZO1ecWkZxayKqeYal+Rbt+6+Y9K9NCktnSOjQ6qEWLnHM/MXc/9s1dyWKcYnv5VSoOnvIQj5xzO0aRWWVmXX8LlL6aRuaOM+84ewrmpyV5H2qf0zJ3cOj2DFdlFjBuUwD1nDgq53W6Drlyb2evAcUBHIBf4E9AMwDn3pJn9FpgCVAO7gJucc9/s75oq1yIisj8LN+/g+jcWsWX7LuJaNvvRiPTgxDgS27YMqiK9P3NW53Pt64sAeOyCEYztG+9xouBQU+t44/vNPPDRKlo2i+TCI7px3uHdiI8J710Dv1qdz29fW0jzyAienJzC4T3aex3pZ1XV1PLs3A08/OlqmkdF8IfxAzj/8NBYpx6CsFw3BpVrERH5OeVVNRSUVIRUkd6XTdtKuerlBXWbd4zrz5Vje4X819QQCzfv4E/vLiMjq5BRPdvTPDKCr9cW0CzSGDe4C5NHd+fwHu3C6jXavaLMvTOX0y8hlmd+lUJSu9DauGVDQSm3z0jn2/XbOaJne/52zpCQWBVH5VpERCQMlVZU87tpS5iVkcPpw7ry9wlDaNW8ac3DLiip4B8fruSttEw6x0Zzx6kDOX1oF8yMdfklvPrtZqYu2EJxeTX9E2K4eHR3zhqRSJsQn69eWV3LXe8s5c20LZw8qDMPnjs8ZOfgO+d4K20L985cQUV1LdefcBhXju3VaMto+oPKtYiISJhyzvHEV+v450er6J8Qy9OTU8J62+ndqmtqeXX+Zv718SrKKmv4zTE9ufb4w+otzWWV1by/ZCsvzdvEsq1FtImO4pyRiVw8ujt9O8d4kL5htpVUMOWVhXy3cTvXHt+HG0/sGxbzy/OKyrn7/WXMysihf0IMf58wlGHJbb2OVS+VaxERkTD3xao8rnt9EVERxr8vDOyGIYH23Ybt/PHdpazMKeboPh25+4xB9On081MJnHMs3rKTl7/dxAfp2VRW13JE
z/ZMPrI7Jw1MoHlU8I6U7rYiu4jLX0yjoKSCf04axhnDunodye8+WpbDH99dSn5xBZcd1ZObTuobdL+RUbkWERFpAjYUlHLlS2msLyjlD+MHcNlRPcJqjnFeUTl/m72Stxdl0TWuBXedNpBxgxMO6WvcXlrJW2lbeHX+JrZs30V8TDQXHJ7MBUd0o0tcy0ZI33AfL8vhhjcXE9MiiqcnpwbtqK4/FJVX8ffZK3l1/maS2rXkr2cPCao37qpci4iINBElFdXc9OZiPl6ey9kjEvnbOUNCfsvpqppaXvxmIw9/uobK6lquHNuL//eL3n4ZzaypdcxZnc/L327ii1V5RJhx4oBOTB7dgzG9OwTFdAvnHI9/uY4HPl7F0MQ4nv5VasgtXXeovtuwndtmpLM+v5RzRiZy16kD/bb7aUOoXIuIiDQhtbWOf3+xlgc/Wc3gxFiempxKYtvgHI39Od+sK+BP7y5jTV4Jv+gXz59OH0SPRtrKe8v2Ml6dv5m30rawvbSSXh1bc9Ho7kwcmURcK292Eyyvqts86N3FWzlzeFf+PmFoyP+wdLDKq2r4zxdreeLLdcS1bMYfTx/IGcO6evpbGZVrERGRJujT5bnc8OZioqMi+M9FIxndq4PXkQ5YduEu7p25gpnp2SS3b8mfThvECQM6BaRQVVTXMDsjh5fmbWTh5p20aBbBmcMSmXxkdwYnxjX68++WW1TOlS+lsSSzkN+drG3vV+YUcdv0DBZv2ckv+sVz79lDPPuhUeVaRESkiVqbV8KVL6exeVsZd502kF8d2T2oC1pldS3Pfr2exz5bS61zTDmuN1cf29uz0dqlWYW8On8T7yzayq6qGoYnt2Xy6O6cOrRLo2ZKz9zJFS+lUVxezcPnDeekQaG5Tbi/1dQ6Xpq3kX9+tAqAB88d7skW6irXIiIiTVhReRU3vrGYz1bmMTEliXvPGhyUUwvmrM7n7veWsb6glJMGduau0wYGzbKChbuqmLEwk5e/3cT6/FLatWrGuanJXHREd7p18G/G95Zs5XdTl9CxTTTPXpLKgC6xfr1+OMjcUcaf31/Obaf092TTGZVrERGRJq621vHwZ2t49LM1DEuK48nJKUGzKsaW7WXcO3M5Hy3LpWfH1vzp9IEc16+T17Hq5Zxj3rptvPztJj5enkutcxzbN57Jo7tzXL9ORDbgDZC1tY6HPl3NY5+vZVSP9jxx8Ug6tAnvrdtDlcq1iIiIAPDh0hxufmsxLZtH8cTFIzm8R3vPspRX1fD0nPX854u1RJjx2+P7cPkxPYmOCr5R9frkFJbz+nebef27zeQVV5DYtiUXje7GeanJB12KSyuquemtxXy0LJfzUpP5y1mDQ2Ld7aZK5VpERER+sCa3mCteSiNzxy7uPmMQF4/uHvAMn63I5Z73l7N5exmnDunCH04dELIrmlTV1PLJ8lxenreJeeu30TwygvFDEph8ZHdGdmv3s3PcM3eUcfmLaazOLebOUwfy6zBbnzwcqVyLiIjIjxTuquL6Nxbx5ap8LhiVzN1nDArIiPGmbaX8+f3lfLYyjz6d2nDPGYPCajfJNbnFvDp/M9MXZFJcUc2ALrFMHt2dM4d3pXU9W7OnbdzOVS8voLKmln9fOJJjg2ijFNk3lWsRERH5iZpax78+XsXjX65jZLe2PHFxSqNtTrKrsoYnvlzLk3PW0yzCuOHEvlwypkfYTn0orajm3cVbeWneRlbmFBMTHcWElCQuHt39h63a30rbwh1vZ5DUrhXPXpJKbw/emCeHRuVaRERE9mlmeja3TF1CTIsonpycwshu7fx2beccHy3L4S8frCBr5y7OGt6V28cPaDI7DDrnWLh5By/P28SsjBwqa2oZ07sDSe1a8lZaJkf36ch/Lhzp2SY1cmhUrkVERGS/VmQXceXLaeQWVvDnMwdx/qhuDb7muvwS7n5vGXPXFNA/IYZ7zhjEESG0kY2/FZRU8FbaFl79djNZO3dx6Zge3Hnq
AKIiw3P0PpypXIuIiMjP2llWybWvL2LumgImj+7OXacNPKRpG6UV1Tz2+Vqe+3o9LaIiuemkvkwe3V0l0qem1rF5exk9G2kbd2l8+yvXP51ZLyIiIk1S21bNef7Sw/nnR6t4as56VuYU8fhFKcTHHNiycs45PkjP5r6ZK8gpKmdiShK3jut/wJ/fVERGmIp1GFO5FhERkR9ERUZw+/gBDOway63T0zn9sa95anIKw5Lb7vfzVucW86d3lzFv/TYGJ8byn4tGktLdf3O3RUKFfj8jIiIiP3Hm8ESmTxlDZIQx6al5TFuQWe95xeVV/OWD5ZzyyFyWZxdx71mDefeao1WspcnSyLWIiIjUa1DXON6/9miueXUht0xdwtKsQu44dQDNIiNwzvH2oiz+Omsl20orOP/wbvzu5H60b93c69ginlK5FhERkX1q37o5L/9mFH+dtZL/++8GVmQXcf2Jh/HQJ6v5fuMOhiW35blLUn922ohIU6HVQkREROSAzFiYyW0zMqisrqV96+bcOq4fk1KSiYjQVt3StGi1EBEREWmwc0Ym0bdzDF+szONXR/bQxici9VC5FhERkQM2ODGOwYlxXscQCVpaLURERERExE9UrkVERERE/ETlWkRERETET1SuRURERET8ROVaRERERMRPVK5FRERERPxE5VpERERExE9UrkVERERE/CRstj83s3xgk0dP3xEo8Oi5w4Vew4bTa9hweg39Q69jw+k1bDi9hg2n13Dfujvn4us7EDbl2ktmlrav/eXlwOg1bDi9hg2n19A/9Do2nF7DhtNr2HB6DQ+NpoWIiIiIiPiJyrWIiIiIiJ+oXPvH014HCAN6DRtOr2HD6TX0D72ODafXsOH0GjacXsNDoDnXIiIiIiJ+opFrERERERE/UbkWEREREfETlesGMLNxZrbKzNaa2W1e5wlFZpZsZl+Y2QozW2Zm13udKVSZWaSZLTKzD7zOEorMrK2ZTTOzlb6/j0d6nSnUmNmNvn/HS83sdTNr4XWmUGBm/2dmeWa2dI/H2pvZJ2a2xvdnOy8zBrt9vIb/9P17Tjezt82srZcZg119r+Eex24xM2dmHb3IFmpUrg+RmUUC/wFOAQYCF5jZQG9ThaRq4Gbn3ABgNHCNXsdDdj2wwusQIewR4EPnXH9gGHotD4qZJQLXAanOucFAJHC+t6lCxgvAuL0euw34zDl3GPCZ777s2wv89DX8BBjsnBsKrAZuD3SoEPMCP30NMbNk4JfA5kAHClUq14duFLDWObfeOVcJvAGc6XGmkOOcy3bOLfTdLqau0CR6myr0mFkScCrwrNdZQpGZxQJjgecAnHOVzrmd3qYKSVFASzOLAloBWz3OExKcc3OA7Xs9fCbwou/2i8BZAQ0VYup7DZ1zHzvnqn13vwWSAh4shOzj7yHAQ8DvAa2AcYBUrg9dIrBlj/uZqBQ2iJn1AEYA871NEpIepu5/frVeBwlRvYB84Hnf1Jpnzay116FCiXMuC3iAutGtbKDQOfext6lCWmfnXDbUDUIAnTzOE+ouA2Z7HSLUmNkZQJZzbonXWUKJyvWhs3oe0091h8jM2gDTgRucc0Ve5wklZnYakOecW+B1lhAWBYwEnnDOjQBK0a/hD4pvTvCZQE+gK9DazC72NpUImNkd1E1BfNXrLKHEzFoBdwB/9DpLqFG5PnSZQPIe95PQr0APiZk1o65Yv+qcm+F1nhB0FHCGmW2kbnrS8Wb2ireRQk4mkOmc2/1bk2nUlW05cCcCG5xz+c65KmAGMMbjTKEs18y6APj+zPM4T0gys0uA04CLnDb2OFi9qftheYnv+0sSsNDMEjxNFQJUrg/d98BhZtbTzJpT98ad9zzOFHLMzKib57rCOfeg13lCkXPududcknOuB3V/Dz93zmnE8CA453KALWbWz/fQCcByDyOFos3AaDNr5ft3fQJ6U2hDvAdc4rt9CfCuh1lCkpmNA24FznDOlXmdJ9Q45zKc
c52ccz18318ygZG+/1/KfqhcHyLfmyR+C3xE3TeQt5xzy7xNFZKOAiZTN9q62Pcx3utQ0iRdC7xqZunAcOCvHucJKb5R/2nAQiCDuu8v2jr5AJjZ68A8oJ+ZZZrZb4D7gV+a2RrqVmq438uMwW4fr+G/gRjgE9/3lic9DRnk9vEayiHQ9uciIiIiIn6ikWsRERERET9RuRYRERER8ROVaxERERERP1G5FhERERHxE5VrERERERE/UbkWEQkhZlbi+7OHmV3o52v/Ya/73/jz+iIiTYHKtYhIaOoBHFS5NrPInznlR+XaOacdFkVEDpLKtYhIaLofOMa3OcaNZhZpZv80s+/NLN3MrgIws+PM7Asze426zV0ws3fMbIGZLTOzK32P3Q+09F3vVd9ju0fJzXftpWaWYWbn7XHtL81smpmtNLNXfbsziog0WVFeBxARkUNyG3CLc+40AF9JLnTOHW5m0cB/zexj37mjgMHOuQ2++5c557abWUvgezOb7py7zcx+65wbXs9znUPdrpXDgI6+z5njOzYCGARsBf5L3a6rX/v/yxURCQ0auRYRCQ8nAb8ys8XAfKADcJjv2Hd7FGuA68xsCfAtkLzHeftyNPC6c67GOZcLfAUcvse1M51ztcBi6qariIg0WRq5FhEJDwZc65z76EcPmh0HlO51/0TgSOdcmZl9CbQ4gGvvS8Uet2vQ9xURaeI0ci0iEpqKgZg97n8ETDGzZgBm1tfMWtfzeXHADl+x7g+M3uNY1e7P38sc4DzfvO54YCzwnV++ChGRMKNyLSIhyczuNrNXGvH6y3yjvLvf0Pe8me0ws+/M7BgzW9UIz9nNzEoOYFUPgHSg2syWmNmNwLPAcmChmS0FnqL+UeQPgSgzSwf+Qt3UkN2eBtJ3v6FxD2/7nm8J8Dnwe+dczsF8bYFiZhvN7MR9HGuU/24iInsy55zXGURE6uVbx/kmoD91I7WLgfucc1+b2d1AH+fcxQHIcQzwOtDPOVf6c+cfxHU3Apc75z711zWbOn+8poH8uyUi4Ucj1yISlMzsJuBh4K9AZ6Ab8DhwpgdxugMb/VmsmxIza1LzsJva1ysiP6ZyLSJBx8zigD8D1zjnZjjnSp1zVc65951zv9vH50w1sxwzKzSzOWY2aI9j481suZkVm1mWmd3ie7yjmX1gZjvNbLuZzTWzCN+xjWZ2opn9hropF0f6pmzc41vfOXOP6yeb2QwzyzezbWb2b9/jvc3sc99jBb51oNv6jr1M3Q8M7/uu+3ur23XR7S5nZtbVzN7zZVtrZlfs8Zx3m9lbZvaS7+taZmap+3lNHzGzLf+/vTuPr6q+8z/++iSQECAQlrAk7Moum8alWrUqihWLtp222s1Obf05M452Wm1l2um0dlqd+qtdpv6mY61dx1q11lJBcalad0FZIiCKrFkIgZAFyJ7P749zEi7hJlxIbu69yfv5eOSRe8495+ZzT6K8883nfL9mVm3BHNfnRjyXbmb/ambvha/1hpmND5+bbWZPhTWUWbiKo5n9ysz+I+I12l+T7Wb2tbD95KCZ9TOzWyO+xkYz+3C7Gr9oZpsinj/VzG4xsz+2O+6/zOxHHb1XYL4Fc31XmdkfzGxABzV+Lfx5qDGzzWZ2kZldSrCYzifC78u6GL8XD5vZ78ysGrjVzA6Z2YiIY04Lfz6i9bSLSC+icC0iyeh9BDNY/Ok4znmcYEq5UcCbQGTf8C+A/+Pu2cApBH3DAF8BioBcgtHxfwWO6JVz918A1wOvuPtgd//3yOct6I9+DNhBMA1dPvBA69PA7UAeMJNg2rtvha/7GWAn8KHwdb8f5T39PqwvD/g74HtmdlHE80vCr5UDLAN+2sn1WUUwV/Vw4H7godbQSdB6czVwGTAE+DxwyMyygacJ+rTzgJOBZzr5Gu1dDSwGcty9CXgPOJfgpspvA78zs7EAZvYxgmvz2bCGJcA+4HfApRG/lPQDPgH8tpOv+3HgUmAyMBf4
XPsDzGw6cANwevhzsYjgrxNPEPy15A/h92VeeMqxvhdXAA8TfC9+ADwX1tHq08AD7t7YSd0i0gso/cWEgwAAIABJREFUXItIMhoB7A0DWUzc/T53r3H3eoKQNs+CEXCARmCWmQ1x9/3u/mbE/rHAxHBk/AU//htRziAIXLeEI+x17v5iWNMWd3/K3evdvRy4Czg/lhcNR47fD3wtfM21BCPon4k47EV3X+HuzQRhc16UlyKs5Xfuvs/dm9z9B0AmMD18+gvAN9x9swfWufs+4HJgt7v/IKyhxt1fO45r8xN33+XutWEND7l7ibu3uPsfgHcJrl9rDd9391VhDVvcfYe7lxLMVvKx8LhLCX423jjG1y1x9wrgLwS/VLTXHF6DWWbW3923u/t70V4sxu/FK+7+aPjeaoFfEwTq1l/ArqbzXwhEpJdQuBaRZLQPGGkx9q6GbQ13hC0H1cD28KmR4eePEozK7jCz583sfeH+O4EtwJNmttXMbj2BWscDO6L9ImBmo8zsgbD1oJpgFHbkUa8QXR5Q4e41Eft2EIyMt4qcseMQMKCja2ZmXwlbLqrMrJJg9Li1lvEEo8rR3lvUwBmjXe1q+KwFy6tXhjWcEkMNEBFUw8/HCqntr8vg9ge4+xbgSwS/iO0Jv095HbxeLN+LXUeewp8JgvsU4GKC1TM1faFIH6BwLSLJ6BWgDrgyxuM/SfBn+YUEoXFSuN8AwtHQKwhaRh4FHgz317j7V9x9CvAh4Mvt/tQfi13AhA5C7e0EbSZz3X0IQTCMXJCls1HyEmB42JrRagJQfJz1tc528jWCNoVh7p4DVEXUsgs4KcqpHe2HYGGagRHbY6Ic0/b+zGwi8HOCVowRYQ1vxVADBN+zuWZ2CsFoevupAk+Iu9/v7u8nuGHVgf9sX3colu9F+3aiOoKfs08RjHBr1Fqkj1C4FpGk4+5VwDeBu83sSjMbaGb9zeyDZhatNzmbYKXAfQSB73utT5hZhpl9ysyGhv2u1QQtAZjZ5WZ2splZxP7m4yz3daAUuMPMBpnZADM7J6KuA0ClmeUD7W/GLAOmdHANdgEvA7eHrzkXuJYTC5bZQBNQTjDH9TcJ+ppb3Qt8x8ymWmBueDPeY8AYM/uSmWWaWbaZnRmesxa4zMyGm9kYglHgzgwiCKDlAGb29wQj15E13Bze+Gfh92VieC3qCPqZ7ydYbn3nCVyDI5jZdDO70MwyCX6Rq+Xw974MmGThza1d+F78hqDfewnBXy1EpA9QuBaRpOTudxHcaPcNgkC2i2DU89Eoh/+G4M/0xQQLqbza7vnPANvD1ozrOdxiMJXghr0DBKPl/8/dnzvOOpsJRr1PJrhBsYjghjsIbto7lWCUeDnwSLvTbwe+EbZJ3Bzl5a8mGIUvIbi589/d/anjqS+0kuCGz3cIrlMdR7Yx3EUwyvokwS8ZvwCywjaIi8P3t5ugR/qC8JzfEiwqsz087w+dFeDuGwlu9HuFILzOAV6KeP4h4LsEAbqG4Ps8POIlfh2e010jwJnAHcBegvc2iuCGVoCHws/7zKy1P/+4vxfu/hLQArzp7tu7qW4RSXJaREZERJKemU0A3gbGuHt1ouuJlZn9Fbjf3e9NdC0i0jMUrkVEJKmF7Rl3AUPc/fOJridWZnY68BQwvt3NkCLSi2kVKRERSVpmNoigjWQHwTR8KcHMfk1wQ+5NCtYifUtce67N7NJw1ast0aa4MrPPhStWrQ0/vhDx3DVm9m74cU086xQRkeQUzh0+2N1nhzcWpgR3v8bdh7r7rxJdi4j0rLi1hYST5r9DcDNMEcHqYFeHN7W0HvM5oMDdb2h37nBgNVBAcHf5G8Bp7r4/LsWKiIiIiHSDeLaFnAFscfetAGb2AME8tBs7PSuwCHgqXF0LM3uK4M+Bv+/ohJEjR/qkSZO6WrOIiIiISKfeeOONve6eG+25eIbrfI6c6qkIODPKcR81s/MIRrn/JfyzX7Rz86Oc22bSpEms
Xr26axWLiIiIiByDme3o6Ll49lxblH3te1D+Akxy97kEc83++jjOxcyuM7PVZra6vLy8S8WKiIiIiHRVPMN1ETA+YnscweT7bdx9n7vXh5s/B06L9dzw/HvcvcDdC3Jzo47Mi4iIiIj0mHiG61XAVDObbGYZwFXAssgDzGxsxOYSYFP4eCVwiZkNM7NhwCXhPhERERGRpBW3nmt3bzKzGwhCcTpwn7tvMLPbgNXuvgy40cyWAE1ABfC58NwKM/sOQUAHuK315kYRERERkWTVa1ZoLCgocN3QKCIiIiLxZmZvuHtBtOfiuoiMiIiIiEhfouXPRUSkT3h0TTF3rtxMSWUteTlZ3LJoOlcu6HSWV2knFa5hsteY7PVJ1ylci4hIr/fommKWPlJIbWMzAMWVtSx9pBBAwSZG0a7h1/64nm17D3LetOSYsetv75Tzs+ffo76pBThcY2lVLRfPGkN6mtEvzUiP+Dh6O400A7NoswJ3jX4O+wb1XIuISK93zh1/pbiy9qj9gzLT+d6H5zB3XA4Thw8kLa37A1UqqzrUSGFxFeuLK/nJM+9S19iS6JJ6zBHh24z09Igg3rYdBPF+aWnBselGmnUU4NN4cUt51Gs4dugAXll6UQLepZyoznquNXItIiK9Wl1jc9RgDXCwvpmbHlgLQPaAfszJH8qccUOZm5/D3HFDGTcsKy4jmMmopq6Rt4qrKSyuZH1RFYXFVezYdyimc3/9+TPiXF1srrnv9Q6f+8nVC2huaaG5BZpbWmhqcVpanKYWpzn8aIr4fPi5due409Qccbw7zc1++Lm2c5zG5hZqGw+/fke/nJRW1fG+259hdt4QZo0dwqy8IczO61s/f72JwrWIiPRabxVX8S9/WNvh83k5A/jFNadTWBSMzhYWVfHLF7fT0ByEoJyB/ZmTP5S544YyJwzcY4cOSPnAc6ihiQ0l1UGILqpkfXEVW8sPtj2fn5PF3HFDuer0CcwdN5RT8oZy2U9eiPpLSn5OFucnSVtIfk5WhzUumZeXgIqO1NFfUIYM6MeZk4ezsbSav769h5awqSB7QD9mjh1yROieOiqbjH6ajyKZKVyLiEiv09zi/Oz59/jR0+8wfFAG158/hV+/vKOt1xUgq386X100g5ljhzBz7BA+fnqwMHBDUwvvlNWEo7fBKO7/PL+VpjDxjBycEY5w5zA3DN6jhgxIyPuMRV1jMxtLq4NfIML3tGXPgbYAN2bIAOaMG8qH5+czZ9xQ5uQPZcTgzKNe55ZF04/oF4bgGt6yaHpPvZVjSvYaO6rvtitOaeu5rmtsZvPuGjaWVrOhpIqNJdU88PqutnP6pxtTR2UHgTsM3TPzhjBkQP+EvCc5mnquRUSkV9m57xBffnAtq3fsZ/GcsXz3w6eQMzCjS7M01DU28/buGgqLKllXVEVhURXv7qlpC6ijh2S2jWwHbSXRA2q81TcFwWx9WOP64ireKauhOeIXg7njciJG44/vF4NUmOki2Ws8kfqaW5zt+w6ysaQ6DN3VbCypYu+BhrZjJgwfGNFSEnweMyT1/8qSrDrruVa4FhGRXsHdeWh1Ed/+ywbSzLjtytlcOT8/buHiUEMTG1tbK4qrWF9Uyda9B2n9ZzU/JysIseODHu45+UMZOrD7Rhcbm1t4t+zAET3Sb5fWtLW0DBvYv210fc64IEwrbPUue2rqwqBd3Ra8t+093N4zfFBGW+CeFbaXTB45iH7paivpKoVrERHp1fYdqGfpI4U8ubGMs6YM5wcfn09+TlaP11FT18iGkuq2UePCokq2R9wUOHHEwCN6uE/JH0J2xJ/zOxrVbG5x3is/cESP9MaS6rYp57IH9DuiL3xOvm6G66sO1DfxdmkQtDeWBKPcm8tqaAh/VjL7pTFjTDaz8oYebisZm83AjMOdwsk++p8MFK5FRKTX+uvbZXz14UKqaxu5ZdF0rn3/5KSaUq/qUCNvlVQd0cNdtP/wTW1TcgcxN38oACve2t0W
giCYDm7C8Cx2V9W39dwOykhndn7QejJ3fDAyPUHTCEonGptb2Fp+sK2Hu7W1pKq2EQAzmDxyELPGDgHgyQ1lbX8BgaAv/PaPzFHAjqBwLSIivc6hhia+u3wT//vaTmaMyeZHV81nxpghiS4rJhUHG1hfVBkxwl3F7uq6qMdmpKfxyTODWTvmjhvK5JGDSVeQli5yd0qq6sLR7cOhO/IXv0hZ/dP5+3MmMXrIAEYPyWTUkAGMHjKA3MGZfXL2EoVrERHpVdbs3M+XH1zH9n0H+eK5U/jKJdPI7Jee6LK6ZPKty4n2L7IB2+5Y3NPlSB/V0c8hQL80a5s1J9KIQRlh2M5kdPaR4Xv0kExGDxnAyMGZveqXQi0iIyIivUJjcws//esWfvrsFsYMGcD9XziL9500ItFldYu8DuZozktA77j0XR39HObnZPHCVy+g4lADZdV17Kmup6y6jrLqespq6tgTPt5YUs3eA/W0z+BpBrnZQdAelX04dLcF8XDfsIEZMbU4JXNfuMK1iIikhK3lB/iXP6xlXVEVH1mQz7eumN2r5vZN9jmapW/o7OcwLc0YOTiTkYMzmd3JmjxNzS3sO9hwOHxXHw7fZTV1FFfWsmbnfvYdbDjq3P7pxqjsAYzqZBR89fYKvvPYRmrDFS+LK2tZ+kghQFIEbIVrERFJau7O717byXeXb2RA/3Tu/uSpLJ47NtFldbvWUJCso3HSN3THz2G/9LQwDHc+h3p9UzPlNfWUVdeH4buOspr6tpHx98oP8PJ7e6muazrm16xtbObOlZuT4r8X9VyLiEjS2lNdx1f/uJ7nNpdz7tSR/N+PzTvmP9gi0rvUNjSzp+bwKPg//35N1ON68v4E9VyLiEjKeeKtUpY+Usihhma+vWQ2n33fRM3bLNIHZWWkM3HEICaOGATAHY+/ndT3J/S9uVNERCSp1dQ1cvND67j+d28ybthAlt94LtecPUnBWkSAoC88q/+RswMl0/0JGrkWEZGk8fq2Cr784FpKKmv55wtP5saLptJfSzWLSIRkvz9B4VpERBKuoamFHz79Dj97/j0mDB/IQ9efzWkThyW6LBFJUlcuyE+aMN1eXIcDzOxSM9tsZlvM7NZOjvs7M3MzKwi3J5lZrZmtDT9+Fs86RUQkcd4pq+HKu1/iv597j08UjGfFjecqWItIyorbyLWZpQN3AxcDRcAqM1vm7hvbHZcN3Ai81u4l3nP3+fGqT0REEqulxfnly9v5zyfeJjuzHz//bAEXzxqd6LJERLoknm0hZwBb3H0rgJk9AFwBbGx33HeA7wM3x7EWERFJIqVVtdz80Dpe2rKPhTNHcftH5pKbnZnoskREuiyebSH5wK6I7aJwXxszWwCMd/fHopw/2czWmNnzZnZutC9gZteZ2WozW11eXt5thYuISPz8eW0xi374N9bsrOSOj8zh558tULAWkV4jniPX0eZMaluxxszSgB8Cn4tyXCkwwd33mdlpwKNmNtvdq494Mfd7gHsgWESmuwoXEZHuV3WokX/781ssW1fCggk5/PDj85k0clCiyxIR6VbxDNdFwPiI7XFAScR2NnAK8Fw4d+kYYJmZLXH31UA9gLu/YWbvAdMALcEoIpKCXtqyl688uI69B+r5ysXT+IcPnEQ/TbEnIr1QPMP1KmCqmU0GioGrgE+2PunuVcDI1m0zew642d1Xm1kuUOHuzWY2BZgKbI1jrSIiEgd1jc18/4nN3PfSNqbkDuKRz57N3HE5iS5LRCRu4hau3b3JzG4AVgLpwH3uvsHMbgNWu/uyTk4/D7jNzJqAZuB6d6+IV60iItL9NpRU8aUH1vLungNc876J3PrBmWRlpB/7RBGRFGbuvaNVuaCgwFevVteIiEiiPLqmuG3FtOwB/ThQ38TIwZnc+bF5nD8tN9HliYh0GzN7w90Loj2nFRpFRKTLHl1TzNJHCqltbAaguq6JNIObLpqqYC0ifYrCtYiIHJe6xmaKK2vZWXGI
oopD7Npfy29e2U5dY8sRx7U4/L/n3uNTZ01MTKEiIgmgcC0iIkdoaXH21NSzs+IQuyoOBZ/3B493VdSyu7ruiOMz+qXR0NQS9bVKKmt7omQRkaShcC0i0gdV1Tayq+IQRfvD8FxR2xaii/bXHhGWzWDMkAGMHz6Qc04eyYThAxk/PCv8PJDcwZmc+/1nKY4SpPNysnrybYmIJJzCtYhICoi8WTAvJ4tbFk3nygX5HR7f0NTS1rqxq/UjIkhX1TYecfzQrP6MH57FjDHZXDxzNOOGDwzC87As8odlkdmv81k+blk0/Yiea4Cs/uncsmh61964iEiKUbgWEUly7W8WLK6sZekj66mqbWB23tC2wNwanosqDlFaXUfkZFAZ6WmMG57F+GEDmT8+JwzOwcjz+OEDGZrVv0s1tgb94/kFQESkN9JUfCIiSe6s259hd1XdMY8LWjeygsA8bGBb28aE4QMZlZ1JWpr1QLUiIr2fpuITEUkR7s7WvQd5Y/t+Vm2v4I0d+zsN1r/6+9MZP3wg+TlZDOivBVpERBJN4VpEJIEamlooLK7ijR0VrNq+nzd37GffwQYAhg3sz2kTh7P3QD3VdU1HnZufk8UHpo/q6ZJFRKQTCtciIj2o6lAjb+ysYPX2/azevp91RZXUhzNzTBoxkAtmjKJg4jAKJg3npNxBmNlRPdegmwVFRJKVwrWISJy4O0X7a1m1vYLVO/azensF75QdAKBfmjE7fyifPmsip08axmkTh5ObnRn1dXSzoIhI6lC4FhHpJk3NLWwqrWnrlV61vYI9NfUAZGf249SJw/jQ3DwKJg1n/vgcsjJi75G+ckG+wrSISApQuBYROUE1dY2s2VnJ6h37eWNHBWt2VnKoIWjdyM/J4n0njWhr8Zg2Opt0zdYhItLrKVyLiMSotKo27JUObj58e3c1LQ5pBjPGDOFjp43jtEnDKZg4TCsTioj0UQrXItLnRVv98EPz8ninrIbVbf3S+9uW987qn86CCTnccOFUCiYOY8GEHLIHdG0RFhER6R20iIyI9GnRZuJIM+ifZtQ3B/9/HJWdScGkYRRMHE7BpGHMHDuE/ulpiSpZREQSTIvIiIiE6hqbea/8AJt317B5dw2/enl721R4rVoc0tPTuOujp1AwcTjjh2dhpn5pERE5NoVrEemVmlucnRWH2kL0O2U1vL27mu37DtHcEoxI9083Gpuj//WutqGZj5w6ridLFhGRXkDhWkRSmrtTXlPP5rIgRL8dBul3ymqoawxGpM1gwvCBTBudzWVzxjJtdDYzxmQzaeQgPnDnc2291JF0Q6KIiJwIhWsRSRk1dY28U1bD5t0H2Ly7ui1I7z/U2HbMyMGZzBiTzSfPmMiMMdlMG5PNtNGDGZgR/X93tyyartUPRUSk28Q1XJvZpcCPgXTgXne/o4Pj/g54CDjd3VeH+5YC1wLNwI3uvjKetYp0JNpMElrMI74amloO90WHI9Kbd9ccMcI8KCOdaWOyWTR7DNPHZAcfo7MZMTj6Kocd0eqHIiLSnY45W4iZDXf3iuN+YbN04B3gYqAIWAVc7e4b2x2XDSwHMoAb3H21mc0Cfg+cAeQBTwPT3L2ZDmi2EImHaDNJZPVP5/aPzFH4Og4d/YLS0hIsD/727uqwJzoI0dv2HqQp7Ivul2aclDv4iAA9fUw2+TlZpGlRFhERSYCuzhbympmtBX4JPO6xz913BrDF3beGRTwAXAFsbHfcd4DvAzdH7LsCeMDd64FtZrYlfL1XYvzaIt3ijsffPiJYA9Q2NnPnys0K1zFq/wtKcWUtX3loHT94ajP7DjS0rWgIMG5YFjPGZHPJ7NFhX/QQJo8cREY/TXsnIiKpIZZwPQ1YCHwe+C8z+wPwK3d/5xjn5QO7IraLgDMjDzCzBcB4d3/MzG5ud+6r7c5VkpG4c3c2ldbw9KYynt5Uxu7quqjHFVfWsu9A/XG3IPQ1+w7U861lG476BaW5xSmrrueTZ0xo
G5GeNjqbwZm6DURERFLbMf8lC0eqnwKeMrMLgN8B/2hm64Bb3b2j0eRof69tG/U2szTgh8DnjvfciNe4DrgOYMKECZ28C5GONTS18Nq2fTy9sYynN+2huLIWM1gwPochA/pRXdcU9bwzvvcM7z95JEvm5XHJ7NFaoS+070A9KzeUsbywhFfe20dLB3/ramxq4VtLZvdscSIiInF2zHBtZiOATwOfAcqAfwaWAfMJbkKc3MGpRcD4iO1xQEnEdjZwCvBcuDjDGGCZmS2J4VwA3P0e4B4Ieq6P9V5EWlUeauC5zeU8tamMv20up6a+iQH90zh3ai43XTSVC2aMIjc7s4Oe6zRuuHAqB+qbWLa2hK88tI7MP6Vx0cxRLJmXxwemj2JA//QEvrue1xqoVxSW8srWfTS3OJNHDuIfP3AyD67exZ6a+qPO0VR3IiLSG8XyN9hXgN8CV7p7UcT+1Wb2s07OWwVMNbPJQDFwFfDJ1ifdvQoY2bptZs8BN4c3NNYC95vZXQQ3NE4FXo/tLYlEt2PfQZ7aGLR7rNq+n+YWJzc7k8vnjWXhzNGcc/LIo0LxsWaS+Oqi6by5s5Jla4tZXljKisLdZGf245LZY7hifh5nnzSCfr10meyKgw2s3LCb5esPB+pJIwZy/flTWDwnj5ljszEzTh41WFPdiYhInxHLbCF2HDcxtj/3MuBHBFPx3efu3zWz24DV7r6s3bHPEYbrcPvrBH3eTcCX3P3xzr6WZguR9lpanLVFlWG7RxnvlB0AYProbBbOGsXFs8YwN39ot8040dTcwitb97FsbQlPvLWbmvomRg7O4LI5Y1kyL49TJwxL+dktWgP1isJSXn7vcKBePHcsl80Zy6yxQ6IuE67pDEVEpDfpbLaQWML1U8DH3L0y3B5GMJPHom6vtAsUrgWCJatf3LKXpzeW8czbZew90EB6mnHm5OEsnDmahTNHM2HEwLjXUdfYzHOby/nLuhKe3lRGfVML+TlZfGheHkvmHR7VTQX7W0eo2wXqy+YEgXp2XvRALSIi0lt1NVyvdff57fatcfcF3Vhjlylc9117aur466Y9PL2pjBfe3Ut9UwvZmf34wIxRLJw5ig9MG8XQgYm72fBAfRNPbtjNsnUlvPDuXppbnJNHDeaKeXksmZ/HxBGDElZbR/YfbODJjbt5bP3hQD1xxEAWK1CLiIh0OVy/AXzY3XeG2xOBP7n7qd1eaRcoXPcd7s47ZQd4elMZT20sY+2uSgDyc7K4eNZoLp41mtMnDU/KuZErDjaworCUZWtLeH17sDbTvHFD+dC8PD40L4/RQwYkrLbWQL28cDcvb9lLU4szYXjQ8rFYgVpERKRNV8P1pQQzcjwf7joPuC7ZliNXuO7dGptbWLW9gqc3BiPUOysOAUEwXThzNAtnjWbGmNRptQAoqazlsfUlLFtXwlvF1ZjBWZNHsGR+Hh88ZQw5AzPiXkPloQae3FDGY4WlRwTqy+aM5fK5CtQiIiLRdClchy8wEjiLYP7pV9x9b/eW2HUK16mpsxvdqusaeX5zOU9vKuPZt/dQXddERr803n/ySBbOHM1FM0cldKS3O71XfoBla0v4y7oStu49SP9047ypuSyZn8fFs0YzMKP7FldpDdTLC0t5KQzU44dnsXhOHovnjOWUfAVqERGRznRHuB5GMB1eW5Jx9791W4XdQOE69USbQzqzXxqL54xhT00Dr27dR1OLM2JQBhfOGMXCWaM5d+rIbg2aycbd2VBSzZ/XFvOXdaXsrq4jq386C2eNZsm8PM6flntC7S5VhxpZuTGYNi8yUF82ZyyXz8lToBYRETkOXW0L+QJwE8FCLmsJRrBfcfcLu7vQrlC4Tj3n3PFXiitroz538qjBLJw5motnjWL++GGkp/gUdieipcVZtb2CZetKWFFYyv5DjQzN6s8HTxnDknl5nDllBOlp1uHof2ugXlFYyovvBoF63LCsth7qOflDFahFREROQFfDdSFw
OvCqu883sxnAt939E91f6olTuE4t7s7kpSuiPmfAtjsW92xBSa6xuYUX393LsnUlPLlhNwcbmhmVncnMMdm8uq2C+qaWtmP7pxsn5w5mS/kBGpvDQB3O8jF3nAK1iIhIV3UWrmP5+3qdu9eZGWaW6e5vm5mWVpMT0tDUwmPrS/j5C9s6PEbLYh+tf3oaF8wYxQUzRlHb0Mwzb5exbG0JT24sO+rYxuZgNpUvnDtZgVpERKSHxRKui8wsB3gUeMrM9gMl8S1Lepuq2kbuf20nv3p5G2XV9UwdNZirTh/Po2uLqWs8POqqZbGPLSsjncvn5nH53Dwm37qcaH97anFn6WUze7w2ERGRvu6Y4drdPxw+/JaZPQsMBZ6Ia1XSa+yqOMR9L23jD6t2caihmXNOHsF/fnQu50/Lxcw4a8oILYvdBXk5WVH71jX6LyIikhidhmszSwPWu/spAO7+fGfHi7Ras3M/976wjcffKiXNjCXz8rj23MnMzht6xHFXLshXmO6CWxZNP2rGFY3+i4iIJE6n4drdW8xsnZlNaF2hUaQjzS3O05vKuPeFrazavp/sAf344nlT+NzZkxg7VCOp8dD6i4lG/0VERJJDLD3XY4ENZvY6cLB1p7sviVtVklJqG5p5+M0ifvHCVrbvO0R+ThbfvHwWHz99PIMze++c1MlCo/8iIiLJI5bk8+24VyEpqbymnt+8sp3fvbqD/YcamTc+h58ums6ls8fQL/34FzoRERERSXWx3NCoPms5wjtlNdz7wlYeXVNCY0sLC2eO5rrzplAwcZimfBMREZE+7Zjh2sxqoG22rwygP3DQ3YfEszBJLu7Oy+/t4+cvbOW5zeUM6J/Gx08fx+fPmcyU3MGJLk9EREQkKcQycp0duW1mVwJnxK0iSSqNzeGiL3/bxsbSakYOzuDLF0/j02dNZPigjESXJyIiIpJUjvtuM3d/1MxujUcxkjyqahv5/es7+dVL29ldXcfJowbznx+dwxXz8xnQPz3R5YmY0AtVAAAatElEQVSIiIgkpVjaQj4SsZkGFEDUReGkF9hVcYhfvrSdP6zaycGGZs4+aQS3f2QO50/LJS1N/dQiIiIinYll5PpDEY+bgO3AFXGpRhJm7a5Kfv7CVh4vDBZ9+dC8PK59/2ROyR967JNFREREBIit5/rve6IQ6XktbYu+bOP17RVkZ/bji+dO4XPnaNEXERERkRMRS1vIr4Gb3L0y3B4G/MDdPx/DuZcCPwbSgXvd/Y52z18P/BPQDBwArnP3jWY2CdgEbA4PfdXdr4/1TUnnahua+eObRfzixW1s23uQ/Jws/u3yWXxCi76IiIiIdEksSWpua7AGcPf9ZrbgWCeZWTpwN3AxUASsMrNl7r4x4rD73f1n4fFLgLuAS8Pn3nP3+TG+D4ni0TXFRyyL/X/On8Lemnp+27roy7ih/NfVC/jgKVr0RURERKQ7xBKu08xsmLvvBzCz4TGedwawxd23huc9QNCr3Rau3b064vhB6EbJbvPommKWPlJIbWMzAMWVtXzzzxsAWDhzNF88dzJnTB6uRV9EREREulEsIfkHwMtm9jBB+P048N0YzssHdkVsFwFntj/IzP4J+DLBAjUXRjw12czWANXAN9z9hSjnXgdcBzBhwoQYSuo77ly5uS1YRxqVncm91xQkoCIRERGR3u+YvQDu/hvgo0AZUA58xN1/G8NrRxsSPWpk2t3vdveTgK8B3wh3lwIT3H0BQfC+38yOWhHS3e9x9wJ3L8jNzY2hpL6jpLI26v7ymvoerkRERESk7zhmuDazs4Bd7v5Td/8vYJeZHTUCHUURMD5iexxQ0snxDwBXArh7vbvvCx+/AbwHTIvha0ooLyf6bB8d7RcRERGRrovlLrb/JpjJo9XBcN+xrAKmmtlkM8sArgKWRR5gZlMjNhcD74b7c8MbIjGzKcBUYGsMX1NCtyyaTka7mxSz+qdzy6LpCapIREREpPeLJVybu7e1c7h7
C7HNj90E3ACsJJhW70F332Bmt4UzgwDcYGYbzGwtQfvHNeH+84D1ZrYOeBi43t0rYn5XwpUL8jltUg5G0J+Tn5PF7R+Zw5UL8hNdmoiIiEivFcsNjVvN7EYOj1b/IzGOIrv7CmBFu33fjHh8Uwfn/RH4YyxfQ6Jram7hnd0HWDx3LD/95KmJLkdERESkT4hl5Pp64GygmMMzflwXz6Kk617fVsG+gw0snjM20aWIiIiI9BmxtHfsIeiXlhSyvLCUgRnpfGD6qESXIiIiItJnxLL8+QDgWmA2MKB1fyzLn0tiNDW3sHLDbi6cMYqsjPRElyMiIiLSZ8TSFvJbYAywCHieYEq9mngWJV3z+rYK9h5QS4iIiIhIT4slXJ/s7v8GHHT3XxNMmTcnvmVJVywvLCWrv1pCRERERHpaLOG6MfxcaWanAEOBSXGrSLqkucWDlpCZagkRERER6WmxTMV3j5kNI1iafBkwGPi3uFYlJ+y1bfvUEiIiIiKSILHMFnJv+PBvwJT4liNdtSJsCblALSEiIiIiPS6WthBJEc0tzhNvqSVEREREJFEUrnsRtYSIiIiIJJbCdS+ilhARERGRxIrlhkbM7GyCGULajnf338SpJjkBQUtImRaOEREREUmgWFZo/C1wErAWaA53O6BwnUSChWPquUwtISIiIiIJE8vIdQEwy9093sXIiVtRWMqA/mlcMCM30aWIiIiI9Fmx9Fy/RbD8uSSp5hbn8bd2c9GM0QzMiKnTR0RERETiIJYkNhLYaGavA/WtO919SdyqkuOyartaQkRERESSQSzh+lvxLkK6Zvl6tYSIiIiIJINYVmh83sxGA6eHu1539z3xLUti1doScuGMUWoJEREREUmwY/Zcm9nHgdeBjwEfB14zs7+Ld2ESG7WEiIiIiCSPWIY6vw6c3jpabWa5wNPAw/EsTGLTOkvIhTO0cIyIiIhIosUyW0hauzaQfTGeh5ldamabzWyLmd0a5fnrzazQzNaa2YtmNiviuaXheZvNbFEsX6+vaW0JuWC6WkJEREREkkEsiewJM1sJ/D7c/gSw4lgnmVk6cDdwMVAErDKzZe6+MeKw+939Z+HxS4C7gEvDkH0VMBvIA542s2nu3oy0Wb29gvKaehbPVUuIiIiISDI45gi0u98C3APMBeYB97j712J47TOALe6+1d0bgAeAK9q9dnXE5iCClR8Jj3vA3evdfRuwJXw9ibBcLSEiIiIiSSWmXgJ3/yPwx+N87XxgV8R2EXBm+4PM7J+ALwMZwIUR577a7tz84/z6vZpaQkRERESST4cj12b2Yvi5xsyqIz5qzKy6o/MiXyLKvqOWUHf3u939JOBrwDeO51wzu87MVpvZ6vLy8hhK6j1aW0I0S4iIiIhI8ugwXLv7+8PP2e4+JOIj292HxPDaRcD4iO1xQEknxz8AXHk857r7Pe5e4O4Fubl9awGVFYWlZPZTS4iIiIhIMollnuvfxrIvilXAVDObbGYZBDcoLmv3OlMjNhcD74aPlwFXmVmmmU0GphLMtS1AS0RLyKBMtYSIiIiIJItYktnsyA0z6wecdqyT3L3JzG4AVgLpwH3uvsHMbgNWu/sy4AYzWwg0AvuBa8JzN5jZg8BGoAn4J80UctjqHfvZo1lCRERERJJOh+HazJYC/wpkRfRYG9BAMHvIMbn7CtpN2+fu34x4fFMn534X+G4sX6evUUuIiIiISHLqrOf6dnfPBu5s1289wt2X9mCNEqGlxVlRWKqWEBEREZEkdMx05u5LzWwYQd/zgIj9f4tnYRJda0vIZWoJEREREUk6xwzXZvYF4CaCGTvWAmcBr3B4TmrpQa0tIRepJUREREQk6RxzthCCYH06sMPdLwAWAH1rUukkEcwSUsoHpueqJUREREQkCcUSruvcvQ7AzDLd/W1genzLkmje2LmfsmotHCMiIiKSrGIZ/iwysxzgUeApM9tP54vBSJwsXx+2hMwcnehSRERERCSKWG5o
/HD48Ftm9iwwFHgirlXJUSJbQgarJUREREQkKcWyQuNZZpYN4O7PA88S9F1LD1JLiIiIiEjyi6Xn+r+BAxHbB8N90oOWry8lQy0hIiIiIkktlnBt7u6tG+7eQmy92tJN2lpCpqklRERERCSZxRKut5rZjWbWP/y4Cdga78LksDfDlpDFWjhGREREJKnFEq6vB84GioEi4EzgungWJUdaXqiWEBEREZFUEMtsIXuAq3qgFomipcV5vHC3WkJEREREUkCHac3Mvuru3zez/wK8/fPufmNcKxMgaAnZXV3H0rkzEl2KiIiIiBxDZ0OhG8PPq3uiEIlOLSEiIiIiqaOzcP0J4DEgx91/3EP1SITWlpDz1RIiIiIikhI6u6HxNDObCHzezIaZ2fDIj54qsC9bsytoCVmshWNEREREUkJnw6E/I1jmfArwBmARz3m4X+Jo+frdYUvIqESXIiIiIiIx6HDk2t1/4u4zgfvcfYq7T474ULCOs9aFY86flkv2gP6JLkdEREREYtBhuDazIeHDr7dvCVFbSPyt2bWf0iq1hIiIiIikks7aQu4HLidoCXHUFtKj1BIiIiIikno6DNfufnn4efKJvriZXQr8GEgH7nX3O9o9/2XgC0ATUA583t13hM81A4XhoTvdfcmJ1pFqWltCzpuqlhARERGRVHLM5c/N7BwzGxQ+/rSZ3WVmE2I4Lx24G/ggMAu42sxmtTtsDVDg7nOBh4HvRzxX6+7zw48+E6wB1uyqDFpC5o5JdCkiIiIichyOGa6B/wYOmdk84KvADuC3MZx3BrDF3be6ewPwAHBF5AHu/qy7Hwo3XwXGxVx5L7aisJSMdC0cIyIiIpJqYgnXTe7uBMH4x+GCMtkxnJcP7IrYLgr3deRa4PGI7QFmttrMXjWzK6OdYGbXhcesLi8vj6Gk5BcsHFPKedNGMkQtISIiIiIpJZZwXWNmS4FPA8vDdo9YUp9F2edRDzT7NFAA3Bmxe4K7FwCfBH5kZicd9WLu97h7gbsX5ObmxlBS8ltbVElJVR2L52qWEBEREZFUE0u4/gRQD1zr7rsJRp/v7PwUIBipHh+xPQ4oaX+QmS0Evg4scff61v3uXhJ+3go8ByyI4WumvOXr1RIiIiIikqqOGa7dfbe73+XuL4TbO939NzG89ipgqplNNrMM4CpgWeQBZrYA+B+CYL0nYv8wM8sMH48EzgE2xvqmUpVaQkRERERSWyyzhZxlZqvM7ICZNZhZs5lVHes8d28CbgBWApuAB919g5ndZmats3/cCQwGHjKztWbWGr5nAqvNbB3wLHCHu/f6cN3aEnKZFo4RERERSUmdLSLT6qcEo84PEfRFfxaYGsuLu/sKYEW7fd+MeLywg/NeBubE8jV6kxVhS8jCWWoJEREREUlFsYRr3H2LmaW7ezPwSzN7Oc519TnuzuNv7ebcqWoJEREREUlVsYTrQ2HP9Foz+z5QCgyKb1l9z9pdlRRX1vKVS6YluhQREREROUGxzBbyGYLly28ADhLMAPLReBbVFy1XS4iIiIhIyjvmyLW77wgf1gLfjm85fZNaQkRERER6hw7DtZkV0sGiLwDuPjcuFfVBrS0hX75YLSEiIiIiqayzkevLe6yKPm5FYSn9000tISIiIiIprrNw3R8Y7e4vRe40s3OJstKinBh3Z0Xhbs6dmsvQLLWEiIiIiKSyzm5o/BFQE2V/bficdIN1RVUUV9Zq4RgRERGRXqCzcD3J3de33+nuq4FJcauoj2ltCblYLSEiIiIiKa+zcD2gk+eyuruQvsjdWb6+VC0hIiIiIr1EZ+F6lZl9sf1OM7sWeCN+JfUdagkRERER6V06u6HxS8CfzOxTHA7TBUAG8OF4F9YXqCVEREREpHfpMFy7exlwtpldAJwS7l7u7n/tkcp6udaWkPefPFItISIiIiK9RCwrND4LPNsDtfQp68OWkC8tnJroUkRERESkm3TWcy1x1NoScsmsMYkuRURERES6
icJ1Arg7j7W2hAxUS4iIiIhIb6FwnQDrNUuIiIiISK+kcJ0AagkRERER6Z0UrnuYu7O8sJRz1BIiIiIi0usoXPewwuIqivarJURERESkN4pruDazS81ss5ltMbNbozz/ZTPbaGbrzewZM5sY8dw1ZvZu+HFNPOvsScsLS+mXZlyihWNEREREep24hWszSwfuBj4IzAKuNrNZ7Q5bAxS4+1zgYeD74bnDgX8HzgTOAP7dzIbFq9ae4u6sKCzl/VNHkjMwI9HliIiIiEg3i+fI9RnAFnff6u4NwAPAFZEHuPuz7n4o3HwVGBc+XgQ85e4V7r4feAq4NI619ojC4ip2VaglRERERKS3ime4zgd2RWwXhfs6ci3w+AmemxLUEiIiIiLSux1z+fMusCj7POqBZp8GCoDzj+dcM7sOuA5gwoQJJ1ZlD2ltCTnnZLWEiIiIiPRW8Ry5LgLGR2yPA0raH2RmC4GvA0vcvf54znX3e9y9wN0LcnNzu63weHiruJpdFbUsVkuIiIiISK8Vz3C9CphqZpPNLAO4ClgWeYCZLQD+hyBY74l4aiVwiZkNC29kvCTcl7LaWkJmqyVEREREpLeKW1uIuzeZ2Q0EoTgduM/dN5jZbcBqd18G3AkMBh4yM4Cd7r7E3SvM7DsEAR3gNneviFet8dbaEnK2WkJEREREerV49lzj7iuAFe32fTPi8cJOzr0PuC9+1fWct4qr2VlxiBsuODnRpYiIiIhIHGmFxh6glhARERGRvkHhOs7UEiIiIiLSdyhcx9mGkqAlZPGcMYkuRURERETiTOE6zpYXlpKeZlwyS+FaREREpLdTuI6jtpaQk0YwbJBaQkRERER6O4XrONpQUs2OfYe0cIyIiIhIH6FwHUetLSGLZqslRERERKQvULiOE7WEiIiIiPQ9CtdxopYQERERkb5H4TpOVrTOEqKWEBEREZE+Q+E6DiJbQoarJURERESkz1C4joONpdVs33eIy9QSIiIiItKnKFzHwfL1miVEREREpC9SuO5magkRERER6bsUrruZWkJERERE+i6F6262QgvHiIiIiPRZCtfdKGgJ2c37pqglRERERKQvUrjuRptKa9i296BaQkRERET6KIXrbrS8sCRsCRmd6FJEREREJAEUrrtJZEvIiMGZiS5HRERERBJA4bqbqCVEREREROIars3sUjPbbGZbzOzWKM+fZ2ZvmlmTmf1du+eazWxt+LEsnnV2h8OzhKglRERERKSv6hevFzazdOBu4GKgCFhlZsvcfWPEYTuBzwE3R3mJWnefH6/6ulPrwjFnTRmulhARERGRPiyeI9dnAFvcfau7NwAPAFdEHuDu2919PdASxzri7u3dNWxVS4iIiIhInxfPcJ0P7IrYLgr3xWqAma02s1fN7MpoB5jZdeExq8vLy7tSa5esKCwlzdDCMSIiIiJ9XDzDtUXZ58dx/gR3LwA+CfzIzE466sXc73H3AncvyM3NPdE6u8TdWb6+lLOmjGCkWkJERERE+rR4husiYHzE9jigJNaT3b0k/LwVeA5Y0J3FdYdH1xRz5veeYeveg2wsqebRNcWJLklEREREEiie4XoVMNXMJptZBnAVENOsH2Y2zMwyw8cjgXOAjZ2f1bMeXVPM0kcK2VNTD0BlbSNLHylUwBYRERHpw+IWrt29CbgBWAlsAh509w1mdpuZLQEws9PNrAj4GPA/ZrYhPH0msNrM1gHPAne0m2Uk4e5cuZnaxuYj9tU2NnPnys0JqkhEREREEi1uU/EBuPsKYEW7fd+MeLyKoF2k/XkvA3PiWVtXlVTWHtd+EREREen9tELjCcrLyTqu/SIiIiLS+ylcn6BbFk0nq3/6Efuy+qdzy6LpCapIRERERBItrm0hvdmVC4Ipu+9cuZmSylrycrK4ZdH0tv0iIiIi0vcoXHfBlQvyFaZFREREpI3aQkREREREuonCtYiIiIhIN1G4FhERERHpJgrXIiIiIiLdROFaRERE
RKSbmLsnuoZuYWblwI4EffmRwN4Efe3eQtew63QNu07XsHvoOnadrmHX6Rp2na5hxya6e260J3pNuE4kM1vt7gWJriOV6Rp2na5h1+kadg9dx67TNew6XcOu0zU8MWoLERERERHpJgrXIiIiIiLdROG6e9yT6AJ6AV3DrtM17Dpdw+6h69h1uoZdp2vYdbqGJ0A91yIiIiIi3UQj1yIiIiIi3UThugvM7FIz22xmW8zs1kTXk4rMbLyZPWtmm8xsg5ndlOiaUpWZpZvZGjN7LNG1pCIzyzGzh83s7fDn8X2JrinVmNm/hP8dv2VmvzezAYmuKRWY2X1mtsfM3orYN9zMnjKzd8PPwxJZY7Lr4BreGf73vN7M/mRmOYmsMdlFu4YRz91sZm5mIxNRW6pRuD5BZpYO3A18EJgFXG1msxJbVUpqAr7i7jOBs4B/0nU8YTcBmxJdRAr7MfCEu88A5qFreVzMLB+4EShw91OAdOCqxFaVMn4FXNpu363AM+4+FXgm3JaO/Yqjr+FTwCnuPhd4B1ja00WlmF9x9DXEzMYDFwM7e7qgVKVwfeLOALa4+1Z3bwAeAK5IcE0px91L3f3N8HENQaDJT2xVqcfMxgGLgXsTXUsqMrMhwHnALwDcvcHdKxNbVUrqB2SZWT9gIFCS4HpSgrv/Dahot/sK4Nfh418DV/ZoUSkm2jV09yfdvSncfBUY1+OFpZAOfg4Bfgh8FdBNejFSuD5x+cCuiO0iFAq7xMwmAQuA1xJbSUr6EcH//FoSXUiKmgKUA78MW2vuNbNBiS4qlbh7MfB/CUa3SoEqd38ysVWltNHuXgrBIAQwKsH1pLrPA48nuohUY2ZLgGJ3X5foWlKJwvWJsyj79FvdCTKzwcAfgS+5e3Wi60klZnY5sMfd30h0LSmsH3Aq8N/uvgA4iP4Mf1zCnuArgMlAHjDIzD6d2KpEwMy+TtCC+L+JriWVmNlA4OvANxNdS6pRuD5xRcD4iO1x6E+gJ8TM+hME6/9190cSXU8KOgdYYmbbCdqTLjSz3yW2pJRTBBS5e+tfTR4mCNsSu4XANncvd/dG4BHg7ATXlMrKzGwsQPh5T4LrSUlmdg1wOfAp19zDx+skgl+W14X/vowD3jSzMQmtKgUoXJ+4VcBUM5tsZhkEN+4sS3BNKcfMjKDPdZO735XoelKRuy9193HuPong5/Cv7q4Rw+Pg7ruBXWY2Pdx1EbAxgSWlop3AWWY2MPzv+iJ0U2hXLAOuCR9fA/w5gbWkJDO7FPgasMTdDyW6nlTj7oXuPsrdJ4X/vhQBp4b/v5ROKFyfoPAmiRuAlQT/gDzo7hsSW1VKOgf4DMFo69rw47JEFyV90j8D/2tm64H5wPcSXE9KCUf9HwbeBAoJ/n3R6m4xMLPfA68A082syMyuBe4ALjazdwlmargjkTUmuw6u4U+BbOCp8N+WnyW0yCTXwTWUE6AVGkVEREREuolGrkVEREREuonCtYiIiIhIN1G4FhERERHpJgrXIiIiIiLdROFaRERERKSbKFyLiPQCZtYcMZ3lWjPrthUmzWySmb3VXa8nItKb9Ut0ASIi0i1q3X1+oosQEenrNHItItKLmdl2M/tPM3s9/Dg53D/RzJ4xs/Xh5wnh/tFm9iczWxd+tC5hnm5mPzezDWb2pJllJexNiYgkMYVrEZHeIatdW8gnIp6rdvczCFas+1G476fAb9x9LvC/wE/C/T8Bnnf3ecCpQOvKs1OBu919NlAJfDTO70dEJCVphUYRkV7AzA64++Ao+7cDF7r7VjPrD+x29xFmthcY6+6N4f5Sdx9pZuXAOHevj3iNScBT7j413P4a0N/d/yP+70xEJLVo5FpEpPfzDh53dEw09RGPm9E9OyIiUSlci4j0fp+I+PxK+Phl4Krw8aeAF8PHzwD/AGBm6WY2pKeKFBHpDTTyICLSO2SZ2dqI7SfcvXU6vkwze41gQOXqcN+NwH1mdgtQDvx9uP8m4B4z
u5ZghPofgNK4Vy8i0kuo51pEpBcLe64L3H1vomsREekL1BYiIiIiItJNNHItIiIiItJNNHItIiIiItJNFK5FRERERLqJwrWIiIiISDdRuBYRERER6SYK1yIiIiIi3UThWkRERESkm/x/K08R4+7pLYsAAAAASUVORK5CYII=\n",
      "text/plain": [
       "<Figure size 864x648 with 2 Axes>"
      ]
     },
     "metadata": {
      "needs_background": "light"
     },
     "output_type": "display_data"
    }
   ],
   "source": [
    "# 画图\n",
    "plt.subplot(2, 1, 1)\n",
    "plt.plot(x)\n",
    "plt.title('Loss history')\n",
    "plt.xlabel('Iteration')\n",
    "plt.ylabel('Loss')\n",
    "\n",
    "plt.subplot(2, 1, 2)\n",
    "plt.plot(y,'-o', label='val')\n",
    "plt.title('Classification accuracy history')\n",
    "plt.xlabel('Epoch')\n",
    "plt.ylabel('Clasification accuracy')\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Module API: Train a Three-Layer ConvNet\n",
    "现在应该使用模块API在CIFAR上训练一个三层的ConvNet。这应该看起来很像训练两层网络!您不需要调整任何超参数，但是经过一段时间的训练后，您应该可以达到45%以上。\n",
    "你应该使用没有动量的随机梯度下降来训练模型。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 71,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "==================================================\n",
      "Epochs 0\n",
      "Iteration 0, loss = 2.6763\n",
      "Checking accuracy on validation set\n",
      "Got 107 / 1000 correct (10.70)\n",
      "\n",
      "Iteration 100, loss = 2.0228\n",
      "Checking accuracy on validation set\n",
      "Got 346 / 1000 correct (34.60)\n",
      "\n",
      "Iteration 200, loss = 1.5413\n",
      "Checking accuracy on validation set\n",
      "Got 430 / 1000 correct (43.00)\n",
      "\n",
      "Iteration 300, loss = 1.7005\n",
      "Checking accuracy on validation set\n",
      "Got 444 / 1000 correct (44.40)\n",
      "\n",
      "Iteration 400, loss = 1.4794\n",
      "Checking accuracy on validation set\n",
      "Got 457 / 1000 correct (45.70)\n",
      "\n",
      "Iteration 500, loss = 1.3682\n",
      "Checking accuracy on validation set\n",
      "Got 466 / 1000 correct (46.60)\n",
      "\n",
      "Iteration 600, loss = 1.4897\n",
      "Checking accuracy on validation set\n",
      "Got 475 / 1000 correct (47.50)\n",
      "\n",
      "Iteration 700, loss = 1.4058\n",
      "Checking accuracy on validation set\n",
      "Got 489 / 1000 correct (48.90)\n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "([2.6763367652893066,\n",
       "  2.022784948348999,\n",
       "  1.5413262844085693,\n",
       "  1.700487732887268,\n",
       "  1.4794301986694336,\n",
       "  1.368176817893982,\n",
       "  1.4897373914718628,\n",
       "  1.4057869911193848],\n",
       " [0.107, 0.346, 0.43, 0.444, 0.457, 0.466, 0.475, 0.489])"
      ]
     },
     "execution_count": 71,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "learning_rate = 3e-3\n",
    "channel_1 = 32\n",
    "channel_2 = 16\n",
    "\n",
    "model = None\n",
    "optimizer = None\n",
    "################################################################################\n",
    "# TODO: Instantiate your ThreeLayerConvNet model and a corresponding optimizer #\n",
    "################################################################################\n",
    "# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
    "\n",
    "model=ThreeLayerConvNet(3,channel_1,channel_2,10)\n",
    "optimizer=optim.SGD(model.parameters(),lr=learning_rate)\n",
    "\n",
    "# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
    "################################################################################\n",
    "#                                 END OF YOUR CODE                             \n",
    "################################################################################\n",
    "\n",
    "train_part34(model, optimizer)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Part IV. PyTorch Sequential API\n",
    "\n",
    "第三部分介绍了PyTorch模块API，它允许您定义任意可学习的层及其连接性。\n",
    "对于像前馈层这样的简单模型，你仍然需要通过三个步骤:子类的`nn.Module`，在`__init__`,中将层分配给类属性，并在`forward()`中将每一层逐个调用。有没有更方便的方法?\n",
    "\n",
    "幸运的是，PyTorch提供了一个名为 `nn.Sequential`，将上述步骤合并为一个。它没有“nn”那么灵活。，因为您不能指定比前馈堆栈更复杂的拓扑，但是它对于许多用例来说已经足够了。\n",
    "\n",
    "### Sequential API: Two-Layer Network\n",
    "让我们看看如何重写我们的两层完全连接网络的例子与 `nn.Sequential`，然后使用上面定义的训练循环来训练它。\n",
    "同样，您不需要在这里调整任何超参数，但是在经过一段时间的训练后，您应该能够达到40%以上的准确率。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 72,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "==================================================\n",
      "Epochs 0\n",
      "Iteration 0, loss = 2.3335\n",
      "Checking accuracy on validation set\n",
      "Got 136 / 1000 correct (13.60)\n",
      "\n",
      "Iteration 100, loss = 1.5648\n",
      "Checking accuracy on validation set\n",
      "Got 397 / 1000 correct (39.70)\n",
      "\n",
      "Iteration 200, loss = 1.6825\n",
      "Checking accuracy on validation set\n",
      "Got 424 / 1000 correct (42.40)\n",
      "\n",
      "Iteration 300, loss = 1.8195\n",
      "Checking accuracy on validation set\n",
      "Got 408 / 1000 correct (40.80)\n",
      "\n",
      "Iteration 400, loss = 1.6557\n",
      "Checking accuracy on validation set\n",
      "Got 421 / 1000 correct (42.10)\n",
      "\n",
      "Iteration 500, loss = 1.8843\n",
      "Checking accuracy on validation set\n",
      "Got 422 / 1000 correct (42.20)\n",
      "\n",
      "Iteration 600, loss = 1.9000\n",
      "Checking accuracy on validation set\n",
      "Got 401 / 1000 correct (40.10)\n",
      "\n",
      "Iteration 700, loss = 2.0394\n",
      "Checking accuracy on validation set\n",
      "Got 434 / 1000 correct (43.40)\n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "([2.3334877490997314,\n",
       "  1.5648093223571777,\n",
       "  1.6825041770935059,\n",
       "  1.8194698095321655,\n",
       "  1.655714750289917,\n",
       "  1.884252667427063,\n",
       "  1.9000388383865356,\n",
       "  2.0394318103790283],\n",
       " [0.136, 0.397, 0.424, 0.408, 0.421, 0.422, 0.401, 0.434])"
      ]
     },
     "execution_count": 72,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# We need to wrap `flatten` function in a module in order to stack it\n",
    "# in nn.Sequential\n",
    "class Flatten(nn.Module):\n",
    "    def forward(self, x):\n",
    "        return flatten(x)\n",
    "\n",
    "hidden_layer_size = 4000\n",
    "learning_rate = 1e-2\n",
    "\n",
    "model = nn.Sequential(\n",
    "    Flatten(),\n",
    "    nn.Linear(3 * 32 * 32, hidden_layer_size),\n",
    "    nn.ReLU(),\n",
    "    nn.Linear(hidden_layer_size, 10),\n",
    ")\n",
    "\n",
    "# you can use Nesterov momentum in optim.SGD\n",
    "optimizer = optim.SGD(model.parameters(), lr=learning_rate,\n",
    "                     momentum=0.9, nesterov=True)\n",
    "\n",
    "train_part34(model, optimizer)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Sequential API: Three-Layer ConvNet\n",
    "这里你应该用`nn.Sequential`。使用与第三部分相同的架构定义和训练一个三层卷积神经网络:\n",
    "1. 带有32个5x5滤波器的卷积层(带偏置)，填充0为2\n",
    "2. 线性整流函数（Rectified Linear Unit）\n",
    "3. 卷积层(带偏置)有16个3x3滤波器，填充0为1\n",
    "4. 线性整流函数（Rectified Linear Unit）\n",
    "5. 全连接层(带偏见)计算10个类的分数\n",
    "\n",
    "你应该使用上面定义的‘random_weight’函数初始化你的权重矩阵，你应该使用上面的‘zero_weight’函数初始化你的偏置向量。\n",
    "你应该使用Nesterov momentum 0.9的随机梯度下降法来优化你的模型。\n",
    "同样，您不需要调整任何超参数，但是经过一个阶段的训练后，您应该看到准确率超过55%。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 73,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "==================================================\n",
      "Epochs 0\n",
      "Iteration 0, loss = 2.2992\n",
      "Checking accuracy on validation set\n",
      "Got 158 / 1000 correct (15.80)\n",
      "\n",
      "Iteration 100, loss = 1.5176\n",
      "Checking accuracy on validation set\n",
      "Got 444 / 1000 correct (44.40)\n",
      "\n",
      "Iteration 200, loss = 1.4129\n",
      "Checking accuracy on validation set\n",
      "Got 481 / 1000 correct (48.10)\n",
      "\n",
      "Iteration 300, loss = 1.4257\n",
      "Checking accuracy on validation set\n",
      "Got 521 / 1000 correct (52.10)\n",
      "\n",
      "Iteration 400, loss = 1.3350\n",
      "Checking accuracy on validation set\n",
      "Got 523 / 1000 correct (52.30)\n",
      "\n",
      "Iteration 500, loss = 1.3110\n",
      "Checking accuracy on validation set\n",
      "Got 529 / 1000 correct (52.90)\n",
      "\n",
      "Iteration 600, loss = 1.0635\n",
      "Checking accuracy on validation set\n",
      "Got 572 / 1000 correct (57.20)\n",
      "\n",
      "Iteration 700, loss = 1.3828\n",
      "Checking accuracy on validation set\n",
      "Got 571 / 1000 correct (57.10)\n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "([2.299175500869751,\n",
       "  1.5175790786743164,\n",
       "  1.4128787517547607,\n",
       "  1.4257279634475708,\n",
       "  1.3349725008010864,\n",
       "  1.3110129833221436,\n",
       "  1.0635403394699097,\n",
       "  1.382818579673767],\n",
       " [0.158, 0.444, 0.481, 0.521, 0.523, 0.529, 0.572, 0.571])"
      ]
     },
     "execution_count": 73,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "channel_1 = 32\n",
    "channel_2 = 16\n",
    "learning_rate = 1e-2\n",
    "\n",
    "model = None\n",
    "optimizer = None\n",
    "\n",
    "################################################################################\n",
    "# TODO: Rewrite the 2-layer ConvNet with bias from Part III with the           #\n",
    "# Sequential API.                                                              #\n",
    "################################################################################\n",
    "# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
    "\n",
    "model=nn.Sequential(\n",
    "    nn.Conv2d(3,channel_1,5,padding=2),\n",
    "    nn.ReLU(inplace=True),\n",
    "    nn.Conv2d(channel_1,channel_2,3,padding=1),\n",
    "    nn.ReLU(inplace=True),\n",
    "    Flatten(),\n",
    "    nn.Linear(channel_2*32*32,10)\n",
    ")\n",
    "# for i in (0,2,5):\n",
    "#     w_shape=model[i].weight.data.shape\n",
    "#     b_shape=model[i].bias.data.shape\n",
    "#     model[i].weight.data=random_weight(w_shape)\n",
    "#     model[i].bias.data=zero_weight(b_shape)\n",
    "\n",
    "optimizer=optim.SGD(model.parameters(),nesterov=True,lr=learning_rate, momentum=0.9)\n",
    "\n",
    "# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
    "################################################################################\n",
    "#                                 END OF YOUR CODE                             \n",
    "################################################################################\n",
    "\n",
    "train_part34(model, optimizer)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Part V. CIFAR-10 open-ended challenge\n",
    "\n",
    "在本节中，您可以在CIFAR-10上试验任何您想要的ConvNet架构。\n",
    "现在，您的工作是对体系结构、超参数、损失函数和优化器进行试验，以训练在10个时间周期内在CIFAR-10 **验证**设置上达到**至少70%**精度的模型。您可以使用上面的check_accuracy和train函数。你可以使用任何一个“nn”。模块”或“神经网络。连续的API。\n",
    "描述一下你在这本笔记本的结尾做了什么。\n",
    "以下是每个组件的官方API文档。注意:我们在类“空间批处理规范”中称为PyTorch中的“BatchNorm2D”。\n",
    "\n",
    "* Layers in torch.nn package: http://pytorch.org/docs/stable/nn.html\n",
    "* Activations: http://pytorch.org/docs/stable/nn.html#non-linear-activations\n",
    "* Loss functions: http://pytorch.org/docs/stable/nn.html#loss-functions\n",
    "* Optimizers: http://pytorch.org/docs/stable/optim.html\n",
    "\n",
    "\n",
    "### Things you might try:\n",
    "- **过滤器大小**:以上我们使用5x5;更小的过滤器会更有效吗?\n",
    "- **过滤器的数量**:上面我们使用了32个过滤器。多做还是少做更好?\n",
    "- **池与跨步卷积**:你使用最大池或只是跨步卷积?\n",
    "- **批处理归一化**:尝试在卷积层之后添加空间批处理归一化，在仿射层之后添加普通批处理归一化。你的社交网络训练得更快吗?\n",
    "- **网络结构**:上面的网络有两层可训练参数。你能用深层网络做得更好吗?好的架构包括:\n",
    "    - [conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]\n",
    "    - [conv-relu-conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]\n",
    "    - [batchnorm-relu-conv]xN -> [affine]xM -> [softmax or SVM]\n",
    "- **全局平均池**:不是变平，然后有多个仿射层，执行卷积，直到你的图像变得很小(7x7左右)，然后执行平均池操作，以得到一个1x1图像图像(1,1，过滤器#)，然后再重塑成一个(过滤器#)向量。这在[谷歌的Inception网络]中使用(https://arxiv.org/abs/1512.00567)(他们的架构见表1)。\n",
    "- **正则化**:添加l2权重正则化，或者使用Dropout。\n",
    "\n",
    "### Tips for training\n",
    "对于您尝试的每个网络体系结构，您都应该调整学习率和其他超参数。当你这样做的时候，有几件重要的事情要记住:\n",
    "- 如果参数工作良好，你应该看到改善在几百次迭代\n",
    "- 记住超参数调优的由粗到细的方法:首先测试大范围的超参数，只需要几个训练迭代，就可以找到有效的参数组合。\n",
    "- 一旦你找到了一些参数，似乎工作，搜索更精细围绕这些参数。你可能需要为更多的时代而训练。\n",
    "- 您应该使用验证集进行超参数搜索，并保存您的测试集，以便根据验证集所选择的最佳参数来评估您的体系结构。\n",
    "\n",
    "### Going above and beyond\n",
    "如果您喜欢冒险，您可以实现许多其他特性来尝试和改进性能。您**不需要**实现任何这些，但如果您有时间，请不要错过其中的乐趣!\n",
    "- 替代优化:你可以尝试Adam，Adagrad, RMSprop等。\n",
    "- 可选的激活函数，如leaky ReLU、parameter ReLU、ELU或MaxOut。\n",
    "- —模型集合体\n",
    "- 数据增加\n",
    "- 新架构\n",
    "  - [ResNets](https://arxiv.org/abs/1512.03385) where the input from the previous layer is added to the output.\n",
    "  - [DenseNets](https://arxiv.org/abs/1608.06993) where inputs into previous layers are concatenated together.\n",
    "  - [This blog has an in-depth overview](https://chatbotslife.com/resnets-highwaynets-and-densenets-oh-my-9bb15918ee32)\n",
    "\n",
    "### Have fun and happy training! "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "==================================================\n",
      "Epochs 0\n",
      "Iteration 0, loss = 2.3003\n",
      "Checking accuracy on validation set\n",
      "Got 112 / 1000 correct (11.20)\n",
      "\n",
      "Iteration 100, loss = 2.1527\n",
      "Checking accuracy on validation set\n",
      "Got 187 / 1000 correct (18.70)\n",
      "\n",
      "Iteration 200, loss = 1.9137\n",
      "Checking accuracy on validation set\n",
      "Got 200 / 1000 correct (20.00)\n",
      "\n",
      "Iteration 300, loss = 2.1134\n",
      "Checking accuracy on validation set\n",
      "Got 215 / 1000 correct (21.50)\n",
      "\n",
      "Iteration 400, loss = 2.0464\n",
      "Checking accuracy on validation set\n",
      "Got 224 / 1000 correct (22.40)\n",
      "\n",
      "Iteration 500, loss = 1.8254\n",
      "Checking accuracy on validation set\n",
      "Got 197 / 1000 correct (19.70)\n",
      "\n",
      "Iteration 600, loss = 1.7882\n",
      "Checking accuracy on validation set\n",
      "Got 271 / 1000 correct (27.10)\n",
      "\n",
      "Iteration 700, loss = 1.6435\n",
      "Checking accuracy on validation set\n",
      "Got 261 / 1000 correct (26.10)\n",
      "\n",
      "==================================================\n",
      "Epochs 1\n",
      "Iteration 0, loss = 1.6488\n",
      "Checking accuracy on validation set\n",
      "Got 310 / 1000 correct (31.00)\n",
      "\n",
      "Iteration 100, loss = 1.7110\n",
      "Checking accuracy on validation set\n",
      "Got 368 / 1000 correct (36.80)\n",
      "\n",
      "Iteration 200, loss = 1.4812\n",
      "Checking accuracy on validation set\n",
      "Got 399 / 1000 correct (39.90)\n",
      "\n",
      "Iteration 300, loss = 1.4828\n",
      "Checking accuracy on validation set\n",
      "Got 423 / 1000 correct (42.30)\n",
      "\n",
      "Iteration 400, loss = 1.6158\n",
      "Checking accuracy on validation set\n",
      "Got 386 / 1000 correct (38.60)\n",
      "\n"
     ]
    }
   ],
   "source": [
    "from torchsummary import summary\n",
    "################################################################################\n",
    "# TODO:                                                                        #         \n",
    "# Experiment with any architectures, optimizers, and hyperparameters.          #\n",
    "# Achieve AT LEAST 70% accuracy on the *validation set* within 10 epochs.      #\n",
    "#                                                                              #\n",
    "# Note that you can use the check_accuracy function to evaluate on either      #\n",
    "# the test set or the validation set, by passing either loader_test or         #\n",
    "# loader_val as the second argument to check_accuracy. You should not touch    #\n",
    "# the test set until you have finished your architecture and  hyperparameter   #\n",
    "# tuning, and only run the test set once at the end to report a final value.   #\n",
    "################################################################################\n",
    "model = None\n",
    "optimizer = None\n",
    "\n",
    "# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
    "\n",
    "# class AlexNet(nn.Module):\n",
    "#     def __init__(self, num_classes=10):\n",
    "#         super(AlexNet, self).__init__()\n",
    "#         self.relu=nn.ReLU(inplace=True)\n",
    "#         self.features = nn.Sequential(\n",
    "#             nn.Conv2d(3, 64, kernel_size=3, padding=1),\n",
    "#             self.relu,\n",
    "#             nn.MaxPool2d(kernel_size=2),\n",
    "\n",
    "#             nn.Conv2d(64, 192, kernel_size=3, padding=1),\n",
    "#             self.relu,\n",
    "#             nn.MaxPool2d(kernel_size=2),\n",
    "\n",
    "#             nn.Conv2d(192, 384, kernel_size=3, padding=1),\n",
    "#             self.relu,\n",
    "\n",
    "#             nn.Conv2d(384, 256, kernel_size=3, padding=1),\n",
    "#             self.relu,\n",
    "\n",
    "#             # nn.Conv2d(256, 256, kernel_size=3, padding=1),\n",
    "#             # nn.ReLU(inplace=True),\n",
    "#             # nn.MaxPool2d(kernel_size=2),\n",
    "#         )\n",
    "#         self.avgpool = nn.AdaptiveAvgPool2d((7, 7))\n",
    "#         self.classifier = nn.Sequential(\n",
    "#             nn.Dropout(),\n",
    "#             nn.Linear(256 * 7 * 7, 4096),\n",
    "#             nn.ReLU(inplace=True),\n",
    "#             nn.Dropout(),\n",
    "#             nn.Linear(4096, 4096),\n",
    "#             nn.ReLU(inplace=True),\n",
    "#             nn.Linear(4096, num_classes)\n",
    "#         )\n",
    "\n",
    "#     def forward(self, x):\n",
    "#         x = self.features(x)\n",
    "#         x: Tensor = self.avgpool(x)\n",
    "#         x = x.view(-1, 7 * 7 * 256)\n",
    "#         x = self.classifier(x)\n",
    "#         return x\n",
    "VGG16=nn.Sequential(\n",
    "    nn.Conv2d(3, 64, kernel_size=3, padding=1),\n",
    "    nn.BatchNorm2d(64),\n",
    "    nn.ReLU(inplace=True),\n",
    "    nn.Conv2d(64, 64, kernel_size=3, padding=1),\n",
    "     nn.BatchNorm2d(64),\n",
    "    nn.ReLU(inplace=True),\n",
    "    nn.MaxPool2d(kernel_size=2),\n",
    "\n",
    "    nn.Conv2d(64, 128, kernel_size=3, padding=1),\n",
    "    nn.BatchNorm2d(128),\n",
    "    nn.ReLU(inplace=True),\n",
    "    nn.Conv2d(128, 128, kernel_size=3, padding=1),\n",
    "    nn.BatchNorm2d(128),\n",
    "    nn.ReLU(inplace=True),\n",
    "    nn.MaxPool2d(kernel_size=2),\n",
    "\n",
    "    nn.Conv2d(128, 256, kernel_size=3, padding=1),\n",
    "    nn.BatchNorm2d(256),\n",
    "    nn.ReLU(inplace=True),\n",
    "    nn.Conv2d(256, 256, kernel_size=3, padding=1),\n",
    "    nn.BatchNorm2d(256),\n",
    "    nn.ReLU(inplace=True),\n",
    "    nn.Conv2d(256, 256, kernel_size=3, padding=1),\n",
    "    nn.BatchNorm2d(256),\n",
    "    nn.ReLU(inplace=True),\n",
    "    nn.MaxPool2d(kernel_size=2),\n",
    "\n",
    "    nn.Conv2d(256, 512, kernel_size=3, padding=1),\n",
    "    nn.BatchNorm2d(512),\n",
    "    nn.ReLU(inplace=True),\n",
    "    nn.Conv2d(512, 512, kernel_size=3, padding=1),\n",
    "    nn.BatchNorm2d(512),\n",
    "    nn.ReLU(inplace=True),\n",
    "    nn.Conv2d(512, 512, kernel_size=3, padding=1),\n",
    "    nn.BatchNorm2d(512),\n",
    "    nn.ReLU(inplace=True),\n",
    "    nn.MaxPool2d(kernel_size=2),\n",
    "\n",
    "    nn.Conv2d(512, 512, kernel_size=3, padding=1),\n",
    "    nn.BatchNorm2d(512),\n",
    "    nn.ReLU(inplace=True),\n",
    "    nn.Conv2d(512, 512, kernel_size=3, padding=1),\n",
    "    nn.BatchNorm2d(512),\n",
    "    nn.ReLU(inplace=True),\n",
    "    nn.Conv2d(512, 512, kernel_size=3, padding=1),\n",
    "    nn.BatchNorm2d(512),\n",
    "    nn.ReLU(inplace=True),\n",
    "    nn.MaxPool2d(kernel_size=2),\n",
    "\n",
    "    Flatten(),\n",
    "    nn.Dropout(),\n",
    "    nn.Linear(512, 4096),\n",
    "    nn.ReLU(inplace=True),\n",
    "    nn.Dropout(),\n",
    "    nn.Linear(4096, 4096),\n",
    "    nn.ReLU(inplace=True),\n",
    "    nn.Linear(4096, 10))\n",
    "model=VGG16\n",
    "optimizer=optim.Adam(model.parameters())\n",
    "\n",
    "# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****\n",
    "################################################################################\n",
    "#                                 END OF YOUR CODE                             \n",
    "################################################################################\n",
    "\n",
    "# You should get at least 70% accuracy\n",
    "(losses,accs)=train_part34(model, optimizer, epochs=20)\n",
    "summary(model, (3,32,32))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 画图\n",
    "plt.subplot(2, 1, 1)\n",
    "plt.plot(losses)\n",
    "plt.title('Loss history')\n",
    "plt.xlabel('Iteration')\n",
    "plt.ylabel('Loss')\n",
    "\n",
    "plt.subplot(2, 1, 2)\n",
    "plt.plot(accs,'-o', label='val')\n",
    "plt.title('Classification accuracy history')\n",
    "plt.xlabel('Iteration')\n",
    "plt.ylabel('Clasification accuracy')\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "tags": [
     "pdf-inline"
    ]
   },
   "source": [
    "## 描述你做了什么\n",
    "在下面的单元格中，你应该解释你做了什么，你实现了什么额外的功能，和/或你在培训和评估你的网络的过程中做了什么。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "tags": [
     "pdf-inline"
    ]
   },
   "source": [
    "TODO: Describe what you did"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 测试集——只运行一次\n",
    "现在我们得到了满意的结果，我们在测试集上测试我们的最终模型(应该存储在best_model中)。考虑一下这与您的验证集精度相比如何。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Checking accuracy on test set\n",
      "Got 8188 / 10000 correct (81.88)\n"
     ]
    }
   ],
   "source": [
    "best_model = model\n",
    "check_accuracy_part34(loader_test, best_model)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3.7.6 64-bit ('base': conda)",
   "language": "python",
   "name": "python37664bitbaseconda92b0ec200685491790e4a861efae1222"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6"
  },
  "toc": {
   "nav_menu": {},
   "number_sections": true,
   "sideBar": true,
   "skip_h1_title": false,
   "toc_cell": false,
   "toc_position": {},
   "toc_section_display": "block",
   "toc_window_display": false
  },
  "varInspector": {
   "cols": {
    "lenName": 16,
    "lenType": 16,
    "lenVar": 40
   },
   "kernels_config": {
    "python": {
     "delete_cmd_postfix": "",
     "delete_cmd_prefix": "del ",
     "library": "var_list.py",
     "varRefreshCmd": "print(var_dic_list())"
    },
    "r": {
     "delete_cmd_postfix": ") ",
     "delete_cmd_prefix": "rm(",
     "library": "var_list.r",
     "varRefreshCmd": "cat(var_dic_list()) "
    }
   },
   "types_to_exclude": [
    "module",
    "function",
    "builtin_function_or_method",
    "instance",
    "_Feature"
   ],
   "window_display": false
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
