{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 01 CNN Training With Code Example - Neural Network Programming Course\n",
    "\n",
    "## CNN Training Process\n",
    "**In this episode, we will learn the steps needed to train a convolutional neural network.**  \n",
    "\n",
    "So far in this series, we learned about Tensors, and we've learned all about PyTorch neural networks. We are now ready to begin the **training process**.\n",
    "* Prepare the data\n",
    "* Build the model\n",
    "* Train the model\n",
    "  * **Calculate the loss, the gradient, and update the weights**\n",
    "* Analyze the model's results\n",
    "\n",
    "## Training: What We Do After The Forward Pass\n",
    "\n",
    "During training, we do a forward pass, but then what? We'll suppose we get a batch and pass it forward through the network. Once the output is obtained, we compare the **predicted output** to the **actual labels**, and once we know **how close** the predicted values are from the actual labels, we **tweak** the weights inside the network in such a way that the values the network predicts move closer to the true values (labels).其实就是通过loss function找最优解  \n",
    "\n",
    "All of this is for **a single batch**, and we **repeat** this process for **every batch** until we have covered every sample in our training set. After we've completed this process for all of the batches and passed over every sample in our **training set**, we say that **an epoch** is complete. We use the word **epoch** to represent a **time period** in which our **entire training** set has been covered.\n",
    "\n",
    "During the **entire training process**, we do as many **epochs** as necessary to reach our desired level of accuracy. With this, we have the following steps:\n",
    "1. Get batch from the training set.\n",
    "2. Pass batch to network.\n",
    "3. Calculate the loss (difference between the predicted values and the true values).\n",
    "4. Calculate the gradient of the loss function w.r.t the network's weights.\n",
    "5. Update the weights using the gradients to reduce the loss.\n",
    "6. Repeat steps 1-5 until one epoch is completed.\n",
    "7. Repeat steps 1-6 for as many epochs required to reach the minimum loss.\n",
    "\n",
    "We already know exactly how to do steps `1` and `2`. We use a loss function to perform step `3`, and you know that we use `backpropagation` and an optimization algorithm to perform step `4` and `5`. Steps `6` and `7` are just standard **Python loops (the training loop)**. Let's see how this is done in code."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## The Training Process\n",
    "\n",
    "Since we disabled PyTorch's gradient tracking feature in a previous episode, we need to be sure to turn it back on (it is on by default).  \n",
    "`torch.set_grad_enabled(True)`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<torch.autograd.grad_mode.set_grad_enabled at 0x1b6f9de6e80>"
      ]
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "import torch.optim as optim\n",
    "\n",
    "import torchvision\n",
    "import torchvision.transforms as transforms\n",
    "\n",
    "torch.set_printoptions(linewidth=120) # Display options for output\n",
    "torch.set_grad_enabled(True) # Already on by default\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1.6.0\n",
      "0.7.0\n"
     ]
    }
   ],
   "source": [
    "print(torch.__version__)\n",
    "print(torchvision.__version__)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_num_correct(preds,labels):\n",
    "    return preds.argmax(dim = 1).eq(labels).sum().item()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Preparing For The Forward Pass\n",
    "We already know how to get a batch and pass it forward through the network. Let's see what we do after the forward pass is complete.\n",
    "\n",
    "We'll begin by:\n",
    "1. Creating an instance of our `Network` class.\n",
    "2. Creating a data loader that provides batches of size 100 from our training set.\n",
    "3. Unpacking the images and labels from one of these batches."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "class Network(nn.Module):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self.conv1 = nn.Conv2d(in_channels=1,out_channels=6,kernel_size=5)\n",
    "        self.conv2 = nn.Conv2d(in_channels=6,out_channels=12,kernel_size = 5)\n",
    "        \n",
    "        self.fc1 = nn.Linear(in_features = 12*4*4,out_features = 120)\n",
    "        self.fc2 = nn.Linear(in_features = 120,out_features = 60)\n",
    "        self.out = nn.Linear(in_features = 60,out_features = 10)\n",
    "        \n",
    "    def forward(self,t):\n",
    "        # (1) input layer\n",
    "        t = t\n",
    "\n",
    "        # (2) hidden conv layer\n",
    "        t = self.conv1(t)\n",
    "        t = F.relu(t)\n",
    "        t = F.max_pool2d(t, kernel_size=2, stride=2)\n",
    "\n",
    "        # (3) hidden conv layer\n",
    "        t = self.conv2(t)\n",
    "        t = F.relu(t)\n",
    "        t = F.max_pool2d(t, kernel_size=2, stride=2)\n",
    "\n",
    "        # (4) hidden linear layer\n",
    "        t = t.reshape(-1, 12 * 4 * 4)\n",
    "        t = self.fc1(t)\n",
    "        t = F.relu(t)\n",
    "\n",
    "        # (5) hidden linear layer\n",
    "        t = self.fc2(t)\n",
    "        t = F.relu(t)\n",
    "\n",
    "        # (6) output layer\n",
    "        t = self.out(t)\n",
    "        #t = F.softmax(t, dim=1)\n",
    "\n",
    "        return t"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_set = torchvision.datasets.FashionMNIST(\n",
    "    root = './data/FashionMNIST'\n",
    "    ,train = True\n",
    "    ,download = True\n",
    "    ,transform = transforms.Compose([\n",
    "        transforms.ToTensor()\n",
    "    ])\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "network = Network()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_loader = torch.utils.data.DataLoader(train_set, batch_size=100)\n",
    "batch = next(iter(train_loader)) # Getting a batch\n",
    "images, labels = batch"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next, we are ready to pass our batch of images forward through the network and obtain the output predictions. Once we have the prediction tensor, we can use the predictions and the true labels to calculate the loss."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Calculating The Loss\n",
    "To do this we will use the `cross_entropy()` loss function that is available in PyTorch's `nn.functional` API. Once we have the loss, we can print it, and also check the number of correct predictions using the function we created a [previous post](https://github.com/unclestrong/DeepLearning-code/blob/master/05%20Neural%20Networks%20and%20PyTorch%20Design-P2.ipynb)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "preds = network(images)\n",
    "loss = F.cross_entropy(preds,labels) # Calculating the loss"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "2.307081460952759"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "loss.item()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "11"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "get_num_correct(preds,labels)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `cross_entropy()` function returned a scalar valued tenor, and so we used the `item()` method to print the `loss` as a Python number. We got `11` out of `100` correct, and since we have `10` prediction classes, this is what we'd expect by guessing at random."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Calculating The Gradients\n",
    "Calculating the gradients is very easy using PyTorch. Since our network is a PyTorch `nn.Module`, PyTorch has created a **computation graph** under the hood. As our tensor flowed forward through our network, all of the computations where added to the graph. The computation graph is then used by PyTorch to calculate the gradients of the loss function with respect to the network's weights.\n",
    "\n",
    "Before we calculate the gradients, let's verify that we **currently** have **no gradients** inside our `conv1` layer. The gradients are tensors that are accessible in the `grad` (short for gradient) attribute of the weight tensor of each layer."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "None\n"
     ]
    }
   ],
   "source": [
    "print(network.conv1.weight.grad)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To `calculate the gradients`, we call the `backward()` method on the loss tensor, like so:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [],
   "source": [
    "loss.backward() # Calculating the gradients"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now, the gradients of the loss function have been stored inside weight tensors."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "torch.Size([6, 1, 5, 5])"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "network.conv1.weight.grad.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[[[ 8.0532e-04,  7.1517e-04,  5.4289e-04,  4.2453e-04,  2.2062e-04],\n",
       "          [ 4.2473e-04,  3.6081e-04,  3.4775e-04,  3.3520e-04,  1.3792e-04],\n",
       "          [ 1.8878e-04,  2.0218e-04,  1.3977e-04,  3.1463e-05, -1.6695e-04],\n",
       "          [ 6.2114e-06,  1.1477e-04,  4.5907e-05, -2.9935e-05, -1.2944e-04],\n",
       "          [-2.1969e-04, -1.8537e-04, -2.5566e-04, -2.2936e-04, -2.4350e-04]]],\n",
       "\n",
       "\n",
       "        [[[ 1.0334e-03, -1.3228e-04, -4.6828e-04,  7.5834e-04,  1.1306e-03],\n",
       "          [ 7.5262e-04, -4.0342e-04, -9.5859e-04,  1.9084e-04,  6.4649e-04],\n",
       "          [ 6.9752e-04, -2.2768e-04, -8.4701e-04,  3.4626e-04,  4.3055e-04],\n",
       "          [ 3.6175e-04, -7.0846e-04, -1.4202e-03, -3.4338e-04, -2.2465e-04],\n",
       "          [ 3.8891e-04, -5.8086e-04, -1.4649e-03, -5.2291e-04, -2.1644e-04]]],\n",
       "\n",
       "\n",
       "        [[[-2.7583e-03, -2.3309e-03, -2.3823e-03, -2.7402e-03, -2.4740e-03],\n",
       "          [-2.3130e-03, -1.8277e-03, -2.0964e-03, -2.7168e-03, -2.2019e-03],\n",
       "          [-2.1739e-03, -1.8778e-03, -2.1596e-03, -2.5166e-03, -1.9841e-03],\n",
       "          [-2.3112e-03, -1.6920e-03, -1.9620e-03, -2.3040e-03, -1.9130e-03],\n",
       "          [-1.9026e-03, -1.5760e-03, -1.6420e-03, -2.1162e-03, -1.5236e-03]]],\n",
       "\n",
       "\n",
       "        [[[ 3.9256e-04,  4.3901e-05, -9.8919e-04, -9.3412e-04, -8.7138e-04],\n",
       "          [ 4.7728e-04,  2.1598e-04, -6.6322e-04, -1.1400e-03, -9.9659e-04],\n",
       "          [ 2.8956e-04,  3.8861e-04, -5.9428e-04, -1.1186e-03, -5.9322e-04],\n",
       "          [ 2.0328e-04,  1.6961e-04, -8.3956e-04, -1.1020e-03, -3.6717e-04],\n",
       "          [ 2.0307e-04, -7.7453e-06, -9.7802e-04, -9.9741e-04, -6.3992e-04]]],\n",
       "\n",
       "\n",
       "        [[[ 2.2192e-04, -1.1592e-04, -5.3961e-04, -8.3908e-04, -1.4125e-03],\n",
       "          [ 5.3511e-04,  6.1669e-04,  4.1265e-05, -3.2095e-04, -1.2030e-03],\n",
       "          [ 8.4875e-04,  9.2720e-04,  4.7815e-04,  2.3308e-05, -9.8388e-04],\n",
       "          [ 7.9033e-04,  9.1618e-04,  3.3051e-04, -1.2498e-04, -9.4174e-04],\n",
       "          [ 5.1729e-04,  7.7212e-04,  3.4925e-04, -1.4285e-04, -1.1027e-03]]],\n",
       "\n",
       "\n",
       "        [[[ 1.7515e-03,  1.5406e-03,  1.8931e-03,  1.3164e-03,  1.5501e-03],\n",
       "          [ 1.4139e-03,  1.6735e-03,  2.0061e-03,  1.6903e-03,  2.0538e-03],\n",
       "          [ 1.1625e-03,  1.5831e-03,  1.9177e-03,  9.5898e-04,  1.8004e-03],\n",
       "          [ 1.0228e-03,  1.3901e-03,  1.5015e-03,  6.2995e-04,  1.4678e-03],\n",
       "          [ 1.0027e-03,  1.0492e-03,  1.3290e-03,  4.3028e-04,  1.2901e-03]]]])"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "network.conv1.weight.grad"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "These gradients are used by the optimizer to update the respective weights. To create our optimizer, we use the `torch.optim` package that has many optimization algorithm implementations that we can use. We'll use `Adam` for our example."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Updating The Weights\n",
    "To the `Adam` class constructor, we pass the `network parameters` (this is how the optimizer is able to access the gradients), and we pass the `learning rate` .\n",
    "\n",
    "Finally, all we have to do to update the weights is to tell the optimizer to use the gradients to step in the direction of the loss function's minimum."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [],
   "source": [
    "optimizer = optim.Adam(network.parameters(), lr=0.01)\n",
    "optimizer.step() # Updating the weights"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "When the `step()` function is called, the optimizer updates the weights using the gradients that are stored in the network's parameters. This means that we should expect our loss to be reduced if we pass the same batch through the network again. Checking this, we can see that this is indeed the case:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "2.307081460952759"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "preds = network(images)\n",
    "loss.item()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "2.2812142372131348"
      ]
     },
     "execution_count": 19,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "loss = F.cross_entropy(preds, labels)\n",
    "loss.item()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "11"
      ]
     },
     "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "get_num_correct(preds, labels)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Train Using A Single Batch\n",
    "We can summarize the code for training with a single batch in the following way:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "loss1: 2.300954818725586\n",
      "loss2: 2.2833118438720703\n"
     ]
    }
   ],
   "source": [
    "network = Network()\n",
    "\n",
    "train_loader = torch.utils.data.DataLoader(train_set, batch_size=100)\n",
    "optimizer = optim.Adam(network.parameters(), lr=0.01)\n",
    "\n",
    "batch = next(iter(train_loader)) # Get Batch\n",
    "images, labels = batch\n",
    "\n",
    "preds = network(images) # Pass Batch\n",
    "loss = F.cross_entropy(preds, labels) # Calculate Loss\n",
    "\n",
    "loss.backward() # Calculate Gradients\n",
    "optimizer.step() # Update Weights\n",
    "\n",
    "print('loss1:', loss.item())\n",
    "preds = network(images)\n",
    "loss = F.cross_entropy(preds, labels)\n",
    "print('loss2:', loss.item())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Quiz 01\n",
    "Q1:During the training process, once the output is obtained, we compare the predicted output to the _______________.<br>\n",
    "A1:labels\n",
    "\n",
    "Q2:Once we know how close the predicted values are to the actual labels, we tweak the weights inside the network in such a way that the predicted values move _______________ the true values (labels).  \n",
    "A2:closer to\n",
    "\n",
    "Q3:After we've completed the training process for all the batches in our training set, we say that _______________ is complete.  \n",
    "A3:an epoch\n",
    "\n",
    "Q4:During the training process, we use the word _______________ to represent a time period for which the entire training set (every batch) has been passed to the network.\n",
    "A4:epoch  \n",
    "\n",
    "Q5:During the entire training process, we do as many epochs as necessary to reach the _______________.<br>\n",
    "A5:minimum loss\n",
    "\n",
    "Q6:To begin the training process, the first step is to get a batch from the training set. What is the second step?  \n",
    "A6:Pass the obtained batch to the network.\n",
    "\n",
    "Q7:During the training process, after we pass a batch to the network, we use the predicted values and the labels to _______________.<br>\n",
    "A7:calculate the loss\n",
    "\n",
    "Q8:PyTorch's gradient tracking feature is turned on using which piece of code?  \n",
    "A8:torch.set_grad_enabled(True)\n",
    "\n",
    "Q9:Which piece of code makes the most sense for creating a PyTorch DataLoader?  \n",
    "A9:torch.utils.data.DataLoader(train_set)\n",
    "\n",
    "Q10:The cross_entropy() loss function lives in which PyTorch package?  \n",
    "A10:torch.nn.functional\n",
    "\n",
    "---\n",
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 02 CNN Training Loop Explained - Neural Network Code Project\n",
    "## CNN Training Loop - Teach A Neural Network\n",
    "**In this episode, we will learn how to build the training loop for a convolutional neural network using Python.**\n",
    "\n",
    "\n",
    "In the last episode, we learned that the [training process](https://deeplizard.com/learn/video/sZAlS3_dnk0) is an iterative process, and to train a neural network, we build what is called the training loop.\n",
    "* Prepare the data\n",
    "* Build the model\n",
    "* Train the model\n",
    "  * Build the training loop\n",
    "* Analyze the model's results\n",
    "\n",
    "### Training With A Single Batch\n",
    "We can summarize the code for training with a single batch in the following way:\n",
    "```python\n",
    "network = Network()\n",
    "\n",
    "train_loader = torch.utils.data.DataLoader(train_set, batch_size=100)\n",
    "optimizer = optim.Adam(network.parameters(), lr=0.01)\n",
    "\n",
    "batch = next(iter(train_loader)) # Get Batch\n",
    "images, labels = batch\n",
    "\n",
    "preds = network(images) # Pass Batch\n",
    "loss = F.cross_entropy(preds, labels) # Calculate Loss\n",
    "\n",
    "loss.backward() # Calculate Gradients\n",
    "optimizer.step() # Update Weights\n",
    "\n",
    "print('loss1:', loss.item())\n",
    "preds = network(images)\n",
    "loss = F.cross_entropy(preds, labels)\n",
    "print('loss2:', loss.item())\n",
    "```\n",
    "\n",
    "### Output:\n",
    "```python\n",
    "loss1: 2.300954818725586\n",
    "loss2: 2.2833118438720703\n",
    "```\n",
    "\n",
    "One thing that you'll notice is that we get **different results each time** we run this code. This is because the model is created each time at the top, and we know from previous posts that the model weights are **randomly initialized**.\n",
    "\n",
    "### Training With All Batches (Single Epoch)\n",
    "Now, to train with all of the **batches** available inside our **data loader**, we need to make a few changes and add one additional line of code:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "epoch: 0 total_correct: 46957 loss: 347.39798778295517\n"
     ]
    }
   ],
   "source": [
    "network = Network()\n",
    "\n",
    "train_loader = torch.utils.data.DataLoader(train_set, batch_size=100)\n",
    "optimizer = optim.Adam(network.parameters(), lr=0.01)\n",
    "\n",
    "total_loss = 0\n",
    "total_correct = 0\n",
    "\n",
    "for batch in train_loader: # Get Batch\n",
    "    images, labels = batch \n",
    "\n",
    "    preds = network(images) # Pass Batch\n",
    "    loss = F.cross_entropy(preds, labels) # Calculate Loss\n",
    "\n",
    "    optimizer.zero_grad()\n",
    "    loss.backward() # Calculate Gradients\n",
    "    optimizer.step() # Update Weights\n",
    "\n",
    "    total_loss += loss.item()\n",
    "    total_correct += get_num_correct(preds, labels)\n",
    "\n",
    "print(\n",
    "    \"epoch:\", 0, \n",
    "    \"total_correct:\", total_correct, \n",
    "    \"loss:\", total_loss\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Instead of getting a single batch from our data loader, we'll create a for loop that will **iterate** over **all of the batches**.\n",
    "\n",
    "Since we have `60,000` samples in our training set, we will have `60,000 / 100 = 600` iterations. For this reason, we'll remove the print statement from within the loop, and keep track of the `total loss` and the `total number` of correct predictions printing them at the end.\n",
    "\n",
    "Something to notice about these `600` iterations is that our `weights` will be `updated 600 times` by the end of the loop. If we **raise the batch_size** this number will **go down** and if we **lower the batch_size** this number will **go up**.\n",
    "\n",
    "Finally, after we call the `backward()` method on our loss tensor, we know the gradients will be calculated and **added** to the `grad` attributes of our network's parameters. For this reason, we need to zero out these gradients. We can do this with a method called `zero_grad()` that comes with the optimizer.\n",
    "\n",
    "We are ready to run this code. This time the code will take longer because the loop is working on `600` batches.\n",
    "\n",
    "```python\n",
    "epoch: 0 total_correct: 46957 loss: 347.39798778295517\n",
    "```\n",
    "\n",
    "We get the results, and we can see that the total number correct out of 60,000 was 46,957."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.7826166666666666"
      ]
     },
     "execution_count": 23,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "total_correct / len(train_set)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "That's pretty good after only one epoch (a single full pass over the data). Even though we did one epoch, we still have to keep in mind that the **weights** were updated `600` times, and this fact depends on our batch size. If made our batch_batch size larger, say `10,000`, the weights would only be updated `6` times, and the results **wouldn't be quite as good**.\n",
    "\n",
    "### Training With Multiple Epochs\n",
    "To do **multiple epochs**, all we have to do is put this code into a **for loop**. We'll also add the epoch number to the print statement."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "epoch 0 total_correct: 46928 loss: 344.27279521524906\n",
      "epoch 1 total_correct: 51277 loss: 232.71748647093773\n",
      "epoch 2 total_correct: 52081 loss: 208.65398114919662\n",
      "epoch 3 total_correct: 52609 loss: 194.9983945786953\n",
      "epoch 4 total_correct: 52906 loss: 190.74674943089485\n",
      "epoch 5 total_correct: 53021 loss: 186.76688426733017\n",
      "epoch 6 total_correct: 53290 loss: 181.0335234105587\n",
      "epoch 7 total_correct: 53226 loss: 180.50387901067734\n",
      "epoch 8 total_correct: 53480 loss: 173.91857013106346\n",
      "epoch 9 total_correct: 53588 loss: 170.3671340867877\n"
     ]
    }
   ],
   "source": [
    "network = Network()\n",
    "\n",
    "train_loader = torch.utils.data.DataLoader(train_set, batch_size=100)\n",
    "optimizer = optim.Adam(network.parameters(), lr=0.01)\n",
    "\n",
    "for epoch in range(10):\n",
    "\n",
    "    total_loss = 0\n",
    "    total_correct = 0\n",
    "\n",
    "    for batch in train_loader: # Get Batch\n",
    "        images, labels = batch \n",
    "\n",
    "        preds = network(images) # Pass Batch\n",
    "        loss = F.cross_entropy(preds, labels) # Calculate Loss\n",
    "\n",
    "        optimizer.zero_grad()\n",
    "        loss.backward() # Calculate Gradients\n",
    "        optimizer.step() # Update Weights\n",
    "\n",
    "        total_loss += loss.item()\n",
    "        total_correct += get_num_correct(preds, labels)\n",
    "\n",
    "    print(\n",
    "        \"epoch\", epoch, \n",
    "        \"total_correct:\", total_correct, \n",
    "        \"loss:\", total_loss\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Complete Training Loop\n",
    "Putting all of this together, we can pull the `network`, `optimizer`, and the `train_loader` out of the training loop cell.\n",
    "```python\n",
    "network = Network()\n",
    "optimizer = optim.Adam(network.parameters(), lr=0.01)\n",
    "train_loader = torch.utils.data.DataLoader(\n",
    "    train_set\n",
    "    ,batch_size=100\n",
    "    ,shuffle=True\n",
    ")\n",
    "```\n",
    "This makes it so that we can run the training loop without resetting the networks weights.\n",
    "```python\n",
    "for epoch in range(10):\n",
    "\n",
    "    total_loss = 0\n",
    "    total_correct = 0\n",
    "\n",
    "    for batch in train_loader: # Get Batch\n",
    "        images, labels = batch \n",
    "\n",
    "        preds = network(images) # Pass Batch\n",
    "        loss = F.cross_entropy(preds, labels) # Calculate Loss\n",
    "\n",
    "        optimizer.zero_grad()\n",
    "        loss.backward() # Calculate Gradients\n",
    "        optimizer.step() # Update Weights\n",
    "\n",
    "        total_loss += loss.item()\n",
    "        total_correct += get_num_correct(preds, labels)\n",
    "\n",
    "    print(\n",
    "        \"epoch\", epoch, \n",
    "        \"total_correct:\", total_correct, \n",
    "        \"loss:\", total_loss\n",
    "    )\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Quiz 02\n",
    "Q1:In the code below, what does the `lr` parameter do?\n",
    "```python\n",
    "optimizer = optim.Adam(network.parameters(), lr=0.01)\n",
    "```\n",
    "A1:sets the learning rate which tells the optimizer how far to step in the direction of the loss function's minimum\n",
    "\n",
    "Q2:After we call the `backward()` method on our loss tensor, the gradients will be calculated and _______________ of our network's parameters.  \n",
    "A2:added to the grad attributes\n",
    "\n",
    "Q3:Using the code below, determine how many times optimizer.step() will be called during this training loop run.\n",
    "```python\n",
    "network = Network()\n",
    "\n",
    "train_loader = torch.utils.data.DataLoader(train_set, batch_size=100)\n",
    "optimizer = optim.Adam(network.parameters(), lr=0.01)\n",
    "\n",
    "for epoch in range(10):\n",
    "    for batch in train_loader: # Get Batch\n",
    "        images, labels = batch\n",
    "        # other stuff happens\n",
    "        optimizer.step() # Update Weights\n",
    "```\n",
    "A3:6000\n",
    "\n",
    "Q4:Suppose we have a fixed training set size. As batch size goes up, which of the following happens inside each epoch?\n",
    "```python\n",
    "for epoch in range(10):\n",
    "    # what happens in here?\n",
    "```\n",
    "A4:the frequency of weight updates goes down\n",
    "\n",
    "Q5:Suppose that our training set contains `60000 `samples. If we are using the data loader below, how many times will our weights be updated during one epoch?\n",
    "```python\n",
    "train_loader = torch.utils.data.DataLoader(train_set, batch_size=10000)\n",
    "```\n",
    "A5:6\n",
    "\n",
    "Q6:Suppose that our training set contains `60000` samples. If we are using the data loader below, how many iterations will occur inside our `for batch in train_loader:` loop?\n",
    "```python\n",
    "train_loader = torch.utils.data.DataLoader(train_set, batch_size=1000)\n",
    "```\n",
    "A6:60\n",
    "\n",
    "Q7:Suppose that our training set contains `60000` samples. If we are using the data loader below, how many iterations will occur inside our `for batch in train_loader:` loop?\n",
    "```python\n",
    "train_loader = torch.utils.data.DataLoader(train_set, batch_size=100)\n",
    "```\n",
    "A7:600\n",
    "\n",
    "Q8:What is the result of running the line of code below?  \n",
    "```python\n",
    "loss.item()\n",
    "```\n",
    "A8:the loss as a Python number\n",
    "\n",
    "---\n",
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 03 CNN Confusion Matrix With PyTorch - Neural Network Programming\n",
    "\n",
    "## Create A Confusion Matrix With PyTorch\n",
    "**In this episode, we're going to build some functions that will allow us to get a prediction tensor for every sample in our training set.**\n",
    "\n",
    "Then, we'll see how we can take this **prediction tensor**, along with the **labels** for each sample, to create a **confusion matrix**. This confusion matrix will allow us to see which **categories** our network is confusing with one another. \n",
    "\n",
    "Where we are now in the course.\n",
    "* Prepare the data\n",
    "* Build the model\n",
    "* Train the model\n",
    "* Analyze the model's results\n",
    "  * **Building, plotting, and interpreting a confusion matrix**\n",
    "  \n",
    "Be sure to see the previous episode in this course for all the code setup details.\n",
    "\n",
    "## Interpreting The Confusion Matrix\n",
    "![Confusion Matrix](https://deeplizard.com/images/fashion%20mnist%20confusion%20matrix.png)\n",
    "The confusion matrix has three axes:\n",
    "1. Prediction label (class)\n",
    "2. True label\n",
    "3. Heat map value (color)\n",
    "\n",
    "The **prediction label** and **true labels** show us which prediction class we are dealing with. The **matrix diagonal** represents locations in the matrix where the **prediction and the truth are the same**, so this is where we want the **heat map** to be **darker**.\n",
    "\n",
    "Any values that are **not on the diagonal** are **incorrect** predictions because the prediction and the true label don't match. To read the plot, we can use these steps:\n",
    "1. Choose a prediction label on the **horizontal axis**.\n",
    "2. Check the **diagonal location** for this label to see the **total number correct**.\n",
    "3. Check the other **non-diagonal locations** to see where the network is **confused**.\n",
    "\n",
    "For example, the network is confusing a T-shirt/top with a shirt, but is not confusing the T-shirt/top with things like:\n",
    "* Ankle boot\n",
    "* Sneaker\n",
    "* Sandal\n",
    "\n",
    "If we think about it, this **makes pretty good sense**. As our model learns, we will see the numbers that lie **outside the diagonal** become smaller and **smaller**.\n",
    "\n"
   ]
  },
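  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The reading steps above can be sketched with a small example. The `3x3` matrix below is hypothetical (not the values from the plot); rows are true labels and columns are predictions, so the diagonal holds the per-class correct counts and everything off the diagonal is a confusion:\n",
    "```python\n",
    "import torch\n",
    "\n",
    "# Hypothetical confusion matrix: rows = true label, columns = predicted label\n",
    "cm = torch.tensor([\n",
    "    [50,  2,  3],\n",
    "    [ 4, 60,  1],\n",
    "    [ 7,  0, 40],\n",
    "])\n",
    "\n",
    "correct_per_class = cm.diagonal()              # total number correct for each class\n",
    "confused = cm.sum() - correct_per_class.sum()  # all off-diagonal mistakes\n",
    "print(correct_per_class, confused.item())\n",
    "```"
   ]
  },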
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<torch.autograd.grad_mode.set_grad_enabled at 0x1651f361850>"
      ]
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "import torch.optim as optim\n",
    "\n",
    "import torchvision\n",
    "import torchvision.transforms as transforms\n",
    "\n",
    "torch.set_printoptions(linewidth=120) # Display options for output\n",
    "torch.set_grad_enabled(True) # Already on by default"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_num_correct(preds,labels):\n",
    "    return preds.argmax(dim = 1).eq(labels).sum().item()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "class Network(nn.Module):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self.conv1 = nn.Conv2d(in_channels=1,out_channels=6,kernel_size=5)\n",
    "        self.conv2 = nn.Conv2d(in_channels=6,out_channels=12,kernel_size = 5)\n",
    "        \n",
    "        self.fc1 = nn.Linear(in_features = 12*4*4,out_features = 120)\n",
    "        self.fc2 = nn.Linear(in_features = 120,out_features = 60)\n",
    "        self.out = nn.Linear(in_features = 60,out_features = 10)\n",
    "        \n",
    "    def forward(self,t):\n",
    "        # (1) input layer\n",
    "        t = t\n",
    "\n",
    "        # (2) hidden conv layer\n",
    "        t = self.conv1(t)\n",
    "        t = F.relu(t)\n",
    "        t = F.max_pool2d(t, kernel_size=2, stride=2)\n",
    "\n",
    "        # (3) hidden conv layer\n",
    "        t = self.conv2(t)\n",
    "        t = F.relu(t)\n",
    "        t = F.max_pool2d(t, kernel_size=2, stride=2)\n",
    "\n",
    "        # (4) hidden linear layer\n",
    "        t = t.reshape(-1, 12 * 4 * 4)\n",
    "        t = self.fc1(t)\n",
    "        t = F.relu(t)\n",
    "\n",
    "        # (5) hidden linear layer\n",
    "        t = self.fc2(t)\n",
    "        t = F.relu(t)\n",
    "\n",
    "        # (6) output layer\n",
    "        t = self.out(t)\n",
    "        #t = F.softmax(t, dim=1)\n",
    "\n",
    "        return t"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_set = torchvision.datasets.FashionMNIST(\n",
    "    root = './data/FashionMNIST'\n",
    "    ,train = True\n",
    "    ,download = True\n",
    "    ,transform = transforms.Compose([\n",
    "        transforms.ToTensor()\n",
    "    ])\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "network = Network()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "optimizer = optim.Adam(network.parameters(), lr=0.01)\n",
    "train_loader = torch.utils.data.DataLoader(\n",
    "    train_set\n",
    "    ,batch_size=100\n",
    "    ,shuffle=True\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "epoch 0 total_correct: 46100 loss: 362.7832463532686\n",
      "epoch 1 total_correct: 51150 loss: 238.03466223180294\n",
      "epoch 2 total_correct: 51779 loss: 219.40091614425182\n",
      "epoch 3 total_correct: 52221 loss: 208.4118372797966\n",
      "epoch 4 total_correct: 52519 loss: 201.381635800004\n",
      "epoch 5 total_correct: 52650 loss: 198.1756061464548\n",
      "epoch 6 total_correct: 52792 loss: 192.37248916924\n",
      "epoch 7 total_correct: 53051 loss: 188.6718685477972\n",
      "epoch 8 total_correct: 53030 loss: 189.43409553170204\n",
      "epoch 9 total_correct: 53139 loss: 185.44945441186428\n"
     ]
    }
   ],
   "source": [
    "for epoch in range(10):\n",
    "\n",
    "    total_loss = 0\n",
    "    total_correct = 0\n",
    "\n",
    "    for batch in train_loader: # Get Batch\n",
    "        images, labels = batch \n",
    "\n",
    "        preds = network(images) # Pass Batch\n",
    "        loss = F.cross_entropy(preds, labels) # Calculate Loss\n",
    "\n",
    "        optimizer.zero_grad()\n",
    "        loss.backward() # Calculate Gradients\n",
    "        optimizer.step() # Update Weights\n",
    "\n",
    "        total_loss += loss.item()\n",
    "        total_correct += get_num_correct(preds, labels)\n",
    "\n",
    "    print(\n",
    "        \"epoch\", epoch, \n",
    "        \"total_correct:\", total_correct, \n",
    "        \"loss:\", total_loss\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Confusion Matrix Requirements\n",
    "\n",
    "To create a **confusion matrix** for our entire dataset, we need to have a **prediction tensor** with a single dimension that has the **same length** as our **training set**."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "60000"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "len(train_set)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This **prediction tensor** will contain ten predictions for each sample from our training set (one for each category of clothing). After we have obtained this tensor, we can use the **labels tensor** to generate a confusion matrix."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "60000"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "len(train_set.targets)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A **confusion matrix** will show us where the model is getting confused. To be more specific, the confusion matrix will show us which **categories** the model is predicting **correctly** and which categories the model is predicting **incorrectly**. For the incorrect predictions, we will be able to see which category the model predicted, and this will show us which categories are confusing the model."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Get Predictions For The Entire Training Set\n",
    "To get the **predictions** for all the **training set samples**, we need to pass all of the samples forward through the network. To do this, it is possible to create a `DataLoader` with `batch_size=len(train_set)`. This would pass the entire training set to the network as a single batch and give us the desired **prediction tensor** for all the training set samples in one go.\n",
    "\n",
    "However, depending on our **computing resources** and the **size of the training set** (it could be much larger if we were training on a different data set), passing the entire set through the network at once may not be feasible. We need a way to run predictions on **smaller batches** and **collect the results**. To collect the results, we'll use the `torch.cat()` function to **concatenate** the output tensors together to obtain our **single prediction tensor**. Let's build a function to do this.\n",
    "\n",
    "### Building A Function To Get Predictions For ALL Samples\n",
    "We'll create a function called `get_all_preds()`, and we'll pass a **model** and a **data loader**. The **model** will be used to **obtain the predictions**, and the **data loader** will be used to **provide the batches from the training set**.\n",
    "\n",
    "All the function needs to do is **iterate** over the **data loader**, passing the batches to the model and **concatenating** the results of each batch to **a prediction tensor** that will be returned to the caller."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "@torch.no_grad()\n",
    "def get_all_preds(model, loader):\n",
    "    all_preds = torch.tensor([])\n",
    "    for batch in loader:\n",
    "        images, labels = batch\n",
    "\n",
    "        preds = model(images)\n",
    "        all_preds = torch.cat(\n",
    "            (all_preds, preds)\n",
    "            ,dim=0\n",
    "        )\n",
    "    return all_preds"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The implementation of this function creates an **empty tensor**, `all_preds`, to hold the **output predictions**. Then, it **iterates** over the batches coming from the **data loader** and **concatenates** the output predictions with the `all_preds` tensor. Finally, the `all_preds` tensor, containing **all the predictions**, is returned to the caller.\n",
    "\n",
    "Note at the **top**, we have annotated the function using the `@torch.no_grad()` PyTorch **decorator**. This is because we want this function's execution to **omit gradient tracking**.\n",
    "\n",
    "This is because **gradient tracking** uses **memory**, and during inference (getting predictions while not training) there is **no need** to keep track of the computational graph. The decorator is **one way** of **locally turning off** the gradient tracking feature while executing specific functions.\n",
    "\n",
    "### Locally Disabling PyTorch Gradient Tracking\n",
    "We are ready now to make the call to obtain the predictions for the training set. All we need to do is create a **data loader** with a reasonable **batch size**, and pass the model and data loader to the `get_all_preds()` function.\n",
    "\n",
    "In a previous episode, we saw how to turn off PyTorch's **gradient tracking** feature when it was not needed, and how to turn it back on when we started the training process.\n",
    "\n",
    "We specifically **need the gradient** calculation feature anytime we are going to calculate gradients using the `backward()` function. Otherwise, it is a good idea to turn it **off** because having it off will **reduce memory consumption** for computations, e.g. when we are using networks for **predicting** (inference).\n",
    "\n",
    "We can disable gradient computations for **specific** or **local** spots in our code, e.g. like what we just saw with the annotated function. As another example, we can use Python's `with` context manager keyword to specify that a specific block of code should exclude gradient computations.\n",
    "\n",
    "Both of these options are valid."
   ]
  },
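  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick check of both options, the snippet below (a standalone sketch, using a throwaway `nn.Linear` rather than our network) confirms that outputs produced under either form are detached from the computational graph:\n",
    "```python\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "model = nn.Linear(4, 2)  # stand-in model for illustration\n",
    "\n",
    "# Option 1: the context manager\n",
    "with torch.no_grad():\n",
    "    out = model(torch.ones(1, 4))\n",
    "print(out.requires_grad)  # False\n",
    "\n",
    "# Option 2: the decorator\n",
    "@torch.no_grad()\n",
    "def predict(model, x):\n",
    "    return model(x)\n",
    "\n",
    "print(predict(model, torch.ones(1, 4)).requires_grad)  # False\n",
    "```"
   ]
  },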
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "with torch.no_grad():\n",
    "    prediction_loader = torch.utils.data.DataLoader(train_set, batch_size=10000)\n",
    "    train_preds = get_all_preds(network, prediction_loader)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Using The Predictions Tensor\n",
    "Now that we have the **prediction tensor**, we can pass it to the `get_num_correct()` function that we created in a previous episode, along with the **training set labels**, to get the **total number of correct predictions**.\n",
    "\n",
    "We can see the total number of correct predictions and print the accuracy by dividing by the number of samples in the training set."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "total correct: 53262\n",
      "accuracy: 0.8877\n"
     ]
    }
   ],
   "source": [
    "preds_correct = get_num_correct(train_preds, train_set.targets)\n",
    "print('total correct:', preds_correct)\n",
    "print('accuracy:', preds_correct / len(train_set))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Building The Confusion Matrix\n",
    "Our task in building the confusion matrix is to **count** the number of **predicted values** against the **true values** (targets).\n",
    "\n",
    "This will create a matrix that acts as a **heat map** telling us where the predicted values fall relative to the true values.\n",
    "\n",
    "To do this, we need to have the `targets` tensor and the predicted label from the `train_preds` tensor."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([9, 0, 0,  ..., 3, 0, 5])"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "train_set.targets"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([9, 0, 0,  ..., 3, 0, 5])"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "train_preds.argmax(dim=1)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now, if we **compare** the two tensors **element-wise**, we can see if the predicted label matches the target. Additionally, since we are counting predicted labels against target labels, the values inside the two tensors act as **coordinates** for our matrix. Let's **stack** these two tensors along the second dimension so we can have 60,000 **ordered pairs**."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [],
   "source": [
    "stacked = torch.stack(\n",
    "    (\n",
    "        train_set.targets\n",
    "        ,train_preds.argmax(dim=1)\n",
    "    )\n",
    "    ,dim=1\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "torch.Size([60000, 2])"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "stacked.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[9, 9],\n",
       "        [0, 0],\n",
       "        [0, 0],\n",
       "        ...,\n",
       "        [3, 3],\n",
       "        [0, 0],\n",
       "        [5, 5]])"
      ]
     },
     "execution_count": 18,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "stacked"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[9, 9]"
      ]
     },
     "execution_count": 19,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "stacked[0].tolist()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now, we can **iterate** over these **pairs** and **count** the number of **occurrences** at each position in the matrix. Let's create the matrix. Since we have ten prediction categories, we'll have a ten by ten matrix. Check [here](https://deeplizard.com/learn/video/kF2AlpykJGY) to learn about the `stack()` function."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [],
   "source": [
    "cmt = torch.zeros(10,10, dtype=torch.int64)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n",
       "        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n",
       "        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n",
       "        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n",
       "        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n",
       "        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n",
       "        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n",
       "        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n",
       "        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n",
       "        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "cmt"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now, we'll **iterate** over the prediction-target pairs and add one to the value inside the matrix each time the particular position occurs."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [],
   "source": [
    "for p in stacked:\n",
    "    tl, pl = p.tolist()\n",
    "    cmt[tl, pl] = cmt[tl, pl] + 1"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[5227,   10,   84,  123,   15,   10,  508,    0,   23,    0],\n",
       "        [   7, 5899,    2,   82,    3,    0,    5,    0,    2,    0],\n",
       "        [  68,    8, 4816,   48,  628,    3,  413,    0,   16,    0],\n",
       "        [ 186,   76,   31, 5292,  304,    0,  105,    0,    3,    3],\n",
       "        [  14,    9,  463,  102, 5146,    0,  236,    0,   30,    0],\n",
       "        [   0,    3,    0,    0,    1, 5913,    1,   41,    2,   39],\n",
       "        [ 931,   16,  412,  136,  555,    2, 3923,    1,   24,    0],\n",
       "        [   0,    0,    0,    0,    0,  223,    0, 5379,    2,  396],\n",
       "        [  16,    4,   28,   21,   15,   57,   60,   11, 5784,    4],\n",
       "        [   0,    0,    0,    0,    0,   45,    0,   67,    5, 5883]])"
      ]
     },
     "execution_count": 30,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "cmt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([[5227,   10,   84,  123,   15,   10,  508,    0,   23,    0],\n",
       "       [   7, 5899,    2,   82,    3,    0,    5,    0,    2,    0],\n",
       "       [  68,    8, 4816,   48,  628,    3,  413,    0,   16,    0],\n",
       "       [ 186,   76,   31, 5292,  304,    0,  105,    0,    3,    3],\n",
       "       [  14,    9,  463,  102, 5146,    0,  236,    0,   30,    0],\n",
       "       [   0,    3,    0,    0,    1, 5913,    1,   41,    2,   39],\n",
       "       [ 931,   16,  412,  136,  555,    2, 3923,    1,   24,    0],\n",
       "       [   0,    0,    0,    0,    0,  223,    0, 5379,    2,  396],\n",
       "       [  16,    4,   28,   21,   15,   57,   60,   11, 5784,    4],\n",
       "       [   0,    0,    0,    0,    0,   45,    0,   67,    5, 5883]],\n",
       "      dtype=int64)"
      ]
     },
     "execution_count": 29,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "cmt.numpy()"
   ]
  },
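  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an aside, the same counts can be computed without a Python loop by encoding each (true, predicted) pair as a single flat index and using `torch.bincount()`. This is an alternative sketch with made-up toy tensors, not the code used above:\n",
    "```python\n",
    "import torch\n",
    "\n",
    "num_classes = 10\n",
    "targets = torch.tensor([9, 0, 0, 3, 0, 5])  # toy true labels\n",
    "preds = torch.tensor([9, 0, 1, 3, 0, 5])    # toy predicted labels\n",
    "\n",
    "# Each pair (t, p) maps to the flat index t * num_classes + p\n",
    "flat = targets * num_classes + preds\n",
    "cmt = torch.bincount(flat, minlength=num_classes ** 2)\n",
    "cmt = cmt.reshape(num_classes, num_classes)\n",
    "print(cmt[0, 0].item(), cmt[0, 1].item())  # 2 1\n",
    "```"
   ]
  },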
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Plotting The Confusion Matrix\n",
    "To generate the actual confusion matrix as a `numpy.ndarray`, we use the `confusion_matrix()` function from the `sklearn.metrics` library. Let's get this imported along with our other needed imports."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [],
   "source": [
    "import matplotlib.pyplot as plt\n",
    "\n",
    "from sklearn.metrics import confusion_matrix"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<class 'numpy.ndarray'>\n"
     ]
    }
   ],
   "source": [
    "# We can generate the confusion matrix like so:\n",
    "cm = confusion_matrix(train_set.targets, train_preds.argmax(dim=1))\n",
    "print(type(cm))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([[5227,   10,   84,  123,   15,   10,  508,    0,   23,    0],\n",
       "       [   7, 5899,    2,   82,    3,    0,    5,    0,    2,    0],\n",
       "       [  68,    8, 4816,   48,  628,    3,  413,    0,   16,    0],\n",
       "       [ 186,   76,   31, 5292,  304,    0,  105,    0,    3,    3],\n",
       "       [  14,    9,  463,  102, 5146,    0,  236,    0,   30,    0],\n",
       "       [   0,    3,    0,    0,    1, 5913,    1,   41,    2,   39],\n",
       "       [ 931,   16,  412,  136,  555,    2, 3923,    1,   24,    0],\n",
       "       [   0,    0,    0,    0,    0,  223,    0, 5379,    2,  396],\n",
       "       [  16,    4,   28,   21,   15,   57,   60,   11, 5784,    4],\n",
       "       [   0,    0,    0,    0,    0,   45,    0,   67,    5, 5883]],\n",
       "      dtype=int64)"
      ]
     },
     "execution_count": 26,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "cm"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "PyTorch tensors are [array-like](https://numpy.org/doc/stable/user/basics.creation.html#converting-python-array-like-objects-to-numpy-arrays) Python objects, so we can pass them directly to the `confusion_matrix()` function. We pass the training set **labels tensor** (targets) and the **argmax** along the class dimension (`dim=1`) of the `train_preds` tensor, and this gives us the confusion matrix data structure.\n",
    "\n",
    "To actually plot the confusion matrix, we need a custom function called `plot_confusion_matrix()`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [],
   "source": [
    "import itertools\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "def plot_confusion_matrix(cm, classes, normalize=False, title='Confusion matrix', cmap=plt.cm.Blues):\n",
    "    if normalize:\n",
    "        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]\n",
    "        print(\"Normalized confusion matrix\")\n",
    "    else:\n",
    "        print('Confusion matrix, without normalization')\n",
    "\n",
    "    print(cm)\n",
    "    plt.imshow(cm, interpolation='nearest', cmap=cmap)\n",
    "    plt.title(title)\n",
    "    plt.colorbar()\n",
    "    tick_marks = np.arange(len(classes))\n",
    "    plt.xticks(tick_marks, classes, rotation=45)\n",
    "    plt.yticks(tick_marks, classes)\n",
    "\n",
    "    fmt = '.2f' if normalize else 'd'\n",
    "    thresh = cm.max() / 2.\n",
    "    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):\n",
    "        plt.text(j, i, format(cm[i, j], fmt), horizontalalignment=\"center\", color=\"white\" if cm[i, j] > thresh else \"black\")\n",
    "\n",
    "    plt.tight_layout()\n",
    "    plt.ylabel('True label')\n",
    "    plt.xlabel('Predicted label')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We are ready to **plot** the confusion matrix, but first we need to create a **list** of prediction **class names** to pass to the `plot_confusion_matrix()` function. Our prediction classes and their corresponding indexes are given by the table below:\n",
    "\n",
    "| **Index** | **Label** |\n",
    "| --- | --- |\n",
    "| 0 | T-shirt/top |\n",
    "| 1 | Trouser |\n",
    "| 2 | Pullover |\n",
    "| 3 | Dress |\n",
    "| 4 | Coat |\n",
    "| 5 | Sandal |\n",
    "| 6 | Shirt |\n",
    "| 7 | Sneaker |\n",
    "| 8 | Bag |\n",
    "| 9 | Ankle boot |\n",
    "\n",
    "![class](https://deeplizard.com/images/fashion%20mnist%20grid%20sample.png)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<Figure size 720x720 with 0 Axes>"
      ]
     },
     "execution_count": 32,
     "metadata": {},
     "output_type": "execute_result"
    },
    {
     "data": {
      "text/plain": [
       "<Figure size 720x720 with 0 Axes>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "# This allows us to make the call to plot the matrix:\n",
    "plt.figure(figsize=(10,10))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Confusion matrix, without normalization\n",
      "[[5227   10   84  123   15   10  508    0   23    0]\n",
      " [   7 5899    2   82    3    0    5    0    2    0]\n",
      " [  68    8 4816   48  628    3  413    0   16    0]\n",
      " [ 186   76   31 5292  304    0  105    0    3    3]\n",
      " [  14    9  463  102 5146    0  236    0   30    0]\n",
      " [   0    3    0    0    1 5913    1   41    2   39]\n",
      " [ 931   16  412  136  555    2 3923    1   24    0]\n",
      " [   0    0    0    0    0  223    0 5379    2  396]\n",
      " [  16    4   28   21   15   57   60   11 5784    4]\n",
      " [   0    0    0    0    0   45    0   67    5 5883]]\n"
     ]
    },
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAV0AAAEmCAYAAADBbUO1AAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8vihELAAAACXBIWXMAAAsTAAALEwEAmpwYAACKr0lEQVR4nO2dd3gUVReH35OEUKX33ntLQui99ya9iwr6iYqKiIqIBUUQUapipQmISpcmvfciKFIEpfdek3C+P+4kbEKS3U02DeblmYedOzN3zkx2z9w5997zE1XFxsbGxiZ+8EpoA2xsbGweJ2yna2NjYxOP2E7XxsbGJh6xna6NjY1NPGI7XRsbG5t4xHa6NjY2NvGI7XRt4hwRSSkiC0TkqojMjkU9XUVkmSdtSyhEpIaI/J3QdtjEP2KP07UJRUS6AK8CxYHrwG5gmKquj2W93YEXgaqqGhxbOxM7IqJAEVU9nNC22CQ+7JauDQAi8irwOfARkA3IC0wAWnmg+nzAwcfB4bqCiPgktA02CYiq2stjvgDpgBtA+2j2SY5xyqes5XMgubWtNnACeA04B5wGnrK2vQfcA4KsczwNDAWmOdSdH1DAx1rvBfyDaW0fBbo6lK93OK4qsA24av1f1WHbauADYINVzzIgcxTXFmr/QAf7WwNNgYPAJeAth/0rApuAK9a+4wBfa9ta61puWtfb0aH+N4AzwNTQMuuYQtY5/K31nMB5oHZCfzfsxfOL3dK1AagCpADmRLPP20BloDxQDuN4Bjtsz45x3rkwjnW8iGRQ1XcxredZqppGVb+NzhARSQ2MAZqo6hMYx7o7kv0yAousfTMBnwGLRCSTw25dgKeArIAvMCCaU2fH3INcwBDga6AbEADUAN4RkQLWviHAK0BmzL2rB/wPQFVrWvuUs653lkP9GTGt/j6OJ1bVIxiHPE1EUgHfA5NVdXU09tokUWynawPGaV3Q6F//uwLvq+o5VT2PacF2d9geZG0PUtXfMK28YjG05z5QWkRSquppVd0fyT7NgEOqOlVVg1V1BnAAaOGwz/eqelBVbwM/YR4YURGEiV8HATMxDvULVb1unf9PzMMGVd2hqput8x4DvgJquXBN76rqXcuecKjq18BhYAuQA/OQs3kEsZ2uDcBFILOTWGNO4F+H9X+tsrA6IjjtW0Aadw1R1ZuYV/LngNMiskhEirtgT6hNuRzWz7hhz0VVDbE+hzrFsw7bb4ceLyJFRWShiJwRkWuYlnzmaOoGOK+qd5zs8zVQGhirqned7GuTRLGdrg2Y+ORdTBwzKk5hXo1DyWuVxYSbQCqH9eyOG1V1qao2wLT4DmCckTN7Qm06GUOb3GEixq4iqpoWeAsQJ8dEO0xIRNJg4uTfAkOt8InNI4jtdG1Q1auYOOZ4EWktIqlEJJmINBGREdZuM4DBIpJFRDJb+0+L4Sl3AzVFJK+IpAPeDN0gItlEpJUV272LCVPcj6SO34CiItJFRHxEpCNQElgYQ5vc4QngGnDDaoU/H2H7WaCgm3V+AWxX1WcwseovY22lTaLEdro2AKjqKMwY3cGYnvPjQD9grrXLh8B2YC/wB7DTKovJuZYDs6y6dhDeUXpZdpzC9OjX4mGnhqpeBJpjRkxcxIw8aK6qF2Jik5sMwHTSXce0wmdF2D4UmCwiV0Skg7PKRKQV0JgH1/kq4C8iXT1msU2iwZ4cYWNjYxOP2C1dGxsbm3jEdro2NjY28YjtdG1sbGziEdvp2tjY2MQjduINDyC+aVRSZXK+o4v4FcrisbrAyQDRRICzAa4JSfB9z949H6/EfLWeZ+fOHRdU1WNfaO+0+VSDH5rQ9xB6+/xSVW3sqfN6EtvpegBJlYnktd50vqOLbPjlOY/VBRDiYcfh6REvPt6J94Xr8s17Hq0vQ2pfj9aX2EmZTCLOGowVGnyb5MWcjsLjzu7xzmYIJhi207WxsUk6iICXd0JbESsSbxMj
6XBsz4albB7djvWj2gLwUa/K7B7fka1ftGfWm41IZ7Vu6pbLzYZRT7Lti/ZsGPUktcqY1AVpUiZj8+h2YUtyb/CJ8Jfp+0xv8ubMSkD50mFlly5dolnjBpQuUYRmjRtw+fJll40e98VoKpQvTaBfGXp178KdOw/SAgx45SWyZXzCaR3P93maAnmyU9G/bFjZ228OxL9sSSpXKE/nDm25cuUKANu3baVqRX+qVvSnSqAf8+dFl9As8uv98P2hFMyXi0oB5akUUJ4li39z+Xo9cf8qlilK3ar+1K8eSOPaVQC4fPkSHVs3oZp/STq2bsKVK6aOa1ev0qNjG+pXq0DtyuWZOW2yy7YuW7qEsqWKUap4YUaOGO7ycaEcP36cRvXr4Fe2JP7lSjFuzBcAvPfuOwT6laVSQHmaN2nIqVMxm8UdW/tijXg5XxIx9uSI2HMsS8Hy+W6UeRASqFc+N6v3niTkvvJhj0oADJ6yhXIFMnHu6m1OX7pFybwZWDC0OYV6T32owptznyMoJHwsdv26taROnYZnevdgx+59ALw1aCAZMmbk9YGDGDliOFcuX2bYx588VF/E8MKpkydpUKcG2/fsJ2XKlHTv0pFGjZvQrUcvdu7YzoRxY1gwbw5nL12P9IJDvzPr160lTZo09Hm6F1t37gVgxfJl1KpTFx8fH955exAAHwwbzq1bt/D19cXHx4czp09TpaIfh46ewMfHJ9LwQmTX++H7Q0mdJg2vvBpdhsbIien9cwwvVCxTlMWrN5Ip04M31w+GvEn6DBl58ZXXGTt6JFevXGbwex8xZtQnXLt2lcHvfcTFC+epUaEMuw/+R7YM0ecACgkJoUzJoixavJxcuXNTvXIgk6fNoETJki5f6+nTpzlz+jR+/v5cv36dqpUC+OnnueTKnZu0adMCMH7sGA789SdjJ7g329hd+1Imkx2qWsGtk0SDV+psmryk84l6d7aP9uh5PUnifiQkUVbsPhHm6LYePEuuzOaHtufoRU5fugXAn/9dJoWvN74RmrSFc6ZDeLjzq3qNmmTMGD4HysIF8+jWvScA3br3ZMH8uS7bGBwSzO3btwkODub2rVvkyJGTkJAQ3n5zIB9+9LDjjozqNWqSIUN4m+o1aIiPj4laBVasxKkTJwBIlSpVWPmdO3cQib5DKbLrjQ2evn+hLP1tAR06dwOgQ+duLFk0HwAR4eaN66gqN2/cIH2GDGHXHx3btm6lUKHCFChYEF9fX9p37MTCBfPcsilHjhz4+fsD8MQTT1C8eAlOnToZ5nABbt266fRvEFf2xQ5J8i3dxG1d0kCXzpnGhlFP0rthiYc29qhXnKU7/nuovE3Vguz+5wL3gsPncmlfozAhLr58nDt7lhw5cgCQPXt2zp096+QIQ85cuXip/2uUKJyPQvlykjZdOuo1aMiXE8bRrFkLslt1xpapk7+nQaMHHcjbtm4h0K8MlSuU4/OxE1xyQhH5csI4Av3K0veZ3m6FUyLD3fsnAp3bNKNRrcpM++EbAC6cO0e27KaOrNmyc+HcOQCeevZ5Dv39N37F81O3WgDvDx+Fl5fzn9upUyfJnTtP2HquXLk5eTLmidP+PXaM3bt3EVjRvHG9+87bFC6Qh5kzpvPO0Pfdrs/T9rmNYGK6zpZETJw7XRHJJCK7reWMiJx0WI+yK1dE8ovIvii2vS8i9aPY1ktEckYo6yQib4tIbRGpGrsreojqFWo1o/X7i+jbtBTVSj5wWAPb+xNyX5m55lC4A0rkycCHPSrRb8LahyprX6MQIZHl1HKCiLjccrl8+TKLFs5n39//cPjYSW7dvMmP06Yw99efee6FF90/eSSMHP4RPj4+dOz84FUwsGIltu36g9UbtvDZyE/CxZFd4dm+z/Pn30fYsmM32XPkYNDrr3nEVnDt/s1dsopla7cw/ef5/PD1l2zesC7KOlavXE6pMmXZdeAYy9dt5e3X+3P92jWP2esKN27coHOHJxk56vOwVu57
IK+170eYTtCKQDHMQ3MuMNgD39+IMdxUmId1M4eyJsBnCfUbSwxLghtgL9YfwvygpgIdHcqWABOsz+9iWiRVceg4wvRcl8C0IsZhWn25rW3PYFpeMWpNAFkwrckxmDjgb6Hntpx52og/tAjHZ7YcWCnLsXXCtFhrY17jVxBJCyuSH29Zy2m8gGnBp8OETX4lhp1omBb31lCHYN2/QsBSh32OWE4qVVT2WfdnGPAtkN9yZFOt6yziwe9HI4yD/9m6pz8Dvta254AGEe2zHNxBYDBwFqiA6Xx7AxNy2mJ9d9oBI4hFR2uEe1IF87BOCzS27mNVa9uz1t8wRXTfnUd5SXADHtfF+pEXsz7nx8S3xgIdHPbJB0y2PifDxFFHhDoBywn9ARS01htgWmirLGexDygVQ/vKA59gOoXGYF5dC1nbnsK82maPcExEZ+mDeRDMwowE+Ab4EdOaTAakj+S8jj/edEBa67MXphXZxFr/wHIcWdy8LseW4zBMh1x7az0vcAATZ2yKeWPIFd01WmV5MK3HL62/ZVFM6/EFYjh6wXKOoa3sJ4DN1ud6lhMLtbkn8Ffod8Dh+FJYIReMwz5rXVs9a3sWIIO17U+gtIe+1y9i3mwmY1rRDa17eRKYYJ2rpCfOlVSXBDfgcVwwHVA1MU/9TzCdU6mB7pihRqWs/epYTiGLte7j8LkMphWby1qvhYnzhb42tyN2Mc5ATKu5GKZT7itMfG48sJ8IzjyCs2yFiUsXwzjvdlhDxDCdPj87c0aY1+e51r3piom1vm6d/33LIUYZ1nDh+vphWso/YuLkbazy54AN1rWXjnBMQYfP/YHvMZNPsmFa9UMs+wpiwg05Y2ibDyYOvhKoZpVtxbQcU1j38x/gB+s7UArjXNsCrRzqKYoZ4rfdWn8DuAfUtdbTWn/XGD2YI7G7iGVPdkxoob51H4tY34WSsfmbPSpLghvwuC2YFu4LwJOYVuBd4F2H7S9jWi5jMK2C0NffiDHc1JiW8ZfWsgbTmuwQS/tyhJ7LciyzLCdQ0PoRtSeaMbaWs9hoOdcgB6chQC9My/yhlg4m9lgR8wpd18HJdLAcQy9MSOJlYEFEh+jmNRaynENea70zpnXWzlrPSoSQDOZB+TfmVb0ipuXfDfPQnG/dt0yYN43PiOX4XKuuPta11sOMnoAH8dsimI7RrJZz3QeMsu79cw71PMODEFVtTOvTsaMvxmOcHb4noeGMgsCiCPsMBZ5J6N9dYloS3IDHcbGc7juYp/9HmNfcTjyIl1YDSgN+1rpjK7IRsMb63AIT6y1rrb+LmYEW7hg37KqE6aybZDm83JZTibLFHMG2rJjWcDqgNyZm6+2w7dMoHG5jy+n1sM7ZEZjqsL0hDq/Q7jqKiPcC8+CbgRkpEnrP38MMhWscyfHNMa3EupiW91KgubUti+VY5mJGZ2Qkkk63GNqZFvgf5gF0HxNeWYEZAfAlJvl2SUwYoYV1TDfMg6+8tV4D+AnTwbo31OHG5PsRlb2YsExofHkuVkjMWh8GfJjQv7nEtCS4AY/jgmn1LQaet9ZfBj7HxL6qY15xw7UiHI71wnREzI5Q3gUXRgJEtMPhczLMq2tpTItpmfWD3wOMc+H4LphX6jcwjnuJg0N7DfMK/lDrDxMWOQQEOpSVx8QEKzmUfYc1VM4dhxHBxvw8CMeMwDz4QsMerTCtytwRjm9u3YPQVnAuzOSQrx32yYQZ/TCTGHZGRbCzHubBm81afx7TQu2NeSjl50F8vTpw3+HYvdZ3a7d1z7wxbyjvYsXDPfD9rQfUtz6/Yt2fadb3Jj3mjWupdX/3YPVd2It1/xLagMdhsZzZE9bn0FfawphWZFZMi+YVYCJwDochNg51FMUadmT9kGYBv1nrJaz1GI21tH7U0yxnWcsqq4uJMR/ChDkyRHN8e0zsuTCmBbaPBx1g7S1HkD+KY18FXrY++1j/p8O8AQzHxHZ7YTqPcrt5XRLhPPstxzoASImJ
iU6z7t0eIgxrw8QmV2E9EBzufw1MfoZ+DvvGuIUb4Zz9MC3bd6x7H/p9eQ4T0qgWyTFNMDHe34EhVpkvZuTCa1Hdk1jY2BnT8n4J81ZUAvOw/hbTGvfChJee4jGe7hvl/UtoAx6HxfpRvIx5ff4L80qa2XIsoa+FqTCtwdARDaGvbl6YUQw7rC9x6MiFFJgB9wut9ZiOA22DibOWAj7GtFa6OWwvRvQx3EDLGbxgrWe1fvw/YFp+24kk/upwfWOxXj9xyH+A6Vl/y/ohf0ssOnswYZNpmHG1xa17GTpA3w/zcHkohGLZsAzTaZkCE0ZYjXHSazEz197z4PekPqYTLzVmZtt/mNl7oWGVZ4miIwrT+gwm/OiMpyM6XQ/YGPp3awvcwgoDYd6U0mI6SQM9ec5HbUlwAx6HxXKcqzApFts5lNfGxDIDIuwf2bCk6ph4Xg8etLjetpxALjdsqYnDa6b1437b+uyNaVX+SoSxqQ77F8EMqaqLaZHmwLR2VgHlrH3SWfY2wGqpRWNPXctJBzjcq9AWbz9MC983hvddMCMv9mK9alvl+TCtyS9cOP41zKvyCcyD5BnMq39oKzxsdElM7IuwntG6nz2xEhVhxvzei8rZRji+KXDY+lwYM0SsoYe/y+LgeJ/EtHhrOWz/EWskiL1EcQ8T2oBHdSH8q20Zy1kuwnQs5ONBvLMdJt72UEsV07L9GvgQM0GgjOV4X8MMm1rs7g8e87p/ngcz25phet/LOuyzjMgnLTTDDAGaAyzHpC0sjYkzvodptbo1qgDTqhuKibEGOJR3ss7l1hCjiI7MKgud4hs2sQTT6l2FebuIboJHGsxg/3AZuTCv0Q2iO9aN70cxoITD+mAevDk8a31vHprRF0W9jTEt0P1E0inoqe+2g+Ptihml8rblhP/Cg5NCHsUlwQ141BdMS20FZniPF+bVdIzlbGph5u6PxRrT6fBlfhbzat4RE+/9w/rxl8TkK/gRN+byY4Zk1bI+t8XEABtjBt5/iAkttMB0KO2M6MytfTcTvlUzFDiGiellxTw8fsDNjhNM59QQzLC3UZY9B9x14BHqbIcZclXOWu+LeZg4jliI6cSF9pgQhUuO0Eldr1rfjyWYUQYZeRArHYt5E8rhZp318FBrM6qHivVdDv2utse0eL/HyZuNvdhO1/M31HS+BFpfysKWUxrtsD2N9eP6BjiFGQJWC4eOJmuf/kBTh7LWmFELoaEFt/IpWD/u9TyY5dQe0zlVE9Mb/hym5Tw71FE5HJvR+lGFDpNK4bDtPUyHT0pMa/x1IsxUc9G+lJZDHGo5yKJuHp/K4XN/TJz5XUxrtq9V/gxm6muVGP5tc1h174/NA8GhvgY86Az9EGsKsnW/O2NCGDFORBOVw4zJ8Zh+iQ6YVnnK0O0OjrcJ9igF1+5rQhvwqC3WD9vxi/kh5pUrr8M+KTGvuuWt9eaYnubkmGFaH2MmBMx1OCYbpnWb3k17HH84L2DCAjWt9Q6W463nYFdUsdxmmNZ2Jmvd8VV7NQ/GFMdZUuxorrEZZshdLkyn2SyrfACm1T7JwfH2IIatMev+NMMDLVyrPn/MMLAPMXHj0BZ4jB4KHryfEWPNr2LCM8Oth1Z9x31j69wftyXBDXgUF8yr9ndAHWv9E+vL+tC0UB7MfS9mOY1lDq2H1ZhXNm9MjHMDMU9eE5p96rkIjvdJTA+50w4XqzVzBGv4mIOTmIcH0hbG8LpCx9G2ttZTWvexmXX/fDCt532hjjeB7HxI6QPzZrAVk0go9G/+DOaNJENCOTOsoW88eFubaa33x4RBvDCjFWxnG4PFVo7wAGIRuq6q5zC93Z1FpLqqvoGJ2y0XB2VeEWmI6ZD5C9O5Ng4zdKyEtUtzTCfVFExct69Vt7v2lcWoLLRW1S8xoxPeEZEaqvoLpvX3j7N6VHUxZkTBdhHJoKpBItIDE1I5465dsUVEsmM6FZ9R1bkikhITBhHMPVymqsGYoVcbMR2A
CYKGelqRl0XkW0xn2b+Y+H5a4H8i8j4mnttXVS+HHhNfWF/jrMC/ItJSTTL0S8B5EfkR00BobpU/iWlc2LhJvEhFP8qISHJVvWt9rooZg7tJVYdYSg69LfGDt0QkGSZ+elJE6mGc7CsYp+WPabEVACqJyDVVPQE0sMQovdVSKHBij8CDH7n1ea+I7AIaich9VZ0oIgqMEpH+akQDXUJVF1sKC2tFZAJmjOvTMXkYeIC7mJ7zOyKSAjMbrjpmvGpGjIRNYUzcunlC2CgiqVT1lvW5OqaD7ytMK/cHzAPvHGYUiDdmSOHB+LYzFFU9JyK9ge9FpJeqLhCRm5jxzH1UNdiS1xmAiZvbuIkt1xMLxEiQz8WatosZenUY07JaqKrzReRVTMfaRFVd63BsIOb1fKMlQ9MJI7+SHjOtdD2wWlXdUpO1NMOCrc+NMOGIqdb6S5gptj+p6hLrx/W7qv4Xg2tvjmkx+6nqfneP9wTWA+ZVTG6GUpjxvusxbw6h6QRvAetV9XAC2NcM01k2AuP4/weMUNWFIpIFkwaxNNA/Jn+DuEREGmM6fFth7udQTNjmPCZu3iGh/u5JHdvpxhIReYMHqfbeVdUDItIXM6Z2meV4BwJLNBIdKBHxUiOIWAyTv+AmZnJBIYwTn6URtMeisaUBpmNmD6ZjDszQs+9Udba1z3eYVtY7Vrggxji24hIKEUmDudd5gHkObx2TgflW+CQh7GqOGZM9RFXniUgezHjbjar6nLVPJkzrvADmbx+iTmTU48hWr9DzikhnTN/DKBFph+mbaKqq6603uUzAnsT2kEhK2DHdGOLwGv8JJt9AC0xqOzDDrvYCrUSkjaqOiMzhWsfft/7/GzM6IQXmlfkvYJUbDrcx5ke+ETMGuDEma9Z3QHcR6WDtuhLjkHe4dcGR256gDtey4YaqblLVnxwcbntMC3J3QtgUIdY8T0RSq+pxTEu3kRWeQVUvYkYEPK+qQQnkcMsBi6wQFpjW7EXLvp8xMxTnWd/jjaq6wHa4scOO6caAUIVaK0Z3VVW/FZHMwCciclZVd4jIz5gYncvxOVX92zquJSaL1UUX7cmI6QFvZcXg8mJeaZNj5sIDvG+p05bDDJxPiBhsnCIiOTCTSZ7FyB4dSSBTIsaaX7fUis9i8jW8ISJZVPVdVb2UQDYCoKp7RCQYmCUibTCjJi44bP/VamCMEZHlwK2EeDg8StjhhRgiIk0wKgE9VXWdVfY/zOv9C6q6xTG+6mbdydRNOWwrfjgCM8bzmohMB9aq6lfW9lKY7Fi/J0R8Mz6wRi/UBf5OyGt0EmtugXFqrTEPv/MJaKNX6JuUiPyCyfFwxPr/L8zDA8xQxVuqejshbH3UsJ1uDLCGff0GPKuqW61XtLSYmVltMa+RVYEb8dkqsB4EYzAD7XMCXVX1dmjLPL7ssIk21jwFk8Tm94T6mzh+H0Qkl6qetD5/hXlL+ArTAZkWM+75LTuk4Dlsp+sCIlISMx1zlrWeDpPv9BZmiFhhzPjQ2ar6jYgUUNWjCWRrfcwEi+zW8J8UqnonIWyxCY8Vax6ECX0kSEs8gsPth0lYswMj6fOniIzHTKRpae3jq6r3EsLWRxW7I80JIlIUI48S2tGAql7FzCLzxUzVbYAZPhVgbU8Qh2ud+3fMbKxVIpLVdrgJj4jkEJH+mGFXPRMy9OHgcFtjQjH9MJNJ+opIFVV9AfASkdBYrtvhMZvosTvSosEaxrUQ+FlVv7PKUlqxrakYmZQQEamEUV8YlHDWPkDNBAZfYImIVDBF9itNAnIFE3pqlRji6SJSHJMa9Eer0/cfTJL9TiLirarNRSSn9Z2xvzcexm7pRoEVUpiGyRJ2VUSqAVgx0oKYPLfZxUyx7Y8Zj7kkdChZQqOq8zD5Fe7bDjdhUdXbqrooAUMK2SIUXcO8mXW1WreXMSk17wEtrZDUqfi283HBjulGgtUL/hsm/eJCzJhLX8zss+2YBC8bVXWYtX92VT1j
d1jZJDasVu2fGDXgv1R1klWeAjOTsh7wkapusjr/UqjqhSgrtIk1ttONglBHan0uhulw8ME43YOqus9xJo+NTWJERHJjtOoWYBzsWcz03pWqelNEXsCMbR6oqpsTztLHBzu8EAUODtfLmi02FdOpUA6THwHb4dokdtQkTdqKSajUFDMV+VngNyvevweTeOlkghn5mGE7XSc4TNM9hHG8KTBxrwwJapiNjRMc+hcGYTrEMmNScJbGyCG9jUm0tNSapmwTD9jhBTcRkSIQ5oRtbBI1luNNhhlXXhAzrHGQmvzDRYHzVkeaTTxhO10bm8cAq19iDTBeVT9IaHseZ+zwgo3NY4DVLzEI8BaRVAltz+OM7XRtbB4fNmM61GwSEDu8YGPzGCGJIPH8447tdG1sbGziETu8YGNjYxOP2E7XxsbGJh6xna6NjY1NPGI7XRsbG5t4xHa6NrFGREJEZLeI7BOR2bEZByoiP4iR/kZEvrFSbEa1b20xsuDunuOYJSTqUnmEfW64ea6hIjLAXRttHl1sp2vjCW6ranlVLY3Jyfqc40YRiVGyfFV9RlX/jGaX2hgtOhubJIPtdG08zTqgsNUKXSci84E/RcRbREaKyDYR2SsifcHkBhCRcSLyt4j8DmQNrUhEVluZsBCRxiKyU0T2iMgKEcmPce6vWK3sGiKSRUR+sc6xLTTxvIhkEpFlIrJfRL7ByNNEi4jMFZEd1jF9ImwbbZWvEJEsVlkhEVliHbPOymNrY/MQtlyPjcewWrRNgCVWkT9QWlWPWo7rqqoGikhyYIOILAP8gGJASSAbJuH2dxHqzYJR6qhp1ZVRVS+JyJcYxeVPrf1+BEar6noRyYtRRS4BvAusV9X3xUjVP+3C5fS2zpES2CYiv6jqRYxW3nZVfUVEhlh19wMmAc+p6iFLvmkCRoPMxiYcttO18QQpRWS39Xkd8C3mtX+rg0hnQ6BsaLwWSAcUAWoCM1Q1BDglIisjqb8ysDa0LlW9FIUd9YGSDopJaS01hJpAW+vYRSLiSlatl0SkjfU5j2XrRYzq8yyrfBrwq3WOqsBsh3Mnd+EcNo8httO18QS3VbW8Y4HlfG46FgEvqurSCPs19aAdXkDliArI7srWiUhtjAOvoqq3RGQ1Jo9yZKh13isR74GNTWTYMV2b+GIp8LyIJAMjbS8iqYG1QEcr5psDqBPJsZuBmiJSwDo2o1V+HXjCYb9lwIuhKyJS3vq4FuhilTUBnCWgTwdcthxucUxLOxQvILS13gUTtrgGHBWR9tY5RETKOTmHzWOK7XRt4otvMPHanSKyD/gK86Y1ByNP/icwBdgU8UBVPQ/0wbzK7+HB6/0CoE1oRxrwElDB6qj7kwejKN7DOO39mDDDf05sXQL4iMhfwHCM0w/lJlDRuoa6GClzMBp6T1v27QdauXBPbB5D7IQ3NjY2NvGI3dK1sbGxiUdsp2tjY2MTj9hO18bGxiYesZ2ujY2NTTxiO10bGxubeMR2ujY2NjbxiO10bWxsbOKR/wMRd3064MxbgAAAAABJRU5ErkJggg==\n",
      "text/plain": [
       "<Figure size 432x288 with 2 Axes>"
      ]
     },
     "metadata": {
      "needs_background": "light"
     },
     "output_type": "display_data"
    }
   ],
   "source": [
    "plot_confusion_matrix(cm, train_set.classes)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Quiz 03\n",
    "Q1:In deep learning, another name for labels is _______________.<br>\n",
    "A1:targets\n",
    "\n",
    "Q2:To build a confusion matrix, we use a rank _______________ tensor.  \n",
    "A2:two\n",
    "\n",
    "Q3:Using the fact that this confusion matrix was built using the entire FashionMNIST training set, what will be the result of calling `cmt.sum()`?\n",
    "```python\n",
    "> cmt\n",
    "tensor([\n",
    "    [5637,    3,   96,   75,   20,   10,   86,    0,   73,    0],\n",
    "    [  40, 5843,    3,   75,   16,    8,    5,    0,   10,    0],\n",
    "    [  87,    4, 4500,   70, 1069,    8,  156,    0,  106,    0],\n",
    "    [ 339,   61,   19, 5269,  203,   10,   72,    2,   25,    0],\n",
    "    [  23,    9,  263,  209, 5217,    2,  238,    0,   39,    0],\n",
    "    [   0,    0,    0,    1,    0, 5604,    0,  333,   13,   49],\n",
    "    [1827,    7,  716,  104,  792,    3, 2370,    0,  181,    0],\n",
    "    [   0,    0,    0,    0,    0,   22,    0, 5867,    4,  107],\n",
    "    [  32,    1,   13,   15,   19,    5,   17,   11, 5887,    0],\n",
    "    [   0,    0,    0,    0,    0,   28,    0,  234,    6, 5732]\n",
    "])\n",
    "```\n",
    "A3:60000\n",
    "\n",
    "Q4:Using the code below, how many of the predictions inside the `preds` tensor are correct?\n",
    "```python\n",
    "> train_set.targets\n",
    "tensor([9, 0, 0, 3, 0, 5])\n",
    "\n",
    "> preds.argmax(dim=1)\n",
    "tensor([9, 0, 5, 4, 0, 5])\n",
    "```\n",
    "A4:4\n",
    "\n",
    "Q5:_______________ is branch of philosophy concerned with the theory of knowledge.  \n",
    "A5:Epistemology\n",
    "\n",
    "---\n",
    "---"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
