{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 04 Run PyTorch Code On A GPU - Neural Network Programming Guide\n",
    "\n",
    "**In this episode, we're going to learn how to use the GPU with PyTorch. We'll see how to use the GPU in general, and we'll see how to apply these general techniques to training our neural network.**\n",
    "\n",
    "## Using A GPU For Deep Learning\n",
    "### PyTorch GPU Example\n",
    "PyTorch allows us to seamlessly move data to and from our GPU as we preform computations inside our programs.\n",
    "\n",
    "When we go to the GPU, we can use the `cuda()` method, and when we go to the CPU, we can use the `cpu()` method.\n",
    "\n",
    "We can also use the `to()` method. To go to the GPU, we write `to('cuda')` and to go to the CPU, we write `to('cpu')`. The `to()` method is the preferred way mainly because it is more flexible. We'll see one example using using the first two, and then we'll default to always using the `to()` variant.\n",
    "\n",
    "| <center><b>CPU</b></center> | <center><b>GPU</b></center> |\n",
    "| --- | --- |\n",
    "| <center>`cpu()`</center> | <center>`cuda()`</center> |\n",
    "| <center>`to('cpu')`</center> | <center>`to('cuda')`</center> |\n",
    "\n",
    "To make use of our GPU during the training process, there are two essential requirements. These requirements are as follows, the **data** must be moved to the GPU, and the **network** must be moved to the GPU.\n",
    "1. Data on the GPU\n",
    "2. Network on the GPU\n",
    "\n",
    "By default, when a PyTorch tensor or a PyTorch neural network module is created, the corresponding data is initialized on the **CPU**. Specifically, the **data** exists inside the CPU's memory.\n",
    "\n",
    "Now, let's create a tensor and a network, and see how we make the move from CPU to GPU.\n",
    "\n",
    "Here, we create a tensor and a network:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<torch.autograd.grad_mode.set_grad_enabled at 0x217323b7d60>"
      ]
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import json\n",
    "import time\n",
    "\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "import torch.optim as optim\n",
    "import torchvision\n",
    "import torchvision.transforms as transforms\n",
    "import pandas as pd\n",
    "\n",
    "\n",
    "from torch.utils.tensorboard import SummaryWriter\n",
    "from itertools import product\n",
    "from collections import namedtuple, OrderedDict\n",
    "\n",
    "torch.set_printoptions(linewidth=120)  # Display options for output\n",
    "torch.set_grad_enabled(True)  # Already on by default"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "class Network(nn.Module):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self.conv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5)\n",
    "        self.conv2 = nn.Conv2d(in_channels=6, out_channels=12, kernel_size=5)\n",
    "\n",
    "        self.fc1 = nn.Linear(in_features=12 * 4 * 4, out_features=120)\n",
    "        self.fc2 = nn.Linear(in_features=120, out_features=60)\n",
    "        self.out = nn.Linear(in_features=60, out_features=10)\n",
    "\n",
    "    def forward(self, t):\n",
    "        t = t\n",
    "\n",
    "        t = self.conv1(t)\n",
    "        t = F.relu(t)\n",
    "        t = F.max_pool2d(t,  kernel_size=2, stride=2)\n",
    "\n",
    "        t = self.conv2(t)\n",
    "        t = F.relu(t)\n",
    "        t = F.max_pool2d(t, kernel_size=2, stride=2)\n",
    "\n",
    "        t = t.reshape(-1,12*4*4)\n",
    "        t = self.fc1(t)\n",
    "        t = F.relu(t)\n",
    "\n",
    "        t = self.fc2(t)\n",
    "        t = F.relu(t)\n",
    "\n",
    "        t = self.out(t)\n",
    "\n",
    "        return t"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "t = torch.ones(1,1,28,28)\n",
    "network = Network()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now, we call the `cuda()` method and reassign the tensor and network to returned values that have been copied onto the GPU:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "t = t.cuda()\n",
    "network = network.cuda()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next, we can get a prediction from the network and see that the prediction tensor's device attribute confirms that the data is on cuda, which is the GPU:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "device(type='cuda', index=0)"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "gpu_pred = network(t)\n",
    "gpu_pred.device"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Likewise, we can go in the **opposite** way:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "device(type='cpu')"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "t = t.cpu()\n",
    "network = network.cpu()\n",
    "\n",
    "cpu_pred = network(t)\n",
    "cpu_pred.device"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This is, in a nutshell, how we can utilize the GPU capabilities of PyTorch. What we should turn to now are some important details that are lurking beneath the surface of the code we've just seen.\n",
    "\n",
    "For example, although we've used the `cuda()` and `cpu(`) methods, they actually **aren't our best options**. Furthermore, what's the difference with the methods between the **network instance** and the **tensor instance**? These after all are different objects types, which means the two methods are different. Finally, we want to integrate this code into a working example and do a performance test.\n",
    "\n",
    "### General Idea Of Using A GPU\n",
    "The **main takeaway** at this point is that our **network** and our **data** must **both exist on the GPU** in order to perform computations using the GPU, and this applies to any programming language or framework.\n",
    "![CPUGPU](https://deeplizard.com/images/gpu%20vs%20cpu.jpg)\n",
    "As we'll see in our next demonstration, this is **also true for the CPU**. GPUs and CPUs are compute devices that compute on data, and so any two values that are directly being used with one another in a computation, **must exist on the same device**.\n",
    "\n",
    "## PyTorch `Tensor` Computations On A GPU\n",
    "Let's dive deeper by demonstrating some tensor computations.\n",
    "\n",
    "We'll start by creating two tensors:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "t1 = torch.tensor([\n",
    "    [1,2],\n",
    "    [3,4]\n",
    "])\n",
    "\n",
    "t2 = torch.tensor([\n",
    "    [5,6],\n",
    "    [7,8]\n",
    "])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now, we'll check which **device** these tensors were **initialized** on by inspecting the device attribute:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(device(type='cpu'), device(type='cpu'))"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "t1.device, t2.device"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As we'd expect, we see that, indeed, both tensors are on the **same device**, which is the CPU. Let's **move** the first tensor t1 to the **GPU**."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "device(type='cuda', index=0)"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "t1 = t1.to('cuda')\n",
    "t1.device"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can see that this tensor's device has been changed to `cuda`, the GPU. Note the use of the `to()` method here. Instead of calling a particular method to move to a device, we call the same method and pass an argument that specifies the device. Using the `to()` method is the preferred way of moving data to and from devices.\n",
    "\n",
    "Also, note the reassignment. The operation is not in-place, and so the reassignment is required.\n",
    "\n",
    "Let's try an experiment. I'd like to test what we discussed earlier by attempting to perform a computation on these **two tensors**, `t1` and `t2`, that we now know to be on **different devices**.\n",
    "\n",
    "Since we expect an error, we'll wrap the call in a `try` and `catch` the exception:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!\n"
     ]
    }
   ],
   "source": [
    "try:\n",
    "    t1+t2\n",
    "except Exception as e:\n",
    "    print(e)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "These errors are telling us that the binary plus operation expects the second argument to have the same device as the first argument. Understanding the meaning of this error can help when debugging these types of device mismatches.\n",
    "\n",
    "Finally, for completion, let's move the second tensor to the cuda device to see the operation succeed."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[ 6,  8],\n",
       "        [10, 12]], device='cuda:0')"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "t2 = t2.to('cuda')\n",
    "t1 + t2"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## PyTorch `nn.Module` Computations On A GPU\n",
    "We've just seen how tensors can be **moved** to and from devices. Now, let's see how this is done with PyTorch `nn.Module` instances.\n",
    "\n",
    "More generally, we are interested in understanding **how** and **what** it means for a **network** to be on a device like a GPU or CPU. PyTorch aside, this is the essential issue.\n",
    "\n",
    "We put a network on a device by moving the network's parameters to that said device. Let's create a network and take a look at what we mean."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [],
   "source": [
    "nwtwork = Network()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "conv1.weight \t\t torch.Size([6, 1, 5, 5])\n",
      "conv1.bias \t\t torch.Size([6])\n",
      "conv2.weight \t\t torch.Size([12, 6, 5, 5])\n",
      "conv2.bias \t\t torch.Size([12])\n",
      "fc1.weight \t\t torch.Size([120, 192])\n",
      "fc1.bias \t\t torch.Size([120])\n",
      "fc2.weight \t\t torch.Size([60, 120])\n",
      "fc2.bias \t\t torch.Size([60])\n",
      "out.weight \t\t torch.Size([10, 60])\n",
      "out.bias \t\t torch.Size([10])\n"
     ]
    }
   ],
   "source": [
    "# Now, let's look at the network's parameters:\n",
    "for name,param in network.named_parameters():\n",
    "    print(name,'\\t\\t',param.shape)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here, we've created a PyTorch network, and we've iterated through the network's parameters. As we can see, the network's parameters are the **weights** and **biases** inside the network.\n",
    "\n",
    "In other words, these are simply tensors that live on a device like we have already seen. Let's verify this by checking the **device** of each of the parameters."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "cpu  conv1.weight\n",
      "cpu  conv1.bias\n",
      "cpu  conv2.weight\n",
      "cpu  conv2.bias\n",
      "cpu  fc1.weight\n",
      "cpu  fc1.bias\n",
      "cpu  fc2.weight\n",
      "cpu  fc2.bias\n",
      "cpu  out.weight\n",
      "cpu  out.bias\n"
     ]
    }
   ],
   "source": [
    "for n,p in network.named_parameters():\n",
    "    print(p.device,'',n)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This shows us that all the **parameters** inside the **networ** are, by default, initialized on the **CPU**.\n",
    "\n",
    "An important consideration of this is that it explains why `nn.Module` instances like networks don't actually have a device. **It's not the *network* that lives on a device**, but the ***tensors* inside the *network* that live on a device**.\n",
    "\n",
    "Let's see what happens when we ask a network to be moved `to()` the GPU:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Network(\n",
       "  (conv1): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1))\n",
       "  (conv2): Conv2d(6, 12, kernel_size=(5, 5), stride=(1, 1))\n",
       "  (fc1): Linear(in_features=192, out_features=120, bias=True)\n",
       "  (fc2): Linear(in_features=120, out_features=60, bias=True)\n",
       "  (out): Linear(in_features=60, out_features=10, bias=True)\n",
       ")"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "network.to('cuda')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note here that a **reassignment** was not required. This is because the operation is in-place as far as the network instance is concerned. However, this operation can be used as a reassignment operation. This is preferred for consistency between `nn.Module` instances and PyTorch tensors.\n",
    "\n",
    "Here, we can see that now, all the network parameters are have a device of `cuda`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "cuda:0  conv1.weight\n",
      "cuda:0  conv1.bias\n",
      "cuda:0  conv2.weight\n",
      "cuda:0  conv2.bias\n",
      "cuda:0  fc1.weight\n",
      "cuda:0  fc1.bias\n",
      "cuda:0  fc2.weight\n",
      "cuda:0  fc2.bias\n",
      "cuda:0  out.weight\n",
      "cuda:0  out.bias\n"
     ]
    }
   ],
   "source": [
    "for n,p in network.named_parameters():\n",
    "    print(p.device,'',n)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Passing A Sample To The Network\n",
    "Let's round off this demonstration by passing a **sample** to the network."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "torch.Size([1, 1, 28, 28])"
      ]
     },
     "execution_count": 19,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "sample = torch.ones(1,1,28,28)\n",
    "sample.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same\n"
     ]
    }
   ],
   "source": [
    "# This gives us a sample tensor we can pass like so:\n",
    "try:\n",
    "    network(sample)\n",
    "except Exception as e:\n",
    "    print(e)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since our **network** is on the **GPU** and this newly created **sample** is on the **CPU** by **default**, we are getting an error. The error is telling us that the CPU tensor was expected to be a GPU tensor when calling the forward method of the first convolutional layer. This is precisely what we saw before when adding two tensors directly.\n",
    "\n",
    "We can fix this issue by sending our sample to the GPU like so:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[-0.0682, -0.1137,  0.0062, -0.1020, -0.1043, -0.1616,  0.0101, -0.0623, -0.1047, -0.0606]], device='cuda:0',\n",
      "       grad_fn=<AddmmBackward>)\n"
     ]
    }
   ],
   "source": [
    "try:\n",
    "    pred = network(sample.to('cuda'))\n",
    "    print(pred)\n",
    "except Exception as e:\n",
    "    print(e)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Finally, everything works as expected, and we get a prediction.\n",
    "### Writing Device Agnostic PyTorch Code\n",
    "Before we wrap up, we need to talk about writing device agnostic code. This term `device agnostic` means that our code **doesn't depend on the underlying device**. You may come across this terminology when reading PyTorch documentation.\n",
    "\n",
    "For example, suppose we write code that uses the `cuda()` method everywhere, and then, we give the code to a user who **doesn't have a GPU**. This won't work. Don't worry. We've got options!\n",
    "\n",
    "Remember earlier when we saw the `cuda()` and `cpu()` methods?\n",
    "\n",
    "We'll, one of the reasons that the `to()` method is preferred, is because the `to()` method is **parameterized**, and this makes it easier to **alter the device we are choosing**, i.e. it's flexible!\n",
    "\n",
    "For example, a user could pass in `cpu` or `cuda` as an argument to a deep learning program, and this would allow the program to be device agnostic.\n",
    "\n",
    "Allowing the user of a program to pass an argument that determines the program's behavior is perhaps the best way to make a program be device agnostic. However, we can also use PyTorch to check for a supported GPU, and set our devices that way.\n",
    "```python\n",
    "torch.cuda.is_available()\n",
    "True\n",
    "```\n",
    "Like, if cuda is available, then use it!"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## PyTorch GPU Training Performance Test\n",
    "Let's see now how to add the use of a **GPU** to the **training loop**. We're going to be doing this addition with the code we've been developing so far in the series.\n",
    "\n",
    "This will allow us to easily compare times, CPU vs GPU.\n",
    "\n",
    "### Refactoring The RunManager Class\n",
    "Before we update the training loop, we need to update the `RunManager` class. Inside the `begin_run()` method we need to modify the **device** of the images **tensor** that is passed to add_graph method.\n",
    "\n",
    "It should look like this:\n",
    "```python\n",
    "def begin_run(self, run, network, loader):\n",
    "    \n",
    "    self.run_start_time = time.time()\n",
    "    \n",
    "    self.run_params = run\n",
    "    self.run_count += 1\n",
    "    \n",
    "    self.network = network\n",
    "    self.loader = loader\n",
    "    self.tb = SummaryWriter(comment=f'-{run}')\n",
    "    \n",
    "    images,labels = next(iter(self.loader))\n",
    "    grid = torchvision.utils.make_grid(images)\n",
    "    \n",
    "    self.tb.add_image('images',grid)\n",
    "    self.tb.add_graph(self.network,images.to(getattr(run, 'device', 'cpu')))\n",
    "```\n",
    "\n",
    "Here, we are using the `getattr()` **built in function** to **get the value of the device** on the run object. If the run object **doesn't have a device**, then **cpu is returned**. This makes the **code backward compatible**. It will still work if we don't specify a device for our run.\n",
    "\n",
    "Note that the **network doesn't need to be moved to a device** because it's device was set before being passed in. However, the images tensor is obtained from the loader.\n",
    "\n",
    "### Refactoring The Training Loop\n",
    "We'll set our configuration parameters to have a device. The two logical options here are `cuda` and `cpu`.\n",
    "```python\n",
    "params = OrderedDict(\n",
    "    lr = [.01]\n",
    "    ,batch_size = [1000, 10000, 20000]\n",
    "    , num_workers = [0, 1]\n",
    "    , device = ['cuda', 'cpu']\n",
    ")\n",
    "```\n",
    "With these device values added to our configuration, they'll now be available to be accessed inside our training loop.\n",
    "\n",
    "At the top of our run, we'll create a device that will be passed around inside the run and inside the training loop.\n",
    "```python\n",
    "device = torch.device(run.device)\n",
    "```\n",
    "The first place we'll use this device is when **initializing our network**.\n",
    "```python\n",
    "network = Network().to(device)\n",
    "```\n",
    "This will ensure that the network is moved to the appropriate device. Finally, we'll update our `images` and `labels` tensors by unpacking them separately and sending them to the device like so:\n",
    "```python\n",
    "images = batch[0].to(device)\n",
    "labels = batch[1].to(device)\n",
    "```\n",
    "\n",
    "**Code：**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "import time\n",
    "\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "import torch.optim as optim\n",
    "import torchvision\n",
    "import torchvision.transforms as transforms\n",
    "import pandas as pd\n",
    "\n",
    "\n",
    "from torch.utils.tensorboard import SummaryWriter\n",
    "from itertools import product\n",
    "from collections import namedtuple, OrderedDict\n",
    "\n",
    "torch.set_printoptions(linewidth=120)  # Display options for output\n",
    "torch.set_grad_enabled(True)  # Already on by default\n",
    "\n",
    "class Network(nn.Module):\n",
    "    def __init__(self):\n",
    "        super(Network, self).__init__()\n",
    "        self.conv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5)\n",
    "        self.conv2 = nn.Conv2d(in_channels=6, out_channels=12, kernel_size=5)\n",
    "\n",
    "        self.fc1 = nn.Linear(in_features=12 * 4 * 4,out_features=120)\n",
    "        self.fc2 = nn.Linear(in_features=120, out_features=60)\n",
    "        self.out = nn.Linear(in_features=60, out_features=10)\n",
    "\n",
    "    def forward(self,t):\n",
    "        t = t\n",
    "\n",
    "        t = self.conv1(t)\n",
    "        t = F.relu(t)\n",
    "        t = F.max_pool2d(t, kernel_size=2, stride=2)\n",
    "\n",
    "        t = self.conv2(t)\n",
    "        t = F.relu(t)\n",
    "        t = F.max_pool2d(t, kernel_size = 2,stride = 2)\n",
    "\n",
    "        t = t.reshape(-1,12*4*4)\n",
    "        t = self.fc1(t)\n",
    "        t = F.relu(t)\n",
    "\n",
    "        t = self.fc2(t)\n",
    "        t = F.relu(t)\n",
    "\n",
    "        t = self.out(t)\n",
    "\n",
    "        return t\n",
    "\n",
    "\n",
    "class RunBuilder():\n",
    "    @staticmethod\n",
    "    def get_runs(params):\n",
    "        Run = namedtuple('Run',params.keys())\n",
    "\n",
    "        runs = []\n",
    "        for v in product(*params.values()):\n",
    "            runs.append(Run(*v))\n",
    "\n",
    "        return runs\n",
    "\n",
    "class RunManager():\n",
    "    def __init__(self):\n",
    "        self.epoch_count = 0\n",
    "        self.epoch_loss = 0\n",
    "        self.epoch_num_correct = 0\n",
    "        self.epoch_start_time = None\n",
    "\n",
    "        self.run_params = None\n",
    "        self.run_count = 0\n",
    "        self.run_data = []\n",
    "        self.run_start_time = None\n",
    "\n",
    "        self.network = None\n",
    "        self.loader = None\n",
    "        self.tb = None\n",
    "\n",
    "    def begin_run(self, run, network, loader):\n",
    "\n",
    "        self.run_start_time = time.time()\n",
    "        self.run_params = run\n",
    "        self.run_count += 1\n",
    "\n",
    "        self.network = network\n",
    "        self.loader = loader\n",
    "        self.tb = SummaryWriter(comment=f'-{run}')\n",
    "\n",
    "        images,labels = next(iter(self.loader))\n",
    "        grid = torchvision.utils.make_grid(images)\n",
    "\n",
    "        self.tb.add_image('images',grid)\n",
    "        self.tb.add_graph(self.network, images.to(getattr(run,'device','cpu')))\n",
    "\n",
    "    def end_run(self):\n",
    "        self.tb.close()\n",
    "        self.epoch_count = 0\n",
    "\n",
    "    def begin_epoch(self):\n",
    "        self.epoch_start_time = time.time()\n",
    "\n",
    "        self.epoch_count += 1\n",
    "        self.epoch_loss = 0\n",
    "        self.epoch_num_correct = 0\n",
    "\n",
    "    def end_epoch(self):\n",
    "\n",
    "        epoch_duration = time.time() - self.epoch_start_time\n",
    "        run_duration = time.time() - self.run_start_time\n",
    "\n",
    "        loss = self.epoch_loss / len(self.loader.dataset)\n",
    "        accuracy = self.epoch_num_correct / len(self.loader.dataset)\n",
    "\n",
    "        self.tb.add_scalar('Loss',loss,self.epoch_count)\n",
    "        self.tb.add_scalar('Accuracy',accuracy,self.epoch_count)\n",
    "\n",
    "        for name,param in self.network.named_parameters():\n",
    "            self.tb.add_histogram(name,param, self.epoch_count)\n",
    "            self.tb.add_histogram(f'{name}.grad',param.grad, self.epoch_count)\n",
    "\n",
    "        results = OrderedDict()\n",
    "        results[\"run\"] = self.run_count\n",
    "        results[\"epoch\"] = self.epoch_count\n",
    "        results[\"loss\"] = loss\n",
    "        results[\"accuracy\"] = accuracy\n",
    "        results[\"epoch duration\"] = epoch_duration\n",
    "        results[\"run duration\"] = run_duration\n",
    "        for k,v in self.run_params._asdict().items():#???\n",
    "            results[k] = v\n",
    "        self.run_data.append(results)\n",
    "\n",
    "        df = pd.DataFrame.from_dict(self.run_data,orient='columns')\n",
    "\n",
    "    def get_num_correct(self, preds, labels):\n",
    "        return preds.argmax(dim=1).eq(labels).sum().item()\n",
    "\n",
    "    def track_loss(self,loss,batch):\n",
    "        self.epoch_loss += loss.item() * batch[0].shape[0]\n",
    "\n",
    "    def track_num_correct(self,preds, labels):\n",
    "        self.epoch_num_correct += self.get_num_correct(preds,labels)\n",
    "\n",
    "    def save(self, fileName):\n",
    "        pd.DataFrame.from_dict(self.run_data,orient='columns').to_csv(f'{fileName}.csv')\n",
    "\n",
    "        with open(f'{fileName}.json','w',encoding='utf-8') as f:\n",
    "            json.dump(self.run_data, f, ensure_ascii=False, indent=4)\n",
    "\n",
    "\n",
    "\n",
    "train_set = torchvision.datasets.FashionMNIST(\n",
    "    root = './data/FashionMNIST',download=True,transform=transforms.Compose([transforms.ToTensor()])\n",
    ")\n",
    "\n",
    "params = OrderedDict(\n",
    "    lr = [.01]\n",
    "    ,batch_size = [1000,10000,20000]\n",
    "    ,num_workers = [0,1]\n",
    "    , device = ['cuda','cpu']\n",
    ")\n",
    "\n",
    "m = RunManager()\n",
    "\n",
    "for run in RunBuilder.get_runs(params):\n",
    "\n",
    "    network = Network().to(run.device)\n",
    "    loader = torch.utils.data.DataLoader(train_set,batch_size = run.batch_size)\n",
    "\n",
    "    optimizer = torch.optim.Adam(network.parameters(), lr=run.lr)\n",
    "\n",
    "    m.begin_run(run,network,loader)\n",
    "\n",
    "    for epoch in range(2):\n",
    "\n",
    "        m.begin_epoch()\n",
    "        for batch in loader:\n",
    "            images = batch[0].to(run.device)\n",
    "            labels = batch[1].to(run.device)\n",
    "            preds = network(images)\n",
    "            loss = F.cross_entropy(preds,labels)\n",
    "\n",
    "            optimizer.zero_grad()\n",
    "            loss.backward()\n",
    "            optimizer.step()\n",
    "\n",
    "            m.track_loss(loss,batch)\n",
    "            m.track_num_correct(preds, labels)\n",
    "        m.end_epoch()\n",
    "    m.end_run()\n",
    "m.save('result_GPU')\n",
    "\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Results：\n",
    "<table>\n",
    "<tr><td></td><td>run</td><td>epoch</td><td>loss</td><td>accuracy</td><td>epoch duration</td><td>run duration</td><td>lr</td><td>batch_size</td><td>num_workers</td><td>device</td></tr>\n",
    "<tr><td>0</td><td>1</td><td>1</td><td>1.028867890437444</td><td>0.61065</td><td>7.907843589782715</td><td>10.033156394958496</td><td>0.01</td><td>1000</td><td>0</td><td>cuda</td></tr>\n",
    "<tr><td>1</td><td>1</td><td>2</td><td>0.5684726412097613</td><td>0.7791833333333333</td><td>7.863961696624756</td><td>18.047714710235596</td><td>0.01</td><td>1000</td><td>0</td><td>cuda</td></tr>\n",
    "<tr><td>2</td><td>2</td><td>1</td><td>1.1850317627191544</td><td>0.5521833333333334</td><td>13.191706418991089</td><td>14.168094873428345</td><td>0.01</td><td>1000</td><td>0</td><td>cpu</td></tr>\n",
    "<tr><td>3</td><td>2</td><td>2</td><td>0.6565005630254745</td><td>0.7424166666666666</td><td>12.927437782287598</td><td>27.199254989624023</td><td>0.01</td><td>1000</td><td>0</td><td>cpu</td></tr>\n",
    "<tr><td>4</td><td>3</td><td>1</td><td>1.0353816469510397</td><td>0.5979166666666667</td><td>7.899864435195923</td><td>8.667809009552002</td><td>0.01</td><td>1000</td><td>1</td><td>cuda</td></tr>\n",
    "<tr><td>5</td><td>3</td><td>2</td><td>0.5281389872233073</td><td>0.7974833333333333</td><td>7.695444345474243</td><td>16.482932567596436</td><td>0.01</td><td>1000</td><td>1</td><td>cuda</td></tr>\n",
    "<tr><td>6</td><td>4</td><td>1</td><td>1.0003941506147385</td><td>0.6135</td><td>12.406805515289307</td><td>13.301412343978882</td><td>0.01</td><td>1000</td><td>1</td><td>cpu</td></tr>\n",
    "<tr><td>7</td><td>4</td><td>2</td><td>0.5597394168376922</td><td>0.7819333333333334</td><td>12.996736526489258</td><td>26.39090061187744</td><td>0.01</td><td>1000</td><td>1</td><td>cpu</td></tr>\n",
    "<tr><td>8</td><td>5</td><td>1</td><td>2.1817620595296225</td><td>0.21105</td><td>9.979301452636719</td><td>14.71061372756958</td><td>0.01</td><td>10000</td><td>0</td><td>cuda</td></tr>\n",
    "<tr><td>9</td><td>5</td><td>2</td><td>1.5009960730870564</td><td>0.41345</td><td>7.790157794952393</td><td>22.617460012435913</td><td>0.01</td><td>10000</td><td>0</td><td>cuda</td></tr>\n",
    "<tr><td>10</td><td>6</td><td>1</td><td>2.191338042418162</td><td>0.25776666666666664</td><td>12.654212713241577</td><td>20.27781581878662</td><td>0.01</td><td>10000</td><td>0</td><td>cpu</td></tr>\n",
    "<tr><td>11</td><td>6</td><td>2</td><td>1.5385146339734395</td><td>0.4116166666666667</td><td>13.750212907791138</td><td>34.12576651573181</td><td>0.01</td><td>10000</td><td>0</td><td>cpu</td></tr>\n",
    "<tr><td>12</td><td>7</td><td>1</td><td>2.0937188069025674</td><td>0.24205</td><td>10.781154155731201</td><td>16.329310655593872</td><td>0.01</td><td>10000</td><td>1</td><td>cuda</td></tr>\n",
    "<tr><td>13</td><td>7</td><td>2</td><td>1.6782972415288289</td><td>0.3495666666666667</td><td>8.667845487594604</td><td>25.11484146118164</td><td>0.01</td><td>10000</td><td>1</td><td>cuda</td></tr>\n",
    "<tr><td>14</td><td>8</td><td>1</td><td>2.181113620599111</td><td>0.18073333333333333</td><td>12.430742979049683</td><td>20.360525608062744</td><td>0.01</td><td>10000</td><td>1</td><td>cpu</td></tr>\n",
    "<tr><td>15</td><td>8</td><td>2</td><td>1.4258009195327759</td><td>0.4513</td><td>12.419771671295166</td><td>32.86806273460388</td><td>0.01</td><td>10000</td><td>1</td><td>cpu</td></tr>\n",
    "<tr><td>16</td><td>9</td><td>1</td><td>2.281795342763265</td><td>0.113</td><td>12.738913536071777</td><td>21.488025665283203</td><td>0.01</td><td>20000</td><td>0</td><td>cuda</td></tr>\n",
    "<tr><td>17</td><td>9</td><td>2</td><td>1.8872746229171753</td><td>0.33266666666666667</td><td>7.82509183883667</td><td>29.428784132003784</td><td>0.01</td><td>20000</td><td>0</td><td>cuda</td></tr>\n",
    "<tr><td>18</td><td>10</td><td>1</td><td>2.276853322982788</td><td>0.1421</td><td>14.396483182907104</td><td>28.404006242752075</td><td>0.01</td><td>20000</td><td>0</td><td>cpu</td></tr>\n",
    "<tr><td>19</td><td>10</td><td>2</td><td>1.9167550802230835</td><td>0.29265</td><td>13.226643323898315</td><td>41.743351221084595</td><td>0.01</td><td>20000</td><td>0</td><td>cpu</td></tr>\n",
    "<tr><td>20</td><td>11</td><td>1</td><td>2.2801879247029624</td><td>0.24583333333333332</td><td>12.877546787261963</td><td>22.319284915924072</td><td>0.01</td><td>20000</td><td>1</td><td>cuda</td></tr>\n",
    "<tr><td>21</td><td>11</td><td>2</td><td>1.8660159905751545</td><td>0.39941666666666664</td><td>7.981645345687866</td><td>30.41063666343689</td><td>0.01</td><td>20000</td><td>1</td><td>cuda</td></tr>\n",
    "<tr><td>22</td><td>12</td><td>1</td><td>2.291093111038208</td><td>0.15393333333333334</td><td>13.899812459945679</td><td>27.670968294143677</td><td>0.01</td><td>20000</td><td>1</td><td>cpu</td></tr>\n",
    "<tr><td>23</td><td>12</td><td>2</td><td>1.9686975479125977</td><td>0.35846666666666666</td><td>12.957333087921143</td><td>40.71407151222229</td><td>0.01</td><td>20000</td><td>1</td><td>cpu</td></tr>\n",
    "<tr><td></td></tr>\n",
    "</table>\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Quiz 04\n",
    "1. When we move data to the GPU, we can use the cuda() method.\n",
    "```python\n",
    "network = Network().cuda()\n",
    "```\n",
    "* True<br><br>\n",
    "\n",
    "2. In neural network programming, it is ideal to put the data on the GPU while leaving the network on the CPU. This speeds up processing!  \n",
    "* False\n",
    "\n",
    "3. By default, when a PyTorch tensor or a PyTorch neural network module is created, the corresponding data is initialized on the _______________.\n",
    "* CPU\n",
    "\n",
    "4. GPUs and CPUs are compute devices that compute on data, and any two values that are directly being used with one another in a computation, must exist on the same device.\n",
    "* True\n",
    "\n",
    "5. By default, PyTorch initializes tensors on the _______________.\n",
    "* CPU\n",
    "\n",
    "6. What's the significance of the `0` in the `cuda` device below?\n",
    "```python\n",
    "> t2 = t2.to('cuda')\n",
    "> t1 + t2\n",
    "\n",
    "tensor([[ 6,  8],\n",
    "      [10, 12]], device='cuda:0')\n",
    "```\n",
    "* Given multiple GPUs, it tells us which one\n",
    "\n",
    "7. If a PyTorch program is device agnostic, the program will only run on machines that have a GPU.\n",
    "* False\n",
    "\n",
    "8. PyTorch `tensors` and PyTorch `nn.Module`instances both have device attributes.\n",
    "* False\n",
    "\n",
    "---\n",
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 05 PyTorch Dataset Normalization - Torchvision.Transforms.Normalize()\n",
    "**In this episode, we're going to learn how to normalize a dataset. We'll see how dataset normalization is carried out in code, and we'll see how normalization affects the neural network training process.**\n",
    "\n",
    "## Data Normalization\n",
    "The idea of data [normalization](https://en.wikipedia.org/wiki/Normalization_(statistics)) is an **general concept** that refers to the act of **transforming** the original values of a dataset to new values. The new values are typically encoded relative to the dataset itself and are scaled in some way.\n",
    "\n",
    "### Feature Scaling\n",
    "For this reason, another name for data normalization that is sometimes used is [feature scaling](https://en.wikipedia.org/wiki/Feature_scaling). This term refers to the fact that when normalizing data, we often transform different features of a given dataset to a similar scale.\n",
    "\n",
    "In this case, we are not just thinking of a dataset of values but rather, **a dataset of elements** that have **multiple features**, each with their on value.\n",
    "\n",
    "Suppose for example that we are dealing with a dataset of **people**, and we have two relevant features in our dataset, **age** and **weight**. In this case, we can observe that the **magnitudes** or **scales** of these these two feature sets are **different**, i.e., the weights on average ar larger than the age.\n",
    "\n",
    "This difference in magnitude can be **problematic** when comparing or computing using machine learning algorithms. Hence, this can be **one reason** we might want to scale the values of these features to some similar scale via feature scaling.\n",
    "\n",
    "### Normalization Example\n",
    "When we normalize a dataset, we said that we typically encode some form of information about each particular value relative to the dataset at large and rescale the data. Let's consider an example.\n",
    "\n",
    "Suppose we have a set $S$ of positive numbers. Now, suppose we choose a random value $x$ from the set $s$ and ask the following question:<br><br>\n",
    "<center><b>Is this value  the largest member of the set ?</b></center>\n",
    "\n",
    "In this case, the answer is that **we don't know**. We simply don't have enough information to answer the question.\n",
    "\n",
    "However, let's suppose now that we are told that the set $S$ has been normalized by **dividing** every value by **the largest value** inside the set. Given this normalization process, the information of which value is largest has been encoded and the data has been rescaled.\n",
    "\n",
    "The **largest** member of the set is **1**, and the data has been scaled to the interval <math>\n",
    "  <mo stretchy=\"false\">[</mo>\n",
    "  <mn>0</mn>\n",
    "  <mo>,</mo>\n",
    "  <mn>1</mn>\n",
    "  <mo stretchy=\"false\">]</mo>\n",
    "</math>.\n",
    "\n",
    "### What Is Standardization\n",
    "Data [standardization](https://en.wikipedia.org/wiki/Standard_score) is a specific type of normalization technique. It is sometimes referred to as **z-score normalization**. The z-score, a.k.a. **standard score**, is the transformed value for each data point.\n",
    "\n",
    "To normalize a dataset using standardization, we take every value $x$ inside the dataset and transform it to its corresponding $z$ value using the following formula:\n",
    "$$z=\\frac{x-mean}{std}$$\n",
    "\n",
    "After performing this computation on every $x$ value inside our dataset, we have a new normalized dataset of $z$ values. The mean and standard deviation values are with respect to the dataset as a whole.\n",
    "\n",
    "Suppose that a given set $S$ of numbers has $n$ members.\n",
    "\n",
    "The mean of the set $S$ is given by the following equation:\n",
    "$$ mean = \\frac{1}{n} \\left( \\sum_{i=1}^{n} x_{i} \\right) $$\n",
    "The standard deviation of the set  is given by the following equation:\n",
    "$$ std = \\sqrt{\\frac{1}{n} \\left(\\sum\\limits_{i=1}^{n} \\left( x_{i}-mean \\right) ^{2}\\right)} $$\n",
    "We have seen how normalizing by dividing by the largest value had the effect of transforming the largest value to **1**, this standardization process transforms the dataset's mean value to **0** and its standard deviation to **1**.\n",
    "\n",
    "It's **important** to note that when we normalize a dataset, we typically **group** these operations by **feature**. This means that the mean and standard deviation values are relative to **each feature set** that's being normalized. If we are working with **images**, the features are the **RGB color channels**, so we **normalize each color channel** with respect to the **mean** and **standard deviation** values calculated across **all pixels** in every **images** for the respective **color channel**.\n",
    "\n",
    "## Normalize A Dataset In Code\n",
    "Let's jump into a code example. The first step is to initialize our dataset, so in this example we'll use the Fashion MNIST dataset that we've been working with up to this point in the series.\n",
    "```python\n",
    "train_set = torchvision.datasets.FashionMNIST(\n",
    "    root = './data'\n",
    "    ,train=True\n",
    "    ,download = True\n",
    "    ,transform = transforms.Compose([\n",
    "        transforms.ToTensor()\n",
    "    ])\n",
    ")\n",
    "```\n",
    "PyTorch allows us to normalize our dataset using the **standardization process** we've just seen by passing in the mean and standard deviation values for **each color channel** to the `Normalize()` transform.\n",
    "```python\n",
    "torchvision.transforms.Normalize(\n",
    "    [meanOfChannel1, meanOfChannel2, meanOfChannel3] \n",
    "    , [stdOfChannel1, stdOfChannel2, stdOfChannel3] \n",
    ")\n",
    "```\n",
    "\n",
    "Since the images inside our dataset only have a **single channel**, we only need to pass in solo mean and standard deviation values. In order to do this we need to first calculate these values. Sometimes the values might be posted online somewhere, so we can get them that way. However, when in doubt, we can just calculate the manually.\n",
    "\n",
    "There are two ways it can be done. The easy way, and the harder way. The **easy way** can be achieved if the dataset is **small enough to fit into memory all at once**. **Otherwise**, we have to **iterate over the data** which is slightly harder.\n",
    "\n",
    "### Calculating `mean` And `std` The Easy Way\n",
    "The easy way is easy. All we have to do is load the dataset using the data loader and get **a single batch tensor** that contains **all the data**. To do this we set the **batch size** to be equal to the **training set length**.\n",
    "```python\n",
    "loader = DataLoader(train_set, batch_size = len(train_set), num_workers = 1)\n",
    "data = next(iter(loader))\n",
    "data[0].mean(),data[0].std()\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 69,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<torch.autograd.grad_mode.set_grad_enabled at 0x276aad111f0>"
      ]
     },
     "execution_count": 69,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import json\n",
    "import time\n",
    "\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "import torch.optim as optim\n",
    "import torchvision\n",
    "import torchvision.transforms as transforms\n",
    "import pandas as pd\n",
    "\n",
    "\n",
    "from torch.utils.tensorboard import SummaryWriter\n",
    "from itertools import product\n",
    "from collections import namedtuple, OrderedDict\n",
    "from IPython import display\n",
    "\n",
    "torch.set_printoptions(linewidth=120)  # Display options for output\n",
    "torch.set_grad_enabled(True)  # Already on by default"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**from IPython import display**不要落了这个"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_set = torchvision.datasets.FashionMNIST(\n",
    "    root='./data'\n",
    "    ,train=True\n",
    "    ,download=True\n",
    "    ,transform=transforms.Compose([\n",
    "        transforms.ToTensor()\n",
    "    ])\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor(0.2860), tensor(0.3530))"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "loader = torch.utils.data.DataLoader(train_set, batch_size = len(train_set), num_workers = 1)\n",
    "data = next(iter(loader))\n",
    "data[0].mean(),data[0].std() #data[0]是image，data[1]是label"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here, we can obtain the mean and standard deviation values by simply using the corresponding PyTorch tensor methods.\n",
    "### Calculating mean And std The Hard Way\n",
    "The hard way is hard because we need to **manually** implement the formulas for the mean and standard deviation and **iterate** over smaller batches of the dataset.\n",
    "\n",
    "First, we create a data loader with a **smaller batch size**."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [],
   "source": [
    "loader = torch.utils.data.DataLoader(train_set,batch_size = 1000, num_workers = 1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<torch.utils.data.dataloader.DataLoader at 0x276aad096a0>"
      ]
     },
     "execution_count": 35,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "loader"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Then, we calculate our $n$ value or **total number** of **pixels**:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [],
   "source": [
    "num_of_pixels = len(train_set) * 28 * 28"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that the $28 * 28$ is the height and width of the images inside our dataset. Now, we **sum** the **pixels values** by **iterating over each batch**, and we calculate the **mean** by **dividing** this **sum** by the total number of pixels. 因为是单通道的灰度图，这些pixel value在0到1之间"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [],
   "source": [
    "total_sum = 0\n",
    "for batch in loader:\n",
    "    total_sum += batch[0].sum() # batch[0]为image的pixel tensor\n",
    "mean = total_sum / num_of_pixels"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor(0.2860)"
      ]
     },
     "execution_count": 36,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "mean"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next, we calculate the sum of the **squared errors** by iterating thorough each batch, and this allows us to calculate the standard deviation by dividing the sum of the squared errors by the total number of pixels and square rooting the result."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "metadata": {},
   "outputs": [],
   "source": [
    "sum_of_squared_error = 0\n",
    "for batch in loader:\n",
    "    sum_of_squared_error += ((batch[0] - mean).pow(2)).sum() # 平方和\n",
    "std = torch.sqrt(sum_of_squared_error / num_of_pixels)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor(0.2860), tensor(0.3530))"
      ]
     },
     "execution_count": 38,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "mean,std"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Using The `mean` And `std` Values\n",
    "Our task is to use these values to transform the pixel values inside our dataset to their corresponding standardized values. To do this we create a new train_set only this time we pass a **normalization transform** to the transforms composition."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_set_normal = torchvision.datasets.FashionMNIST(\n",
    "    root='./data'\n",
    "    ,train = True\n",
    "    ,download=True\n",
    "    ,transform=transforms.Compose([\n",
    "        transforms.ToTensor()\n",
    "        ,transforms.Normalize(mean, std)\n",
    "    ])\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that the **order** of the transforms **matters** inside the composition. The images are loaded as Python **PIL objects**, so we must add the `ToTensor()` transform before the `Normalize()` transform due to the fact that the `Normalize()` transform expects a **tensor** as input.\n",
    "\n",
    "Now, that our dataset has a `Normalize()` transform, the data will be **normalized** when it is **loaded** by the data loader. Remember, for each image the **following transform** will be applied to **every pixel** in the image.$$z=\\frac{x-mean}{std}$$\n",
    "\n",
    "This has the effect of rescaling our data relative to the mean and standard deviation of the dataset. Let's see this in action by recalculating these values."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor(1.2368e-05), tensor(1.0000))"
      ]
     },
     "execution_count": 41,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "loader = torch.utils.data.DataLoader(\n",
    "    train_set_normal\n",
    "    ,batch_size=len(train_set)\n",
    "    ,num_workers = 1\n",
    ")\n",
    "data = next(iter(loader))\n",
    "data[0].mean(),data[0].std()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here, we can see that the mean value is now **0** and the standard deviation value is now **1**.\n",
    "## Training With Normalized Data\n",
    "Let's see now how training with and without normalized data affects the training process. To this test, we'll do 20 epochs under each condition.\n",
    "\n",
    "Let's create a **dictionary** of **training sets** that we can use to run the test in the framework that we've been building throughout the course."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "metadata": {},
   "outputs": [],
   "source": [
    "trainsets = {\n",
    "    'not_normal':train_set\n",
    "    ,'normal':train_set_normal\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now, we can add these two train_sets to our configuration and access the values inside our runs loop."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "metadata": {},
   "outputs": [],
   "source": [
    "params = OrderedDict(\n",
    "    lr = [.01]\n",
    "    ,batch_size = [1000]\n",
    "    ,num_workers = [1]\n",
    "    ,device = ['cuda']\n",
    "    ,trainset = ['not_normal','normal']\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "metadata": {},
   "outputs": [],
   "source": [
    "class Network(nn.Module):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self.conv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5)\n",
    "        self.conv2 = nn.Conv2d(in_channels=6, out_channels=12, kernel_size=5)\n",
    "\n",
    "        self.fc1 = nn.Linear(in_features=12 * 4 * 4, out_features=120)\n",
    "        self.fc2 = nn.Linear(in_features=120, out_features=60)\n",
    "        self.out = nn.Linear(in_features=60, out_features=10)\n",
    "\n",
    "    def forward(self, t):\n",
    "        t = t\n",
    "\n",
    "        t = self.conv1(t)\n",
    "        t = F.relu(t)\n",
    "        t = F.max_pool2d(t,  kernel_size=2, stride=2)\n",
    "\n",
    "        t = self.conv2(t)\n",
    "        t = F.relu(t)\n",
    "        t = F.max_pool2d(t, kernel_size=2, stride=2)\n",
    "\n",
    "        t = t.reshape(-1,12*4*4)\n",
    "        t = self.fc1(t)\n",
    "        t = F.relu(t)\n",
    "\n",
    "        t = self.fc2(t)\n",
    "        t = F.relu(t)\n",
    "\n",
    "        t = self.out(t)\n",
    "\n",
    "        return t\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "metadata": {},
   "outputs": [],
   "source": [
    "class RunBuilder():\n",
    "    @staticmethod\n",
    "    def get_runs(params):\n",
    "        Run = namedtuple('Run', params.keys())\n",
    "\n",
    "        runs = []\n",
    "        for v in product(*params.values()):\n",
    "            runs.append(Run(*v))\n",
    "\n",
    "        return runs"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 70,
   "metadata": {},
   "outputs": [],
   "source": [
    "class RunManager():\n",
    "    def __init__(self):\n",
    "        self.epoch_count = 0\n",
    "        self.epoch_loss = 0\n",
    "        self.epoch_num_correct = 0\n",
    "        self.epoch_start_time = None\n",
    "\n",
    "        self.run_params = None\n",
    "        self.run_count = 0\n",
    "        self.run_data = []\n",
    "        self.run_start_time = None\n",
    "\n",
    "        self.network = None\n",
    "        self.loader = None\n",
    "        self.tb = None\n",
    "\n",
    "    def begin_run(self, run, network, loader):\n",
    "\n",
    "        self.run_start_time = time.time()\n",
    "        self.run_params = run\n",
    "        self.run_count += 1\n",
    "\n",
    "        self.network = network\n",
    "        self.loader = loader\n",
    "        self.tb = SummaryWriter(comment=f'-{run}')\n",
    "\n",
    "        images,labels = next(iter(self.loader))\n",
    "        grid = torchvision.utils.make_grid(images)\n",
    "\n",
    "        self.tb.add_image('images',grid)\n",
    "        self.tb.add_graph(self.network, images.to(getattr(run,'device','cpu')))\n",
    "\n",
    "    def end_run(self):\n",
    "        self.tb.close()\n",
    "        self.epoch_count = 0\n",
    "\n",
    "    def begin_epoch(self):\n",
    "        self.epoch_start_time = time.time()\n",
    "\n",
    "        self.epoch_count += 1\n",
    "        self.epoch_loss = 0\n",
    "        self.epoch_num_correct = 0\n",
    "    \n",
    "    def end_epoch(self):\n",
    "\n",
    "        epoch_duration = time.time() - self.epoch_start_time\n",
    "        run_duration = time.time() - self.run_start_time\n",
    "\n",
    "        loss = self.epoch_loss / len(self.loader.dataset)\n",
    "        accuracy = self.epoch_num_correct / len(self.loader.dataset)\n",
    "\n",
    "        self.tb.add_scalar('Loss',loss,self.epoch_count)\n",
    "        self.tb.add_scalar('Accuracy',accuracy,self.epoch_count)\n",
    "\n",
    "        for name,param in self.network.named_parameters():\n",
    "            self.tb.add_histogram(name, param, self.epoch_count)\n",
    "            self.tb.add_histogram(f'{name}.grad', param.grad, self.epoch_count)\n",
    "\n",
    "        results = OrderedDict()\n",
    "        results[\"run\"] = self.run_count\n",
    "        results[\"epoch\"] = self.epoch_count\n",
    "        results[\"loss\"] = loss\n",
    "        results[\"accuracy\"] = accuracy\n",
    "        results[\"epoch duration\"] = epoch_duration\n",
    "        results[\"run duration\"] = run_duration\n",
    "        for k,v in self.run_params._asdict().items():results[k] = v\n",
    "        self.run_data.append(results)\n",
    "\n",
    "        df = pd.DataFrame.from_dict(self.run_data,orient='columns')\n",
    "        \n",
    "        \n",
    "        display.clear_output(wait=True)\n",
    "        display.display(df)\n",
    "\n",
    "    def get_num_correct(self,preds, labels):\n",
    "        return preds.argmax(dim=1).eq(labels).sum().item()\n",
    "\n",
    "    def track_loss(self,loss,batch):\n",
    "        self.epoch_loss += loss.item() * batch[0].shape[0]\n",
    "\n",
    "    def track_num_correct(self,preds,labels):\n",
    "        self.epoch_num_correct += self.get_num_correct(preds,labels)\n",
    "\n",
    "    def save(self, fileName):\n",
    "\n",
    "        pd.DataFrame.from_dict(\n",
    "            self.run_data,orient = 'columns'\n",
    "        ).to_csv(f'{fileName}.csv')\n",
    "\n",
    "        with open(f'{fileName}.json','w',encoding='utf-8') as f:\n",
    "            json.dump(self.run_data,f, ensure_ascii=False, indent = 4)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 71,
   "metadata": {},
   "outputs": [],
   "source": [
    "m = RunManager()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 72,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>run</th>\n",
       "      <th>epoch</th>\n",
       "      <th>loss</th>\n",
       "      <th>accuracy</th>\n",
       "      <th>epoch duration</th>\n",
       "      <th>run duration</th>\n",
       "      <th>lr</th>\n",
       "      <th>batch_size</th>\n",
       "      <th>num_workers</th>\n",
       "      <th>device</th>\n",
       "      <th>trainset</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>1</td>\n",
       "      <td>1</td>\n",
       "      <td>0.947734</td>\n",
       "      <td>0.652950</td>\n",
       "      <td>7.394213</td>\n",
       "      <td>9.470620</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>1</td>\n",
       "      <td>2</td>\n",
       "      <td>0.503703</td>\n",
       "      <td>0.806650</td>\n",
       "      <td>7.257630</td>\n",
       "      <td>16.862890</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>1</td>\n",
       "      <td>3</td>\n",
       "      <td>0.415752</td>\n",
       "      <td>0.846583</td>\n",
       "      <td>7.157878</td>\n",
       "      <td>24.144438</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>1</td>\n",
       "      <td>4</td>\n",
       "      <td>0.370302</td>\n",
       "      <td>0.863400</td>\n",
       "      <td>7.059152</td>\n",
       "      <td>31.336203</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>1</td>\n",
       "      <td>5</td>\n",
       "      <td>0.340771</td>\n",
       "      <td>0.874400</td>\n",
       "      <td>7.065094</td>\n",
       "      <td>38.531948</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>5</th>\n",
       "      <td>1</td>\n",
       "      <td>6</td>\n",
       "      <td>0.310214</td>\n",
       "      <td>0.884050</td>\n",
       "      <td>7.095062</td>\n",
       "      <td>45.756663</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>6</th>\n",
       "      <td>1</td>\n",
       "      <td>7</td>\n",
       "      <td>0.292965</td>\n",
       "      <td>0.891167</td>\n",
       "      <td>7.107009</td>\n",
       "      <td>52.999277</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>7</th>\n",
       "      <td>1</td>\n",
       "      <td>8</td>\n",
       "      <td>0.282665</td>\n",
       "      <td>0.895983</td>\n",
       "      <td>7.073123</td>\n",
       "      <td>60.207014</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>8</th>\n",
       "      <td>1</td>\n",
       "      <td>9</td>\n",
       "      <td>0.269787</td>\n",
       "      <td>0.900000</td>\n",
       "      <td>7.095014</td>\n",
       "      <td>67.437665</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>9</th>\n",
       "      <td>1</td>\n",
       "      <td>10</td>\n",
       "      <td>0.265291</td>\n",
       "      <td>0.901583</td>\n",
       "      <td>7.113964</td>\n",
       "      <td>74.686269</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>10</th>\n",
       "      <td>1</td>\n",
       "      <td>11</td>\n",
       "      <td>0.263894</td>\n",
       "      <td>0.901067</td>\n",
       "      <td>7.125991</td>\n",
       "      <td>81.941913</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>11</th>\n",
       "      <td>1</td>\n",
       "      <td>12</td>\n",
       "      <td>0.258176</td>\n",
       "      <td>0.902900</td>\n",
       "      <td>7.133910</td>\n",
       "      <td>89.209466</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>12</th>\n",
       "      <td>1</td>\n",
       "      <td>13</td>\n",
       "      <td>0.247274</td>\n",
       "      <td>0.908200</td>\n",
       "      <td>7.460052</td>\n",
       "      <td>96.805156</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>13</th>\n",
       "      <td>1</td>\n",
       "      <td>14</td>\n",
       "      <td>0.242614</td>\n",
       "      <td>0.909133</td>\n",
       "      <td>7.289548</td>\n",
       "      <td>104.233333</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>14</th>\n",
       "      <td>1</td>\n",
       "      <td>15</td>\n",
       "      <td>0.240678</td>\n",
       "      <td>0.909400</td>\n",
       "      <td>7.109976</td>\n",
       "      <td>111.478945</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>15</th>\n",
       "      <td>1</td>\n",
       "      <td>16</td>\n",
       "      <td>0.235477</td>\n",
       "      <td>0.910983</td>\n",
       "      <td>7.080069</td>\n",
       "      <td>118.692657</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>16</th>\n",
       "      <td>1</td>\n",
       "      <td>17</td>\n",
       "      <td>0.240044</td>\n",
       "      <td>0.908883</td>\n",
       "      <td>7.065121</td>\n",
       "      <td>125.898401</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>17</th>\n",
       "      <td>1</td>\n",
       "      <td>18</td>\n",
       "      <td>0.225766</td>\n",
       "      <td>0.914917</td>\n",
       "      <td>7.224724</td>\n",
       "      <td>133.260759</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>18</th>\n",
       "      <td>1</td>\n",
       "      <td>19</td>\n",
       "      <td>0.225115</td>\n",
       "      <td>0.914500</td>\n",
       "      <td>7.580782</td>\n",
       "      <td>141.047989</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>19</th>\n",
       "      <td>1</td>\n",
       "      <td>20</td>\n",
       "      <td>0.221359</td>\n",
       "      <td>0.915817</td>\n",
       "      <td>7.379379</td>\n",
       "      <td>148.569969</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>20</th>\n",
       "      <td>2</td>\n",
       "      <td>1</td>\n",
       "      <td>0.833520</td>\n",
       "      <td>0.681400</td>\n",
       "      <td>10.044157</td>\n",
       "      <td>12.189428</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>21</th>\n",
       "      <td>2</td>\n",
       "      <td>2</td>\n",
       "      <td>0.451986</td>\n",
       "      <td>0.833567</td>\n",
       "      <td>9.743954</td>\n",
       "      <td>22.072975</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>22</th>\n",
       "      <td>2</td>\n",
       "      <td>3</td>\n",
       "      <td>0.385024</td>\n",
       "      <td>0.859483</td>\n",
       "      <td>9.772849</td>\n",
       "      <td>31.972456</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>23</th>\n",
       "      <td>2</td>\n",
       "      <td>4</td>\n",
       "      <td>0.347837</td>\n",
       "      <td>0.871950</td>\n",
       "      <td>9.683090</td>\n",
       "      <td>41.789188</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>24</th>\n",
       "      <td>2</td>\n",
       "      <td>5</td>\n",
       "      <td>0.326322</td>\n",
       "      <td>0.880483</td>\n",
       "      <td>10.171783</td>\n",
       "      <td>52.093617</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>25</th>\n",
       "      <td>2</td>\n",
       "      <td>6</td>\n",
       "      <td>0.312304</td>\n",
       "      <td>0.884683</td>\n",
       "      <td>10.397179</td>\n",
       "      <td>62.627430</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>26</th>\n",
       "      <td>2</td>\n",
       "      <td>7</td>\n",
       "      <td>0.300420</td>\n",
       "      <td>0.889750</td>\n",
       "      <td>9.819810</td>\n",
       "      <td>72.573901</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>27</th>\n",
       "      <td>2</td>\n",
       "      <td>8</td>\n",
       "      <td>0.286401</td>\n",
       "      <td>0.894633</td>\n",
       "      <td>10.025243</td>\n",
       "      <td>82.735780</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>28</th>\n",
       "      <td>2</td>\n",
       "      <td>9</td>\n",
       "      <td>0.278579</td>\n",
       "      <td>0.895850</td>\n",
       "      <td>12.863579</td>\n",
       "      <td>95.786824</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>29</th>\n",
       "      <td>2</td>\n",
       "      <td>10</td>\n",
       "      <td>0.272421</td>\n",
       "      <td>0.897983</td>\n",
       "      <td>10.387205</td>\n",
       "      <td>106.320636</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>30</th>\n",
       "      <td>2</td>\n",
       "      <td>11</td>\n",
       "      <td>0.263395</td>\n",
       "      <td>0.901750</td>\n",
       "      <td>10.303489</td>\n",
       "      <td>116.774724</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>31</th>\n",
       "      <td>2</td>\n",
       "      <td>12</td>\n",
       "      <td>0.253376</td>\n",
       "      <td>0.905383</td>\n",
       "      <td>13.399867</td>\n",
       "      <td>130.307212</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>32</th>\n",
       "      <td>2</td>\n",
       "      <td>13</td>\n",
       "      <td>0.245732</td>\n",
       "      <td>0.908450</td>\n",
       "      <td>12.127912</td>\n",
       "      <td>142.584723</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>33</th>\n",
       "      <td>2</td>\n",
       "      <td>14</td>\n",
       "      <td>0.242410</td>\n",
       "      <td>0.909450</td>\n",
       "      <td>11.944404</td>\n",
       "      <td>154.700178</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>34</th>\n",
       "      <td>2</td>\n",
       "      <td>15</td>\n",
       "      <td>0.235163</td>\n",
       "      <td>0.911317</td>\n",
       "      <td>11.828560</td>\n",
       "      <td>166.674348</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>35</th>\n",
       "      <td>2</td>\n",
       "      <td>16</td>\n",
       "      <td>0.229931</td>\n",
       "      <td>0.913033</td>\n",
       "      <td>11.896156</td>\n",
       "      <td>178.719088</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>36</th>\n",
       "      <td>2</td>\n",
       "      <td>17</td>\n",
       "      <td>0.230035</td>\n",
       "      <td>0.913633</td>\n",
       "      <td>11.658277</td>\n",
       "      <td>190.525438</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>37</th>\n",
       "      <td>2</td>\n",
       "      <td>18</td>\n",
       "      <td>0.231750</td>\n",
       "      <td>0.911200</td>\n",
       "      <td>11.905782</td>\n",
       "      <td>202.583802</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>38</th>\n",
       "      <td>2</td>\n",
       "      <td>19</td>\n",
       "      <td>0.227248</td>\n",
       "      <td>0.913667</td>\n",
       "      <td>12.207076</td>\n",
       "      <td>214.937480</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>39</th>\n",
       "      <td>2</td>\n",
       "      <td>20</td>\n",
       "      <td>0.222482</td>\n",
       "      <td>0.915383</td>\n",
       "      <td>12.208193</td>\n",
       "      <td>227.286297</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "    run  epoch      loss  accuracy  epoch duration  run duration    lr  \\\n",
       "0     1      1  0.947734  0.652950        7.394213      9.470620  0.01   \n",
       "1     1      2  0.503703  0.806650        7.257630     16.862890  0.01   \n",
       "2     1      3  0.415752  0.846583        7.157878     24.144438  0.01   \n",
       "3     1      4  0.370302  0.863400        7.059152     31.336203  0.01   \n",
       "4     1      5  0.340771  0.874400        7.065094     38.531948  0.01   \n",
       "5     1      6  0.310214  0.884050        7.095062     45.756663  0.01   \n",
       "6     1      7  0.292965  0.891167        7.107009     52.999277  0.01   \n",
       "7     1      8  0.282665  0.895983        7.073123     60.207014  0.01   \n",
       "8     1      9  0.269787  0.900000        7.095014     67.437665  0.01   \n",
       "9     1     10  0.265291  0.901583        7.113964     74.686269  0.01   \n",
       "10    1     11  0.263894  0.901067        7.125991     81.941913  0.01   \n",
       "11    1     12  0.258176  0.902900        7.133910     89.209466  0.01   \n",
       "12    1     13  0.247274  0.908200        7.460052     96.805156  0.01   \n",
       "13    1     14  0.242614  0.909133        7.289548    104.233333  0.01   \n",
       "14    1     15  0.240678  0.909400        7.109976    111.478945  0.01   \n",
       "15    1     16  0.235477  0.910983        7.080069    118.692657  0.01   \n",
       "16    1     17  0.240044  0.908883        7.065121    125.898401  0.01   \n",
       "17    1     18  0.225766  0.914917        7.224724    133.260759  0.01   \n",
       "18    1     19  0.225115  0.914500        7.580782    141.047989  0.01   \n",
       "19    1     20  0.221359  0.915817        7.379379    148.569969  0.01   \n",
       "20    2      1  0.833520  0.681400       10.044157     12.189428  0.01   \n",
       "21    2      2  0.451986  0.833567        9.743954     22.072975  0.01   \n",
       "22    2      3  0.385024  0.859483        9.772849     31.972456  0.01   \n",
       "23    2      4  0.347837  0.871950        9.683090     41.789188  0.01   \n",
       "24    2      5  0.326322  0.880483       10.171783     52.093617  0.01   \n",
       "25    2      6  0.312304  0.884683       10.397179     62.627430  0.01   \n",
       "26    2      7  0.300420  0.889750        9.819810     72.573901  0.01   \n",
       "27    2      8  0.286401  0.894633       10.025243     82.735780  0.01   \n",
       "28    2      9  0.278579  0.895850       12.863579     95.786824  0.01   \n",
       "29    2     10  0.272421  0.897983       10.387205    106.320636  0.01   \n",
       "30    2     11  0.263395  0.901750       10.303489    116.774724  0.01   \n",
       "31    2     12  0.253376  0.905383       13.399867    130.307212  0.01   \n",
       "32    2     13  0.245732  0.908450       12.127912    142.584723  0.01   \n",
       "33    2     14  0.242410  0.909450       11.944404    154.700178  0.01   \n",
       "34    2     15  0.235163  0.911317       11.828560    166.674348  0.01   \n",
       "35    2     16  0.229931  0.913033       11.896156    178.719088  0.01   \n",
       "36    2     17  0.230035  0.913633       11.658277    190.525438  0.01   \n",
       "37    2     18  0.231750  0.911200       11.905782    202.583802  0.01   \n",
       "38    2     19  0.227248  0.913667       12.207076    214.937480  0.01   \n",
       "39    2     20  0.222482  0.915383       12.208193    227.286297  0.01   \n",
       "\n",
       "    batch_size  num_workers device    trainset  \n",
       "0         1000            1   cuda  not_normal  \n",
       "1         1000            1   cuda  not_normal  \n",
       "2         1000            1   cuda  not_normal  \n",
       "3         1000            1   cuda  not_normal  \n",
       "4         1000            1   cuda  not_normal  \n",
       "5         1000            1   cuda  not_normal  \n",
       "6         1000            1   cuda  not_normal  \n",
       "7         1000            1   cuda  not_normal  \n",
       "8         1000            1   cuda  not_normal  \n",
       "9         1000            1   cuda  not_normal  \n",
       "10        1000            1   cuda  not_normal  \n",
       "11        1000            1   cuda  not_normal  \n",
       "12        1000            1   cuda  not_normal  \n",
       "13        1000            1   cuda  not_normal  \n",
       "14        1000            1   cuda  not_normal  \n",
       "15        1000            1   cuda  not_normal  \n",
       "16        1000            1   cuda  not_normal  \n",
       "17        1000            1   cuda  not_normal  \n",
       "18        1000            1   cuda  not_normal  \n",
       "19        1000            1   cuda  not_normal  \n",
       "20        1000            1   cuda      normal  \n",
       "21        1000            1   cuda      normal  \n",
       "22        1000            1   cuda      normal  \n",
       "23        1000            1   cuda      normal  \n",
       "24        1000            1   cuda      normal  \n",
       "25        1000            1   cuda      normal  \n",
       "26        1000            1   cuda      normal  \n",
       "27        1000            1   cuda      normal  \n",
       "28        1000            1   cuda      normal  \n",
       "29        1000            1   cuda      normal  \n",
       "30        1000            1   cuda      normal  \n",
       "31        1000            1   cuda      normal  \n",
       "32        1000            1   cuda      normal  \n",
       "33        1000            1   cuda      normal  \n",
       "34        1000            1   cuda      normal  \n",
       "35        1000            1   cuda      normal  \n",
       "36        1000            1   cuda      normal  \n",
       "37        1000            1   cuda      normal  \n",
       "38        1000            1   cuda      normal  \n",
       "39        1000            1   cuda      normal  "
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "for run in RunBuilder.get_runs(params):\n",
    "\n",
    "    device = torch.device(run.device)\n",
    "    network = Network().to(device)\n",
    "    loader = torch.utils.data.DataLoader(\n",
    "          trainsets[run.trainset]\n",
    "        , batch_size=run.batch_size\n",
    "        , num_workers=run.num_workers\n",
    "    )\n",
    "    optimizer = optim.Adam(network.parameters(), lr=run.lr)\n",
    "\n",
    "    m.begin_run(run, network, loader)\n",
    "    for epoch in range(20):\n",
    "        m.begin_epoch()\n",
    "        for batch in loader:\n",
    "\n",
     "            images = batch[0].to(device) # Move Images To Device\n",
     "            labels = batch[1].to(device) # Move Labels To Device\n",
    "            preds = network(images) # Pass Batch\n",
    "            loss = F.cross_entropy(preds, labels) # Calculate Loss\n",
    "            optimizer.zero_grad() # Zero Gradients\n",
    "            loss.backward() # Calculate Gradients\n",
    "            optimizer.step() # Update Weights\n",
    "\n",
    "            m.track_loss(loss, batch)\n",
    "            m.track_num_correct(preds, labels)\n",
    "        m.end_epoch()\n",
    "    m.end_run()\n",
    "m.save('results2')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 73,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>run</th>\n",
       "      <th>epoch</th>\n",
       "      <th>loss</th>\n",
       "      <th>accuracy</th>\n",
       "      <th>epoch duration</th>\n",
       "      <th>run duration</th>\n",
       "      <th>lr</th>\n",
       "      <th>batch_size</th>\n",
       "      <th>num_workers</th>\n",
       "      <th>device</th>\n",
       "      <th>trainset</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>19</th>\n",
       "      <td>1</td>\n",
       "      <td>20</td>\n",
       "      <td>0.221359</td>\n",
       "      <td>0.915817</td>\n",
       "      <td>7.379379</td>\n",
       "      <td>148.569969</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>39</th>\n",
       "      <td>2</td>\n",
       "      <td>20</td>\n",
       "      <td>0.222482</td>\n",
       "      <td>0.915383</td>\n",
       "      <td>12.208193</td>\n",
       "      <td>227.286297</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>17</th>\n",
       "      <td>1</td>\n",
       "      <td>18</td>\n",
       "      <td>0.225766</td>\n",
       "      <td>0.914917</td>\n",
       "      <td>7.224724</td>\n",
       "      <td>133.260759</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>18</th>\n",
       "      <td>1</td>\n",
       "      <td>19</td>\n",
       "      <td>0.225115</td>\n",
       "      <td>0.914500</td>\n",
       "      <td>7.580782</td>\n",
       "      <td>141.047989</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>38</th>\n",
       "      <td>2</td>\n",
       "      <td>19</td>\n",
       "      <td>0.227248</td>\n",
       "      <td>0.913667</td>\n",
       "      <td>12.207076</td>\n",
       "      <td>214.937480</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>36</th>\n",
       "      <td>2</td>\n",
       "      <td>17</td>\n",
       "      <td>0.230035</td>\n",
       "      <td>0.913633</td>\n",
       "      <td>11.658277</td>\n",
       "      <td>190.525438</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>35</th>\n",
       "      <td>2</td>\n",
       "      <td>16</td>\n",
       "      <td>0.229931</td>\n",
       "      <td>0.913033</td>\n",
       "      <td>11.896156</td>\n",
       "      <td>178.719088</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>34</th>\n",
       "      <td>2</td>\n",
       "      <td>15</td>\n",
       "      <td>0.235163</td>\n",
       "      <td>0.911317</td>\n",
       "      <td>11.828560</td>\n",
       "      <td>166.674348</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>37</th>\n",
       "      <td>2</td>\n",
       "      <td>18</td>\n",
       "      <td>0.231750</td>\n",
       "      <td>0.911200</td>\n",
       "      <td>11.905782</td>\n",
       "      <td>202.583802</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>15</th>\n",
       "      <td>1</td>\n",
       "      <td>16</td>\n",
       "      <td>0.235477</td>\n",
       "      <td>0.910983</td>\n",
       "      <td>7.080069</td>\n",
       "      <td>118.692657</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>33</th>\n",
       "      <td>2</td>\n",
       "      <td>14</td>\n",
       "      <td>0.242410</td>\n",
       "      <td>0.909450</td>\n",
       "      <td>11.944404</td>\n",
       "      <td>154.700178</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>14</th>\n",
       "      <td>1</td>\n",
       "      <td>15</td>\n",
       "      <td>0.240678</td>\n",
       "      <td>0.909400</td>\n",
       "      <td>7.109976</td>\n",
       "      <td>111.478945</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>13</th>\n",
       "      <td>1</td>\n",
       "      <td>14</td>\n",
       "      <td>0.242614</td>\n",
       "      <td>0.909133</td>\n",
       "      <td>7.289548</td>\n",
       "      <td>104.233333</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>16</th>\n",
       "      <td>1</td>\n",
       "      <td>17</td>\n",
       "      <td>0.240044</td>\n",
       "      <td>0.908883</td>\n",
       "      <td>7.065121</td>\n",
       "      <td>125.898401</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>32</th>\n",
       "      <td>2</td>\n",
       "      <td>13</td>\n",
       "      <td>0.245732</td>\n",
       "      <td>0.908450</td>\n",
       "      <td>12.127912</td>\n",
       "      <td>142.584723</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>12</th>\n",
       "      <td>1</td>\n",
       "      <td>13</td>\n",
       "      <td>0.247274</td>\n",
       "      <td>0.908200</td>\n",
       "      <td>7.460052</td>\n",
       "      <td>96.805156</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>31</th>\n",
       "      <td>2</td>\n",
       "      <td>12</td>\n",
       "      <td>0.253376</td>\n",
       "      <td>0.905383</td>\n",
       "      <td>13.399867</td>\n",
       "      <td>130.307212</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>11</th>\n",
       "      <td>1</td>\n",
       "      <td>12</td>\n",
       "      <td>0.258176</td>\n",
       "      <td>0.902900</td>\n",
       "      <td>7.133910</td>\n",
       "      <td>89.209466</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>30</th>\n",
       "      <td>2</td>\n",
       "      <td>11</td>\n",
       "      <td>0.263395</td>\n",
       "      <td>0.901750</td>\n",
       "      <td>10.303489</td>\n",
       "      <td>116.774724</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>9</th>\n",
       "      <td>1</td>\n",
       "      <td>10</td>\n",
       "      <td>0.265291</td>\n",
       "      <td>0.901583</td>\n",
       "      <td>7.113964</td>\n",
       "      <td>74.686269</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>10</th>\n",
       "      <td>1</td>\n",
       "      <td>11</td>\n",
       "      <td>0.263894</td>\n",
       "      <td>0.901067</td>\n",
       "      <td>7.125991</td>\n",
       "      <td>81.941913</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>8</th>\n",
       "      <td>1</td>\n",
       "      <td>9</td>\n",
       "      <td>0.269787</td>\n",
       "      <td>0.900000</td>\n",
       "      <td>7.095014</td>\n",
       "      <td>67.437665</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>29</th>\n",
       "      <td>2</td>\n",
       "      <td>10</td>\n",
       "      <td>0.272421</td>\n",
       "      <td>0.897983</td>\n",
       "      <td>10.387205</td>\n",
       "      <td>106.320636</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>7</th>\n",
       "      <td>1</td>\n",
       "      <td>8</td>\n",
       "      <td>0.282665</td>\n",
       "      <td>0.895983</td>\n",
       "      <td>7.073123</td>\n",
       "      <td>60.207014</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>28</th>\n",
       "      <td>2</td>\n",
       "      <td>9</td>\n",
       "      <td>0.278579</td>\n",
       "      <td>0.895850</td>\n",
       "      <td>12.863579</td>\n",
       "      <td>95.786824</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>27</th>\n",
       "      <td>2</td>\n",
       "      <td>8</td>\n",
       "      <td>0.286401</td>\n",
       "      <td>0.894633</td>\n",
       "      <td>10.025243</td>\n",
       "      <td>82.735780</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>6</th>\n",
       "      <td>1</td>\n",
       "      <td>7</td>\n",
       "      <td>0.292965</td>\n",
       "      <td>0.891167</td>\n",
       "      <td>7.107009</td>\n",
       "      <td>52.999277</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>26</th>\n",
       "      <td>2</td>\n",
       "      <td>7</td>\n",
       "      <td>0.300420</td>\n",
       "      <td>0.889750</td>\n",
       "      <td>9.819810</td>\n",
       "      <td>72.573901</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>25</th>\n",
       "      <td>2</td>\n",
       "      <td>6</td>\n",
       "      <td>0.312304</td>\n",
       "      <td>0.884683</td>\n",
       "      <td>10.397179</td>\n",
       "      <td>62.627430</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>5</th>\n",
       "      <td>1</td>\n",
       "      <td>6</td>\n",
       "      <td>0.310214</td>\n",
       "      <td>0.884050</td>\n",
       "      <td>7.095062</td>\n",
       "      <td>45.756663</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>24</th>\n",
       "      <td>2</td>\n",
       "      <td>5</td>\n",
       "      <td>0.326322</td>\n",
       "      <td>0.880483</td>\n",
       "      <td>10.171783</td>\n",
       "      <td>52.093617</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>1</td>\n",
       "      <td>5</td>\n",
       "      <td>0.340771</td>\n",
       "      <td>0.874400</td>\n",
       "      <td>7.065094</td>\n",
       "      <td>38.531948</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>23</th>\n",
       "      <td>2</td>\n",
       "      <td>4</td>\n",
       "      <td>0.347837</td>\n",
       "      <td>0.871950</td>\n",
       "      <td>9.683090</td>\n",
       "      <td>41.789188</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>1</td>\n",
       "      <td>4</td>\n",
       "      <td>0.370302</td>\n",
       "      <td>0.863400</td>\n",
       "      <td>7.059152</td>\n",
       "      <td>31.336203</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>22</th>\n",
       "      <td>2</td>\n",
       "      <td>3</td>\n",
       "      <td>0.385024</td>\n",
       "      <td>0.859483</td>\n",
       "      <td>9.772849</td>\n",
       "      <td>31.972456</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>1</td>\n",
       "      <td>3</td>\n",
       "      <td>0.415752</td>\n",
       "      <td>0.846583</td>\n",
       "      <td>7.157878</td>\n",
       "      <td>24.144438</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>21</th>\n",
       "      <td>2</td>\n",
       "      <td>2</td>\n",
       "      <td>0.451986</td>\n",
       "      <td>0.833567</td>\n",
       "      <td>9.743954</td>\n",
       "      <td>22.072975</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>1</td>\n",
       "      <td>2</td>\n",
       "      <td>0.503703</td>\n",
       "      <td>0.806650</td>\n",
       "      <td>7.257630</td>\n",
       "      <td>16.862890</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>20</th>\n",
       "      <td>2</td>\n",
       "      <td>1</td>\n",
       "      <td>0.833520</td>\n",
       "      <td>0.681400</td>\n",
       "      <td>10.044157</td>\n",
       "      <td>12.189428</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>1</td>\n",
       "      <td>1</td>\n",
       "      <td>0.947734</td>\n",
       "      <td>0.652950</td>\n",
       "      <td>7.394213</td>\n",
       "      <td>9.470620</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>not_normal</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "    run  epoch      loss  accuracy  epoch duration  run duration    lr  \\\n",
       "19    1     20  0.221359  0.915817        7.379379    148.569969  0.01   \n",
       "39    2     20  0.222482  0.915383       12.208193    227.286297  0.01   \n",
       "17    1     18  0.225766  0.914917        7.224724    133.260759  0.01   \n",
       "18    1     19  0.225115  0.914500        7.580782    141.047989  0.01   \n",
       "38    2     19  0.227248  0.913667       12.207076    214.937480  0.01   \n",
       "36    2     17  0.230035  0.913633       11.658277    190.525438  0.01   \n",
       "35    2     16  0.229931  0.913033       11.896156    178.719088  0.01   \n",
       "34    2     15  0.235163  0.911317       11.828560    166.674348  0.01   \n",
       "37    2     18  0.231750  0.911200       11.905782    202.583802  0.01   \n",
       "15    1     16  0.235477  0.910983        7.080069    118.692657  0.01   \n",
       "33    2     14  0.242410  0.909450       11.944404    154.700178  0.01   \n",
       "14    1     15  0.240678  0.909400        7.109976    111.478945  0.01   \n",
       "13    1     14  0.242614  0.909133        7.289548    104.233333  0.01   \n",
       "16    1     17  0.240044  0.908883        7.065121    125.898401  0.01   \n",
       "32    2     13  0.245732  0.908450       12.127912    142.584723  0.01   \n",
       "12    1     13  0.247274  0.908200        7.460052     96.805156  0.01   \n",
       "31    2     12  0.253376  0.905383       13.399867    130.307212  0.01   \n",
       "11    1     12  0.258176  0.902900        7.133910     89.209466  0.01   \n",
       "30    2     11  0.263395  0.901750       10.303489    116.774724  0.01   \n",
       "9     1     10  0.265291  0.901583        7.113964     74.686269  0.01   \n",
       "10    1     11  0.263894  0.901067        7.125991     81.941913  0.01   \n",
       "8     1      9  0.269787  0.900000        7.095014     67.437665  0.01   \n",
       "29    2     10  0.272421  0.897983       10.387205    106.320636  0.01   \n",
       "7     1      8  0.282665  0.895983        7.073123     60.207014  0.01   \n",
       "28    2      9  0.278579  0.895850       12.863579     95.786824  0.01   \n",
       "27    2      8  0.286401  0.894633       10.025243     82.735780  0.01   \n",
       "6     1      7  0.292965  0.891167        7.107009     52.999277  0.01   \n",
       "26    2      7  0.300420  0.889750        9.819810     72.573901  0.01   \n",
       "25    2      6  0.312304  0.884683       10.397179     62.627430  0.01   \n",
       "5     1      6  0.310214  0.884050        7.095062     45.756663  0.01   \n",
       "24    2      5  0.326322  0.880483       10.171783     52.093617  0.01   \n",
       "4     1      5  0.340771  0.874400        7.065094     38.531948  0.01   \n",
       "23    2      4  0.347837  0.871950        9.683090     41.789188  0.01   \n",
       "3     1      4  0.370302  0.863400        7.059152     31.336203  0.01   \n",
       "22    2      3  0.385024  0.859483        9.772849     31.972456  0.01   \n",
       "2     1      3  0.415752  0.846583        7.157878     24.144438  0.01   \n",
       "21    2      2  0.451986  0.833567        9.743954     22.072975  0.01   \n",
       "1     1      2  0.503703  0.806650        7.257630     16.862890  0.01   \n",
       "20    2      1  0.833520  0.681400       10.044157     12.189428  0.01   \n",
       "0     1      1  0.947734  0.652950        7.394213      9.470620  0.01   \n",
       "\n",
       "    batch_size  num_workers device    trainset  \n",
       "19        1000            1   cuda  not_normal  \n",
       "39        1000            1   cuda      normal  \n",
       "17        1000            1   cuda  not_normal  \n",
       "18        1000            1   cuda  not_normal  \n",
       "38        1000            1   cuda      normal  \n",
       "36        1000            1   cuda      normal  \n",
       "35        1000            1   cuda      normal  \n",
       "34        1000            1   cuda      normal  \n",
       "37        1000            1   cuda      normal  \n",
       "15        1000            1   cuda  not_normal  \n",
       "33        1000            1   cuda      normal  \n",
       "14        1000            1   cuda  not_normal  \n",
       "13        1000            1   cuda  not_normal  \n",
       "16        1000            1   cuda  not_normal  \n",
       "32        1000            1   cuda      normal  \n",
       "12        1000            1   cuda  not_normal  \n",
       "31        1000            1   cuda      normal  \n",
       "11        1000            1   cuda  not_normal  \n",
       "30        1000            1   cuda      normal  \n",
       "9         1000            1   cuda  not_normal  \n",
       "10        1000            1   cuda  not_normal  \n",
       "8         1000            1   cuda  not_normal  \n",
       "29        1000            1   cuda      normal  \n",
       "7         1000            1   cuda  not_normal  \n",
       "28        1000            1   cuda      normal  \n",
       "27        1000            1   cuda      normal  \n",
       "6         1000            1   cuda  not_normal  \n",
       "26        1000            1   cuda      normal  \n",
       "25        1000            1   cuda      normal  \n",
       "5         1000            1   cuda  not_normal  \n",
       "24        1000            1   cuda      normal  \n",
       "4         1000            1   cuda  not_normal  \n",
       "23        1000            1   cuda      normal  \n",
       "3         1000            1   cuda  not_normal  \n",
       "22        1000            1   cuda      normal  \n",
       "2         1000            1   cuda  not_normal  \n",
       "21        1000            1   cuda      normal  \n",
       "1         1000            1   cuda  not_normal  \n",
       "20        1000            1   cuda      normal  \n",
       "0         1000            1   cuda  not_normal  "
      ]
     },
     "execution_count": 73,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "pd.DataFrame.from_dict(m.run_data).sort_values('accuracy', ascending=False)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Quiz 05\n",
    "1. Data normalization refers to a single specific algorithm for transforming data to a new form.\n",
    "* False\n",
    "\n",
    "2. After a data is normalized, the new values typically encode information relative to the original dataset and are _______________ in some way.\n",
    "* rescaled\n",
    "\n",
    "3. When we normalize a dataset we usually target each feature set inside the dataset independently\n",
    "* True\n",
    "\n",
    "4. Feature scaling is the act of transforming different features of a dataset to similar scales.\n",
    "* True\n",
    "\n",
    "5. The _______________ of a feature set refers to the value range of the data.\n",
    "* scale\n",
    "\n",
    "6. Suppose we normalize a set of positive values by dividing each value by the maximum value of the set. What will the largest value of the new normalized set be?\n",
    "* 1\n",
    "\n",
    "7. Suppose we normalize a set of positive values by dividing each value by the maximum value of the set. This normalized values will be rescaled to which interval?\n",
    "* $[0,1]$\n",
    "\n",
    "8. Data normalization is a specific type of standardization technique.\n",
    "* False"
   ]
  }
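,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check for questions 6 and 7, we can normalize a small tensor of positive values by dividing by the maximum. This is a minimal sketch with made-up illustrative values; the largest value maps to exactly 1, and all values land in the interval $[0,1]$:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "# Illustrative positive values (not from the dataset above)\n",
    "t = torch.tensor([2.0, 5.0, 10.0, 40.0])\n",
    "\n",
    "# Divide by the maximum: the largest value maps to exactly 1\n",
    "normalized = t / t.max()\n",
    "\n",
    "# All normalized values now lie in the interval [0, 1]\n",
    "normalized, normalized.min(), normalized.max()"
   ]
  }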
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
