{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 01 Sequential\n",
    "**In this episode, we're going to learn how to use PyTorch's Sequential class to build neural networks.**\n",
    "## PyTorch Sequential Module\n",
    "The `Sequential` class allows us to build PyTorch neural networks on-the-fly **without** having to build an **explicit class**. This makes it much easier to rapidly build networks and allows us to skip over the step where we implement the `forward()` method. When we use the sequential way of building a PyTorch network, we construct the `forward()` method implicitly by defining our network's architecture sequentially.\n",
    "\n",
    "A sequential module is a **container or wrapper** class that **extends** the `nn.Module` base class and allows us to compose modules together. We can compose any `nn.Module` within any other `nn.Module`.\n",
    "\n",
    "This means that we can compose layers to make networks, and since networks are also `nn.Module` instances, we can also compose networks with one another. Additionally, since the Sequential class is also a `nn.Module` itself, we can even compose `Sequential` modules with one another.\n",
    "\n",
    "At this point, we may be wondering about other required functions and operations, like pooling operations or activation functions. Well, the answer is that all of the functions and operations in the `nn.functional` API have been wrapped up into `nn.Module` classes. This allows us to pass things like activation functions to `Sequential` wrappers to fully build out our networks in a sequential way.\n",
    "\n",
    "## Building PyTorch Sequential Networks\n",
    "There are **three** ways to create a Sequential model. Let's see them in action.\n",
    "### Code Setup\n",
    "Firstly, we handle our imports."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "\n",
    "import torchvision\n",
    "import torchvision.transforms as transforms\n",
    "\n",
    "import matplotlib.pyplot as plt\n",
    "import math\n",
    "\n",
    "from collections import OrderedDict\n",
    "\n",
    "torch.set_printoptions(linewidth=150)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Then, we need to create a dataset that we can use to pass a sample to the networks we will be building."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_set = torchvision.datasets.FashionMNIST(\n",
    "    root = './data',\n",
    "    train = True,\n",
    "    download = True,\n",
    "    transform = transforms.Compose([transforms.ToTensor()])\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now, we'll grab a sample image from the FashionMNIST dataset instance."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "torch.Size([1, 28, 28])"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "image, label = train_set[0]\n",
    "image.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now, we'll grab some values that will be used to construct our network."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "784"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "in_features = image.numel()\n",
    "in_features"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "392"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "out_features = math.floor(in_features / 2)\n",
    "out_features"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "int"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "type(image.numel())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "10"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "out_classes = len(train_set.classes)\n",
    "out_classes"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Sequential Model Initialization: Way 1\n",
    "The first way to create a sequential model is to pass `nn.Module` instances **directly** to the `Sequential` class constructor."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "network1 = nn.Sequential(\n",
    "    nn.Flatten(start_dim = 1),\n",
    "    nn.Linear(in_features, out_features),\n",
    "    nn.Linear(out_features, out_classes)\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Sequential(\n",
       "  (0): Flatten()\n",
       "  (1): Linear(in_features=784, out_features=392, bias=True)\n",
       "  (2): Linear(in_features=392, out_features=10, bias=True)\n",
       ")"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "network1"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Sequential Model Initialization: Way 2\n",
    "The second way to create a sequential model is to create an `OrderedDict` that contains `nn.Module` instances. Then, pass the dictionary to the `Sequential` class constructor."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "layers = OrderedDict([\n",
    "    ('flat',nn.Flatten(start_dim=1)),\n",
    "    ('hidden',nn.Linear(in_features, out_features)),\n",
    "    ('output',nn.Linear(out_features, out_classes))\n",
    "])\n",
    "\n",
    "network2 = nn.Sequential(layers)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Sequential(\n",
       "  (flat): Flatten()\n",
       "  (hidden): Linear(in_features=784, out_features=392, bias=True)\n",
       "  (output): Linear(in_features=392, out_features=10, bias=True)\n",
       ")"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "network2"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This way of initialization allows us to name the `nn.Module` instances explicitly.\n",
    "### Sequential Model Initialization: Way 3\n",
    "The third way of creating a sequential model is to create a sequential instance using an empty constructor. Then, we can use the `add_module()` method to add `nn.Module` instances to the network after it has already been initialized."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "network3 = nn.Sequential()\n",
    "network3.add_module('flat',nn.Flatten(start_dim = 1))\n",
    "network3.add_module('hidden',nn.Linear(in_features,out_features))\n",
    "network3.add_module('output',nn.Linear(out_features,out_classes))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Sequential(\n",
       "  (flat): Flatten()\n",
       "  (hidden): Linear(in_features=784, out_features=392, bias=True)\n",
       "  (output): Linear(in_features=392, out_features=10, bias=True)\n",
       ")"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "network3"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Class Definition Vs Sequential\n",
    "So far in this course, we've been working with a CNN that we defined using a class definition. The network is defined like this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [],
   "source": [
    "class Network(nn.Module):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self.conv1 = nn.Conv2d(1, 6, 5)\n",
    "        self.conv2 = nn.Conv2d(6, 12, 5)\n",
    "\n",
    "        self.fc1 = nn.Linear(in_features=12*4*4, out_features=120)\n",
    "        self.fc2 = nn.Linear(in_features=120, out_features=60)\n",
    "        self.out = nn.Linear(in_features=60, out_features=10)\n",
    "\n",
    "    def forward(self, t):\n",
    "\n",
    "        t = F.relu(self.conv1(t))\n",
    "        t = F.max_pool2d(t, kernel_size=2, stride=2)\n",
    "\n",
    "        t = F.relu(self.conv2(t))\n",
    "        t = F.max_pool2d(t, kernel_size=2, stride=2)\n",
    "\n",
    "        t = t.flatten(start_dim=1)\n",
    "        t = F.relu(self.fc1(t))\n",
    "        t = F.relu(self.fc2(t))\n",
    "        t = self.out(t)\n",
    "\n",
    "        return t"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We get an instance of the network like so:\n",
    "```python\n",
    "network = Network()\n",
    "```\n",
    "Now, let's see how this same network can be created using the `Sequential` class. It works like this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [],
   "source": [
    "sequential = nn.Sequential(\n",
    "      nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5)\n",
    "    , nn.ReLU()\n",
    "    , nn.MaxPool2d(kernel_size=2, stride=2)\n",
    "    , nn.Conv2d(in_channels=6, out_channels=12, kernel_size=5)\n",
    "    , nn.ReLU()\n",
    "    , nn.MaxPool2d(kernel_size=2, stride=2)\n",
    "    , nn.Flatten(start_dim=1)  \n",
    "    , nn.Linear(in_features=12*4*4, out_features=120)\n",
    "    , nn.ReLU()\n",
    "    , nn.Linear(in_features=120, out_features=60)\n",
    "    , nn.ReLU()\n",
    "    , nn.Linear(in_features=60, out_features=10)\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We said that these networks are the **same**. But what do we mean? In this case, we mean that the networks have the **same architecture**. From a programming standpoint, the two networks are **different types** under the hood.\n",
    "\n",
    "Note that we can get the same output predictions from these two networks if we fix the seed that PyTorch uses to generate random numbers. This is because both networks will have randomly initialized weights. To be sure the weights are the same, we use the PyTorch method below before creating each network:\n",
    "`torch.manual_seed(50)`\n",
    "It's important to note that the method must be called twice, once before each network's initialization."
   ]
  },
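  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch of this seeding pattern (using a small stand-in `nn.Linear` layer rather than our full networks), re-seeding before each construction gives identical starting weights:\n",
    "```python\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "torch.manual_seed(50)\n",
    "net_a = nn.Linear(2, 2)\n",
    "\n",
    "torch.manual_seed(50)\n",
    "net_b = nn.Linear(2, 2)\n",
    "\n",
    "torch.equal(net_a.weight, net_b.weight)  # True\n",
    "```"
   ]
  },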
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Quiz 01\n",
    "1. The `Sequential` class allows us to build PyTorch neural networks on-the-fly without having to build an explicit _______________.\n",
    "  * class\n",
    "\n",
    "2. When we build a `Sequential` model, our `forward()` method is defined explicitly.\n",
    "  * False\n",
    "  \n",
    "3. A sequential module is a container or _______________ class that allows us to compose modules together.\n",
    "  * wrapper\n",
    "  \n",
    "4. The Sequential class extends the _______________ class.\n",
    "  * nn.Module\n",
    "  \n",
    "5. The `nn.Flatten()` module is a wrapper class that wraps the `torch.flatten()` function.\n",
    "  * True\n",
    "  \n",
    "---\n",
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 02 Batch Normalization In PyTorch\n",
    "**In this episode, we're going to see how we can add batch normalization to a PyTorch CNN.**\n",
    "\n",
    "## What Is Batch Normalization?\n",
    "In order to understand batch normalization, we need to first understand what data normalization is in general, and we learned about this concept in the episode on dataset [normalization](https://deeplizard.com/learn/video/lu7TCu7HeYc).\n",
    "\n",
    "When we normalize a dataset, we are normalizing the **input data** that will be passed to the network, and when we add **batch normalization** to our network, we are normalizing the data again **after** it has passed through one or more **layers**.\n",
    "\n",
    "One question that may come to mind is the following:\n",
    "<center><b>Why normalize again if the input is already normalized?</b></center>\n",
    "\n",
    "Well, as the data begins moving through the layers, the values will begin to shift as the layer transformations are performed. Normalizing the outputs from a layer ensures that the **scale stays in a specific range** as the data flows through the network from input to output.\n",
    "    \n",
    "The specific normalization technique that is typically used is called **standardization**. This is where we calculate a z-score using the mean and standard deviation.\n",
    "$$z=\\frac{x-mean}{std}$$\n",
    "\n",
    "### How Batch Norm Works\n",
    "When using batch norm, the mean and standard deviation values are calculated with respect to the **batch** at the time normalization is applied. This is **opposed to** the **entire dataset**, like we saw with dataset normalization.\n",
    "\n",
    "Additionally, there are two learnable parameters that allow the data to be scaled and shifted. We saw this in the paper: [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift](https://arxiv.org/pdf/1502.03167.pdf)\n",
    "\n",
    "Note that the scaling given by $\\gamma$ corresponds to the multiplication operation, and the shifting given by $\\beta$ corresponds to the addition operation.\n",
    "\n",
    "The *scale* and *shift* operations sound fancy, but they simply mean *multiply* and *add*.\n",
    "\n",
    "These learnable parameters give the distribution of values more freedom to move around, adjusting to the right fit.\n",
    "\n",
    "The scale and shift values can be thought of as the slope and y-intercept of a line, both of which allow the line to be adjusted to fit various locations on the 2D plane.\n",
    "\n",
    "## Adding Batch Norm To A CNN\n",
    "Alright, let's create two networks, **one with batch norm** and **one without**. Then, we'll test these setups using the testing framework we've developed so far in the course. To do this, we'll make use of the `nn.Sequential` class.\n",
    "\n",
    "Our first network will be called `network1`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "import time\n",
    "\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "import torch.optim as optim\n",
    "import torchvision\n",
    "import torchvision.transforms as transforms\n",
    "import pandas as pd\n",
    "\n",
    "\n",
    "from torch.utils.tensorboard import SummaryWriter\n",
    "from itertools import product\n",
    "from collections import namedtuple, OrderedDict\n",
    "from IPython import display\n",
    "\n",
    "torch.set_printoptions(linewidth=120)  # Display options for output"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "metadata": {},
   "outputs": [],
   "source": [
    "torch.manual_seed(50)\n",
    "network1 = nn.Sequential(\n",
    "      nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5)\n",
    "    , nn.ReLU()\n",
    "    , nn.MaxPool2d(kernel_size=2, stride=2)\n",
    "    , nn.Conv2d(in_channels=6, out_channels=12, kernel_size=5)\n",
    "    , nn.ReLU()\n",
    "    , nn.MaxPool2d(kernel_size=2, stride=2)\n",
    "    , nn.Flatten(start_dim=1)  \n",
    "    , nn.Linear(in_features=12*4*4, out_features=120)\n",
    "    , nn.ReLU()\n",
    "    , nn.Linear(in_features=120, out_features=60)\n",
    "    , nn.ReLU()\n",
    "    , nn.Linear(in_features=60, out_features=10)\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Our second network will be called network2\n",
    "torch.manual_seed(50)\n",
    "network2 = nn.Sequential(\n",
    "      nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5)\n",
    "    , nn.ReLU()\n",
    "    , nn.MaxPool2d(kernel_size=2, stride=2)\n",
    "    , nn.BatchNorm2d(6)\n",
    "    , nn.Conv2d(in_channels=6, out_channels=12, kernel_size=5)\n",
    "    , nn.ReLU()\n",
    "    , nn.MaxPool2d(kernel_size=2, stride=2)\n",
    "    , nn.Flatten(start_dim=1)  \n",
    "    , nn.Linear(in_features=12*4*4, out_features=120)\n",
    "    , nn.ReLU()\n",
    "    , nn.BatchNorm1d(120)\n",
    "    , nn.Linear(in_features=120, out_features=60)\n",
    "    , nn.ReLU()\n",
    "    , nn.Linear(in_features=60, out_features=10)\n",
    ")"
   ]
  },
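  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick side check (a minimal sketch, separate from the networks above), we can confirm that a `BatchNorm2d` layer carries one learnable scale ($\\gamma$) and one learnable shift ($\\beta$) per channel, exposed as `weight` and `bias`:\n",
    "```python\n",
    "import torch.nn as nn\n",
    "\n",
    "bn = nn.BatchNorm2d(6)\n",
    "bn.weight.shape  # gamma: torch.Size([6]), initialized to ones\n",
    "bn.bias.shape    # beta:  torch.Size([6]), initialized to zeros\n",
    "```"
   ]
  },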
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now, we'll create a networks dictionary that we'll use to store the two networks."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "metadata": {},
   "outputs": [],
   "source": [
    "networks = {\n",
    "    'no_batch_norm':network1,\n",
    "    'batch_norm':network2\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The names or keys of this dictionary will be used inside our run loop to access each network. To configure our runs, we can use the keys of the dictionary as opposed to writing out each value explicitly. This is pretty cool because it allows us to easily test different networks against one another simply by adding more networks to the dictionary."
   ]
  },
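  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In miniature, the lookup works like this (a self-contained sketch with stand-in modules, not our actual networks):\n",
    "```python\n",
    "import torch.nn as nn\n",
    "\n",
    "networks = {\n",
    "    'no_batch_norm': nn.Linear(4, 2),\n",
    "    'batch_norm': nn.Linear(4, 2)\n",
    "}\n",
    "\n",
    "run_network = 'batch_norm'       # inside the loop this comes from run.network\n",
    "network = networks[run_network]  # the key gives us back the module\n",
    "```"
   ]
  },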
  {
   "cell_type": "code",
   "execution_count": 44,
   "metadata": {},
   "outputs": [],
   "source": [
    "params = OrderedDict(\n",
    "    lr = [.01],\n",
    "    batch_size = [1000],\n",
    "    num_workers = [1],\n",
    "    device = ['cuda'],\n",
    "    trainset = ['normal'],\n",
    "    network = list(networks.keys())\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_set = torchvision.datasets.FashionMNIST(\n",
    "    root='./data'\n",
    "    ,train=True\n",
    "    ,download=True\n",
    "    ,transform=transforms.Compose([\n",
    "        transforms.ToTensor()\n",
    "    ])\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "metadata": {},
   "outputs": [],
   "source": [
    "loader = torch.utils.data.DataLoader(train_set, batch_size = len(train_set), num_workers = 1)\n",
    "data = next(iter(loader))\n",
    "mean = data[0].mean()\n",
    "std = data[0].std()  # data[0] is the image tensor, data[1] is the labels"
   ]
  },
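  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `Normalize(mean, std)` transform we use next applies the standardization formula from earlier, $z=\\frac{x-mean}{std}$. A tiny numeric sketch (with made-up values) shows the effect:\n",
    "```python\n",
    "import torch\n",
    "\n",
    "x = torch.tensor([1., 2., 3., 4.])\n",
    "z = (x - x.mean()) / x.std()\n",
    "\n",
    "z.mean()  # ~0: the standardized values are centered\n",
    "z.std()   # ~1: and have unit standard deviation\n",
    "```"
   ]
  },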
  {
   "cell_type": "code",
   "execution_count": 47,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_set_normal = torchvision.datasets.FashionMNIST(\n",
    "    root='./data'\n",
    "    ,train = True\n",
    "    ,download=True\n",
    "    ,transform=transforms.Compose([\n",
    "        transforms.ToTensor()\n",
    "        ,transforms.Normalize(mean, std)\n",
    "    ])\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "metadata": {},
   "outputs": [],
   "source": [
    "class RunBuilder():\n",
    "    @staticmethod\n",
    "    def get_runs(params):\n",
    "        Run = namedtuple('Run', params.keys())\n",
    "\n",
    "        runs = []\n",
    "        for v in product(*params.values()):\n",
    "            runs.append(Run(*v))\n",
    "\n",
    "        return runs"
   ]
  },
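  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see what `get_runs()` produces, here is a self-contained sketch using a small, hypothetical parameter grid (two learning rates crossed with one batch size):\n",
    "```python\n",
    "from collections import OrderedDict, namedtuple\n",
    "from itertools import product\n",
    "\n",
    "class RunBuilder():\n",
    "    @staticmethod\n",
    "    def get_runs(params):\n",
    "        Run = namedtuple('Run', params.keys())\n",
    "        return [Run(*v) for v in product(*params.values())]\n",
    "\n",
    "runs = RunBuilder.get_runs(OrderedDict(lr=[.01, .001], batch_size=[1000]))\n",
    "# -> [Run(lr=0.01, batch_size=1000), Run(lr=0.001, batch_size=1000)]\n",
    "```"
   ]
  },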
  {
   "cell_type": "code",
   "execution_count": 49,
   "metadata": {},
   "outputs": [],
   "source": [
    "class RunManager():\n",
    "    def __init__(self):\n",
    "        self.epoch_count = 0\n",
    "        self.epoch_loss = 0\n",
    "        self.epoch_num_correct = 0\n",
    "        self.epoch_start_time = None\n",
    "\n",
    "        self.run_params = None\n",
    "        self.run_count = 0\n",
    "        self.run_data = []\n",
    "        self.run_start_time = None\n",
    "\n",
    "        self.network = None\n",
    "        self.loader = None\n",
    "        self.tb = None\n",
    "\n",
    "    def begin_run(self, run, network, loader):\n",
    "\n",
    "        self.run_start_time = time.time()\n",
    "        self.run_params = run\n",
    "        self.run_count += 1\n",
    "\n",
    "        self.network = network\n",
    "        self.loader = loader\n",
    "        self.tb = SummaryWriter(comment=f'-{run}')\n",
    "\n",
    "        images,labels = next(iter(self.loader))\n",
    "        grid = torchvision.utils.make_grid(images)\n",
    "\n",
    "        self.tb.add_image('images',grid)\n",
    "        #self.tb.add_graph(self.network, images.to(getattr(run,'device','cpu')))\n",
    "\n",
    "    def end_run(self):\n",
    "        self.tb.close()\n",
    "        self.epoch_count = 0\n",
    "\n",
    "    def begin_epoch(self):\n",
    "        self.epoch_start_time = time.time()\n",
    "\n",
    "        self.epoch_count += 1\n",
    "        self.epoch_loss = 0\n",
    "        self.epoch_num_correct = 0\n",
    "    \n",
    "    def end_epoch(self):\n",
    "\n",
    "        epoch_duration = time.time() - self.epoch_start_time\n",
    "        run_duration = time.time() - self.run_start_time\n",
    "\n",
    "        loss = self.epoch_loss / len(self.loader.dataset)\n",
    "        accuracy = self.epoch_num_correct / len(self.loader.dataset)\n",
    "\n",
    "        self.tb.add_scalar('Loss',loss,self.epoch_count)\n",
    "        self.tb.add_scalar('Accuracy',accuracy,self.epoch_count)\n",
    "\n",
    "        for name,param in self.network.named_parameters():\n",
    "            self.tb.add_histogram(name, param, self.epoch_count)\n",
    "            self.tb.add_histogram(f'{name}.grad', param.grad, self.epoch_count)\n",
    "\n",
    "        results = OrderedDict()\n",
    "        results[\"run\"] = self.run_count\n",
    "        results[\"epoch\"] = self.epoch_count\n",
    "        results[\"loss\"] = loss\n",
    "        results[\"accuracy\"] = accuracy\n",
    "        results[\"epoch duration\"] = epoch_duration\n",
    "        results[\"run duration\"] = run_duration\n",
    "        for k,v in self.run_params._asdict().items():results[k] = v\n",
    "        self.run_data.append(results)\n",
    "\n",
    "        df = pd.DataFrame.from_dict(self.run_data,orient='columns')\n",
    "        \n",
    "        \n",
    "        display.clear_output(wait=True)\n",
    "        display.display(df)\n",
    "\n",
    "    def get_num_correct(self,preds, labels):\n",
    "        return preds.argmax(dim=1).eq(labels).sum().item()\n",
    "\n",
    "    def track_loss(self,loss,batch):\n",
    "        self.epoch_loss += loss.item() * batch[0].shape[0]\n",
    "\n",
    "    def track_num_correct(self,preds,labels):\n",
    "        self.epoch_num_correct += self.get_num_correct(preds,labels)\n",
    "\n",
    "    def save(self, fileName):\n",
    "\n",
    "        pd.DataFrame.from_dict(\n",
    "            self.run_data,orient = 'columns'\n",
    "        ).to_csv(f'{fileName}.csv')\n",
    "\n",
    "        with open(f'{fileName}.json','w',encoding='utf-8') as f:\n",
    "            json.dump(self.run_data,f, ensure_ascii=False, indent = 4)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 50,
   "metadata": {},
   "outputs": [],
   "source": [
    "m = RunManager()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 51,
   "metadata": {},
   "outputs": [],
   "source": [
    "trainsets = {\n",
    "    'not_normal':train_set\n",
    "    ,'normal':train_set_normal\n",
    "}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 52,
   "metadata": {},
   "outputs": [],
   "source": [
    "params = OrderedDict(\n",
    "    lr = [.01],\n",
    "    batch_size = [1000],\n",
    "    num_workers = [1],\n",
    "    device = ['cuda'],\n",
    "    trainset = ['normal'],\n",
    "    network = list(networks.keys())\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 53,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>run</th>\n",
       "      <th>epoch</th>\n",
       "      <th>loss</th>\n",
       "      <th>accuracy</th>\n",
       "      <th>epoch duration</th>\n",
       "      <th>run duration</th>\n",
       "      <th>lr</th>\n",
       "      <th>batch_size</th>\n",
       "      <th>num_workers</th>\n",
       "      <th>device</th>\n",
       "      <th>trainset</th>\n",
       "      <th>network</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>1</td>\n",
       "      <td>1</td>\n",
       "      <td>0.902308</td>\n",
       "      <td>0.665700</td>\n",
       "      <td>11.045463</td>\n",
       "      <td>12.710009</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>1</td>\n",
       "      <td>2</td>\n",
       "      <td>0.481590</td>\n",
       "      <td>0.816767</td>\n",
       "      <td>10.109992</td>\n",
       "      <td>22.955631</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>1</td>\n",
       "      <td>3</td>\n",
       "      <td>0.394888</td>\n",
       "      <td>0.854983</td>\n",
       "      <td>10.082085</td>\n",
       "      <td>33.174318</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>1</td>\n",
       "      <td>4</td>\n",
       "      <td>0.355903</td>\n",
       "      <td>0.869950</td>\n",
       "      <td>10.019200</td>\n",
       "      <td>43.323171</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>1</td>\n",
       "      <td>5</td>\n",
       "      <td>0.336055</td>\n",
       "      <td>0.875917</td>\n",
       "      <td>10.001249</td>\n",
       "      <td>53.456068</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>5</th>\n",
       "      <td>1</td>\n",
       "      <td>6</td>\n",
       "      <td>0.317772</td>\n",
       "      <td>0.883133</td>\n",
       "      <td>10.050117</td>\n",
       "      <td>63.638831</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>6</th>\n",
       "      <td>1</td>\n",
       "      <td>7</td>\n",
       "      <td>0.300082</td>\n",
       "      <td>0.889450</td>\n",
       "      <td>10.098988</td>\n",
       "      <td>73.863483</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>7</th>\n",
       "      <td>1</td>\n",
       "      <td>8</td>\n",
       "      <td>0.284005</td>\n",
       "      <td>0.894600</td>\n",
       "      <td>9.989282</td>\n",
       "      <td>83.986406</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>8</th>\n",
       "      <td>1</td>\n",
       "      <td>9</td>\n",
       "      <td>0.276586</td>\n",
       "      <td>0.897367</td>\n",
       "      <td>10.084028</td>\n",
       "      <td>94.206071</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>9</th>\n",
       "      <td>1</td>\n",
       "      <td>10</td>\n",
       "      <td>0.275722</td>\n",
       "      <td>0.897200</td>\n",
       "      <td>10.049120</td>\n",
       "      <td>104.388834</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>10</th>\n",
       "      <td>1</td>\n",
       "      <td>11</td>\n",
       "      <td>0.268197</td>\n",
       "      <td>0.899450</td>\n",
       "      <td>9.992310</td>\n",
       "      <td>114.508802</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>11</th>\n",
       "      <td>1</td>\n",
       "      <td>12</td>\n",
       "      <td>0.262196</td>\n",
       "      <td>0.901300</td>\n",
       "      <td>9.968384</td>\n",
       "      <td>124.606841</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>12</th>\n",
       "      <td>1</td>\n",
       "      <td>13</td>\n",
       "      <td>0.255895</td>\n",
       "      <td>0.904567</td>\n",
       "      <td>10.045131</td>\n",
       "      <td>134.783583</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>13</th>\n",
       "      <td>1</td>\n",
       "      <td>14</td>\n",
       "      <td>0.248653</td>\n",
       "      <td>0.906483</td>\n",
       "      <td>9.919468</td>\n",
       "      <td>144.834699</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>14</th>\n",
       "      <td>1</td>\n",
       "      <td>15</td>\n",
       "      <td>0.243708</td>\n",
       "      <td>0.908033</td>\n",
       "      <td>9.961360</td>\n",
       "      <td>154.920726</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>15</th>\n",
       "      <td>1</td>\n",
       "      <td>16</td>\n",
       "      <td>0.242617</td>\n",
       "      <td>0.909200</td>\n",
       "      <td>10.011235</td>\n",
       "      <td>165.057619</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>16</th>\n",
       "      <td>1</td>\n",
       "      <td>17</td>\n",
       "      <td>0.231671</td>\n",
       "      <td>0.912700</td>\n",
       "      <td>10.014242</td>\n",
       "      <td>175.201502</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>17</th>\n",
       "      <td>1</td>\n",
       "      <td>18</td>\n",
       "      <td>0.227298</td>\n",
       "      <td>0.914483</td>\n",
       "      <td>10.041692</td>\n",
       "      <td>185.370855</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>18</th>\n",
       "      <td>1</td>\n",
       "      <td>19</td>\n",
       "      <td>0.226496</td>\n",
       "      <td>0.913883</td>\n",
       "      <td>10.105968</td>\n",
       "      <td>195.617447</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>19</th>\n",
       "      <td>1</td>\n",
       "      <td>20</td>\n",
       "      <td>0.224667</td>\n",
       "      <td>0.914400</td>\n",
       "      <td>9.944400</td>\n",
       "      <td>205.697484</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>20</th>\n",
       "      <td>2</td>\n",
       "      <td>1</td>\n",
       "      <td>0.583311</td>\n",
       "      <td>0.788050</td>\n",
       "      <td>10.097049</td>\n",
       "      <td>11.708766</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>21</th>\n",
       "      <td>2</td>\n",
       "      <td>2</td>\n",
       "      <td>0.350777</td>\n",
       "      <td>0.868167</td>\n",
       "      <td>9.926465</td>\n",
       "      <td>21.805776</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>22</th>\n",
       "      <td>2</td>\n",
       "      <td>3</td>\n",
       "      <td>0.311882</td>\n",
       "      <td>0.883250</td>\n",
       "      <td>9.968338</td>\n",
       "      <td>31.946636</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>23</th>\n",
       "      <td>2</td>\n",
       "      <td>4</td>\n",
       "      <td>0.284915</td>\n",
       "      <td>0.893700</td>\n",
       "      <td>10.128925</td>\n",
       "      <td>42.248099</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>24</th>\n",
       "      <td>2</td>\n",
       "      <td>5</td>\n",
       "      <td>0.270163</td>\n",
       "      <td>0.899233</td>\n",
       "      <td>10.235666</td>\n",
       "      <td>52.651300</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>25</th>\n",
       "      <td>2</td>\n",
       "      <td>6</td>\n",
       "      <td>0.254517</td>\n",
       "      <td>0.904817</td>\n",
       "      <td>10.366303</td>\n",
       "      <td>63.192137</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>26</th>\n",
       "      <td>2</td>\n",
       "      <td>7</td>\n",
       "      <td>0.246401</td>\n",
       "      <td>0.907917</td>\n",
       "      <td>10.444469</td>\n",
       "      <td>73.808118</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>27</th>\n",
       "      <td>2</td>\n",
       "      <td>8</td>\n",
       "      <td>0.238529</td>\n",
       "      <td>0.911133</td>\n",
       "      <td>10.096993</td>\n",
       "      <td>84.077650</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>28</th>\n",
       "      <td>2</td>\n",
       "      <td>9</td>\n",
       "      <td>0.227155</td>\n",
       "      <td>0.915483</td>\n",
       "      <td>10.328374</td>\n",
       "      <td>94.571581</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>29</th>\n",
       "      <td>2</td>\n",
       "      <td>10</td>\n",
       "      <td>0.218066</td>\n",
       "      <td>0.917917</td>\n",
       "      <td>10.529835</td>\n",
       "      <td>105.277943</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>30</th>\n",
       "      <td>2</td>\n",
       "      <td>11</td>\n",
       "      <td>0.209708</td>\n",
       "      <td>0.922033</td>\n",
       "      <td>10.109958</td>\n",
       "      <td>115.554456</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>31</th>\n",
       "      <td>2</td>\n",
       "      <td>12</td>\n",
       "      <td>0.205429</td>\n",
       "      <td>0.922500</td>\n",
       "      <td>10.020198</td>\n",
       "      <td>125.746195</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>32</th>\n",
       "      <td>2</td>\n",
       "      <td>13</td>\n",
       "      <td>0.203388</td>\n",
       "      <td>0.923050</td>\n",
       "      <td>10.139892</td>\n",
       "      <td>136.051644</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>33</th>\n",
       "      <td>2</td>\n",
       "      <td>14</td>\n",
       "      <td>0.197369</td>\n",
       "      <td>0.925033</td>\n",
       "      <td>10.125924</td>\n",
       "      <td>146.349096</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>34</th>\n",
       "      <td>2</td>\n",
       "      <td>15</td>\n",
       "      <td>0.196619</td>\n",
       "      <td>0.925683</td>\n",
       "      <td>10.192805</td>\n",
       "      <td>156.711448</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>35</th>\n",
       "      <td>2</td>\n",
       "      <td>16</td>\n",
       "      <td>0.191689</td>\n",
       "      <td>0.927700</td>\n",
       "      <td>10.083067</td>\n",
       "      <td>166.950032</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>36</th>\n",
       "      <td>2</td>\n",
       "      <td>17</td>\n",
       "      <td>0.187812</td>\n",
       "      <td>0.929850</td>\n",
       "      <td>10.041142</td>\n",
       "      <td>177.159722</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>37</th>\n",
       "      <td>2</td>\n",
       "      <td>18</td>\n",
       "      <td>0.184065</td>\n",
       "      <td>0.929483</td>\n",
       "      <td>10.037162</td>\n",
       "      <td>187.373412</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>38</th>\n",
       "      <td>2</td>\n",
       "      <td>19</td>\n",
       "      <td>0.174516</td>\n",
       "      <td>0.932917</td>\n",
       "      <td>10.030169</td>\n",
       "      <td>197.570128</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>39</th>\n",
       "      <td>2</td>\n",
       "      <td>20</td>\n",
       "      <td>0.168771</td>\n",
       "      <td>0.935367</td>\n",
       "      <td>10.035158</td>\n",
       "      <td>207.777825</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "    run  epoch      loss  accuracy  epoch duration  run duration    lr  \\\n",
       "0     1      1  0.902308  0.665700       11.045463     12.710009  0.01   \n",
       "1     1      2  0.481590  0.816767       10.109992     22.955631  0.01   \n",
       "2     1      3  0.394888  0.854983       10.082085     33.174318  0.01   \n",
       "3     1      4  0.355903  0.869950       10.019200     43.323171  0.01   \n",
       "4     1      5  0.336055  0.875917       10.001249     53.456068  0.01   \n",
       "5     1      6  0.317772  0.883133       10.050117     63.638831  0.01   \n",
       "6     1      7  0.300082  0.889450       10.098988     73.863483  0.01   \n",
       "7     1      8  0.284005  0.894600        9.989282     83.986406  0.01   \n",
       "8     1      9  0.276586  0.897367       10.084028     94.206071  0.01   \n",
       "9     1     10  0.275722  0.897200       10.049120    104.388834  0.01   \n",
       "10    1     11  0.268197  0.899450        9.992310    114.508802  0.01   \n",
       "11    1     12  0.262196  0.901300        9.968384    124.606841  0.01   \n",
       "12    1     13  0.255895  0.904567       10.045131    134.783583  0.01   \n",
       "13    1     14  0.248653  0.906483        9.919468    144.834699  0.01   \n",
       "14    1     15  0.243708  0.908033        9.961360    154.920726  0.01   \n",
       "15    1     16  0.242617  0.909200       10.011235    165.057619  0.01   \n",
       "16    1     17  0.231671  0.912700       10.014242    175.201502  0.01   \n",
       "17    1     18  0.227298  0.914483       10.041692    185.370855  0.01   \n",
       "18    1     19  0.226496  0.913883       10.105968    195.617447  0.01   \n",
       "19    1     20  0.224667  0.914400        9.944400    205.697484  0.01   \n",
       "20    2      1  0.583311  0.788050       10.097049     11.708766  0.01   \n",
       "21    2      2  0.350777  0.868167        9.926465     21.805776  0.01   \n",
       "22    2      3  0.311882  0.883250        9.968338     31.946636  0.01   \n",
       "23    2      4  0.284915  0.893700       10.128925     42.248099  0.01   \n",
       "24    2      5  0.270163  0.899233       10.235666     52.651300  0.01   \n",
       "25    2      6  0.254517  0.904817       10.366303     63.192137  0.01   \n",
       "26    2      7  0.246401  0.907917       10.444469     73.808118  0.01   \n",
       "27    2      8  0.238529  0.911133       10.096993     84.077650  0.01   \n",
       "28    2      9  0.227155  0.915483       10.328374     94.571581  0.01   \n",
       "29    2     10  0.218066  0.917917       10.529835    105.277943  0.01   \n",
       "30    2     11  0.209708  0.922033       10.109958    115.554456  0.01   \n",
       "31    2     12  0.205429  0.922500       10.020198    125.746195  0.01   \n",
       "32    2     13  0.203388  0.923050       10.139892    136.051644  0.01   \n",
       "33    2     14  0.197369  0.925033       10.125924    146.349096  0.01   \n",
       "34    2     15  0.196619  0.925683       10.192805    156.711448  0.01   \n",
       "35    2     16  0.191689  0.927700       10.083067    166.950032  0.01   \n",
       "36    2     17  0.187812  0.929850       10.041142    177.159722  0.01   \n",
       "37    2     18  0.184065  0.929483       10.037162    187.373412  0.01   \n",
       "38    2     19  0.174516  0.932917       10.030169    197.570128  0.01   \n",
       "39    2     20  0.168771  0.935367       10.035158    207.777825  0.01   \n",
       "\n",
       "    batch_size  num_workers device trainset        network  \n",
       "0         1000            1   cuda   normal  no_batch_norm  \n",
       "1         1000            1   cuda   normal  no_batch_norm  \n",
       "2         1000            1   cuda   normal  no_batch_norm  \n",
       "3         1000            1   cuda   normal  no_batch_norm  \n",
       "4         1000            1   cuda   normal  no_batch_norm  \n",
       "5         1000            1   cuda   normal  no_batch_norm  \n",
       "6         1000            1   cuda   normal  no_batch_norm  \n",
       "7         1000            1   cuda   normal  no_batch_norm  \n",
       "8         1000            1   cuda   normal  no_batch_norm  \n",
       "9         1000            1   cuda   normal  no_batch_norm  \n",
       "10        1000            1   cuda   normal  no_batch_norm  \n",
       "11        1000            1   cuda   normal  no_batch_norm  \n",
       "12        1000            1   cuda   normal  no_batch_norm  \n",
       "13        1000            1   cuda   normal  no_batch_norm  \n",
       "14        1000            1   cuda   normal  no_batch_norm  \n",
       "15        1000            1   cuda   normal  no_batch_norm  \n",
       "16        1000            1   cuda   normal  no_batch_norm  \n",
       "17        1000            1   cuda   normal  no_batch_norm  \n",
       "18        1000            1   cuda   normal  no_batch_norm  \n",
       "19        1000            1   cuda   normal  no_batch_norm  \n",
       "20        1000            1   cuda   normal     batch_norm  \n",
       "21        1000            1   cuda   normal     batch_norm  \n",
       "22        1000            1   cuda   normal     batch_norm  \n",
       "23        1000            1   cuda   normal     batch_norm  \n",
       "24        1000            1   cuda   normal     batch_norm  \n",
       "25        1000            1   cuda   normal     batch_norm  \n",
       "26        1000            1   cuda   normal     batch_norm  \n",
       "27        1000            1   cuda   normal     batch_norm  \n",
       "28        1000            1   cuda   normal     batch_norm  \n",
       "29        1000            1   cuda   normal     batch_norm  \n",
       "30        1000            1   cuda   normal     batch_norm  \n",
       "31        1000            1   cuda   normal     batch_norm  \n",
       "32        1000            1   cuda   normal     batch_norm  \n",
       "33        1000            1   cuda   normal     batch_norm  \n",
       "34        1000            1   cuda   normal     batch_norm  \n",
       "35        1000            1   cuda   normal     batch_norm  \n",
       "36        1000            1   cuda   normal     batch_norm  \n",
       "37        1000            1   cuda   normal     batch_norm  \n",
       "38        1000            1   cuda   normal     batch_norm  \n",
       "39        1000            1   cuda   normal     batch_norm  "
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "for run in RunBuilder.get_runs(params):\n",
    "\n",
    "    device = torch.device(run.device)\n",
    "    network = networks[run.network].to(device) # Select this run's network and move it to the device\n",
    "    loader = torch.utils.data.DataLoader(\n",
    "          trainsets[run.trainset]\n",
    "        , batch_size=run.batch_size\n",
    "        , num_workers=run.num_workers\n",
    "    )\n",
    "    optimizer = optim.Adam(network.parameters(), lr=run.lr)\n",
    "\n",
    "    m.begin_run(run, network, loader)\n",
    "    for epoch in range(20):\n",
    "        m.begin_epoch()\n",
    "        for batch in loader:\n",
    "\n",
    "            images = batch[0].to(device)\n",
    "            labels = batch[1].to(device)\n",
    "            preds = network(images) # Pass Batch\n",
    "            loss = F.cross_entropy(preds, labels) # Calculate Loss\n",
    "            optimizer.zero_grad() # Zero Gradients\n",
    "            loss.backward() # Calculate Gradients\n",
    "            optimizer.step() # Update Weights\n",
    "\n",
    "            m.track_loss(loss, batch)\n",
    "            m.track_num_correct(preds, labels)\n",
    "        m.end_epoch()\n",
    "    m.end_run()\n",
    "m.save('results3')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 54,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>run</th>\n",
       "      <th>epoch</th>\n",
       "      <th>loss</th>\n",
       "      <th>accuracy</th>\n",
       "      <th>epoch duration</th>\n",
       "      <th>run duration</th>\n",
       "      <th>lr</th>\n",
       "      <th>batch_size</th>\n",
       "      <th>num_workers</th>\n",
       "      <th>device</th>\n",
       "      <th>trainset</th>\n",
       "      <th>network</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>39</th>\n",
       "      <td>2</td>\n",
       "      <td>20</td>\n",
       "      <td>0.168771</td>\n",
       "      <td>0.935367</td>\n",
       "      <td>10.035158</td>\n",
       "      <td>207.777825</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>38</th>\n",
       "      <td>2</td>\n",
       "      <td>19</td>\n",
       "      <td>0.174516</td>\n",
       "      <td>0.932917</td>\n",
       "      <td>10.030169</td>\n",
       "      <td>197.570128</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>36</th>\n",
       "      <td>2</td>\n",
       "      <td>17</td>\n",
       "      <td>0.187812</td>\n",
       "      <td>0.929850</td>\n",
       "      <td>10.041142</td>\n",
       "      <td>177.159722</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>37</th>\n",
       "      <td>2</td>\n",
       "      <td>18</td>\n",
       "      <td>0.184065</td>\n",
       "      <td>0.929483</td>\n",
       "      <td>10.037162</td>\n",
       "      <td>187.373412</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>35</th>\n",
       "      <td>2</td>\n",
       "      <td>16</td>\n",
       "      <td>0.191689</td>\n",
       "      <td>0.927700</td>\n",
       "      <td>10.083067</td>\n",
       "      <td>166.950032</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>34</th>\n",
       "      <td>2</td>\n",
       "      <td>15</td>\n",
       "      <td>0.196619</td>\n",
       "      <td>0.925683</td>\n",
       "      <td>10.192805</td>\n",
       "      <td>156.711448</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>33</th>\n",
       "      <td>2</td>\n",
       "      <td>14</td>\n",
       "      <td>0.197369</td>\n",
       "      <td>0.925033</td>\n",
       "      <td>10.125924</td>\n",
       "      <td>146.349096</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>32</th>\n",
       "      <td>2</td>\n",
       "      <td>13</td>\n",
       "      <td>0.203388</td>\n",
       "      <td>0.923050</td>\n",
       "      <td>10.139892</td>\n",
       "      <td>136.051644</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>31</th>\n",
       "      <td>2</td>\n",
       "      <td>12</td>\n",
       "      <td>0.205429</td>\n",
       "      <td>0.922500</td>\n",
       "      <td>10.020198</td>\n",
       "      <td>125.746195</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>30</th>\n",
       "      <td>2</td>\n",
       "      <td>11</td>\n",
       "      <td>0.209708</td>\n",
       "      <td>0.922033</td>\n",
       "      <td>10.109958</td>\n",
       "      <td>115.554456</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>29</th>\n",
       "      <td>2</td>\n",
       "      <td>10</td>\n",
       "      <td>0.218066</td>\n",
       "      <td>0.917917</td>\n",
       "      <td>10.529835</td>\n",
       "      <td>105.277943</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>28</th>\n",
       "      <td>2</td>\n",
       "      <td>9</td>\n",
       "      <td>0.227155</td>\n",
       "      <td>0.915483</td>\n",
       "      <td>10.328374</td>\n",
       "      <td>94.571581</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>17</th>\n",
       "      <td>1</td>\n",
       "      <td>18</td>\n",
       "      <td>0.227298</td>\n",
       "      <td>0.914483</td>\n",
       "      <td>10.041692</td>\n",
       "      <td>185.370855</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>19</th>\n",
       "      <td>1</td>\n",
       "      <td>20</td>\n",
       "      <td>0.224667</td>\n",
       "      <td>0.914400</td>\n",
       "      <td>9.944400</td>\n",
       "      <td>205.697484</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>18</th>\n",
       "      <td>1</td>\n",
       "      <td>19</td>\n",
       "      <td>0.226496</td>\n",
       "      <td>0.913883</td>\n",
       "      <td>10.105968</td>\n",
       "      <td>195.617447</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>16</th>\n",
       "      <td>1</td>\n",
       "      <td>17</td>\n",
       "      <td>0.231671</td>\n",
       "      <td>0.912700</td>\n",
       "      <td>10.014242</td>\n",
       "      <td>175.201502</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>27</th>\n",
       "      <td>2</td>\n",
       "      <td>8</td>\n",
       "      <td>0.238529</td>\n",
       "      <td>0.911133</td>\n",
       "      <td>10.096993</td>\n",
       "      <td>84.077650</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>15</th>\n",
       "      <td>1</td>\n",
       "      <td>16</td>\n",
       "      <td>0.242617</td>\n",
       "      <td>0.909200</td>\n",
       "      <td>10.011235</td>\n",
       "      <td>165.057619</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>14</th>\n",
       "      <td>1</td>\n",
       "      <td>15</td>\n",
       "      <td>0.243708</td>\n",
       "      <td>0.908033</td>\n",
       "      <td>9.961360</td>\n",
       "      <td>154.920726</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>26</th>\n",
       "      <td>2</td>\n",
       "      <td>7</td>\n",
       "      <td>0.246401</td>\n",
       "      <td>0.907917</td>\n",
       "      <td>10.444469</td>\n",
       "      <td>73.808118</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>13</th>\n",
       "      <td>1</td>\n",
       "      <td>14</td>\n",
       "      <td>0.248653</td>\n",
       "      <td>0.906483</td>\n",
       "      <td>9.919468</td>\n",
       "      <td>144.834699</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>25</th>\n",
       "      <td>2</td>\n",
       "      <td>6</td>\n",
       "      <td>0.254517</td>\n",
       "      <td>0.904817</td>\n",
       "      <td>10.366303</td>\n",
       "      <td>63.192137</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>12</th>\n",
       "      <td>1</td>\n",
       "      <td>13</td>\n",
       "      <td>0.255895</td>\n",
       "      <td>0.904567</td>\n",
       "      <td>10.045131</td>\n",
       "      <td>134.783583</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>11</th>\n",
       "      <td>1</td>\n",
       "      <td>12</td>\n",
       "      <td>0.262196</td>\n",
       "      <td>0.901300</td>\n",
       "      <td>9.968384</td>\n",
       "      <td>124.606841</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>10</th>\n",
       "      <td>1</td>\n",
       "      <td>11</td>\n",
       "      <td>0.268197</td>\n",
       "      <td>0.899450</td>\n",
       "      <td>9.992310</td>\n",
       "      <td>114.508802</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>24</th>\n",
       "      <td>2</td>\n",
       "      <td>5</td>\n",
       "      <td>0.270163</td>\n",
       "      <td>0.899233</td>\n",
       "      <td>10.235666</td>\n",
       "      <td>52.651300</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>8</th>\n",
       "      <td>1</td>\n",
       "      <td>9</td>\n",
       "      <td>0.276586</td>\n",
       "      <td>0.897367</td>\n",
       "      <td>10.084028</td>\n",
       "      <td>94.206071</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>9</th>\n",
       "      <td>1</td>\n",
       "      <td>10</td>\n",
       "      <td>0.275722</td>\n",
       "      <td>0.897200</td>\n",
       "      <td>10.049120</td>\n",
       "      <td>104.388834</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>7</th>\n",
       "      <td>1</td>\n",
       "      <td>8</td>\n",
       "      <td>0.284005</td>\n",
       "      <td>0.894600</td>\n",
       "      <td>9.989282</td>\n",
       "      <td>83.986406</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>23</th>\n",
       "      <td>2</td>\n",
       "      <td>4</td>\n",
       "      <td>0.284915</td>\n",
       "      <td>0.893700</td>\n",
       "      <td>10.128925</td>\n",
       "      <td>42.248099</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>6</th>\n",
       "      <td>1</td>\n",
       "      <td>7</td>\n",
       "      <td>0.300082</td>\n",
       "      <td>0.889450</td>\n",
       "      <td>10.098988</td>\n",
       "      <td>73.863483</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>22</th>\n",
       "      <td>2</td>\n",
       "      <td>3</td>\n",
       "      <td>0.311882</td>\n",
       "      <td>0.883250</td>\n",
       "      <td>9.968338</td>\n",
       "      <td>31.946636</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>5</th>\n",
       "      <td>1</td>\n",
       "      <td>6</td>\n",
       "      <td>0.317772</td>\n",
       "      <td>0.883133</td>\n",
       "      <td>10.050117</td>\n",
       "      <td>63.638831</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>1</td>\n",
       "      <td>5</td>\n",
       "      <td>0.336055</td>\n",
       "      <td>0.875917</td>\n",
       "      <td>10.001249</td>\n",
       "      <td>53.456068</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>1</td>\n",
       "      <td>4</td>\n",
       "      <td>0.355903</td>\n",
       "      <td>0.869950</td>\n",
       "      <td>10.019200</td>\n",
       "      <td>43.323171</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>21</th>\n",
       "      <td>2</td>\n",
       "      <td>2</td>\n",
       "      <td>0.350777</td>\n",
       "      <td>0.868167</td>\n",
       "      <td>9.926465</td>\n",
       "      <td>21.805776</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>1</td>\n",
       "      <td>3</td>\n",
       "      <td>0.394888</td>\n",
       "      <td>0.854983</td>\n",
       "      <td>10.082085</td>\n",
       "      <td>33.174318</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>1</td>\n",
       "      <td>2</td>\n",
       "      <td>0.481590</td>\n",
       "      <td>0.816767</td>\n",
       "      <td>10.109992</td>\n",
       "      <td>22.955631</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>20</th>\n",
       "      <td>2</td>\n",
       "      <td>1</td>\n",
       "      <td>0.583311</td>\n",
       "      <td>0.788050</td>\n",
       "      <td>10.097049</td>\n",
       "      <td>11.708766</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>batch_norm</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>1</td>\n",
       "      <td>1</td>\n",
       "      <td>0.902308</td>\n",
       "      <td>0.665700</td>\n",
       "      <td>11.045463</td>\n",
       "      <td>12.710009</td>\n",
       "      <td>0.01</td>\n",
       "      <td>1000</td>\n",
       "      <td>1</td>\n",
       "      <td>cuda</td>\n",
       "      <td>normal</td>\n",
       "      <td>no_batch_norm</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "    run  epoch      loss  accuracy  epoch duration  run duration    lr  \\\n",
       "39    2     20  0.168771  0.935367       10.035158    207.777825  0.01   \n",
       "38    2     19  0.174516  0.932917       10.030169    197.570128  0.01   \n",
       "36    2     17  0.187812  0.929850       10.041142    177.159722  0.01   \n",
       "37    2     18  0.184065  0.929483       10.037162    187.373412  0.01   \n",
       "35    2     16  0.191689  0.927700       10.083067    166.950032  0.01   \n",
       "34    2     15  0.196619  0.925683       10.192805    156.711448  0.01   \n",
       "33    2     14  0.197369  0.925033       10.125924    146.349096  0.01   \n",
       "32    2     13  0.203388  0.923050       10.139892    136.051644  0.01   \n",
       "31    2     12  0.205429  0.922500       10.020198    125.746195  0.01   \n",
       "30    2     11  0.209708  0.922033       10.109958    115.554456  0.01   \n",
       "29    2     10  0.218066  0.917917       10.529835    105.277943  0.01   \n",
       "28    2      9  0.227155  0.915483       10.328374     94.571581  0.01   \n",
       "17    1     18  0.227298  0.914483       10.041692    185.370855  0.01   \n",
       "19    1     20  0.224667  0.914400        9.944400    205.697484  0.01   \n",
       "18    1     19  0.226496  0.913883       10.105968    195.617447  0.01   \n",
       "16    1     17  0.231671  0.912700       10.014242    175.201502  0.01   \n",
       "27    2      8  0.238529  0.911133       10.096993     84.077650  0.01   \n",
       "15    1     16  0.242617  0.909200       10.011235    165.057619  0.01   \n",
       "14    1     15  0.243708  0.908033        9.961360    154.920726  0.01   \n",
       "26    2      7  0.246401  0.907917       10.444469     73.808118  0.01   \n",
       "13    1     14  0.248653  0.906483        9.919468    144.834699  0.01   \n",
       "25    2      6  0.254517  0.904817       10.366303     63.192137  0.01   \n",
       "12    1     13  0.255895  0.904567       10.045131    134.783583  0.01   \n",
       "11    1     12  0.262196  0.901300        9.968384    124.606841  0.01   \n",
       "10    1     11  0.268197  0.899450        9.992310    114.508802  0.01   \n",
       "24    2      5  0.270163  0.899233       10.235666     52.651300  0.01   \n",
       "8     1      9  0.276586  0.897367       10.084028     94.206071  0.01   \n",
       "9     1     10  0.275722  0.897200       10.049120    104.388834  0.01   \n",
       "7     1      8  0.284005  0.894600        9.989282     83.986406  0.01   \n",
       "23    2      4  0.284915  0.893700       10.128925     42.248099  0.01   \n",
       "6     1      7  0.300082  0.889450       10.098988     73.863483  0.01   \n",
       "22    2      3  0.311882  0.883250        9.968338     31.946636  0.01   \n",
       "5     1      6  0.317772  0.883133       10.050117     63.638831  0.01   \n",
       "4     1      5  0.336055  0.875917       10.001249     53.456068  0.01   \n",
       "3     1      4  0.355903  0.869950       10.019200     43.323171  0.01   \n",
       "21    2      2  0.350777  0.868167        9.926465     21.805776  0.01   \n",
       "2     1      3  0.394888  0.854983       10.082085     33.174318  0.01   \n",
       "1     1      2  0.481590  0.816767       10.109992     22.955631  0.01   \n",
       "20    2      1  0.583311  0.788050       10.097049     11.708766  0.01   \n",
       "0     1      1  0.902308  0.665700       11.045463     12.710009  0.01   \n",
       "\n",
       "    batch_size  num_workers device trainset        network  \n",
       "39        1000            1   cuda   normal     batch_norm  \n",
       "38        1000            1   cuda   normal     batch_norm  \n",
       "36        1000            1   cuda   normal     batch_norm  \n",
       "37        1000            1   cuda   normal     batch_norm  \n",
       "35        1000            1   cuda   normal     batch_norm  \n",
       "34        1000            1   cuda   normal     batch_norm  \n",
       "33        1000            1   cuda   normal     batch_norm  \n",
       "32        1000            1   cuda   normal     batch_norm  \n",
       "31        1000            1   cuda   normal     batch_norm  \n",
       "30        1000            1   cuda   normal     batch_norm  \n",
       "29        1000            1   cuda   normal     batch_norm  \n",
       "28        1000            1   cuda   normal     batch_norm  \n",
       "17        1000            1   cuda   normal  no_batch_norm  \n",
       "19        1000            1   cuda   normal  no_batch_norm  \n",
       "18        1000            1   cuda   normal  no_batch_norm  \n",
       "16        1000            1   cuda   normal  no_batch_norm  \n",
       "27        1000            1   cuda   normal     batch_norm  \n",
       "15        1000            1   cuda   normal  no_batch_norm  \n",
       "14        1000            1   cuda   normal  no_batch_norm  \n",
       "26        1000            1   cuda   normal     batch_norm  \n",
       "13        1000            1   cuda   normal  no_batch_norm  \n",
       "25        1000            1   cuda   normal     batch_norm  \n",
       "12        1000            1   cuda   normal  no_batch_norm  \n",
       "11        1000            1   cuda   normal  no_batch_norm  \n",
       "10        1000            1   cuda   normal  no_batch_norm  \n",
       "24        1000            1   cuda   normal     batch_norm  \n",
       "8         1000            1   cuda   normal  no_batch_norm  \n",
       "9         1000            1   cuda   normal  no_batch_norm  \n",
       "7         1000            1   cuda   normal  no_batch_norm  \n",
       "23        1000            1   cuda   normal     batch_norm  \n",
       "6         1000            1   cuda   normal  no_batch_norm  \n",
       "22        1000            1   cuda   normal     batch_norm  \n",
       "5         1000            1   cuda   normal  no_batch_norm  \n",
       "4         1000            1   cuda   normal  no_batch_norm  \n",
       "3         1000            1   cuda   normal  no_batch_norm  \n",
       "21        1000            1   cuda   normal     batch_norm  \n",
       "2         1000            1   cuda   normal  no_batch_norm  \n",
       "1         1000            1   cuda   normal  no_batch_norm  \n",
       "20        1000            1   cuda   normal     batch_norm  \n",
       "0         1000            1   cuda   normal  no_batch_norm  "
      ]
     },
     "execution_count": 54,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "pd.DataFrame.from_dict(m.run_data).sort_values('accuracy', ascending=False)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
