{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 01 Training Loop Run Builder - Neural Network Experimentation Code\n",
    "\n",
    "**In this episode, we’ll code a `RunBuilder` class that will allow us to generate multiple runs with varying parameters.**\n",
    "\n",
    "## Using The `RunBuilder` Class\n",
    "\n",
    "The purpose of this episode and the last couple of episodes of this series is to get ourselves into a position to be able to **efficiently experiment** with the training process that we’ve constructed. For this reason, we’re going to expand on something we touched on in the episode on hyperparameter experimentation. We’re going to make what we saw there a bit **cleaner**.\n",
    "\n",
    "We’re going to build a class called `RunBuilder`. But before we look at how to build the class, let’s see what it will allow us to do. We’ll start with our imports."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "from collections import OrderedDict\n",
    "from collections import namedtuple\n",
    "from itertools import product"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We’re importing `OrderedDict` and `namedtuple` from collections and we’re importing a function called `product` from `itertools`. This `product()` function is the one we saw last time that computes a **Cartesian product** given multiple list inputs.\n",
    "\n",
    "Alright. This is the `RunBuilder` class that will build sets of parameters that define our runs. We’ll see how it works after we see how to use it. ([Usage of the `*` operator](https://blog.csdn.net/yzj99848873/article/details/48025593/))"
   ]
  },
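  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick refresher, here is a minimal sketch of what `product()` computes given two small lists (the example values are just for illustration):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# product() yields every combination of its inputs,\n",
    "# varying the last input fastest:\n",
    "# [(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b')]\n",
    "list(product([1, 2], ['a', 'b']))"
   ]
  },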
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "class RunBuilder():\n",
    "    @staticmethod\n",
    "    def get_runs(params):\n",
    "\n",
    "        Run = namedtuple('Run', params.keys())\n",
    "\n",
    "        runs = []\n",
    "        for v in product(*params.values()):\n",
    "            runs.append(Run(*v))\n",
    "\n",
    "        return runs"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The main thing to note about using this class is that it has a `static` method called `get_runs()`. This method builds and returns the runs for us based on the parameters we pass in.\n",
    "\n",
    "Let’s define some parameters now."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "params = OrderedDict(\n",
    "    lr = [.01, .001]\n",
    "    ,batch_size = [1000, 10000]\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here, we’ve defined a set of parameters and values inside a dictionary. We have a set of learning rates and a set of batch sizes we want to try out. When we say try out, we mean that we want to do a **training** run for **each combination** of learning rate and batch size in the dictionary.\n",
    "\n",
    "To get these runs, we just call the `get_runs()` function of the `RunBuilder` class, passing in the parameters we’d like to use."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "runs = RunBuilder.get_runs(params)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[Run(lr=0.01, batch_size=1000),\n",
       " Run(lr=0.01, batch_size=10000),\n",
       " Run(lr=0.001, batch_size=1000),\n",
       " Run(lr=0.001, batch_size=10000)]"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "runs"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Great, we can see that the `RunBuilder` class has built and returned a **list** of four runs. Each of these runs has a learning rate and a batch size that defines the run.\n",
    "\n",
    "We can access an individual run by indexing into the list like so:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Run(lr=0.01, batch_size=1000)"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "run = runs[0]\n",
    "run"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Notice the string representation of the run output. This string representation was automatically generated for us by the **`Run` tuple class**, and this string can be used to **uniquely** identify the run if we want to write out run statistics to disk for TensorBoard or any other visualization program.\n",
    "\n",
    "Additionally, because the run object is a `tuple` with named attributes, we can access the values using dot notation like so:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.01 1000\n"
     ]
    }
   ],
   "source": [
    "print(run.lr, run.batch_size)"
   ]
  },
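  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Because a run's string representation uniquely identifies it, it can later serve directly as a TensorBoard comment string. Here is a hypothetical sketch using a stand-alone `Run` tuple (the `comment` name is an assumption mirroring earlier episodes):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# a stand-alone Run tuple, mirroring what RunBuilder creates internally\n",
    "Run = namedtuple('Run', ['lr', 'batch_size'])\n",
    "run = Run(lr=0.01, batch_size=1000)\n",
    "\n",
    "# the namedtuple repr uniquely identifies the run:\n",
    "# '-Run(lr=0.01, batch_size=1000)'\n",
    "comment = f'-{run}'\n",
    "comment"
   ]
  },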
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Finally, since the list of runs is a Python iterable, we can iterate over the runs cleanly like so:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Run(lr=0.01, batch_size=1000) 0.01 1000\n",
      "Run(lr=0.01, batch_size=10000) 0.01 10000\n",
      "Run(lr=0.001, batch_size=1000) 0.001 1000\n",
      "Run(lr=0.001, batch_size=10000) 0.001 10000\n"
     ]
    }
   ],
   "source": [
    "for run in runs:\n",
    "    print(run,run.lr,run.batch_size)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To **add** additional values, we just add them to the original **parameter dictionary**, and if we want an additional type of parameter, we just add another key and its list of values. The new parameter and its values automatically become available to be consumed inside the run, and the string output for the run updates as well.\n",
    "\n",
    "Two parameters:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[Run(lr=0.01, batch_size=1000),\n",
       " Run(lr=0.01, batch_size=10000),\n",
       " Run(lr=0.001, batch_size=1000),\n",
       " Run(lr=0.001, batch_size=10000)]"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "params = OrderedDict(\n",
    "    lr = [.01, .001]\n",
    "    ,batch_size = [1000, 10000]\n",
    ")\n",
    "\n",
    "runs = RunBuilder.get_runs(params)\n",
    "runs"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[Run(lr=0.01, batch_size=1000, device='cuda'),\n",
       " Run(lr=0.01, batch_size=1000, device='cpu'),\n",
       " Run(lr=0.01, batch_size=10000, device='cuda'),\n",
       " Run(lr=0.01, batch_size=10000, device='cpu'),\n",
       " Run(lr=0.001, batch_size=1000, device='cuda'),\n",
       " Run(lr=0.001, batch_size=1000, device='cpu'),\n",
       " Run(lr=0.001, batch_size=10000, device='cuda'),\n",
       " Run(lr=0.001, batch_size=10000, device='cpu')]"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Three parameters:\n",
    "params = OrderedDict(\n",
    "    lr = [.01, .001]\n",
    "    ,batch_size = [1000, 10000]\n",
    "    ,device = [\"cuda\", \"cpu\"]\n",
    ")\n",
    "\n",
    "runs = RunBuilder.get_runs(params)\n",
    "runs"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This functionality will allow us to have greater control as we experiment with different values during training.\n",
    "\n",
    "Let’s see how to build this `RunBuilder` class.\n",
    "\n",
    "## Coding The `RunBuilder` Class\n",
    "The first thing we need to have is a **dictionary of parameters and values** we’d like to try."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "params = OrderedDict(\n",
    "    lr = [.01, .001]\n",
    "    ,batch_size = [1000, 10000]\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "odict_keys(['lr', 'batch_size'])"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Next, we get the keys from the dictionary.\n",
    "params.keys()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "odict_values([[0.01, 0.001], [1000, 10000]])"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Then, we get the values from the dictionary.\n",
    "params.values()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Once we have both of these, we inspect their output to make sure we understand what each one contains. Then, we use these keys and values for what comes next. We’ll start with the keys."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [],
   "source": [
    "Run = namedtuple('Run',params.keys())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This line creates a new `tuple` **subclass** called `Run` that has named fields. This `Run` class is used to encapsulate the data for each of our runs. The field names of this class are set by the list of names passed to the constructor. First, we are passing the **class name**. Then, we are passing the **field names**, and in our case, we are passing the list of **keys from our dictionary**.\n",
    "\n",
    "Now that we have a class for our runs, we are ready to create some runs."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [],
   "source": [
    "runs = []\n",
    "for v in product(*params.values()):\n",
    "    runs.append(Run(*v))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "First, we create a list called `runs`. Then, we use the `product()` function from `itertools` to create the **Cartesian product** using the values for each parameter inside our dictionary. This gives us a set of **ordered tuples** that define our runs. We iterate over these, adding a run to the `runs` list for each one.\n",
    "\n",
    "Each value in the Cartesian product is an ordered tuple. The Cartesian product gives us every ordered pair, so we have all possible pairs of learning rates and batch sizes. When we pass the `tuple` to the `Run` constructor, we use the `*` operator to tell the constructor to accept the tuple values as arguments, as opposed to the `tuple` itself.\n",
    "\n",
    "Finally, we wrap this code in our `RunBuilder` class."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [],
   "source": [
    "class RunBuilder():\n",
    "    @staticmethod\n",
    "    def get_runs(params):\n",
    "\n",
    "        Run = namedtuple('Run', params.keys())\n",
    "\n",
    "        runs = []\n",
    "        for v in product(*params.values()):\n",
    "            runs.append(Run(*v))\n",
    "\n",
    "        return runs"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since the `get_runs()` method is static, we can call it using the class itself. We don’t need an instance of the class.\n",
    "\n",
    "Now, this allows us to update our training code in the following way:\n",
    "\n",
    "Before:\n",
    "```python\n",
    "for lr, batch_size, shuffle in product(*param_values):\n",
    "    comment = f' batch_size={batch_size} lr={lr} shuffle={shuffle}'\n",
    "\n",
    "    # Training process given the set of parameters\n",
    "```\n",
    "After:\n",
    "```python\n",
    "for run in RunBuilder.get_runs(params):\n",
    "    comment = f'-{run}'\n",
    "\n",
    "    # Training process given the set of parameters\n",
    "```"
   ]
  },
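  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The 'After' version above can be run end-to-end like so. This is a sketch that reuses the `RunBuilder` class defined earlier, with a placeholder list standing in for the actual training process:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# uses the RunBuilder class defined above\n",
    "params = OrderedDict(\n",
    "    lr = [.01, .001]\n",
    "    ,batch_size = [1000, 10000]\n",
    ")\n",
    "\n",
    "comments = []\n",
    "for run in RunBuilder.get_runs(params):\n",
    "    comment = f'-{run}'       # uniquely identifies this run\n",
    "    comments.append(comment)  # placeholder for the training process\n",
    "\n",
    "len(comments)  # one comment per run: 4"
   ]
  },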
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## What Is A Cartesian Product?\n",
    "\n",
    "Do you know about the Cartesian product? Like many things in life, the Cartesian product is a mathematical concept. The Cartesian product is a binary operation. The operation takes two sets as arguments and returns a third set as an output. Let's look at a general mathematical example.\n",
    "\n",
    "Suppose that $X$ is a set.\n",
    "\n",
    "Suppose that $Y$ is a set.\n",
    "\n",
    "The Cartesian product of two sets is denoted $X × Y$. The Cartesian product of the set $X$ and the set $Y$ is defined to be the set of all ordered pairs $(x, y)$ such that $x ∈ X$ and $y ∈ Y$. This can be expressed in the following way:\n",
    "\n",
    "$$X × Y = \\{(x, y) ∣ x ∈ X \\text{ and } y ∈ Y\\}$$\n",
    "\n",
    "This way of expressing the output of the Cartesian product is called set builder notation. It is cool. So $X × Y$ is the set of all ordered pairs $(x, y)$ such that $x ∈ X$ and $y ∈ Y$.\n",
    "\n",
    "To compute $X × Y$, we do the following:\n",
    "\n",
    "For every $x ∈ X$ and for every $y ∈ Y$, we collect the corresponding pair $(x, y)$. The resulting collection gives us the set of all ordered pairs $(x, y)$ such that $x ∈ X$ and $y ∈ Y$.\n",
    "\n",
    "Here is a concrete example expressed in Python:\n",
    "```python\n",
    "X = {1,2,3}\n",
    "Y = {1,2,3}\n",
    "\n",
    "{ (x,y) for x in X for y in Y }\n",
    "```\n",
    "Output\n",
    "```python\n",
    "{(1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (2, 3), (3, 1), (3, 2), (3, 3)}\n",
    "```\n",
    "Notice how powerful the mathematical code is. It covers all cases. Maybe you noticed that this can be achieved using for-loop iteration like so:\n",
    "```python\n",
    "X = {1,2,3}\n",
    "Y = {1,2,3}\n",
    "cartesian_product = set()\n",
    "for x in X:\n",
    "    for y in Y:\n",
    "        cartesian_product.add((x,y))\n",
    "cartesian_product\n",
    "```\n",
    "Output:\n",
    "```python\n",
    "{(1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (2, 3), (3, 1), (3, 2), (3, 3)}\n",
    "```"
   ]
  },
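  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `product()` function we imported from `itertools` computes this same set. A quick check that it agrees with the set comprehension above:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X = {1, 2, 3}\n",
    "Y = {1, 2, 3}\n",
    "\n",
    "# product() generates the same ordered pairs as the comprehension\n",
    "set(product(X, Y)) == { (x, y) for x in X for y in Y }  # True"
   ]
  },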
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Quiz 01\n",
    "1. This `Run` class is used to _______________ the data for each of our runs.\n",
    "```python\n",
    "Run = namedtuple('Run', params.keys())\n",
    "```\n",
    "* encapsulate<br>\n",
    "<br>\n",
    "2. The Cartesian product contains _______________.\n",
    "```python\n",
    "cp = product(*params.values())\n",
    "```\n",
    "* ordered tuples\n",
    "<br><br>\n",
    "3. Suppose we pass the parameters dictionary below to the `get_runs()` function. How many runs will be generated?\n",
    "```python\n",
    "params = OrderedDict(\n",
    "    lr = [.01, .001]\n",
    "    ,batch_size = [1000, 10000]\n",
    ")\n",
    "runs = RunBuilder.get_runs(params)\n",
    "```\n",
    "* 4\n",
    "<br><br>\n",
    "4. Suppose we pass the parameters dictionary below to the `get_runs()` function. How many runs will be generated?\n",
    "```python\n",
    "params = OrderedDict(\n",
    "    lr = [.01, .001]\n",
    "    ,batch_size = [1000, 10000]\n",
    "    ,device = [\"cuda\", \"cpu\"]\n",
    ")\n",
    "runs = RunBuilder.get_runs(params)\n",
    "```\n",
    "* 8\n",
    "\n",
    "---\n",
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 02 CNN Training Loop Refactoring - Simultaneous Hyperparameter Testing\n",
    "\n",
    "## Refactoring The CNN Training Loop\n",
    "\n",
    "**In this episode, we will see how we can experiment with large numbers of hyperparameter values easily while still keeping our training loop and our results organized.**\n",
    "\n",
    "## Cleaning Up The Training Loop And Extracting Classes\n",
    "\n",
    "We built out quite a lot of functionality that allowed us to experiment with many different parameters and values, and we also made the calls needed inside our training loop that would get our results into TensorBoard.\n",
    "\n",
    "All of this work has helped, but our training loop is pretty **crowded** now. In this episode, we're going to **clean up** our training loop and set the stage for more experimentation by using the `RunBuilder` class that we built last time and by building a new class called `RunManager`.\n",
    "\n",
    "Our **goal** is to be able to *add parameters and values at the top*, and have all the values tested or tried during multiple training runs.\n",
    "\n",
    "For example, in this case, we are saying that we want to use **two parameters**, *lr* and *batch_size*, and for the batch_size we want to try two different values. This gives us a total of two training runs. Both runs will have the **same learning rate** while the **batch size varies**.\n",
    "\n",
    "```python\n",
    "params = OrderedDict(\n",
    "    lr = [.01]\n",
    "    ,batch_size = [1000, 2000]\n",
    ")\n",
    "```\n",
    "For the results, we'd like to be able to see and compare both runs.\n",
    "\n",
    "| run | epoch | loss | accuracy | epoch duration | run duration | lr | batch_size |\n",
    "| --- | --- | --- | --- | --- | --- | --- | --- |\n",
    "| 1 | 1 | 0.983 | 0.618 | 48.697 | 50.563 | 0.01 | 1000 |\n",
    "| 1 | 2 | 0.572 | 0.777 | 19.165 | 69.794 | 0.01 | 1000 |\n",
    "| 1 | 3 | 0.468 | 0.827 | 19.366 | 89.252 | 0.01 | 1000 |\n",
    "| 1 | 4 | 0.428 | 0.843 | 18.840 | 108.176 | 0.01 | 1000 |\n",
    "| 1 | 5 | 0.389 | 0.857 | 19.082 | 127.320 | 0.01 | 1000 |\n",
    "| 2 | 1 | 1.271 | 0.528 | 18.558 | 19.627 | 0.01 | 2000 |\n",
    "| 2 | 2 | 0.623 | 0.757 | 19.822 | 39.520 | 0.01 | 2000 |\n",
    "| 2 | 3 | 0.526 | 0.791 | 21.101 | 60.694 | 0.01 | 2000 |\n",
    "| 2 | 4 | 0.478 | 0.814 | 20.332 | 81.110 | 0.01 | 2000 |\n",
    "| 2 | 5 | 0.440 | 0.835 | 20.413 | 101.600 | 0.01 | 2000 |"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### The Two Classes We Will Build\n",
    "\n",
    "To do this, we need to build two new classes. We built the first class called `RunBuilder` in the last episode. It's being called at the top.\n",
    "```python\n",
    "for run in RunBuilder.get_runs(params):\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class RunBuilder():\n",
    "    @staticmethod\n",
    "    def get_runs(params):\n",
    "\n",
    "        Run = namedtuple('Run', params.keys())\n",
    "\n",
    "        runs = []\n",
    "        for v in product(*params.values()):\n",
    "            runs.append(Run(*v))\n",
    "\n",
    "        return runs"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now, we need to build this `RunManager` class that will allow us to manage each run inside our run loop. The `RunManager` instance will allow us to **pull out** a lot of the tedious TensorBoard calls and allow us to add additional functionality as well.\n",
    "\n",
    "We'll see that as our number of parameters and runs gets **larger**, TensorBoard will start to break down as a viable solution for reviewing our results.\n",
    "\n",
    "The `RunManager` will be invoked at **different stages** inside each of our runs. We'll have calls at the start and end of both the **run** and the **epoch** phases. We'll also have calls to **track the loss** and the **number of correct predictions** inside each epoch. Finally, at the end, we'll **save the run results** to disk.\n",
    "\n",
    "Let's see how to build this RunManager class.\n",
    "\n",
    "## Building The `RunManager` For Training Loop Runs\n",
    "Let's kick things off with our imports:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.optim as optim\n",
    "import torch.nn.functional as F\n",
    "import torchvision\n",
    "import torchvision.transforms as transforms\n",
    "\n",
    "from torch.utils.data import DataLoader\n",
    "from torch.utils.tensorboard import SummaryWriter\n",
    "from IPython.display import display, clear_output\n",
    "import pandas as pd\n",
    "import time\n",
    "import json\n",
    "\n",
    "from itertools import product\n",
    "from collections import namedtuple\n",
    "from collections import OrderedDict"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For now, we'll take **no arguments in the constructor**, and we'll just define some attributes that will enable us to keep **track** of data across runs and across epochs.\n",
    "\n",
    "We'll track the following:\n",
    "* The number of epochs.\n",
    "* The running loss for an epoch.\n",
    "* The number of correct predictions for an epoch.\n",
    "* The start time of the epoch.\n",
    "\n",
    "Remember we saw that the `RunManager` class has two methods with epoch in the name. We have `begin_epoch()` and `end_epoch()`. These two methods will allow us to manage these values across the **epoch lifecycle**.\n",
    "\n",
    "Now, next we have some attributes for the runs. We have an attribute called `run_params`. This is the run definition in terms of the run parameters. Its value will be one of the **runs** returned by the `RunBuilder` class.\n",
    "\n",
    "Next, we have attributes to track the `run_count`, and the `run_data`. The `run_count` gives us the **run number** and the `run_data` is a list we'll use to keep track of the **parameter values** and the **results of each epoch** for each run, and so we'll see that we add a value to this list for each epoch. Then, we have the `run_start_time` which will be used to calculate the **run duration**.\n",
    "\n",
    "Alright, next we will save the network and the data loader that are being used for the run, as well as a `SummaryWriter` that we can use to save data for TensorBoard."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [],
   "source": [
    "# First, we declare the class using the class keyword.\n",
    "class RunManager():\n",
    "    # Next, we'll define the class constructor\n",
    "    def __init__(self):\n",
    "        \n",
    "        self.epoch_count = 0\n",
    "        self.epoch_loss = 0\n",
    "        self.epoch_num_correct = 0\n",
    "        self.epoch_start_time = None\n",
    "        \n",
    "        self.run_params = None\n",
    "        self.run_count = 0\n",
    "        self.run_data = []\n",
    "        self.run_start_time = None\n",
    "        \n",
    "        self.network = None\n",
    "        self.loader = None\n",
    "        self.tb = None"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### What Are Code Smells?\n",
    "Do you smell that? There's something that doesn't smell right about this code. Have you heard of code smells before? Have you smelled them? A code smell is a term used to describe a condition where something about the code in front of our eyes doesn't seem right. It's like a **gut feeling** for software developers.\n",
    "\n",
    "A code smell **doesn't mean** that something is definitely **wrong**. A code smell **does not mean** the code is **incorrect**. It just means that there is likely a better way. In this case, the code smell is the fact that we have several variable names that have a **prefix**. The use of the prefix here indicates that the variables somehow belong together.\n",
    "\n",
    "Anytime we see this, we need to be thinking about **removing** these prefixes. Data that belongs together should be together. This is done by **encapsulating the data inside of a `class`**. After all, if the data belongs together, object oriented languages give us the ability to express this fact using classes.\n",
    "\n",
    "### Refactoring By Extracting A Class\n",
    "It's fine to leave this code in now, but later we might want to refactor this code by doing what is referred to as extracting a class. This is a refactoring **technique** where we remove these prefixes and create a class called `Epoch`, that has these attributes, `count`, `loss`, `num_correct`, and `start_time`.\n",
    "```python\n",
    "class Epoch():\n",
    "    def __init__(self):\n",
    "        self.count = 0\n",
    "        self.loss = 0\n",
    "        self.num_correct = 0\n",
    "        self.start_time = None \n",
    "```\n",
    "Then, we'll replace these class variables with an **instance** of the `Epoch` class. We might even change the count variable to have a more intuitive name, like say `number` or `id`. The reason we can leave this for now is because refactoring is an iterative process, and this is our first iteration.\n",
    "\n",
    "### Extracting Classes Creates Layers Of Abstraction\n",
    "Actually, what we are doing now by building this class is **extracting a class** from our **main training loop** program. The code smell that we were addressing is the fact that our loop was becoming **cluttered** and beginning to appear overly **complex**.\n",
    "\n",
    "When we write a main program and then refactor it, we can think of this as creating layers of abstraction that make the main program more and more **readable** and easier to understand. Each part of the program should be very easy to understand in its own right.\n",
    "\n",
    "When we **extract** code into its own **class** or **method**, we are creating additional **layers of abstraction**, and if we want to understand the implementation details of any of the layers, we dive in so to speak.\n",
    "\n",
    "In an iterative way, we can think of **starting with one single program**, and then **later extracting the code** in a way that creates deeper and deeper layers. The process can be viewed as a *branching tree-like structure*."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Beginning A Training Loop Run\n",
    "Anyway, let's look at the first method of this class which extracts the code needed to begin a run."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [],
   "source": [
    "def begin_run(self, run, network, loader):\n",
    "\n",
    "    self.run_start_time = time.time()\n",
    "\n",
    "    self.run_params = run\n",
    "    self.run_count += 1\n",
    "\n",
    "    self.network = network\n",
    "    self.loader = loader\n",
    "    self.tb = SummaryWriter(comment=f'-{run}')\n",
    "\n",
    "    images, labels = next(iter(self.loader))\n",
    "    grid = torchvision.utils.make_grid(images)\n",
    "\n",
    "    self.tb.add_image('images', grid)\n",
    "    self.tb.add_graph(self.network, images)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "First, we capture the **start time** for the run. Then, we save the passed in run parameters and **increment** the run count by one. After this, we **save** our **network** and our **data loader**, and then, we initialize a `SummaryWriter` for TensorBoard. Notice how we are passing our **run** as the **comment** argument. This will allow us to **uniquely** identify our run inside TensorBoard.\n",
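    "\n",
    "As a quick aside, a `Run` namedtuple's default string form includes its field names and values, which is what makes the `comment` argument readable in TensorBoard. Here's a minimal sketch (the field names are just example values):\n",
    "```python\n",
    "from collections import namedtuple\n",
    "\n",
    "Run = namedtuple('Run', ['lr', 'batch_size'])\n",
    "run = Run(lr=0.01, batch_size=100)\n",
    "\n",
    "# This string gets appended to the TensorBoard log directory name.\n",
    "print(f'-{run}')  # -Run(lr=0.01, batch_size=100)\n",
    "```\n",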
    "\n",
    "Alright, next we just have some TensorBoard calls that we made in our training loop before. These calls add our network and a batch of images to TensorBoard.\n",
    "\n",
    "When we end a run, all we have to do is **close** the TensorBoard handle and set the **epoch count** back to **zero** to be ready for the next run."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [],
   "source": [
    "def end_run(self):\n",
    "    self.tb.close()\n",
    "    self.epoch_count = 0"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For starting an epoch, we first save the start time. Then, we increment the `epoch_count` by one and set the `epoch_loss` and `epoch_num_correct` to zero."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [],
   "source": [
    "def begin_epoch(self):\n",
    "    self.epoch_start_time = time.time()\n",
    "    \n",
    "    self.epoch_count += 1\n",
    "    self.epoch_loss = 0\n",
    "    self.epoch_num_correct = 0"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now, let's look at where the bulk of the action occurs which is **ending** an epoch."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [],
   "source": [
    "def end_epoch(self):\n",
    "    epoch_duration = time.time() - self.epoch_start_time\n",
    "    run_duration = time.time() - self.run_start_time\n",
    "    \n",
    "    loss = self.epoch_loss / len(self.loader.dataset)\n",
    "    accuracy = self.epoch_num_correct / len(self.loader.dataset)\n",
    "    \n",
    "    self.tb.add_scalar('Loss', loss, self.epoch_count)\n",
    "    self.tb.add_scalar('Accuracy',accuracy, self.epoch_count)\n",
    "    \n",
    "    for name,param in self.network.named_parameters():\n",
    "        self.tb.add_histogram(name, param, self.epoch_count)\n",
    "        self.tb.add_histogram(f'{name}.grad',param.grad, self.epoch_count)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We start by calculating the **epoch duration** and the **run duration**. Since we are at the **end** of an epoch, the epoch duration is final, but the **run duration** here represents the running time of the **current run** so far. This value will keep growing until the run ends. However, we'll still **save** it with each epoch.\n",
    "\n",
    "Next, we compute the `epoch_loss` and `accuracy`, and we do it relative to the size of the training set. This gives us the **average loss** per sample. Then, we pass both of these values to TensorBoard.\n",
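    "\n",
    "To make that arithmetic concrete, here is a tiny sketch with made-up numbers: `F.cross_entropy` returns the *mean* loss over a batch, so `track_loss()` scales each batch loss by the batch size before accumulating, and dividing by the dataset size then yields the average loss per sample:\n",
    "```python\n",
    "# Hypothetical mean losses for two batches of 100 samples each.\n",
    "batch_losses = [0.7, 0.5]\n",
    "batch_size = 100\n",
    "dataset_size = 200\n",
    "\n",
    "epoch_loss = sum(l * batch_size for l in batch_losses)  # 120.0\n",
    "average_loss = epoch_loss / dataset_size  # 0.6\n",
    "```\n",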
    "\n",
    "Next, we pass our network's **weights** and **gradient** values to TensorBoard like we did before.\n",
    "\n",
    "### Tracking Our Training Loop Performance\n",
    "We're now ready for what's new in this processing. This is the part that we are adding to give us **additional insight** when we perform **large numbers** of **runs**. We're going to **save** all of the data ourselves so we can analyze it outside of TensorBoard.\n",
    "```python\n",
    "def end_epoch(self):\n",
    "    ...\n",
    "\n",
    "    results = OrderedDict()\n",
    "    results[\"run\"] = self.run_count\n",
    "    results[\"epoch\"] = self.epoch_count\n",
    "    results['loss'] = loss\n",
    "    results[\"accuracy\"] = accuracy\n",
    "    results['epoch duration'] = epoch_duration\n",
    "    results['run duration'] = run_duration\n",
    "    for k,v in self.run_params._asdict().items(): results[k] = v\n",
    "    self.run_data.append(results)\n",
    "\n",
    "    df = pd.DataFrame.from_dict(self.run_data, orient='columns')\n",
    "\n",
    "    ...\n",
    "```\n",
    "Here, we are building a dictionary that contains the keys and values we care about for our run. We add in the `run_count`, the `epoch_count`, the `loss`, the `accuracy`, the `epoch_duration`, and the `run_duration`.\n",
    "\n",
    "Then, we iterate over the keys and values inside our **run parameters** adding them to the **results dictionary**. This will allow us to see the parameters that are associated with the performance results.\n",
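    "\n",
    "For reference, `_asdict()` converts a namedtuple to a dictionary keyed by field name, which is why the run parameters can be merged straight into the results. A minimal sketch with example field names:\n",
    "```python\n",
    "from collections import namedtuple\n",
    "\n",
    "Run = namedtuple('Run', ['lr', 'batch_size'])\n",
    "run = Run(lr=0.01, batch_size=100)\n",
    "\n",
    "results = {'loss': 0.56}\n",
    "for k, v in run._asdict().items():\n",
    "    results[k] = v\n",
    "# results is now {'loss': 0.56, 'lr': 0.01, 'batch_size': 100}\n",
    "```\n",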
    "\n",
    "Finally, we append the **results** to the `run_data` list.\n",
    "\n",
    "Once the data is added to the list, we turn the data list into a `pandas` **data frame** so we can have formatted output."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [],
   "source": [
    "def end_epoch(self):\n",
    "    \n",
    "    results = OrderedDict()\n",
    "    results[\"run\"] = self.run_count\n",
    "    results[\"epoch\"] = self.epoch_count\n",
    "    results['loss'] = loss\n",
    "    results[\"accuracy\"] = accuracy\n",
    "    results['epoch duration'] = epoch_duration\n",
    "    results['run duration'] = run_duration\n",
    "    for k,v in self.run_params._asdict().items():\n",
    "        results[k] = v\n",
    "    self.run_data.append(results)\n",
    "    \n",
    "    df = pd.DataFrame.from_dict(self.run_data, orient = 'columns')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The next two lines are specific to **Jupyter notebooks**. We clear the current output and display the new data frame.\n",
    "```python\n",
    "from IPython.display import display, clear_output\n",
    "\n",
    "clear_output(wait=True)\n",
    "display(df)\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Alright, that ends an epoch. One thing you may be wondering is how the `epoch_loss` and `epoch_num_correct` values were **tracked**. Well, we have two methods just below for that.\n",
    "\n",
    "We have a method called `track_loss()` and a method called `track_num_correct()`. These methods are called inside the training loop **after each batch**. The loss is passed into the `track_loss()` method and the predictions and labels are passed into the `track_num_correct()` method."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [],
   "source": [
    "def track_loss(self, loss, batch):\n",
    "    self.epoch_loss += loss.item() * batch[0].shape[0]\n",
    "    \n",
    "def track_num_correct(self, preds, labels):\n",
    "    self.epoch_num_correct += self.get_num_correct(preds, labels)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To calculate the number of correct predictions, we are using the same `get_num_correct()` function that we defined in previous episodes. The difference here is that the function is now **encapsulated inside our `RunManager` class**."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [],
   "source": [
    "def _get_num_correct(self, preds, labels):\n",
    "    return preds.argmax(dim = 1).eq(labels).sum().item()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Lastly, we have a method called `save()` that **saves the run_data** in **two formats**: **json** and **csv**. This output goes to disk and makes it available for other applications to consume. For example, we can open the csv file in Excel, or we can even build our own, even better, TensorBoard with the data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {},
   "outputs": [],
   "source": [
    "def save(self,fileName):\n",
    "    \n",
    "    pd.DataFrame.from_dict(\n",
    "        self.run_data, orient = 'columns'\n",
    "    ).to_csv(f'{fileName}.csv')\n",
    "    \n",
    "    with open(f'{fileName}.json','w',encoding = 'utf-8') as f:\n",
    "        json.dump(self.run_data, f, ensure_ascii = False,indent = 4)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Before:**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1.6.0\n",
      "0.7.0\n"
     ]
    }
   ],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "import torch.optim as optim\n",
    "\n",
    "import torchvision\n",
    "import torchvision.transforms as transforms\n",
    "\n",
    "torch.set_printoptions(linewidth=120) # Display options for output\n",
    "torch.set_grad_enabled(True) # Already on by default\n",
    "\n",
    "from torch.utils.tensorboard import SummaryWriter\n",
    "\n",
    "print(torch.__version__)\n",
    "print(torchvision.__version__)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "import torch.optim as optim\n",
    "\n",
    "import torchvision\n",
    "import torchvision.transforms as transforms\n",
    "\n",
    "torch.set_printoptions(linewidth=120)  # Display options for output\n",
    "torch.set_grad_enabled(True)  # Already on by default\n",
    "\n",
    "from torch.utils.tensorboard import SummaryWriter\n",
    "from itertools import product\n",
    "\n",
    "def get_num_correct(preds, labels):\n",
    "    return preds.argmax(dim=1).eq(labels).sum().item()\n",
    "\n",
    "\n",
    "class Network(nn.Module):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self.conv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5)\n",
    "        self.conv2 = nn.Conv2d(in_channels=6, out_channels=12, kernel_size=5)\n",
    "\n",
    "        self.fc1 = nn.Linear(in_features=12 * 4 * 4, out_features=120)\n",
    "        self.fc2 = nn.Linear(in_features=120, out_features=60)\n",
    "        self.out = nn.Linear(in_features=60, out_features=10)\n",
    "\n",
    "    def forward(self, t):\n",
    "        t = t\n",
    "\n",
    "        t = self.conv1(t)\n",
    "        t = F.relu(t)\n",
    "        t = F.max_pool2d(t,  kernel_size=2, stride=2)\n",
    "\n",
    "        t = self.conv2(t)\n",
    "        t = F.relu(t)\n",
    "        t = F.max_pool2d(t, kernel_size=2, stride=2)\n",
    "\n",
    "        t = t.reshape(-1,12*4*4)\n",
    "        t = self.fc1(t)\n",
    "        t = F.relu(t)\n",
    "\n",
    "        t = self.fc2(t)\n",
    "        t = F.relu(t)\n",
    "\n",
    "        t = self.out(t)\n",
    "\n",
    "        return t\n",
    "\n",
    "train_set = torchvision.datasets.FashionMNIST(\n",
    "    root='./data/FashionMNIST'\n",
    "    ,download=True\n",
    "    ,transform = transforms.Compose([transforms.ToTensor()])\n",
    ")\n",
    "\n",
    "parameters = dict(\n",
    "    lr = [.01,.001]\n",
    "    ,batch_size = [100, 1000]\n",
    "    ,shuffle = [True, False]\n",
    ")\n",
    "\n",
    "param_values = [v for v in parameters.values()]\n",
    "\n",
    "for lr,batch_size,shuffle in product(*param_values):\n",
    "    comment = f'batch_size={batch_size} lr={lr} shuffle={shuffle}'\n",
    "\n",
    "    network = Network()\n",
    "\n",
    "    train_loader = torch.utils.data.DataLoader(\n",
    "        train_set,batch_size = batch_size,shuffle = shuffle\n",
    "    )\n",
    "\n",
    "    optimizer = torch.optim.Adam(\n",
    "        network.parameters(), lr=lr\n",
    "    )\n",
    "\n",
    "    images, labels = next(iter(train_loader))\n",
    "    grid = torchvision.utils.make_grid(images)\n",
    "\n",
    "    tb = SummaryWriter(comment=comment)\n",
    "    tb.add_image('images',grid)\n",
    "    tb.add_graph(network, images)\n",
    "\n",
    "    for epoch in range(2):\n",
    "\n",
    "        total_loss = 0\n",
    "        total_correct = 0\n",
    "\n",
    "        for batch in train_loader:\n",
    "            images,labels = batch\n",
    "\n",
    "            preds = network(images)\n",
    "            loss = F.cross_entropy(preds,labels)\n",
    "\n",
    "            optimizer.zero_grad()\n",
    "            loss.backward()\n",
    "            optimizer.step() # update weights\n",
    "\n",
    "            total_loss += loss.item() * batch_size\n",
    "            total_correct += get_num_correct(preds,labels)\n",
    "\n",
    "        tb.add_scalar('Loss',total_loss,epoch)\n",
    "        tb.add_scalar('Number Correct', total_correct, epoch)\n",
    "        tb.add_scalar('Accuracy', total_correct / len(train_set), epoch)\n",
    "\n",
    "        for name,weight in network.named_parameters():\n",
    "            tb.add_histogram(name,weight,epoch)\n",
    "            tb.add_histogram(f'{name}.grad',weight.grad,epoch)\n",
    "\n",
    "        print(\"epoch\", epoch, \"total_correct:\", total_correct, \"loss:\", total_loss)\n",
    "\n",
    "    tb.close()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**After:**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "import time\n",
    "\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "import torch.optim as optim\n",
    "import torchvision\n",
    "import torchvision.transforms as transforms\n",
    "import pandas as pd\n",
    "\n",
    "\n",
    "from torch.utils.tensorboard import SummaryWriter\n",
    "from itertools import product\n",
    "from collections import namedtuple, OrderedDict\n",
    "\n",
    "torch.set_printoptions(linewidth=120)  # Display options for output\n",
    "torch.set_grad_enabled(True)  # Already on by default\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "class Network(nn.Module):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self.conv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5)\n",
    "        self.conv2 = nn.Conv2d(in_channels=6, out_channels=12, kernel_size=5)\n",
    "\n",
    "        self.fc1 = nn.Linear(in_features=12 * 4 * 4, out_features=120)\n",
    "        self.fc2 = nn.Linear(in_features=120, out_features=60)\n",
    "        self.out = nn.Linear(in_features=60, out_features=10)\n",
    "\n",
    "    def forward(self, t):\n",
    "        t = t\n",
    "\n",
    "        t = self.conv1(t)\n",
    "        t = F.relu(t)\n",
    "        t = F.max_pool2d(t,  kernel_size=2, stride=2)\n",
    "\n",
    "        t = self.conv2(t)\n",
    "        t = F.relu(t)\n",
    "        t = F.max_pool2d(t, kernel_size=2, stride=2)\n",
    "\n",
    "        t = t.reshape(-1,12*4*4)\n",
    "        t = self.fc1(t)\n",
    "        t = F.relu(t)\n",
    "\n",
    "        t = self.fc2(t)\n",
    "        t = F.relu(t)\n",
    "\n",
    "        t = self.out(t)\n",
    "\n",
    "        return t\n",
    "\n",
    "\n",
    "class RunBuilder():\n",
    "    @staticmethod\n",
    "    def get_runs(params):\n",
    "        Run = namedtuple('Run', params.keys())\n",
    "\n",
    "        runs = []\n",
    "        for v in product(*params.values()):\n",
    "            runs.append(Run(*v))\n",
    "\n",
    "        return runs\n",
    "\n",
    "class RunManager():\n",
    "    def __init__(self):\n",
    "        self.epoch_count = 0\n",
    "        self.epoch_loss = 0\n",
    "        self.epoch_num_correct = 0\n",
    "        self.epoch_start_time = None\n",
    "\n",
    "        self.run_params = None\n",
    "        self.run_count = 0\n",
    "        self.run_data = []\n",
    "        self.run_start_time = None\n",
    "\n",
    "        self.network = None\n",
    "        self.loader = None\n",
    "        self.tb = None\n",
    "\n",
    "    def begin_run(self,run, network, loader):\n",
    "\n",
    "        self.run_start_time = time.time()\n",
    "\n",
    "        self.run_params = run\n",
    "        self.run_count += 1\n",
    "\n",
    "        self.network = network\n",
    "        self.loader = loader\n",
    "        self.tb = SummaryWriter(comment=f'-{run}')\n",
    "\n",
    "        images, labels = next(iter(self.loader))\n",
    "        grid = torchvision.utils.make_grid(images)\n",
    "\n",
    "        self.tb.add_image('images',grid)\n",
    "        self.tb.add_graph(self.network, images)\n",
    "\n",
    "    def end_run(self):\n",
    "        self.tb.close()\n",
    "        self.epoch_count = 0\n",
    "\n",
    "    def begin_epoch(self):\n",
    "        self.epoch_start_time = time.time()\n",
    "\n",
    "        self.epoch_count += 1\n",
    "        self.epoch_loss = 0\n",
    "        self.epoch_num_correct = 0\n",
    "\n",
    "    def end_epoch(self):\n",
    "\n",
    "        epoch_duration = time.time() - self.epoch_start_time\n",
    "        run_duration = time.time() - self.run_start_time\n",
    "\n",
    "        loss = self.epoch_loss / len(self.loader.dataset)\n",
    "        accuracy = self.epoch_num_correct / len(self.loader.dataset)\n",
    "\n",
    "        self.tb.add_scalar('Loss',loss,self.epoch_count)\n",
    "        self.tb.add_scalar('Accuracy',accuracy,self.epoch_count)\n",
    "\n",
    "        for name,param in self.network.named_parameters():\n",
    "            self.tb.add_histogram(name, param, self.epoch_count)\n",
    "            self.tb.add_histogram(f'{name}.grad', param.grad, self.epoch_count)\n",
    "\n",
    "        results = OrderedDict()\n",
    "        results[\"run\"] = self.run_count\n",
    "        results[\"epoch\"] = self.epoch_count\n",
    "        results[\"loss\"] = loss\n",
    "        results[\"accuracy\"] = accuracy\n",
    "        results[\"epoch duration\"] = epoch_duration\n",
    "        results[\"run duration\"] = run_duration\n",
    "        for k,v in self.run_params._asdict().items():results[k] = v\n",
    "        self.run_data.append(results)\n",
    "\n",
    "        df = pd.DataFrame.from_dict(self.run_data,orient='columns')\n",
    "\n",
    "    def get_num_correct(self,preds, labels):\n",
    "        return preds.argmax(dim=1).eq(labels).sum().item()\n",
    "\n",
    "    def track_loss(self,loss,batch):\n",
    "        self.epoch_loss += loss.item() * batch[0].shape[0]\n",
    "\n",
    "    def track_num_correct(self,preds,labels):\n",
    "        self.epoch_num_correct += self.get_num_correct(preds,labels)\n",
    "\n",
    "    def save(self, fileName):\n",
    "\n",
    "        pd.DataFrame.from_dict(\n",
    "            self.run_data,orient = 'columns'\n",
    "        ).to_csv(f'{fileName}.csv')\n",
    "\n",
    "        with open(f'{fileName}.json','w',encoding='utf-8') as f:\n",
    "            json.dump(self.run_data,f, ensure_ascii=False, indent = 4)\n",
    "\n",
    "train_set = torchvision.datasets.FashionMNIST(\n",
    "    root='./data/FashionMNIST'\n",
    "    ,download=True\n",
    "    ,transform = transforms.Compose([transforms.ToTensor()])\n",
    ")\n",
    "\n",
    "\n",
    "params = OrderedDict(\n",
    "    lr = [.01]\n",
    "    ,batch_size = [100, 1000]\n",
    ")\n",
    "\n",
    "m = RunManager()\n",
    "\n",
    "for run in RunBuilder.get_runs(params):\n",
    "\n",
    "    network = Network()\n",
    "    loader = torch.utils.data.DataLoader(train_set, batch_size = run.batch_size)\n",
    "\n",
    "    optimizer = torch.optim.Adam(\n",
    "        network.parameters(), lr=run.lr\n",
    "    )\n",
    "    \n",
    "    m.begin_run(run,network,loader)\n",
    "\n",
    "    for epoch in range(2):\n",
    "\n",
    "        m.begin_epoch()\n",
    "        for batch in loader:\n",
    "\n",
    "            images,labels = batch\n",
    "            preds = network(images)\n",
    "            loss = F.cross_entropy(preds,labels)\n",
    "\n",
    "            optimizer.zero_grad()\n",
    "            loss.backward()\n",
    "            optimizer.step() # update weights\n",
    "\n",
    "            m.track_loss(loss,batch)\n",
    "            m.track_num_correct(preds, labels)\n",
    "        m.end_epoch()\n",
    "    m.end_run()\n",
    "m.save('results')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**results.csv**  \n",
    "<table>\n",
    "<tr><td></td><td>run</td><td>epoch</td><td>loss</td><td>accuracy</td><td>epoch duration</td><td>run duration</td><td>lr</td><td>batch_size</td></tr>\n",
    "<tr><td>0</td><td>1</td><td>1</td><td>0.5638383385042349</td><td>0.7888166666666667</td><td>16.63007378578186</td><td>16.860456705093384</td><td>0.01</td><td>100</td></tr>\n",
    "<tr><td>1</td><td>1</td><td>2</td><td>0.38631543077528474</td><td>0.85825</td><td>13.861908674240112</td><td>30.825090169906616</td><td>0.01</td><td>100</td></tr>\n",
    "<tr><td>2</td><td>2</td><td>1</td><td>0.9647192517916362</td><td>0.63365</td><td>13.846972942352295</td><td>14.716645240783691</td><td>0.01</td><td>1000</td></tr>\n",
    "<tr><td>3</td><td>2</td><td>2</td><td>0.5320606335997582</td><td>0.7906</td><td>13.615568399429321</td><td>28.440921545028687</td><td>0.01</td><td>1000</td></tr>\n",
    "</table>\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Quiz 02\n",
    "1. Groups of variables that share a prefix in their name are a code smell. Which refactoring technique can be used to improve code like this?  \n",
    "```python\n",
    "self.epoch_count = 0\n",
    "self.epoch_loss = 0\n",
    "self.epoch_num_correct = 0\n",
    "self.epoch_start_time = None\n",
    "```\n",
    "* Extract class<br><br>\n",
    "\n",
    "2. How many arguments does the `RunManager` class constructor accept?\n",
    "* 0<br><br>\n",
    "\n",
    "\n",
    "3. In Python, the underscore prefix for methods of classes is used to signal what?\n",
    "```python\n",
    "def _get_num_correct(self, preds, labels):\n",
    "    return preds.argmax(dim=1).eq(labels).sum().item()\n",
    "```\n",
    "* The method is meant to be used internally inside the class<br><br>\n",
    "\n",
    "4. What does it feel like to be wrong?\n",
    "* It feels the same way as it feels to be right."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 03 PyTorch DataLoader Num_workers - Deep Learning Speed Limit Increase\n",
    "\n",
    "## PyTorch DataLoader `num_workers` Test - Speed Things Up\n",
    "\n",
    "**In this episode, we will see how we can speed up the neural network training process by utilizing the multiple process capabilities of the PyTorch DataLoader class.**\n",
    "\n",
    "## Speeding Up The Training Process\n",
    "To speed up the training process, we will make use of the optional `num_workers` argument of the `DataLoader` class.\n",
    "\n",
    "The `num_workers` argument tells the data loader instance how many sub-processes to use for data loading. By default, `num_workers` is set to zero, and a value of zero tells the loader to load the data inside the main process.\n",
    "\n",
    "This means that the training process will work sequentially inside the **main process**. After a batch is used during the training process and another one is needed, we read the batch data from disk.\n",
    "\n",
    "Now, if we have a worker process, we can make use of the fact that our machine has **multiple cores**. This means that the next batch can already be loaded and ready to go by the time the main process is ready for another batch. This is where the speed up comes from. The batches are loaded using additional worker processes and are **queued up** in memory.\n",
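    "\n",
    "As a sketch of the idea (using a toy in-memory dataset in place of FashionMNIST), enabling workers is just a matter of passing `num_workers` to the `DataLoader` constructor:\n",
    "```python\n",
    "import torch\n",
    "from torch.utils.data import DataLoader, TensorDataset\n",
    "\n",
    "# A toy dataset standing in for FashionMNIST: 1000 fake 28x28 images.\n",
    "data = TensorDataset(\n",
    "    torch.randn(1000, 1, 28, 28), torch.randint(0, 10, (1000,))\n",
    ")\n",
    "\n",
    "# num_workers=2 loads batches in two background worker processes, so the\n",
    "# next batch can already be queued up when the main process needs it.\n",
    "loader = DataLoader(data, batch_size=100, num_workers=2)\n",
    "\n",
    "for images, labels in loader:\n",
    "    pass  # the training step would go here\n",
    "```\n",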
    "\n",
    "### Optimal Value For The `num_workers` Attribute\n",
    "The natural question that arises is, **how many** worker processes should we add? There are a lot of factors that can affect the optimal number here, so the best way to find out is to test.\n",
    "\n",
    "## Testing Values For The num_workers Attribute\n",
    "To set up this test, we'll create a list of `num_workers` values to try:\n",
    "* 0 (default)\n",
    "* 1\n",
    "* 2\n",
    "* 4\n",
    "* 8\n",
    "* 16\n",
    "\n",
    "For each of these values, we'll vary the batch size by trying the following values:\n",
    "* 100\n",
    "* 1000\n",
    "* 10000\n",
    "\n",
    "For the learning rate, we'll keep it at a constant value of `.01` for all of the runs.\n",
    "\n",
    "The last thing to mention about the setup here is the fact that we are only doing a single epoch for each of the runs.\n",
    "\n",
    "Alright, let's see what we get.\n",
    "## Different `num_workers` Values: Results\n",
    "Alright, we can see down below that we have the results. We completed a total of eighteen runs. We have three groups of differing batch sizes, and inside each of these groups, we varied the number of worker processes.\n",
    "<table class=\"table table-sm table-hover\">\n",
    "                                                    <thead>\n",
    "                                                        <tr style=\"text-align: right;\">\n",
    "                                                            <th>run</th>\n",
    "                                                            <th>epoch</th>\n",
    "                                                            <th>loss</th>\n",
    "                                                            <th>accuracy</th>\n",
    "                                                            <th>epoch duration</th>\n",
    "                                                            <th>run duration</th>\n",
    "                                                            <th>lr</th>\n",
    "                                                            <th>batch_size</th>\n",
    "                                                            <th>num_workers</th>\n",
    "                                                        </tr>\n",
    "                                                    </thead>\n",
    "                                                    <tbody>\n",
    "                                                        <tr>\n",
    "                                                            <td>1</td>\n",
    "                                                            <td>1</td>\n",
    "                                                            <td>0.566253</td>\n",
    "                                                            <td>0.782583</td>\n",
    "                                                            <td>23.281029</td>\n",
    "                                                            <td>23.374832</td>\n",
    "                                                            <td>0.01</td>\n",
    "                                                            <td>100</td>\n",
    "                                                            <td>0</td>\n",
    "                                                        </tr>\n",
    "                                                        <tr>\n",
    "                                                            <td>2</td>\n",
    "                                                            <td>1</td>\n",
    "                                                            <td>0.573350</td>\n",
    "                                                            <td>0.783917</td>\n",
    "                                                            <td>18.125359</td>\n",
    "                                                            <td>18.965940</td>\n",
    "                                                            <td>0.01</td>\n",
    "                                                            <td>100</td>\n",
    "                                                            <td>1</td>\n",
    "                                                        </tr>\n",
    "                                                        <tr>\n",
    "                                                            <td>3</td>\n",
    "                                                            <td>1</td>\n",
    "                                                            <td>0.574852</td>\n",
    "                                                            <td>0.782133</td>\n",
    "                                                            <td>18.161020</td>\n",
    "                                                            <td>19.037995</td>\n",
    "                                                            <td>0.01</td>\n",
    "                                                            <td>100</td>\n",
    "                                                            <td>2</td>\n",
    "                                                        </tr>\n",
    "                                                        <tr>\n",
    "                                                            <td>4</td>\n",
    "                                                            <td>1</td>\n",
    "                                                            <td>0.593246</td>\n",
    "                                                            <td>0.775067</td>\n",
    "                                                            <td>18.637056</td>\n",
    "                                                            <td>19.669869</td>\n",
    "                                                            <td>0.01</td>\n",
    "                                                            <td>100</td>\n",
    "                                                            <td>4</td>\n",
    "                                                        </tr>\n",
    "                                                        <tr>\n",
    "                                                            <td>5</td>\n",
    "                                                            <td>1</td>\n",
    "                                                            <td>0.587598</td>\n",
    "                                                            <td>0.777500</td>\n",
    "                                                            <td>18.631994</td>\n",
    "                                                            <td>20.123626</td>\n",
    "                                                            <td>0.01</td>\n",
    "                                                            <td>100</td>\n",
    "                                                            <td>8</td>\n",
    "                                                        </tr>\n",
    "                                                        <tr>\n",
    "                                                            <td>6</td>\n",
    "                                                            <td>1</td>\n",
    "                                                            <td>0.596401</td>\n",
    "                                                            <td>0.775983</td>\n",
    "                                                            <td>20.110439</td>\n",
    "                                                            <td>22.930428</td>\n",
    "                                                            <td>0.01</td>\n",
    "                                                            <td>100</td>\n",
    "                                                            <td>16</td>\n",
    "                                                        </tr>\n",
    "                                                        <tr>\n",
    "                                                            <td>7</td>\n",
    "                                                            <td>1</td>\n",
    "                                                            <td>1.105825</td>\n",
    "                                                            <td>0.577500</td>\n",
    "                                                            <td>21.254815</td>\n",
    "                                                            <td>21.941008</td>\n",
    "                                                            <td>0.01</td>\n",
    "                                                            <td>1000</td>\n",
    "                                                            <td>0</td>\n",
    "                                                        </tr>\n",
    "                                                        <tr>\n",
    "                                                            <td>8</td>\n",
    "                                                            <td>1</td>\n",
    "                                                            <td>1.013017</td>\n",
    "                                                            <td>0.612267</td>\n",
    "                                                            <td>15.961835</td>\n",
    "                                                            <td>17.457127</td>\n",
    "                                                            <td>0.01</td>\n",
    "                                                            <td>1000</td>\n",
    "                                                            <td>1</td>\n",
    "                                                        </tr>\n",
    "                                                        <tr>\n",
    "                                                            <td>9</td>\n",
    "                                                            <td>1</td>\n",
    "                                                            <td>0.881558</td>\n",
    "                                                            <td>0.666200</td>\n",
    "                                                            <td>16.060656</td>\n",
    "                                                            <td>17.614599</td>\n",
    "                                                            <td>0.01</td>\n",
    "                                                            <td>1000</td>\n",
    "                                                            <td>2</td>\n",
    "                                                        </tr>\n",
    "                                                        <tr>\n",
    "                                                            <td>10</td>\n",
    "                                                            <td>1</td>\n",
    "                                                            <td>1.034153</td>\n",
    "                                                            <td>0.606767</td>\n",
    "                                                            <td>16.206196</td>\n",
    "                                                            <td>17.883490</td>\n",
    "                                                            <td>0.01</td>\n",
    "                                                            <td>1000</td>\n",
    "                                                            <td>4</td>\n",
    "                                                        </tr>\n",
    "                                                        <tr>\n",
    "                                                            <td>11</td>\n",
    "                                                            <td>1</td>\n",
    "                                                            <td>0.963817</td>\n",
    "                                                            <td>0.626400</td>\n",
    "                                                            <td>16.700765</td>\n",
    "                                                            <td>18.882340</td>\n",
    "                                                            <td>0.01</td>\n",
    "                                                            <td>1000</td>\n",
    "                                                            <td>8</td>\n",
    "                                                        </tr>\n",
    "                                                        <tr>\n",
    "                                                            <td>12</td>\n",
    "                                                            <td>1</td>\n",
    "                                                            <td>1.046822</td>\n",
    "                                                            <td>0.601683</td>\n",
    "                                                            <td>17.912993</td>\n",
    "                                                            <td>21.747298</td>\n",
    "                                                            <td>0.01</td>\n",
    "                                                            <td>1000</td>\n",
    "                                                            <td>16</td>\n",
    "                                                        </tr>\n",
    "                                                        <tr>\n",
    "                                                            <td>13</td>\n",
    "                                                            <td>1</td>\n",
    "                                                            <td>2.173913</td>\n",
    "                                                            <td>0.265983</td>\n",
    "                                                            <td>22.219368</td>\n",
    "                                                            <td>27.145123</td>\n",
    "                                                            <td>0.01</td>\n",
    "                                                            <td>10000</td>\n",
    "                                                            <td>0</td>\n",
    "                                                        </tr>\n",
    "                                                        <tr>\n",
    "                                                            <td>14</td>\n",
    "                                                            <td>1</td>\n",
    "                                                            <td>2.156031</td>\n",
    "                                                            <td>0.191167</td>\n",
    "                                                            <td>16.563987</td>\n",
    "                                                            <td>23.368729</td>\n",
    "                                                            <td>0.01</td>\n",
    "                                                            <td>10000</td>\n",
    "                                                            <td>1</td>\n",
    "                                                        </tr>\n",
    "                                                        <tr>\n",
    "                                                            <td>15</td>\n",
    "                                                            <td>1</td>\n",
    "                                                            <td>2.182048</td>\n",
    "                                                            <td>0.210250</td>\n",
    "                                                            <td>16.128202</td>\n",
    "                                                            <td>23.030015</td>\n",
    "                                                            <td>0.01</td>\n",
    "                                                            <td>10000</td>\n",
    "                                                            <td>2</td>\n",
    "                                                        </tr>\n",
    "                                                        <tr>\n",
    "                                                            <td>16</td>\n",
    "                                                            <td>1</td>\n",
    "                                                            <td>2.245768</td>\n",
    "                                                            <td>0.200683</td>\n",
    "                                                            <td>16.248334</td>\n",
    "                                                            <td>22.108252</td>\n",
    "                                                            <td>0.01</td>\n",
    "                                                            <td>10000</td>\n",
    "                                                            <td>4</td>\n",
    "                                                        </tr>\n",
    "                                                        <tr>\n",
    "                                                            <td>17</td>\n",
    "                                                            <td>1</td>\n",
    "                                                            <td>2.177970</td>\n",
    "                                                            <td>0.206483</td>\n",
    "                                                            <td>16.921782</td>\n",
    "                                                            <td>23.897321</td>\n",
    "                                                            <td>0.01</td>\n",
    "                                                            <td>10000</td>\n",
    "                                                            <td>8</td>\n",
    "                                                        </tr>\n",
    "                                                        <tr>\n",
    "                                                            <td>18</td>\n",
    "                                                            <td>1</td>\n",
    "                                                            <td>2.153342</td>\n",
    "                                                            <td>0.208017</td>\n",
    "                                                            <td>18.555999</td>\n",
    "                                                            <td>26.654219</td>\n",
    "                                                            <td>0.01</td>\n",
    "                                                            <td>10000</td>\n",
    "                                                            <td>16</td>\n",
    "                                                        </tr>\n",
    "                                                    </tbody>\n",
    "</table>\n",
    "                                                \n",
    "The main takeaway from these results is that, across all three batch sizes, having a single worker process in addition to the main process resulted in a speed-up of about twenty percent.\n",
    "\n",
    "<center><b>20% Faster!</b></center>\n",
    "\n",
    "Adding more worker processes beyond the first didn't show any further improvement.\n",
    "\n",
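    "For context, the runs above differ only in the `num_workers` argument passed to the `DataLoader`. A minimal sketch of the construction (the tiny `TensorDataset` here is just a stand-in for the dataset used earlier in this series):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "from torch.utils.data import DataLoader, TensorDataset\n",
    "\n",
    "# Stand-in dataset: 200 samples with a single feature each\n",
    "train_set = TensorDataset(torch.arange(200).float().view(-1, 1))\n",
    "\n",
    "# num_workers=0 (the default) loads batches inside the main process;\n",
    "# num_workers=1 adds one worker process that pre-loads the next batch\n",
    "loader = DataLoader(train_set, batch_size=100, num_workers=1)\n",
    "print(len(loader))  # 2 batches of 100\n",
    "```\n",
    "\n",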
    "## Interpreting The Results\n",
    "The twenty percent speed up that we see after adding a **single worker process** makes sense because the main process had less work to do.\n",
    "\n",
    "While the **main process** is busy performing the forward and backward passes, the worker process is loading the next batch. By the time the main process is ready for another batch, the worker process already has it queued up in memory.\n",
    "\n",
    "As a result, the main process doesn't have to read the data from disk. Instead, the data is already in memory, and this gives us the twenty percent speed up.\n",
    "\n",
    "Now, why are we not seeing additional speed ups after adding more workers?\n",
    "\n",
    "## Make It Go Faster With More Workers?\n",
    "Well, if one worker is enough to keep the queue full of data for the main process, then adding more batches of data to the queue **isn't** going to do anything. This is what I think we are seeing here.\n",
    "\n",
    "Just because we are adding more batches to the queue doesn't mean the batches are being processed faster. Thus, we are bounded by the time it takes to forward and backward propagate a given batch.\n",
    "\n",
    "We can even see that things start **bogging down** as we get to **16 workers**.\n",
    "\n",
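    "The bounded-queue intuition above can be sketched in plain Python (a toy simulation with made-up timings, not PyTorch's actual worker implementation):\n",
    "\n",
    "```python\n",
    "import queue\n",
    "import threading\n",
    "import time\n",
    "\n",
    "def simulate(num_workers, batches=6, load_time=0.02, train_time=0.05):\n",
    "    # Workers load batches into a bounded queue while the main\n",
    "    # thread trains. If a single worker loads faster than the main\n",
    "    # thread consumes, extra workers cannot speed anything up.\n",
    "    q = queue.Queue(maxsize=2)\n",
    "    remaining = list(range(batches))\n",
    "    lock = threading.Lock()\n",
    "\n",
    "    def worker():\n",
    "        while True:\n",
    "            with lock:\n",
    "                if not remaining:\n",
    "                    return\n",
    "                batch = remaining.pop()\n",
    "            time.sleep(load_time)   # simulate reading a batch from disk\n",
    "            q.put(batch)\n",
    "\n",
    "    threads = [threading.Thread(target=worker) for _ in range(num_workers)]\n",
    "    start = time.perf_counter()\n",
    "    for t in threads:\n",
    "        t.start()\n",
    "    for _ in range(batches):\n",
    "        q.get()                 # next batch is already queued up\n",
    "        time.sleep(train_time)  # simulate the forward and backward pass\n",
    "    for t in threads:\n",
    "        t.join()\n",
    "    return time.perf_counter() - start\n",
    "\n",
    "print(simulate(num_workers=1))  # about load_time + batches * train_time\n",
    "print(simulate(num_workers=4))  # about the same: training is the bottleneck\n",
    "```\n",
    "\n",
    "With these made-up timings, both calls finish in roughly the same wall time, mirroring the plateau we see in the table above.\n",
    "\n",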
    "Hope this helps speed you up!"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Quiz 03\n",
    "1. By default, the `num_workers` value is set to _______________.\n",
    "* 0<br><br>\n",
    "\n",
    "2. A value of _______________ tells the data loader to load the data inside the main process.\n",
    "* 0<br><br>\n",
    "\n",
    "3. If we have a worker process, we can make use of the fact that our machine has multiple _______________. This means that the next batch can be ready to go by the time the main process is ready for another batch.\n",
    "* cores"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
