{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Visualizing Solvers with TensorBoard"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This tutorial assumes knowledge from the tutorial on [Visualization with TensorBoard](minpy_visualization.ipynb). It is based on MinPy's [CNN Tutorial](../cnn_tutorial/cnn_tutorial.rst).\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Equip the CNN Tutorial with Visualization Functions"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Set up as in the original tutorial."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
     "\"\"\"Convolutional Neural Network example using only MXNet symbols.\"\"\"\n",
    "import sys\n",
    "\n",
    "from minpy.nn.io import NDArrayIter\n",
    "# Can also use MXNet IO here\n",
    "# from mxnet.io import NDArrayIter\n",
    "from minpy.core import Function\n",
    "from minpy.nn import layers\n",
    "from minpy.nn.model import ModelBase\n",
    "from minpy.nn.solver import Solver\n",
    "from examples.utils.data_utils import get_CIFAR10_data\n",
    "\n",
    "# Please uncomment following if you have GPU-enabled MXNet installed.\n",
    "#from minpy.context import set_context, gpu\n",
    "#set_context(gpu(0)) # set the global context as gpu(0)\n",
    "\n",
    "import mxnet as mx\n",
    "\n",
    "batch_size=128\n",
    "input_size=(3, 32, 32)\n",
    "flattened_input_size=3 * 32 * 32\n",
    "hidden_size=512\n",
    "num_classes=10"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Design a template for the CNN."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "class ConvolutionNet(ModelBase):\n",
    "    def __init__(self):\n",
    "        super(ConvolutionNet, self).__init__()\n",
     "        # Define symbols that use convolution and max pooling to extract better features\n",
     "        # from the input image.\n",
    "        net = mx.sym.Variable(name='X')\n",
    "        net = mx.sym.Convolution(\n",
    "                data=net, name='conv', kernel=(7, 7), num_filter=32)\n",
    "        net = mx.sym.Activation(\n",
    "                data=net, act_type='relu')\n",
    "        net = mx.sym.Pooling(\n",
    "                data=net, name='pool', pool_type='max', kernel=(2, 2),\n",
    "                stride=(2, 2))\n",
    "        net = mx.sym.Flatten(data=net)\n",
    "        net = mx.sym.FullyConnected(\n",
    "                data=net, name='fc1', num_hidden=hidden_size)\n",
    "        net = mx.sym.Activation(\n",
    "                data=net, act_type='relu')\n",
    "        net = mx.sym.FullyConnected(\n",
    "                data=net, name='fc2', num_hidden=num_classes)\n",
    "        net = mx.sym.SoftmaxOutput(data=net, name='softmax', normalization='batch')\n",
    "        # Create forward function and add parameters to this model.\n",
    "        input_shapes = {'X': (batch_size,) + input_size, 'softmax_label': (batch_size,)}\n",
    "        self.cnn = Function(net, input_shapes=input_shapes, name='cnn')\n",
    "        self.add_params(self.cnn.get_params())\n",
    "\n",
    "    def forward_batch(self, batch, mode):\n",
    "        out = self.cnn(X=batch.data[0],\n",
    "                       softmax_label=batch.label[0],\n",
    "                       **self.params)\n",
    "        return out\n",
    "\n",
    "    def loss(self, predict, y):\n",
    "        return layers.softmax_cross_entropy(predict, y)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Set `get_CIFAR10_data`'s argument to the location of the CIFAR-10 dataset. The original tutorial used `argparse` to read the directory from the command line; for convenience in a Jupyter notebook, that is omitted here.\n",
     "\n",
     "Declare the directory for storing log files, which will be used for visualization later.\n",
     "\n",
     "`visualize` is an optional argument of `Solver` and defaults to `False`. Set `visualize` to `True` and pass the `summaries_dir` argument as well. We will cover the details of implementing the visualization functions in `Solver` later."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def main():\n",
    "    # Create model.\n",
    "    model = ConvolutionNet()\n",
    "    # Create data iterators for training and testing sets.\n",
    "    data = get_CIFAR10_data('cifar-10-batches-py')\n",
    "    train_dataiter = NDArrayIter(data=data['X_train'],\n",
    "                                 label=data['y_train'],\n",
    "                                 batch_size=batch_size,\n",
    "                                 shuffle=True)\n",
    "    test_dataiter = NDArrayIter(data=data['X_test'],\n",
    "                                label=data['y_test'],\n",
    "                                batch_size=batch_size,\n",
    "                                shuffle=False)\n",
    "\n",
     "    # Declare the directory for storing log files, which will be used for visualization with TensorBoard later.\n",
    "    summaries_dir = '/private/tmp/cnn_log'\n",
    "\n",
    "    # Create solver.\n",
    "    solver = Solver(model,\n",
    "                    train_dataiter,\n",
    "                    test_dataiter,\n",
    "                    num_epochs=10,\n",
    "                    init_rule='gaussian',\n",
    "                    init_config={\n",
    "                        'stdvar': 0.001\n",
    "                    },\n",
    "                    update_rule='sgd_momentum',\n",
    "                    optim_config={\n",
    "                        'learning_rate': 1e-3,\n",
    "                        'momentum': 0.9\n",
    "                    },\n",
    "                    verbose=True,\n",
    "                    print_every=20,\n",
    "                    visualize=True,\n",
    "                    summaries_dir=summaries_dir)\n",
    "    # Initialize model parameters.\n",
    "    solver.init()\n",
    "    # Train!\n",
    "    solver.train()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(Iteration 1 / 3828) loss: 2.302535\n",
      "(Iteration 21 / 3828) loss: 2.302051\n",
      "(Iteration 41 / 3828) loss: 2.291640\n",
      "(Iteration 61 / 3828) loss: 2.133044\n",
      "(Iteration 81 / 3828) loss: 2.033680\n",
      "(Iteration 101 / 3828) loss: 1.995795\n",
      "(Iteration 121 / 3828) loss: 1.796180\n",
      "(Iteration 141 / 3828) loss: 1.884282\n",
      "(Iteration 161 / 3828) loss: 1.702727\n",
      "(Iteration 181 / 3828) loss: 1.745341\n",
      "(Iteration 201 / 3828) loss: 1.550407\n",
      "(Iteration 221 / 3828) loss: 1.405793\n",
      "(Iteration 241 / 3828) loss: 1.529175\n",
      "(Iteration 261 / 3828) loss: 1.440347\n",
      "(Iteration 281 / 3828) loss: 1.859766\n",
      "(Iteration 301 / 3828) loss: 1.416149\n",
      "(Iteration 321 / 3828) loss: 1.481019\n",
      "(Iteration 341 / 3828) loss: 1.501948\n",
      "(Iteration 361 / 3828) loss: 1.508027\n",
      "(Iteration 381 / 3828) loss: 1.516997\n",
      "(Epoch 1 / 10) train acc: 0.501953125, val_acc: 0.4931640625, time: 1253.37731194.\n",
      "(Iteration 401 / 3828) loss: 1.296929\n",
      "(Iteration 421 / 3828) loss: 1.496588\n",
      "(Iteration 441 / 3828) loss: 1.330925\n",
      "(Iteration 461 / 3828) loss: 1.450040\n",
      "(Iteration 481 / 3828) loss: 1.393043\n",
      "(Iteration 501 / 3828) loss: 1.239604\n",
      "(Iteration 521 / 3828) loss: 1.210205\n",
      "(Iteration 541 / 3828) loss: 1.295574\n",
      "(Iteration 561 / 3828) loss: 1.372109\n",
      "(Iteration 581 / 3828) loss: 1.231615\n",
      "(Iteration 601 / 3828) loss: 1.243544\n",
      "(Iteration 621 / 3828) loss: 1.313342\n",
      "(Iteration 641 / 3828) loss: 1.510346\n",
      "(Iteration 661 / 3828) loss: 1.155001\n",
      "(Iteration 681 / 3828) loss: 1.241223\n",
      "(Iteration 701 / 3828) loss: 1.305725\n",
      "(Iteration 721 / 3828) loss: 1.218895\n",
      "(Iteration 741 / 3828) loss: 1.208463\n",
      "(Iteration 761 / 3828) loss: 1.319934\n",
      "(Epoch 2 / 10) train acc: 0.5751953125, val_acc: 0.5517578125, time: 1238.14002705.\n",
      "(Iteration 781 / 3828) loss: 1.204560\n",
      "(Iteration 801 / 3828) loss: 1.388396\n",
      "(Iteration 821 / 3828) loss: 1.208335\n",
      "(Iteration 841 / 3828) loss: 1.197055\n",
      "(Iteration 861 / 3828) loss: 1.225983\n",
      "(Iteration 881 / 3828) loss: 1.007661\n",
      "(Iteration 901 / 3828) loss: 1.083537\n",
      "(Iteration 921 / 3828) loss: 1.170273\n",
      "(Iteration 941 / 3828) loss: 1.079046\n",
      "(Iteration 961 / 3828) loss: 1.060466\n",
      "(Iteration 981 / 3828) loss: 1.186217\n",
      "(Iteration 1001 / 3828) loss: 1.176932\n",
      "(Iteration 1021 / 3828) loss: 1.049240\n",
      "(Iteration 1041 / 3828) loss: 1.084303\n",
      "(Iteration 1061 / 3828) loss: 1.137581\n",
      "(Iteration 1081 / 3828) loss: 1.201812\n",
      "(Iteration 1101 / 3828) loss: 0.991179\n",
      "(Iteration 1121 / 3828) loss: 1.053682\n",
      "(Iteration 1141 / 3828) loss: 1.033876\n",
      "(Epoch 3 / 10) train acc: 0.5771484375, val_acc: 0.5859375, time: 1111.29330206.\n",
      "(Iteration 1161 / 3828) loss: 0.945752\n",
      "(Iteration 1181 / 3828) loss: 0.900214\n",
      "(Iteration 1201 / 3828) loss: 0.996316\n",
      "(Iteration 1221 / 3828) loss: 0.725004\n",
      "(Iteration 1241 / 3828) loss: 1.053474\n",
      "(Iteration 1261 / 3828) loss: 0.956877\n",
      "(Iteration 1281 / 3828) loss: 1.118823\n",
      "(Iteration 1301 / 3828) loss: 1.032918\n",
      "(Iteration 1321 / 3828) loss: 1.078873\n",
      "(Iteration 1341 / 3828) loss: 0.964023\n",
      "(Iteration 1361 / 3828) loss: 1.081211\n",
      "(Iteration 1381 / 3828) loss: 0.975109\n",
      "(Iteration 1401 / 3828) loss: 0.887941\n",
      "(Iteration 1421 / 3828) loss: 0.812622\n",
      "(Iteration 1441 / 3828) loss: 0.781776\n",
      "(Iteration 1461 / 3828) loss: 0.839401\n",
      "(Iteration 1481 / 3828) loss: 1.083514\n",
      "(Iteration 1501 / 3828) loss: 0.916411\n",
      "(Iteration 1521 / 3828) loss: 0.820561\n",
      "(Epoch 4 / 10) train acc: 0.658203125, val_acc: 0.599609375, time: 1107.30718303.\n",
      "(Iteration 1541 / 3828) loss: 0.956412\n",
      "(Iteration 1561 / 3828) loss: 0.835572\n",
      "(Iteration 1581 / 3828) loss: 0.791931\n",
      "(Iteration 1601 / 3828) loss: 0.892034\n",
      "(Iteration 1621 / 3828) loss: 0.846968\n",
      "(Iteration 1641 / 3828) loss: 0.790181\n",
      "(Iteration 1661 / 3828) loss: 1.008565\n",
      "(Iteration 1681 / 3828) loss: 0.971547\n",
      "(Iteration 1701 / 3828) loss: 0.904101\n",
      "(Iteration 1721 / 3828) loss: 0.764249\n",
      "(Iteration 1741 / 3828) loss: 0.839634\n",
      "(Iteration 1761 / 3828) loss: 0.667381\n",
      "(Iteration 1781 / 3828) loss: 0.892126\n",
      "(Iteration 1801 / 3828) loss: 0.790432\n",
      "(Iteration 1821 / 3828) loss: 0.915785\n",
      "(Iteration 1841 / 3828) loss: 0.701808\n",
      "(Iteration 1861 / 3828) loss: 0.713519\n",
      "(Iteration 1881 / 3828) loss: 0.939402\n",
      "(Iteration 1901 / 3828) loss: 0.728612\n",
      "(Epoch 5 / 10) train acc: 0.6630859375, val_acc: 0.5966796875, time: 1127.98228502.\n",
      "(Iteration 1921 / 3828) loss: 0.898663\n",
      "(Iteration 1941 / 3828) loss: 1.081481\n",
      "(Iteration 1961 / 3828) loss: 0.956133\n",
      "(Iteration 1981 / 3828) loss: 0.664632\n",
      "(Iteration 2001 / 3828) loss: 0.986162\n",
      "(Iteration 2021 / 3828) loss: 0.921607\n",
      "(Iteration 2041 / 3828) loss: 0.855872\n",
      "(Iteration 2061 / 3828) loss: 0.785384\n",
      "(Iteration 2081 / 3828) loss: 0.985731\n",
      "(Iteration 2101 / 3828) loss: 0.693248\n",
      "(Iteration 2121 / 3828) loss: 1.032196\n",
      "(Iteration 2141 / 3828) loss: 0.918029\n",
      "(Iteration 2161 / 3828) loss: 0.809714\n",
      "(Iteration 2181 / 3828) loss: 0.876201\n",
      "(Iteration 2201 / 3828) loss: 0.714913\n",
      "(Iteration 2221 / 3828) loss: 0.964526\n",
      "(Iteration 2241 / 3828) loss: 0.795892\n",
      "(Iteration 2261 / 3828) loss: 0.756644\n",
      "(Iteration 2281 / 3828) loss: 0.571955\n",
      "(Epoch 6 / 10) train acc: 0.720703125, val_acc: 0.6044921875, time: 1100.48066902.\n",
      "(Iteration 2301 / 3828) loss: 0.584125\n",
      "(Iteration 2321 / 3828) loss: 0.818221\n",
      "(Iteration 2341 / 3828) loss: 0.647816\n",
      "(Iteration 2361 / 3828) loss: 0.807244\n",
      "(Iteration 2381 / 3828) loss: 0.663801\n",
      "(Iteration 2401 / 3828) loss: 0.710950\n",
      "(Iteration 2421 / 3828) loss: 0.869763\n",
      "(Iteration 2441 / 3828) loss: 0.659388\n",
      "(Iteration 2461 / 3828) loss: 0.884262\n",
      "(Iteration 2481 / 3828) loss: 0.892994\n",
      "(Iteration 2501 / 3828) loss: 0.696201\n",
      "(Iteration 2521 / 3828) loss: 0.792361\n",
      "(Iteration 2541 / 3828) loss: 0.583030\n",
      "(Iteration 2561 / 3828) loss: 0.987736\n",
      "(Iteration 2581 / 3828) loss: 0.812939\n",
      "(Iteration 2601 / 3828) loss: 0.686343\n",
      "(Iteration 2621 / 3828) loss: 0.696793\n",
      "(Iteration 2641 / 3828) loss: 0.730227\n",
      "(Iteration 2661 / 3828) loss: 0.717481\n",
      "(Iteration 2681 / 3828) loss: 0.717061\n",
      "(Epoch 7 / 10) train acc: 0.6875, val_acc: 0.5849609375, time: 1019.18220496.\n",
      "(Iteration 2701 / 3828) loss: 0.960259\n",
      "(Iteration 2721 / 3828) loss: 0.851661\n",
      "(Iteration 2741 / 3828) loss: 0.547349\n",
      "(Iteration 2761 / 3828) loss: 0.629300\n",
      "(Iteration 2781 / 3828) loss: 0.794492\n",
      "(Iteration 2801 / 3828) loss: 0.674677\n",
      "(Iteration 2821 / 3828) loss: 0.547635\n",
      "(Iteration 2841 / 3828) loss: 0.633213\n",
      "(Iteration 2861 / 3828) loss: 0.817622\n",
      "(Iteration 2881 / 3828) loss: 0.759713\n",
      "(Iteration 2901 / 3828) loss: 0.746527\n",
      "(Iteration 2921 / 3828) loss: 0.809928\n",
      "(Iteration 2941 / 3828) loss: 0.804247\n",
      "(Iteration 2961 / 3828) loss: 0.593531\n",
      "(Iteration 2981 / 3828) loss: 0.884193\n",
      "(Iteration 3001 / 3828) loss: 0.645554\n",
      "(Iteration 3021 / 3828) loss: 0.568051\n",
      "(Iteration 3041 / 3828) loss: 0.523802\n",
      "(Iteration 3061 / 3828) loss: 0.691015\n",
      "(Epoch 8 / 10) train acc: 0.6953125, val_acc: 0.5947265625, time: 962.428817034.\n",
      "(Iteration 3081 / 3828) loss: 0.646333\n",
      "(Iteration 3101 / 3828) loss: 0.893681\n",
      "(Iteration 3121 / 3828) loss: 0.822102\n",
      "(Iteration 3141 / 3828) loss: 0.619557\n",
      "(Iteration 3161 / 3828) loss: 0.787171\n",
      "(Iteration 3181 / 3828) loss: 0.725924\n",
      "(Iteration 3201 / 3828) loss: 0.559321\n",
      "(Iteration 3221 / 3828) loss: 0.654796\n",
      "(Iteration 3241 / 3828) loss: 0.646047\n",
      "(Iteration 3261 / 3828) loss: 0.789430\n",
      "(Iteration 3281 / 3828) loss: 0.639559\n",
      "(Iteration 3301 / 3828) loss: 0.798087\n",
      "(Iteration 3321 / 3828) loss: 0.669927\n",
      "(Iteration 3341 / 3828) loss: 0.706900\n",
      "(Iteration 3361 / 3828) loss: 0.560583\n",
      "(Iteration 3381 / 3828) loss: 0.630658\n",
      "(Iteration 3401 / 3828) loss: 0.804180\n",
      "(Iteration 3421 / 3828) loss: 0.727579\n",
      "(Iteration 3441 / 3828) loss: 0.547852\n",
      "(Epoch 9 / 10) train acc: 0.6982421875, val_acc: 0.5888671875, time: 958.027697086.\n",
      "(Iteration 3461 / 3828) loss: 0.599252\n",
      "(Iteration 3481 / 3828) loss: 0.485362\n",
      "(Iteration 3501 / 3828) loss: 0.741121\n",
      "(Iteration 3521 / 3828) loss: 0.636478\n",
      "(Iteration 3541 / 3828) loss: 0.711437\n",
      "(Iteration 3561 / 3828) loss: 0.655215\n",
      "(Iteration 3581 / 3828) loss: 0.651631\n",
      "(Iteration 3601 / 3828) loss: 0.762882\n",
      "(Iteration 3621 / 3828) loss: 0.817763\n",
      "(Iteration 3641 / 3828) loss: 0.768698\n",
      "(Iteration 3661 / 3828) loss: 0.742337\n",
      "(Iteration 3681 / 3828) loss: 0.569759\n",
      "(Iteration 3701 / 3828) loss: 0.610525\n",
      "(Iteration 3721 / 3828) loss: 0.623297\n",
      "(Iteration 3741 / 3828) loss: 0.733673\n",
      "(Iteration 3761 / 3828) loss: 0.573780\n",
      "(Iteration 3781 / 3828) loss: 0.606257\n",
      "(Iteration 3801 / 3828) loss: 0.800820\n",
      "(Iteration 3821 / 3828) loss: 0.639535\n",
      "(Epoch 10 / 10) train acc: 0.7216796875, val_acc: 0.5810546875, time: 703.184649944.\n"
     ]
    }
   ],
   "source": [
    "if __name__ == '__main__':\n",
    "    main()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Open a terminal and run the following command:\n",
     "\n",
     "~~~bash\n",
     "tensorboard --logdir=/tmp/cnn_log\n",
     "~~~\n",
     "\n",
     "Note that the `/private` prefix is not needed, so the `summaries_dir` declared above (`/private/tmp/cnn_log`) is passed to TensorBoard as `/tmp/cnn_log`.\n",
     "\n",
     "Once TensorBoard starts, you should see the scalar visualizations in the EVENTS section, as below. The training accuracy, validation accuracy, training loss, and the squared L2-norm of the gradient are implemented by default in the Solver.\n",
     "\n",
     "Note: if you use more than one `SummaryWriter` (two in this case), the data of some writers might not be flushed to the log files immediately, but all of it will be written by the end of training.\n",
    "\n",
    "![CNN Loss Curve](cnn_loss.png)\n",
    "\n",
    "![Curve of Squared L2-Norm](cnn_gradient_norm.png)\n",
    "\n",
    "![Training accuracy](cnn_accuracy.png)\n",
    "\n",
    "## Implementation Details of the Solver\n",
    "\n",
     "Now we look at the details of how visualization is implemented in the Solver. The snippets below are excerpts, not a complete implementation of the `Solver` class.\n",
     "\n",
     "### Step 1: Generate SummaryWriters\n",
     "\n",
     "If `self.visualize` is `True`, two `SummaryWriter`s are created by default: one for training and one for testing."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "class Solver(object):\n",
    "    ...\n",
    "    def __init__(self, model, train_dataiter, test_dataiter, **kwargs):\n",
    "        ...\n",
    "        self.visualize = kwargs.pop('visualize', False)\n",
    "        \n",
    "        if self.visualize:\n",
    "            # Retrieve the summary directory. Create summary writers for training and test.\n",
    "            self.summaries_dir = kwargs.pop('summaries_dir', '/private/tmp/newlog')\n",
    "            self.train_writer = SummaryWriter(self.summaries_dir + '/train')\n",
    "            self.test_writer = SummaryWriter(self.summaries_dir + '/test')"
   ]
  },
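  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an aside, `kwargs.pop(key, default)` is the standard pattern for consuming optional keyword arguments such as `visualize` and `summaries_dir`. A minimal, self-contained sketch of the idea (the class name and error message here are illustrative, not MinPy's actual API):\n",
    "\n",
    "```python\n",
    "class OptionHolder(object):\n",
    "    def __init__(self, **kwargs):\n",
    "        # pop() removes the key if present, otherwise returns the default.\n",
    "        self.visualize = kwargs.pop('visualize', False)\n",
    "        self.summaries_dir = kwargs.pop('summaries_dir', '/tmp/newlog')\n",
    "        # Anything left over is an unrecognized option.\n",
    "        if kwargs:\n",
    "            raise ValueError('unrecognized arguments: %s' % list(kwargs))\n",
    "\n",
    "holder = OptionHolder(visualize=True, summaries_dir='/tmp/cnn_log')\n",
    "```"
   ]
  },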
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Step 2: Set a Scalar Summary for the Squared L2-Norm of the Gradient"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
     "def _step(self, batch, iteration):\n",
     "    ...\n",
     "    if self.visualize:\n",
     "        grad_norm = 0\n",
     "\n",
     "    # Perform a parameter update.\n",
     "    for p, w in self.model.params.items():\n",
     "        dw = grads[p]\n",
     "        if self.visualize:\n",
     "            # Sum the squared entries of this gradient down to a scalar.\n",
     "            norm = dw ** 2\n",
     "            while not isinstance(norm, minpy.array.Number):\n",
     "                norm = sum(norm)\n",
     "            grad_norm += norm\n",
     "        config = self.optim_configs[p]\n",
     "        next_w, next_config = self.update_rule(w, dw, config)\n",
     "        self.model.params[p] = next_w\n",
     "        self.optim_configs[p] = next_config\n",
     "\n",
     "    if self.visualize:\n",
     "        grad_norm_summary = summaryOps.scalarSummary('squared L2-norm of the gradient', grad_norm)\n",
     "        self.train_writer.add_summary(grad_norm_summary, iteration)\n",
     "    ..."
   ]
  },
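  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The inner loop above accumulates the squared L2-norm by summing the squared entries of every parameter's gradient. The same computation in plain NumPy, with made-up parameter names for illustration:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Hypothetical gradients, shaped like the `grads` dict in _step.\n",
    "grads = {'conv_weight': np.array([[1.0, 2.0], [3.0, 4.0]]),\n",
    "         'fc1_bias': np.array([1.0, 1.0])}\n",
    "\n",
    "# Sum of squared entries across all parameters.\n",
    "grad_norm = sum(np.sum(dw ** 2) for dw in grads.values())\n",
    "# 1 + 4 + 9 + 16 + 1 + 1 = 32.0\n",
    "```"
   ]
  },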
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Step 3: Set a Scalar Summary for the Training Loss"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
     "def train(self):\n",
     "    \"\"\"Run optimization to train the model.\"\"\"\n",
     "    num_iterations = self.train_dataiter.getnumiterations() * self.num_epochs\n",
     "    t = 0\n",
     "    for epoch in range(self.num_epochs):\n",
     "        start = time.time()\n",
     "        self.epoch = epoch + 1\n",
     "        for each_batch in self.train_dataiter:\n",
     "            self._step(each_batch, t + 1)\n",
     "            # Maybe print the training loss.\n",
     "            if self.verbose and t % self.print_every == 0:\n",
     "                print('(Iteration %d / %d) loss: %f' %\n",
     "                      (t + 1, num_iterations, self.loss_history[-1]))\n",
     "            if self.visualize:\n",
     "                # Add a scalar summary of the training loss.\n",
     "                loss_summary = summaryOps.scalarSummary('loss', self.loss_history[-1])\n",
     "                self.train_writer.add_summary(loss_summary, t + 1)\n",
     "\n",
     "            t += 1"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Step 4: Set a Scalar Summary for Training/Validation Accuracy"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def train(self):\n",
    "    ...\n",
    "    for epoch in range(self.num_epochs):\n",
    "        start = time.time()\n",
    "        self.epoch = epoch + 1\n",
    "        ...\n",
    "        # evaluate after each epoch\n",
    "        train_acc = self.check_accuracy(self.train_dataiter, num_samples=self.train_acc_num_samples)\n",
    "        val_acc = self.check_accuracy(self.test_dataiter)\n",
    "        self.train_acc_history.append(train_acc)\n",
    "        self.val_acc_history.append(val_acc)\n",
    "        ...\n",
    "        if self.visualize:\n",
    "            val_acc_summary = summaryOps.scalarSummary('accuracy', val_acc)\n",
    "            self.test_writer.add_summary(val_acc_summary, self.epoch)\n",
    "            train_acc_summary = summaryOps.scalarSummary('accuracy', train_acc)\n",
    "            self.train_writer.add_summary(train_acc_summary, self.epoch)\n",
    "        ..."
   ]
  },
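  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Why two writers under one tag? TensorBoard overlays curves that share a tag (here `'accuracy'`) but live in different log directories, which is what produces paired train/validation curves on a single chart. A toy sketch of this logging pattern with a stand-in writer class (not MinPy's real `SummaryWriter`):\n",
    "\n",
    "```python\n",
    "class FakeWriter(object):\n",
    "    # Stand-in for SummaryWriter: records (step, tag, value) tuples.\n",
    "    def __init__(self):\n",
    "        self.records = []\n",
    "    def add_summary(self, summary, step):\n",
    "        self.records.append((step,) + summary)\n",
    "\n",
    "train_writer, test_writer = FakeWriter(), FakeWriter()\n",
    "history = [(0.50, 0.49), (0.58, 0.55)]  # (train_acc, val_acc) per epoch\n",
    "for epoch, (train_acc, val_acc) in enumerate(history, 1):\n",
    "    # Same tag, different writers -> two overlaid curves in TensorBoard.\n",
    "    train_writer.add_summary(('accuracy', train_acc), epoch)\n",
    "    test_writer.add_summary(('accuracy', val_acc), epoch)\n",
    "```"
   ]
  },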
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "You can add a summary for any quantity you like, such as the cross entropy, the dropout keep probability, or activation means. Below is a result from TensorFlow's tutorial on constructing a deep convolutional MNIST classifier: [link](https://github.com/tensorflow/tensorflow/blob/r0.11/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py).\n",
    "\n",
    "![MNIST Result](mnist_result.png)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 2",
   "language": "python",
   "name": "python2"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
