{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Training notebook for MobileNet v2 1.0 and 0.5 on ImageNet dataset\n",
    "\n",
    "## Overview\n",
    "Use this notebook to train a MobileNet model from scratch. **Make sure to have the ImageNet dataset prepared** according to the guidelines in the dataset section in [MobileNet readme](README.md) before proceeding."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Prerequisites\n",
    "The following dependencies need to be installed before proceeding.\n",
    "* mxnet - `pip install mxnet-cu90mkl` (tested on this version GPU, can use other versions)\n",
    "* gluoncv - `pip install gluoncv`\n",
    "* numpy - `pip install numpy`\n",
    "* matplotlib - `pip install matplotlib`\n",
    "\n",
    "In order to train the model with a python script: \n",
    "* Generate the script : In Jupyter Notebook browser, go to File -> Download as -> Python (.py)\n",
    "* Run the script: `python train_mobilenet.py`"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Import dependencies\n",
    "Verify that all dependencies are installed using the cell below. Continue if no errors encountered, warnings can be ignored."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import matplotlib\n",
    "matplotlib.use('Agg')\n",
    "\n",
    "import argparse, time, logging\n",
    "import mxnet as mx\n",
    "import numpy as np\n",
    "from mxnet import gluon, nd\n",
    "from mxnet import autograd as ag\n",
    "from mxnet.gluon import nn\n",
    "from mxnet.gluon.data.vision import transforms\n",
    "\n",
    "from gluoncv.data import imagenet\n",
    "from gluoncv.utils import makedirs, TrainingHistory\n",
    "\n",
    "import os\n",
    "from mxnet.context import cpu\n",
    "from mxnet.gluon.block import HybridBlock\n",
    "from mxnet.gluon.contrib.nn import HybridConcurrent\n",
    "import multiprocessing"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Specify model, hyperparameters and save locations\n",
    "\n",
    "The training was done on a p3.16xlarge ec2 instance on AWS. It has 8 Nvidia Tesla V100 GPUs (16GB each) and Intel Xeon E5-2686 v4 @ 2.70GHz with 64 threads.\n",
    "\n",
    "The `batch_size` set below is per device. For multiple GPUs there are different batches in each GPU of size `batch_size` simultaneously.\n",
    "\n",
    "The rest of the parameters can be tuned to fit the needs of a user. The values shown below were used to train the model in the model zoo."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "# specify model - choose from (mobilenetv2_1.0, mobilenetv2_0.5)\n",
    "model_name = 'mobilenetv2_1.0' \n",
    "\n",
    "# path to training and validation images to use\n",
    "data_dir = '/home/ubuntu/imagenet/img_dataset'\n",
    "\n",
    "# training batch size per device (CPU/GPU)\n",
    "batch_size = 40\n",
    "\n",
    "# number of GPUs to use (automatically detect the number of GPUs)\n",
    "num_gpus = len(mx.test_utils.list_gpus())\n",
    "\n",
    "# number of pre-processing workers (automatically detect the number of workers)\n",
    "num_workers = multiprocessing.cpu_count()\n",
    "\n",
    "# number of training epochs \n",
    "#used as 480 for all of the models , used 1 over here to show demo for 1 epoch\n",
    "num_epochs = 1\n",
    "\n",
    "# learning rate\n",
    "lr = 0.045\n",
    "\n",
    "# momentum value for optimizer\n",
    "momentum = 0.9\n",
    "\n",
    "# weight decay rate\n",
    "wd = 0.00004\n",
    "\n",
    "# decay rate of learning rate\n",
    "lr_decay = 0.98\n",
    "\n",
    "# interval for periodic learning rate decays\n",
    "lr_decay_period = 1\n",
    "\n",
    "# epoches at which learning rate decays\n",
    "lr_decay_epoch = '30,60,90'\n",
    "\n",
    "# mode in which to train the model. options are symbolic, imperative, hybrid\n",
    "mode = 'hybrid'\n",
    "\n",
    "# Number of batches to wait before logging\n",
    "log_interval = 50\n",
    "\n",
    "# frequency of model saving\n",
    "save_frequency = 10\n",
    "\n",
    "# directory of saved models\n",
    "save_dir = 'params'\n",
    "\n",
    "#directory of training logs\n",
    "logging_dir = 'logs'\n",
    "\n",
    "# the path to save the history plot\n",
    "save_plot_dir = '.'\n"
   ]
  },
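  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "With `lr_decay_period = 1` and `lr_decay = 0.98`, the learning rate is multiplied by 0.98 after every epoch. The cell below is a quick illustration of the resulting schedule; it is plain Python, independent of the training loop, and only reuses the `lr` and `lr_decay` values set above (`demo_lr` is a throwaway name for this sketch)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustration only: effective learning rate for the first few epochs\n",
    "demo_lr = lr\n",
    "for epoch in range(1, 6):\n",
    "    demo_lr *= lr_decay\n",
    "    print('epoch %d: lr = %f' % (epoch, demo_lr))"
   ]
  },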
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Model definition in Gluon\n",
    "\n",
    "The class `MobileNetV2` contains model definitions of the MobileNet models and the required model is retrieved using the relevant constructor function: `mobilenet_v2_1_0` or `mobilenet_v2_0_5`. \n",
    "\n",
    "`RELU6`, `_add_conv`, `_add_conv_dw` and `LinearBottleneck` are helper functions and classes used in the model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "##This block contains definition for Mobilenet v2\n",
    "\n",
    "# Helpers\n",
    "class RELU6(nn.HybridBlock):\n",
    "    \"\"\"Relu6 used in MobileNetV2.\"\"\"\n",
    "\n",
    "    def __init__(self, **kwargs):\n",
    "        super(RELU6, self).__init__(**kwargs)\n",
    "\n",
    "    def hybrid_forward(self, F, x):\n",
    "        return F.clip(x, 0, 6, name=\"relu6\")\n",
    "\n",
    "\n",
    "def _add_conv(out, channels=1, kernel=1, stride=1, pad=0,\n",
    "              num_group=1, active=True, relu6=False):\n",
    "    out.add(nn.Conv2D(channels, kernel, stride, pad, groups=num_group, use_bias=False))\n",
    "    out.add(nn.BatchNorm(scale=True))\n",
    "    if active:\n",
    "        out.add(RELU6() if relu6 else nn.Activation('relu'))\n",
    "\n",
    "\n",
    "def _add_conv_dw(out, dw_channels, channels, stride, relu6=False):\n",
    "    _add_conv(out, channels=dw_channels, kernel=3, stride=stride,\n",
    "              pad=1, num_group=dw_channels, relu6=relu6)\n",
    "    _add_conv(out, channels=channels, relu6=relu6)\n",
    "\n",
    "\n",
    "class LinearBottleneck(nn.HybridBlock):\n",
    "    r\"\"\"LinearBottleneck used in MobileNetV2 model from the\n",
    "    `\"Inverted Residuals and Linear Bottlenecks:\n",
    "      Mobile Networks for Classification, Detection and Segmentation\"\n",
    "    <https://arxiv.org/abs/1801.04381>`_ paper.\n",
    "    Parameters\n",
    "    ----------\n",
    "    in_channels : int\n",
    "        Number of input channels.\n",
    "    channels : int\n",
    "        Number of output channels.\n",
    "    t : int\n",
    "        Layer expansion ratio.\n",
    "    stride : int\n",
    "        stride\n",
    "    \"\"\"\n",
    "\n",
    "    def __init__(self, in_channels, channels, t, stride, **kwargs):\n",
    "        super(LinearBottleneck, self).__init__(**kwargs)\n",
    "        self.use_shortcut = stride == 1 and in_channels == channels\n",
    "        with self.name_scope():\n",
    "            self.out = nn.HybridSequential()\n",
    "\n",
    "            _add_conv(self.out, in_channels * t, relu6=True)\n",
    "            _add_conv(self.out, in_channels * t, kernel=3, stride=stride,\n",
    "                      pad=1, num_group=in_channels * t, relu6=True)\n",
    "            _add_conv(self.out, channels, active=False, relu6=True)\n",
    "\n",
    "    def hybrid_forward(self, F, x):\n",
    "        out = self.out(x)\n",
    "        if self.use_shortcut:\n",
    "            out = F.elemwise_add(out, x)\n",
    "        return out\n",
    "\n",
    "\n",
    "# Net\n",
    "class MobileNetV2(nn.HybridBlock):\n",
    "    r\"\"\"MobileNetV2 model from the\n",
    "    `\"Inverted Residuals and Linear Bottlenecks:\n",
    "      Mobile Networks for Classification, Detection and Segmentation\"\n",
    "    <https://arxiv.org/abs/1801.04381>`_ paper.\n",
    "    Parameters\n",
    "    ----------\n",
    "    multiplier : float, default 1.0\n",
    "        The width multiplier for controling the model size. The actual number of channels\n",
    "        is equal to the original channel size multiplied by this multiplier.\n",
    "    classes : int, default 1000\n",
    "        Number of classes for the output layer.\n",
    "    \"\"\"\n",
    "\n",
    "    def __init__(self, multiplier=1.0, classes=1000, **kwargs):\n",
    "        super(MobileNetV2, self).__init__(**kwargs)\n",
    "        with self.name_scope():\n",
    "            self.features = nn.HybridSequential(prefix='features_')\n",
    "            with self.features.name_scope():\n",
    "                _add_conv(self.features, int(32 * multiplier), kernel=3,\n",
    "                          stride=2, pad=1, relu6=True)\n",
    "\n",
    "                in_channels_group = [int(x * multiplier) for x in [32] + [16] + [24] * 2\n",
    "                                     + [32] * 3 + [64] * 4 + [96] * 3 + [160] * 3]\n",
    "                channels_group = [int(x * multiplier) for x in [16] + [24] * 2 + [32] * 3\n",
    "                                  + [64] * 4 + [96] * 3 + [160] * 3 + [320]]\n",
    "                ts = [1] + [6] * 16\n",
    "                strides = [1, 2] * 2 + [1, 1, 2] + [1] * 6 + [2] + [1] * 3\n",
    "\n",
    "                for in_c, c, t, s in zip(in_channels_group, channels_group, ts, strides):\n",
    "                    self.features.add(LinearBottleneck(in_channels=in_c, channels=c,\n",
    "                                                       t=t, stride=s))\n",
    "\n",
    "                last_channels = int(1280 * multiplier) if multiplier > 1.0 else 1280\n",
    "                _add_conv(self.features, last_channels, relu6=True)\n",
    "\n",
    "                self.features.add(nn.GlobalAvgPool2D())\n",
    "\n",
    "            self.output = nn.HybridSequential(prefix='output_')\n",
    "            with self.output.name_scope():\n",
    "                self.output.add(\n",
    "                    nn.Conv2D(classes, 1, use_bias=False, prefix='pred_'),\n",
    "                    nn.Flatten()\n",
    "                )\n",
    "\n",
    "    def hybrid_forward(self, F, x):\n",
    "        x = self.features(x)\n",
    "        x = self.output(x)\n",
    "        return x\n",
    "\n",
    "\n",
    "# Constructor\n",
    "def get_mobilenet_v2(multiplier, **kwargs):\n",
    "    r\"\"\"MobileNetV2 model from the\n",
    "    `\"Inverted Residuals and Linear Bottlenecks:\n",
    "      Mobile Networks for Classification, Detection and Segmentation\"\n",
    "    <https://arxiv.org/abs/1801.04381>`_ paper.\n",
    "    Parameters\n",
    "    ----------\n",
    "    multiplier : float\n",
    "        The width multiplier for controling the model size. Only multipliers that are no\n",
    "        less than 0.25 are supported. The actual number of channels is equal to the original\n",
    "        channel size multiplied by this multiplier.\n",
    "    \"\"\"\n",
    "    net = MobileNetV2(multiplier, **kwargs)\n",
    "    return net\n",
    "\n",
    "def mobilenet_v2_1_0(**kwargs):\n",
    "    r\"\"\"MobileNetV2 model from the\n",
    "    `\"Inverted Residuals and Linear Bottlenecks:\n",
    "      Mobile Networks for Classification, Detection and Segmentation\"\n",
    "    <https://arxiv.org/abs/1801.04381>`_ paper.\n",
    "    \"\"\"\n",
    "    return get_mobilenet_v2(1.0, **kwargs)\n",
    "\n",
    "def mobilenet_v2_0_5(**kwargs):\n",
    "    r\"\"\"MobileNetV2 model from the\n",
    "    `\"Inverted Residuals and Linear Bottlenecks:\n",
    "      Mobile Networks for Classification, Detection and Segmentation\"\n",
    "    <https://arxiv.org/abs/1801.04381>`_ paper.\n",
    "    \"\"\"\n",
    "    return get_mobilenet_v2(0.5, **kwargs)\n",
    "models = {  \n",
    "            'mobilenetv2_1.0': mobilenet_v2_1_0,\n",
    "            'mobilenetv2_0.5': mobilenet_v2_0_5\n",
    "         }\n",
    "\n"
   ]
  },
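  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check (not part of the training flow), the cell below runs a random 224x224 input through a freshly constructed copy of the model on CPU to confirm the `(batch, classes)` output shape. The `_net` and `_x` names are local to this check."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustration only: verify the model's output shape on CPU\n",
    "_net = models['mobilenetv2_1.0'](classes=1000)\n",
    "_net.initialize()\n",
    "_x = nd.random.uniform(shape=(1, 3, 224, 224))\n",
    "print(_net(_x).shape)  # expect (1, 1000)"
   ]
  },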
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Helper code\n",
    "Define context, optimizer, accuracy metrics, retireve gluon model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Specify logging fucntion\n",
    "logging.basicConfig(level=logging.INFO)\n",
    "\n",
    "# Specify classes (1000 for ImageNet)\n",
    "classes = 1000\n",
    "# Extrapolate batches to all devices\n",
    "batch_size *= max(1, num_gpus)\n",
    "# Define context\n",
    "context = [mx.gpu(i) for i in range(num_gpus)] if num_gpus > 0 else [mx.cpu()]\n",
    "\n",
    "lr_decay_epoch = [int(i) for i in lr_decay_epoch.split(',')] + [np.inf]\n",
    "\n",
    "kwargs = {'classes': classes}\n",
    "\n",
    "# Define optimizer (nag = Nestrov Accelerated Gradient)\n",
    "optimizer = 'nag'\n",
    "optimizer_params = {'learning_rate': lr, 'wd': wd, 'momentum': momentum}\n",
    "\n",
    "# Retireve gluon model\n",
    "net = models[model_name](**kwargs)\n",
    "\n",
    "# Define accuracy measures - top1 error and top5 error\n",
    "acc_top1 = mx.metric.Accuracy()\n",
    "acc_top5 = mx.metric.TopKAccuracy(5)\n",
    "train_history = TrainingHistory(['training-top1-err', 'training-top5-err',\n",
    "                                 'validation-top1-err', 'validation-top5-err'])\n",
    "makedirs(save_dir)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Define preprocessing functions\n",
    "`preprocess_train_data(normalize, jitter_param, lighting_param)` : Do pre-processing and data augmentation of train images -> take random crops of size 224x224, do random left right flips, jitter image color and lighting, mormalize image\n",
    "\n",
    "`preprocess_test_data(normalize)` : Pre-process validation images -> resize to size 256x256, take center crop of size 224x224, normalize image"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "normalize = transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])\n",
    "jitter_param = 0.0\n",
    "lighting_param = 0.0\n",
    "\n",
    "# Input pre-processing for train data\n",
    "def preprocess_train_data(normalize, jitter_param, lighting_param):\n",
    "    transform_train = transforms.Compose([\n",
    "        transforms.Resize(480),\n",
    "        transforms.RandomResizedCrop(224),\n",
    "        transforms.RandomFlipLeftRight(),\n",
    "        transforms.RandomColorJitter(brightness=jitter_param, contrast=jitter_param,\n",
    "                                     saturation=jitter_param),\n",
    "        transforms.RandomLighting(lighting_param),\n",
    "        transforms.ToTensor(),\n",
    "        normalize\n",
    "    ])\n",
    "    return transform_train\n",
    "\n",
    "# Input pre-processing for validation data\n",
    "def preprocess_test_data(normalize):\n",
    "    transform_test = transforms.Compose([\n",
    "        transforms.Resize(256),\n",
    "        transforms.CenterCrop(224),\n",
    "        transforms.ToTensor(),\n",
    "        normalize\n",
    "    ])\n",
    "    return transform_test"
   ]
  },
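  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick illustration (not part of the training flow), the cell below applies the validation transform to a random `uint8` image and confirms that it yields a normalized `3x224x224` tensor; `_img` is a throwaway name for this check."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustration only: check the validation transform's output shape\n",
    "_img = nd.random.uniform(0, 255, shape=(300, 300, 3)).astype('uint8')\n",
    "print(preprocess_test_data(normalize)(_img).shape)  # expect (3, 224, 224)"
   ]
  },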
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Define test function\n",
    "`test(ctx, val_data)` : Computes and returns validation errors on `val_data` using `ctx` context"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Test function\n",
    "def test(ctx, val_data):\n",
    "    # Reset accuracy metrics\n",
    "    acc_top1.reset()\n",
    "    acc_top5.reset()\n",
    "    for i, batch in enumerate(val_data):\n",
    "        # Load validation batch\n",
    "        data = gluon.utils.split_and_load(batch[0], ctx_list=ctx, batch_axis=0)\n",
    "        label = gluon.utils.split_and_load(batch[1], ctx_list=ctx, batch_axis=0)\n",
    "        # Perform forward pass\n",
    "        outputs = [net(X) for X in data]\n",
    "        # Update accuracy metrics\n",
    "        acc_top1.update(label, outputs)\n",
    "        acc_top5.update(label, outputs)\n",
    "    # Retrieve and return top1 and top5 errors\n",
    "    _, top1 = acc_top1.get()\n",
    "    _, top5 = acc_top5.get()\n",
    "    return (1-top1, 1-top5)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Define train function\n",
    "`train(epochs, ctx)` : Train model for `epochs` epochs using `ctx` context, log training progress, compute and display validation errors after each epoch, take periodic snapshots of the model, generates training plot "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Train function\n",
    "def train(epochs, ctx):\n",
    "    if isinstance(ctx, mx.Context):\n",
    "        ctx = [ctx]\n",
    "    # Initialize network - Use method in MSRA paper <https://arxiv.org/abs/1502.01852>\n",
    "    net.initialize(mx.init.MSRAPrelu(), ctx=ctx)\n",
    "    # Prepare train and validation batches\n",
    "    transform_train = preprocess_train_data(normalize, jitter_param, lighting_param)\n",
    "    transform_test = preprocess_test_data(normalize)\n",
    "    train_data = gluon.data.DataLoader(\n",
    "        imagenet.classification.ImageNet(data_dir, train=True).transform_first(transform_train),\n",
    "        batch_size=batch_size, shuffle=True, last_batch='discard', num_workers=num_workers)\n",
    "    val_data = gluon.data.DataLoader(\n",
    "        imagenet.classification.ImageNet(data_dir, train=False).transform_first(transform_test),\n",
    "        batch_size=batch_size, shuffle=False, num_workers=num_workers)\n",
    "    # Define trainer\n",
    "    trainer = gluon.Trainer(net.collect_params(), optimizer, optimizer_params)\n",
    "    # Define loss\n",
    "    L = gluon.loss.SoftmaxCrossEntropyLoss()\n",
    "\n",
    "    lr_decay_count = 0\n",
    "\n",
    "    best_val_score = 1\n",
    "    # Main training loop - loop over epochs\n",
    "    for epoch in range(epochs):\n",
    "        tic = time.time()\n",
    "        # Reset accuracy metrics\n",
    "        acc_top1.reset()\n",
    "        acc_top5.reset()\n",
    "        btic = time.time()\n",
    "        train_loss = 0\n",
    "        num_batch = len(train_data)\n",
    "        \n",
    "        # Check and perform learning rate decay\n",
    "        if lr_decay_period and epoch and epoch % lr_decay_period == 0:\n",
    "            trainer.set_learning_rate(trainer.learning_rate*lr_decay)\n",
    "        elif lr_decay_period == 0 and epoch == lr_decay_epoch[lr_decay_count]:\n",
    "            trainer.set_learning_rate(trainer.learning_rate*lr_decay)\n",
    "            lr_decay_count += 1\n",
    "        # Loop over batches in an epoch\n",
    "        for i, batch in enumerate(train_data):\n",
    "            # Load train batch\n",
    "            data = gluon.utils.split_and_load(batch[0], ctx_list=ctx, batch_axis=0)\n",
    "            label = gluon.utils.split_and_load(batch[1], ctx_list=ctx, batch_axis=0)\n",
    "            label_smooth = label\n",
    "            # Perform forward pass\n",
    "            with ag.record():\n",
    "                outputs = [net(X) for X in data]\n",
    "                loss = [L(yhat, y) for yhat, y in zip(outputs, label_smooth)]\n",
    "            # Perform backward pass\n",
    "            ag.backward(loss)\n",
    "            # PErform updates\n",
    "            trainer.step(batch_size)\n",
    "            # Update accuracy metrics\n",
    "            acc_top1.update(label, outputs)\n",
    "            acc_top5.update(label, outputs)\n",
    "            # Update loss\n",
    "            train_loss += sum([l.sum().asscalar() for l in loss])\n",
    "            # Log training progress (after each `log_interval` batches)\n",
    "            if log_interval and not (i+1)%log_interval:\n",
    "                _, top1 = acc_top1.get()\n",
    "                _, top5 = acc_top5.get()\n",
    "                err_top1, err_top5 = (1-top1, 1-top5)\n",
    "                logging.info('Epoch[%d] Batch [%d]\\tSpeed: %f samples/sec\\ttop1-err=%f\\ttop5-err=%f'%(\n",
    "                             epoch, i, batch_size*log_interval/(time.time()-btic), err_top1, err_top5))\n",
    "                btic = time.time()\n",
    "\n",
    "        # Retrieve training errors and loss\n",
    "        _, top1 = acc_top1.get()\n",
    "        _, top5 = acc_top5.get()\n",
    "        err_top1, err_top5 = (1-top1, 1-top5)\n",
    "        train_loss /= num_batch * batch_size\n",
    "\n",
    "        # Compute validation errors\n",
    "        err_top1_val, err_top5_val = test(ctx, val_data)\n",
    "        # Update training history\n",
    "        train_history.update([err_top1, err_top5, err_top1_val, err_top5_val])\n",
    "        # Update plot\n",
    "        train_history.plot(['training-top1-err', 'validation-top1-err','training-top5-err', 'validation-top5-err'],\n",
    "                           save_path='%s/%s_top_error.png'%(save_plot_dir, model_name))\n",
    "\n",
    "        # Log training progress (after each epoch)\n",
    "        logging.info('[Epoch %d] training: err-top1=%f err-top5=%f loss=%f'%(epoch, err_top1, err_top5, train_loss))\n",
    "        logging.info('[Epoch %d] time cost: %f'%(epoch, time.time()-tic))\n",
    "        logging.info('[Epoch %d] validation: err-top1=%f err-top5=%f'%(epoch, err_top1_val, err_top5_val))\n",
    "\n",
    "        # Save a snapshot of the best model - use net.export to get MXNet symbols and params\n",
    "        if err_top1_val < best_val_score and epoch > 50:\n",
    "            best_val_score = err_top1_val\n",
    "            net.export('%s/%.4f-imagenet-%s-best'%(save_dir, best_val_score, model_name), epoch)\n",
    "        # Save a snapshot of the model after each 'save_frequency' epochs\n",
    "        if save_frequency and save_dir and (epoch + 1) % save_frequency == 0:\n",
    "            net.export('%s/%.4f-imagenet-%s'%(save_dir, best_val_score, model_name), epoch)\n",
    "    # Save a snapshot of the model at the end of training\n",
    "    if save_frequency and save_dir:\n",
    "        net.export('%s/%.4f-imagenet-%s'%(save_dir, best_val_score, model_name), epochs-1)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Train model\n",
    "* Run the cell below to start training\n",
    "* Logs are displayed in the cell output\n",
    "* An example run of 1 epoch is shown here\n",
    "* Once training completes, the symbols and params files are saved in the root folder"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "scrolled": false
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "INFO:root:Epoch[0] Batch [49]\tSpeed: 134.457589 samples/sec\ttop1-err=0.998687\ttop5-err=0.992500\n",
      "INFO:root:Epoch[0] Batch [99]\tSpeed: 352.802738 samples/sec\ttop1-err=0.998313\ttop5-err=0.989906\n",
      "INFO:root:Epoch[0] Batch [149]\tSpeed: 353.808498 samples/sec\ttop1-err=0.997479\ttop5-err=0.987396\n",
      "INFO:root:Epoch[0] Batch [199]\tSpeed: 365.283709 samples/sec\ttop1-err=0.996812\ttop5-err=0.984422\n",
      "INFO:root:Epoch[0] Batch [249]\tSpeed: 356.559782 samples/sec\ttop1-err=0.995988\ttop5-err=0.981375\n",
      "INFO:root:Epoch[0] Batch [299]\tSpeed: 351.135594 samples/sec\ttop1-err=0.994917\ttop5-err=0.977583\n",
      "INFO:root:Epoch[0] Batch [349]\tSpeed: 351.671025 samples/sec\ttop1-err=0.994036\ttop5-err=0.974375\n",
      "INFO:root:Epoch[0] Batch [399]\tSpeed: 367.645736 samples/sec\ttop1-err=0.992922\ttop5-err=0.970844\n",
      "INFO:root:Epoch[0] Batch [449]\tSpeed: 345.622136 samples/sec\ttop1-err=0.991944\ttop5-err=0.967021\n",
      "INFO:root:Epoch[0] Batch [499]\tSpeed: 353.563206 samples/sec\ttop1-err=0.991006\ttop5-err=0.963488\n",
      "INFO:root:Epoch[0] Batch [549]\tSpeed: 351.297008 samples/sec\ttop1-err=0.989994\ttop5-err=0.959739\n",
      "INFO:root:Epoch[0] Batch [599]\tSpeed: 347.548033 samples/sec\ttop1-err=0.988797\ttop5-err=0.956115\n",
      "INFO:root:Epoch[0] Batch [649]\tSpeed: 353.854703 samples/sec\ttop1-err=0.987567\ttop5-err=0.952558\n",
      "INFO:root:Epoch[0] Batch [699]\tSpeed: 347.510432 samples/sec\ttop1-err=0.986446\ttop5-err=0.949161\n",
      "INFO:root:Epoch[0] Batch [749]\tSpeed: 375.283858 samples/sec\ttop1-err=0.985204\ttop5-err=0.945675\n",
      "INFO:root:Epoch[0] Batch [799]\tSpeed: 361.286844 samples/sec\ttop1-err=0.984148\ttop5-err=0.942289\n",
      "INFO:root:Epoch[0] Batch [849]\tSpeed: 363.963575 samples/sec\ttop1-err=0.982996\ttop5-err=0.938952\n",
      "INFO:root:Epoch[0] Batch [899]\tSpeed: 360.328039 samples/sec\ttop1-err=0.981861\ttop5-err=0.935851\n",
      "INFO:root:Epoch[0] Batch [949]\tSpeed: 354.410384 samples/sec\ttop1-err=0.980477\ttop5-err=0.932303\n",
      "INFO:root:Epoch[0] Batch [999]\tSpeed: 363.189215 samples/sec\ttop1-err=0.979287\ttop5-err=0.928944\n",
      "INFO:root:Epoch[0] Batch [1049]\tSpeed: 365.094142 samples/sec\ttop1-err=0.978036\ttop5-err=0.925548\n",
      "INFO:root:Epoch[0] Batch [1099]\tSpeed: 350.084973 samples/sec\ttop1-err=0.976776\ttop5-err=0.922128\n",
      "INFO:root:Epoch[0] Batch [1149]\tSpeed: 355.938436 samples/sec\ttop1-err=0.975516\ttop5-err=0.918821\n",
      "INFO:root:Epoch[0] Batch [1199]\tSpeed: 356.932717 samples/sec\ttop1-err=0.974401\ttop5-err=0.915687\n",
      "INFO:root:Epoch[0] Batch [1249]\tSpeed: 351.991043 samples/sec\ttop1-err=0.973237\ttop5-err=0.912480\n",
      "INFO:root:Epoch[0] Batch [1299]\tSpeed: 357.672948 samples/sec\ttop1-err=0.971901\ttop5-err=0.908925\n",
      "INFO:root:Epoch[0] Batch [1349]\tSpeed: 349.324985 samples/sec\ttop1-err=0.970597\ttop5-err=0.905507\n",
      "INFO:root:Epoch[0] Batch [1399]\tSpeed: 361.156479 samples/sec\ttop1-err=0.969167\ttop5-err=0.902020\n",
      "INFO:root:Epoch[0] Batch [1449]\tSpeed: 354.017340 samples/sec\ttop1-err=0.967754\ttop5-err=0.898625\n",
      "INFO:root:Epoch[0] Batch [1499]\tSpeed: 352.669149 samples/sec\ttop1-err=0.966592\ttop5-err=0.895429\n",
      "INFO:root:Epoch[0] Batch [1549]\tSpeed: 368.659041 samples/sec\ttop1-err=0.965220\ttop5-err=0.891988\n",
      "INFO:root:Epoch[0] Batch [1599]\tSpeed: 350.788105 samples/sec\ttop1-err=0.963889\ttop5-err=0.888893\n",
      "INFO:root:Epoch[0] Batch [1649]\tSpeed: 363.680126 samples/sec\ttop1-err=0.962502\ttop5-err=0.885708\n",
      "INFO:root:Epoch[0] Batch [1699]\tSpeed: 342.916616 samples/sec\ttop1-err=0.961204\ttop5-err=0.882649\n",
      "INFO:root:Epoch[0] Batch [1749]\tSpeed: 351.836760 samples/sec\ttop1-err=0.959993\ttop5-err=0.879679\n",
      "INFO:root:Epoch[0] Batch [1799]\tSpeed: 361.511279 samples/sec\ttop1-err=0.958750\ttop5-err=0.876672\n",
      "INFO:root:Epoch[0] Batch [1849]\tSpeed: 357.088418 samples/sec\ttop1-err=0.957367\ttop5-err=0.873720\n",
      "INFO:root:Epoch[0] Batch [1899]\tSpeed: 359.035336 samples/sec\ttop1-err=0.956026\ttop5-err=0.870706\n",
      "INFO:root:Epoch[0] Batch [1949]\tSpeed: 345.968098 samples/sec\ttop1-err=0.954726\ttop5-err=0.867917\n",
      "INFO:root:Epoch[0] Batch [1999]\tSpeed: 356.865742 samples/sec\ttop1-err=0.953467\ttop5-err=0.864942\n",
      "INFO:root:Epoch[0] Batch [2049]\tSpeed: 345.419558 samples/sec\ttop1-err=0.952181\ttop5-err=0.861994\n",
      "INFO:root:Epoch[0] Batch [2099]\tSpeed: 361.600395 samples/sec\ttop1-err=0.950845\ttop5-err=0.859070\n",
      "INFO:root:Epoch[0] Batch [2149]\tSpeed: 356.105701 samples/sec\ttop1-err=0.949507\ttop5-err=0.856100\n",
      "INFO:root:Epoch[0] Batch [2199]\tSpeed: 347.486935 samples/sec\ttop1-err=0.948202\ttop5-err=0.853308\n",
      "INFO:root:Epoch[0] Batch [2249]\tSpeed: 346.968407 samples/sec\ttop1-err=0.946887\ttop5-err=0.850451\n",
      "INFO:root:Epoch[0] Batch [2299]\tSpeed: 370.297929 samples/sec\ttop1-err=0.945648\ttop5-err=0.847610\n",
      "INFO:root:Epoch[0] Batch [2349]\tSpeed: 354.885048 samples/sec\ttop1-err=0.944386\ttop5-err=0.844903\n",
      "INFO:root:Epoch[0] Batch [2399]\tSpeed: 368.601562 samples/sec\ttop1-err=0.943072\ttop5-err=0.842189\n",
      "INFO:root:Epoch[0] Batch [2449]\tSpeed: 346.312951 samples/sec\ttop1-err=0.941746\ttop5-err=0.839575\n",
      "INFO:root:Epoch[0] Batch [2499]\tSpeed: 348.725154 samples/sec\ttop1-err=0.940454\ttop5-err=0.836904\n",
      "INFO:root:Epoch[0] Batch [2549]\tSpeed: 357.238509 samples/sec\ttop1-err=0.939165\ttop5-err=0.834350\n",
      "INFO:root:Epoch[0] Batch [2599]\tSpeed: 367.534471 samples/sec\ttop1-err=0.937910\ttop5-err=0.831773\n",
      "INFO:root:Epoch[0] Batch [2649]\tSpeed: 364.138650 samples/sec\ttop1-err=0.936632\ttop5-err=0.829191\n",
      "INFO:root:Epoch[0] Batch [2699]\tSpeed: 366.876459 samples/sec\ttop1-err=0.935412\ttop5-err=0.826586\n",
      "INFO:root:Epoch[0] Batch [2749]\tSpeed: 364.518664 samples/sec\ttop1-err=0.934261\ttop5-err=0.824023\n",
      "INFO:root:Epoch[0] Batch [2799]\tSpeed: 350.699330 samples/sec\ttop1-err=0.933136\ttop5-err=0.821561\n",
      "INFO:root:Epoch[0] Batch [2849]\tSpeed: 344.178188 samples/sec\ttop1-err=0.932012\ttop5-err=0.819295\n",
      "INFO:root:Epoch[0] Batch [2899]\tSpeed: 347.011260 samples/sec\ttop1-err=0.930845\ttop5-err=0.816956\n",
      "INFO:root:Epoch[0] Batch [2949]\tSpeed: 382.832173 samples/sec\ttop1-err=0.929719\ttop5-err=0.814546\n",
      "INFO:root:Epoch[0] Batch [2999]\tSpeed: 342.897961 samples/sec\ttop1-err=0.928450\ttop5-err=0.812084\n",
      "INFO:root:Epoch[0] Batch [3049]\tSpeed: 354.381345 samples/sec\ttop1-err=0.927205\ttop5-err=0.809673\n",
      "INFO:root:Epoch[0] Batch [3099]\tSpeed: 365.801125 samples/sec\ttop1-err=0.925926\ttop5-err=0.807147\n",
      "INFO:root:Epoch[0] Batch [3149]\tSpeed: 346.078018 samples/sec\ttop1-err=0.924739\ttop5-err=0.804800\n",
      "INFO:root:Epoch[0] Batch [3199]\tSpeed: 358.566967 samples/sec\ttop1-err=0.923530\ttop5-err=0.802391\n",
      "INFO:root:Epoch[0] Batch [3249]\tSpeed: 384.839966 samples/sec\ttop1-err=0.922474\ttop5-err=0.800140\n",
      "INFO:root:Epoch[0] Batch [3299]\tSpeed: 353.408957 samples/sec\ttop1-err=0.921380\ttop5-err=0.797868\n",
      "INFO:root:Epoch[0] Batch [3349]\tSpeed: 352.419028 samples/sec\ttop1-err=0.920282\ttop5-err=0.795718\n",
      "INFO:root:Epoch[0] Batch [3399]\tSpeed: 356.679103 samples/sec\ttop1-err=0.919127\ttop5-err=0.793488\n",
      "INFO:root:Epoch[0] Batch [3449]\tSpeed: 361.406214 samples/sec\ttop1-err=0.918002\ttop5-err=0.791294\n",
      "INFO:root:Epoch[0] Batch [3499]\tSpeed: 359.902832 samples/sec\ttop1-err=0.916819\ttop5-err=0.789124\n",
      "INFO:root:Epoch[0] Batch [3549]\tSpeed: 349.573877 samples/sec\ttop1-err=0.915694\ttop5-err=0.787040\n",
      "INFO:root:Epoch[0] Batch [3599]\tSpeed: 358.121478 samples/sec\ttop1-err=0.914609\ttop5-err=0.784943\n",
      "INFO:root:Epoch[0] Batch [3649]\tSpeed: 352.442859 samples/sec\ttop1-err=0.913441\ttop5-err=0.782764\n",
      "INFO:root:Epoch[0] Batch [3699]\tSpeed: 353.606905 samples/sec\ttop1-err=0.912353\ttop5-err=0.780725\n",
      "INFO:root:Epoch[0] Batch [3749]\tSpeed: 373.602642 samples/sec\ttop1-err=0.911266\ttop5-err=0.778681\n",
      "INFO:root:Epoch[0] Batch [3799]\tSpeed: 348.107804 samples/sec\ttop1-err=0.910154\ttop5-err=0.776658\n",
      "INFO:root:Epoch[0] Batch [3849]\tSpeed: 363.536811 samples/sec\ttop1-err=0.909072\ttop5-err=0.774666\n",
      "INFO:root:Epoch[0] Batch [3899]\tSpeed: 357.819763 samples/sec\ttop1-err=0.907934\ttop5-err=0.772584\n",
      "INFO:root:Epoch[0] Batch [3949]\tSpeed: 401.529572 samples/sec\ttop1-err=0.906831\ttop5-err=0.770527\n",
      "INFO:root:Epoch[0] Batch [3999]\tSpeed: 1211.694700 samples/sec\ttop1-err=0.905745\ttop5-err=0.768595\n",
      "INFO:root:[Epoch 0] training: err-top1=0.905677 err-top5=0.768466 loss=5.103806\n",
      "INFO:root:[Epoch 0] time cost: 3667.099527\n",
      "INFO:root:[Epoch 0] validation: err-top1=0.793980 err-top5=0.568620\n"
     ]
    }
   ],
   "source": [
    "def main():\n",
    "    net.hybridize()\n",
    "    train(num_epochs, context)\n",
    "if __name__ == '__main__':\n",
    "    main()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Export model to ONNX format\n",
    "The conversion of the model to ONNX format is done using an internal converter which will be released soon. The notebook will be updated with the code for the export once the converter is released."
   ]
  }
 ],
 "metadata": {
  "anaconda-cloud": {},
  "display_name": "",
  "kernelspec": {
   "display_name": "Environment (conda_anaconda3)",
   "language": "python",
   "name": "conda_anaconda3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.4"
  },
  "name": ""
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
