{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Quiz: LeNet-5 Convolutional Network\n",
    "====================\n",
    "\n",
    "In this quiz you have to train the LeNet model on the [CIFAR-10 dataset](../cifar10/cifar10.ipynb) and you have to modify the architecture to **make it deeper**. As usual you will find the **tag** `#QUIZ` in the part of the code you must modify or implement."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Implementing a deeper LeNet\n",
    "--------------------------------\n",
    "\n",
    "The deeper the better. However, deep architectures may overfit if you do not control the number of parameters. There are different ways to **make a CNN deeper**. For instance, you can add a new convolution-pooling unit. Use the stride parameter carefully and check that the model does not shrink the feature maps too much. You can also add more dense layers, but the price you pay is a rapid growth in the number of weights.\n",
    "\n",
    "Regarding the **accuracy metric** you are lucky, because the image input size and the number of labels of CIFAR-10 match those of the MNIST dataset. We can reuse the same code without modification.\n",
    "\n",
    "Another quiz you are asked to solve is to use the `tf.summary.image()` method in order to **show in TensorBoard** the feature maps generated by the convolutional layers you added. You have to do it when the model is in TRAIN mode."
   ]
  },
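  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a starting point, the convolutional stack could be deepened along these lines (a sketch, not the quiz solution; the filter counts and names such as `conv3` are illustrative choices):\n",
    "\n",
    "```python\n",
    "import tensorflow as tf\n",
    "\n",
    "def deeper_lenet_body(input_layer):\n",
    "    # input_layer: [batch, 32, 32, 3] CIFAR-10 images\n",
    "    conv1 = tf.layers.conv2d(input_layer, filters=32, kernel_size=5, padding='same', activation=tf.nn.relu)\n",
    "    pool1 = tf.layers.max_pooling2d(conv1, pool_size=2, strides=2)  # 16x16\n",
    "    conv2 = tf.layers.conv2d(pool1, filters=64, kernel_size=5, padding='same', activation=tf.nn.relu)\n",
    "    pool2 = tf.layers.max_pooling2d(conv2, pool_size=2, strides=2)  # 8x8\n",
    "    # extra convolution-pooling unit: 'same' padding keeps the feature maps from shrinking too fast\n",
    "    conv3 = tf.layers.conv2d(pool2, filters=128, kernel_size=3, padding='same', activation=tf.nn.relu)\n",
    "    pool3 = tf.layers.max_pooling2d(conv3, pool_size=2, strides=2)  # 4x4\n",
    "    flat = tf.layers.flatten(pool3)\n",
    "    dense = tf.layers.dense(flat, units=256, activation=tf.nn.relu)\n",
    "    logits = tf.layers.dense(dense, units=10)\n",
    "    return logits\n",
    "```"
   ]
  },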
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def my_model_fn(features, labels, mode):\n",
    "    #QUIZ: you have to define a new model of LeNet that is deeper\n",
    "\n",
    "    \n",
    "    #PREDICT mode\n",
    "    if mode == tf.estimator.ModeKeys.PREDICT:\n",
    "        predictions = {\"classes\": tf.argmax(input=logits, axis=1),\n",
    "                       \"probabilities\": tf.nn.softmax(logits, name=\"softmax_tensor\"),\n",
    "                       \"logits\": logits}\n",
    "        return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)\n",
    "    #TRAIN mode\n",
    "    elif mode == tf.estimator.ModeKeys.TRAIN:\n",
    "        loss = tf.losses.softmax_cross_entropy(onehot_labels=labels, logits=logits)\n",
    "        optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)\n",
    "        train_op = optimizer.minimize(loss=loss, global_step=tf.train.get_global_step())\n",
    "        accuracy = tf.metrics.accuracy(labels=tf.argmax(labels, axis=1), predictions=tf.argmax(logits, axis=1))\n",
    "        tf.summary.scalar('accuracy', accuracy[1]) #<-- accuracy[1] to grab the value\n",
     "        tf.summary.image(\"input_features\", tf.reshape(features, [-1, 32, 32, 3]), max_outputs=3) #<-- CIFAR-10 images have 3 channels\n",
    "        #QUIZ: use tf.summary.image() to show the output of the new convolutional layers you added\n",
    "        \n",
    "        logging_hook = tf.train.LoggingTensorHook({\"accuracy\" : accuracy[1]}, every_n_iter=200)\n",
    "        return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op, training_hooks =[logging_hook])\n",
    "    #EVAL mode\n",
    "    elif mode == tf.estimator.ModeKeys.EVAL:\n",
    "        loss = tf.losses.softmax_cross_entropy(onehot_labels=labels, logits=logits)\n",
    "        accuracy = tf.metrics.accuracy(labels=tf.argmax(labels, axis=1), predictions=tf.argmax(logits, axis=1))\n",
    "        eval_metric = {\"accuracy\": accuracy}\n",
    "        return tf.estimator.EstimatorSpec(mode=mode, loss=loss, eval_metric_ops=eval_metric)"
   ]
  },
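  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For the `tf.summary.image()` quiz, one common trick is to move the channel axis into the batch axis, because the method expects a `[batch, height, width, channels]` tensor with 1, 3 or 4 channels. A sketch, with a dummy tensor standing in for the convolutional layer you added:\n",
    "\n",
    "```python\n",
    "import tensorflow as tf\n",
    "\n",
    "conv_out = tf.zeros([4, 8, 8, 16])  # stand-in for a conv layer output: [batch, H, W, C]\n",
    "# take the first image of the batch and show each channel as a grayscale map\n",
    "feature_maps = tf.transpose(conv_out[0:1], perm=[3, 1, 2, 0])  # -> [C, H, W, 1]\n",
    "tf.summary.image('conv_features', feature_maps, max_outputs=8)\n",
    "```"
   ]
  },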
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "lenet5 = tf.estimator.Estimator(model_fn=my_model_fn, model_dir=\"/tmp/tf_model\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Train the model\n",
    "------------------\n",
    "\n",
     "The training code is given; there are **no quizzes here**. However, you need the TFRecord file that is used to load the dataset. You have to follow [this notebook](../cifar10/cifar10.ipynb), sorry about that...\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def my_input_fn():\n",
    "    def _parse_function(example_proto):\n",
    "        features = {\"image\": tf.FixedLenFeature((), tf.string, default_value=\"\"),\n",
    "                    \"label\": tf.FixedLenFeature((), tf.int64, default_value=0)}\n",
    "        parsed_features = tf.parse_single_example(example_proto, features)\n",
    "        image_decoded = tf.decode_raw(parsed_features[\"image\"], tf.uint8) #char -> uint8\n",
    "        image_reshaped = tf.reshape(image_decoded, [32, 32, 3])\n",
    "        image = tf.cast(image_reshaped, tf.float32)\n",
    "        label_one_hot = tf.one_hot(parsed_features[\"label\"], depth=10, dtype=tf.int32)\n",
    "        return image, label_one_hot\n",
    "\n",
    "    tf_train_dataset = tf.data.TFRecordDataset(\"./cifar10_train.tfrecord\")\n",
    "    tf_train_dataset = tf_train_dataset.map(_parse_function)\n",
     "    tf_train_dataset = tf_train_dataset.cache() # cache() returns a new dataset; the result must be assigned\n",
    "    tf_train_dataset = tf_train_dataset.shuffle(buffer_size = 50000 * 2)\n",
    "    tf_train_dataset = tf_train_dataset.repeat(11)\n",
    "    tf_train_dataset = tf_train_dataset.batch(32)\n",
     "    print(\"Train dataset: \" + str(tf_train_dataset))\n",
    "    \n",
    "    iterator = tf_train_dataset.make_one_shot_iterator()\n",
    "    batch_features, batch_labels = iterator.get_next()\n",
    "    return batch_features, batch_labels"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "tf.logging.set_verbosity(tf.logging.INFO)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "lenet5.train(input_fn=my_input_fn, steps=2000)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Test the model\n",
    "------------------\n",
    "\n",
     "No quizzes here either. The code to parse the dataset is given. However, you will need the CIFAR-10 test set in TFRecord format, so make sure you have completed [this notebook](../cifar10/cifar10.ipynb). There is something you may want to do: implement the code for visualizing a [confusion matrix](https://en.wikipedia.org/wiki/Confusion_matrix). You may be happy to know that TensorFlow has a `tf.confusion_matrix()` method. Check the values in the confusion matrix to understand which categories are most often mislabeled."
   ]
  },
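  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`tf.confusion_matrix()` takes two vectors of class indices; a minimal sketch on toy data (in the real case you would collect the true labels and the `'classes'` predictions returned by `lenet5.predict()`):\n",
    "\n",
    "```python\n",
    "import tensorflow as tf\n",
    "\n",
    "true_classes = [0, 1, 2, 2]  # toy ground-truth class indices\n",
    "pred_classes = [0, 2, 2, 2]  # toy predicted class indices\n",
    "cm = tf.confusion_matrix(labels=true_classes, predictions=pred_classes, num_classes=10)\n",
    "with tf.Session() as sess:\n",
    "    print(sess.run(cm))  # row = true class, column = predicted class\n",
    "```"
   ]
  },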
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def my_eval_input_fn():\n",
    "    def _parse_function(example_proto):\n",
    "        features = {\"image\": tf.FixedLenFeature((), tf.string, default_value=\"\"),\n",
    "                    \"label\": tf.FixedLenFeature((), tf.int64, default_value=0)}\n",
    "        parsed_features = tf.parse_single_example(example_proto, features)\n",
    "        image_decoded = tf.decode_raw(parsed_features[\"image\"], tf.uint8) #char -> uint8\n",
    "        image_reshaped = tf.reshape(image_decoded, [32, 32, 3])\n",
    "        image = tf.cast(image_reshaped, tf.float32)\n",
    "        label_one_hot = tf.one_hot(parsed_features[\"label\"], depth=10, dtype=tf.int32)\n",
    "        return image, label_one_hot\n",
    "\n",
    "    tf_test_dataset = tf.data.TFRecordDataset(\"./cifar10_test.tfrecord\")\n",
    "    tf_test_dataset = tf_test_dataset.map(_parse_function)\n",
     "    tf_test_dataset = tf_test_dataset.cache() # cache() returns a new dataset; the result must be assigned\n",
     "    tf_test_dataset = tf_test_dataset.repeat(1) # one pass over the dataset\n",
     "    tf_test_dataset = tf_test_dataset.batch(1) # batch size\n",
     "    print(\"Test dataset: \" + str(tf_test_dataset))\n",
    "    \n",
    "    iterator_test = tf_test_dataset.make_one_shot_iterator()\n",
    "    batch_features, batch_labels = iterator_test.get_next()\n",
    "    return batch_features, batch_labels"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "lenet5.evaluate(input_fn=my_eval_input_fn, steps=10000)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Repeat again...\n",
    "-----------------\n",
    "\n",
     "Changing the network architecture is not the only way to improve performance. Use dropout, batch normalization, and adaptive gradient methods (e.g. RMSProp or Adam) to improve the accuracy on the test set."
   ]
  },
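  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For instance, dropout and an adaptive optimizer can be plugged into the model function like this (a sketch; the 0.4 rate is an illustrative choice and `dense` stands in for your last dense layer):\n",
    "\n",
    "```python\n",
    "import tensorflow as tf\n",
    "\n",
    "dense = tf.layers.dense(tf.zeros([8, 256]), units=128, activation=tf.nn.relu)  # stand-in for your dense layer\n",
    "# dropout must be active only in TRAIN mode: training=(mode == tf.estimator.ModeKeys.TRAIN)\n",
    "dropout = tf.layers.dropout(dense, rate=0.4, training=True)\n",
    "# replace GradientDescentOptimizer with an adaptive method\n",
    "optimizer = tf.train.AdamOptimizer(learning_rate=0.001)\n",
    "```"
   ]
  },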
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 2",
   "language": "python",
   "name": "python2"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
