{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Deep Learning Project: Build a Traffic Sign Recognition Classifier"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Overview:\n",
    "\n",
    "Imagine we are entering a drawing competition where the goal is to draw something amazing from scratch. Before the competition we train ourselves by looking at famous art, and observing the lines. To practice, we take a Monet, and then start to draw it ourselves, exclusively using arrows, trying to follow the lines. Then we compare our work to Monet's, with the goal that all our little arrows match his lines. Over time, the more we practice, the closer we can get out mental model to be like Monet's, and the better we will do in the art competition. This program is designed to do the process for us - to look at data, make some guesses of where the arrows should be, and create a model to use in the future.\n",
    "\n",
    "To do this, we have one piece of paper we use as the answer key, and the other we write our guesses on. On our answer key, we have Monet's starry night. It's a beauty. So we go over to our guesses page and start drawing arrows wildly - at random almost! Then after a time we go back to our answer key and compare the two. We carefully add up the lengths of each arrow and compare. We erase some of the arrows on the guess paper and re-write them as we learn more. We group arrows together, starting with basic outlines and shapes, then filling in more complex features. Slowly, over time, we update our mental model for what each grouping of arrows should look like. Eventually we feel confident that, if given a fresh sheet of paper, by following our mental model of the groups of arrows, we could systematically make it look like the answer key. This mental knowledge we have gained is the machine's trained model. The arrows are vectors. Groups of arrows are matrices. In essence we are using the groups of arrows to detect patterns in the information."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "\n",
    "## Step 1: Dataset Summary & Exploration\n",
    "\n",
    "The pickled data is a dictionary with 4 key/value pairs:\n",
    "\n",
    "- `'features'` is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).\n",
    "- `'labels'` is a 1D array containing the label/class id of the traffic sign. The file `signnames.csv` contains id -> name mappings for each id.\n",
    "- `'sizes'` is a list containing tuples, (width, height) representing the original width and height the image.\n",
    "- `'coords'` is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. **THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES**\n",
    "\n",
    "Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the [pandas shape method](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shape.html) might be useful for calculating some of the summary results. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Load pickled data\n",
    "import pickle\n",
    "\n",
    "training_file = \"data/train.p\"\n",
    "testing_file = \"data/test.p\"\n",
    "\n",
    "with open(training_file, mode='rb') as f:\n",
    "    train = pickle.load(f)\n",
    "with open(testing_file, mode='rb') as f:\n",
    "    test = pickle.load(f)\n",
    "    \n",
    "X_train, y_train = train['features'], train['labels']\n",
    "X_test, y_test = test['features'], test['labels']\n",
    "\n",
    "print(y_train)\n",
    "print(y_train.shape)\n",
    "\n",
    "print(\"Loading Complete\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Number of training examples\n",
    "n_train = len(X_train)\n",
    "\n",
    "# Number of testing examples.\n",
    "n_test = len(X_test)\n",
    "\n",
    "# What's the shape of an traffic sign image?\n",
    "image_shape = X_train[0].shape\n",
    "\n",
    "# How many unique classes/labels there are in the dataset.\n",
    "n_classes = max(y_train) + 1\n",
    "\n",
    "print(\"Number of training examples =\", n_train)\n",
    "print(\"Number of testing examples =\", n_test)\n",
    "print(\"Image data shape is\", image_shape)\n",
    "print(\"Number of classes =\", n_classes)\n",
    "print(np.unique(y_train))\n",
    "print(\"Loading Complete\")"
   ]
  },
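  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check, `np.unique` with `return_counts=True` gives both the class ids and their frequencies in one call. A minimal sketch, with toy labels standing in for `y_train`:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Toy labels standing in for y_train; the real ids run 0..42\n",
    "labels = np.array([0, 2, 2, 1, 42, 2])\n",
    "\n",
    "classes, counts = np.unique(labels, return_counts=True)\n",
    "n_classes = len(classes)   # distinct class ids actually present\n",
    "```"
   ]
  },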
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "### Data exploration visualization code goes here.\n",
    "### Feel free to use as many code cells as needed.\n",
    "import matplotlib.pyplot as plt\n",
    "import random\n",
    "from PIL import Image\n",
    "import numpy as np\n",
    "import random\n",
    "from PIL import Image, ImageEnhance\n",
    "# Visualizations will be shown in the notebook.\n",
    "%matplotlib inline\n",
    "\n",
    "# Load name of id\n",
    "with open(\"signnames.csv\", \"r\") as f:\n",
    "    signnames = f.read()\n",
    "id_to_name = { int(line.split(\",\")[0]):line.split(\",\")[1] for line in signnames.split(\"\\n\")[1:] if len(line) > 0}\n",
    "\n",
    "\n",
    "graph_size = 3\n",
    "random_index_list = [random.randint(0, X_train.shape[0]) for _ in range(graph_size * graph_size)]\n",
    "fig = plt.figure(figsize=(15, 15))\n",
    "for i, index in enumerate(random_index_list):\n",
    "    a=fig.add_subplot(graph_size, graph_size, i+1)\n",
    "    #im = Image.fromarray(np.rollaxis(X_train[index] * 255, 0,3))\n",
    "    imgplot = plt.imshow(X_train[index])\n",
    "    # Plot some images\n",
    "    a.set_title('%s' % id_to_name[y_train[index]])\n",
    "\n",
    "plt.show()\n",
    "\n",
    "\n",
    "\n",
    "fig, ax = plt.subplots()\n",
    "# the histogram of the data\n",
    "values, bins, patches = ax.hist(y_train, n_classes, normed=10)\n",
    "\n",
    "# add a 'best fit' line\n",
    "ax.set_xlabel('Smarts')\n",
    "ax.set_title(r'Histogram of classess')\n",
    "\n",
    "# Tweak spacing to prevent clipping of ylabel\n",
    "fig.tight_layout()\n",
    "\n",
    "print (\"Most common index\")\n",
    "most_common_index = sorted(range(len(values)), key=lambda k: values[k], reverse=True)\n",
    "for index in most_common_index[:10]:\n",
    "    print(\"index: %s => %s = %s\" % (index, id_to_name[index], values[index]))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "import random\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "%matplotlib inline\n",
    "\n",
    "index = random.randint(1, len(X_train))\n",
    "image = X_train[index].squeeze()\n",
    "\n",
    "plt.figure(figsize=(1,1))\n",
    "plt.imshow(image)\n",
    "print(y_train[index])\n",
    "print(\"Complete\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# TODO refactor into proper function\n",
    "\n",
    "def to_gray_scale(img_data):\n",
    "    return cv2.cvtColor(img_data.astype(np.float32), cv2.COLOR_RGB2GRAY)\n",
    "\n",
    "import cv2\n",
    "\n",
    "# credit to https://carnd-forums.udacity.com/questions/26216649/problem-when-converting-to-gray\n",
    "X_train_gray = np.zeros([X_train.shape[0], X_train.shape[1], X_train.shape[2]])\n",
    "\n",
    "for feature in range(len(X_train)):\n",
    "    #print(X_train[feature].dtype)\n",
    "    X_train_gray[feature] = to_gray_scale(X_train[feature])\n",
    "    \n",
    "X_train = X_train_gray"
   ]
  },
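  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The per-image loop above can also be done in one vectorized step: `cv2.COLOR_RGB2GRAY` applies the ITU-R BT.601 luma weights, so the same conversion is a dot product over the channel axis. A minimal sketch, with a toy batch standing in for `X_train`:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# BT.601 luma weights, the convention cv2.COLOR_RGB2GRAY uses\n",
    "weights = np.array([0.299, 0.587, 0.114], dtype=np.float32)\n",
    "\n",
    "batch = np.ones((4, 32, 32, 3), dtype=np.float32)  # toy stand-in for X_train\n",
    "gray = batch @ weights                              # shape (4, 32, 32)\n",
    "```"
   ]
  },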
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "index = random.randint(1, len(X_train))\n",
    "image = X_train[index].squeeze()\n",
    "\n",
    "plt.figure(figsize=(1,1))\n",
    "plt.imshow(image, cmap='gray')\n",
    "print(y_train[index])\n",
    "print(\"Loading Complete\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "import cv2\n",
    "\n",
    "def transform_image(img,ang_range,shear_range,trans_range):\n",
    "    '''\n",
    "    This function transforms images to generate new images.\n",
    "    The function takes in following arguments,\n",
    "    1- Image\n",
    "    2- ang_range: Range of angles for rotation\n",
    "    3- shear_range: Range of values to apply affine transform to\n",
    "    4- trans_range: Range of values to apply translations over. \n",
    "    \n",
    "    A Random uniform distribution is used to generate different parameters for transformation\n",
    "    \n",
    "    '''\n",
    "    # Rotation\n",
    "    ang_rot = np.random.uniform(ang_range)-ang_range/2\n",
    "    # updated to reflect gray pipeline\n",
    "    rows,cols = img.shape    \n",
    "    Rot_M = cv2.getRotationMatrix2D((cols/2,rows/2),ang_rot,1)\n",
    "\n",
    "    # Translation\n",
    "    tr_x = trans_range*np.random.uniform()-trans_range/2\n",
    "    tr_y = trans_range*np.random.uniform()-trans_range/2\n",
    "    Trans_M = np.float32([[1,0,tr_x],[0,1,tr_y]])\n",
    "\n",
    "    # Shear\n",
    "    pts1 = np.float32([[5,5],[20,5],[5,20]])\n",
    "\n",
    "    pt1 = 5+shear_range*np.random.uniform()-shear_range/2\n",
    "    pt2 = 20+shear_range*np.random.uniform()-shear_range/2\n",
    "\n",
    "    pts2 = np.float32([[pt1,5],[pt2,pt1],[5,pt2]])\n",
    "\n",
    "    shear_M = cv2.getAffineTransform(pts1,pts2)\n",
    "        \n",
    "    img = cv2.warpAffine(img,Rot_M,(cols,rows))\n",
    "    img = cv2.warpAffine(img,Trans_M,(cols,rows))\n",
    "    img = cv2.warpAffine(img,shear_M,(cols,rows))\n",
    "    \n",
    "    return img"
   ]
  },
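  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Each jitter parameter in `transform_image` follows the same pattern, `range * np.random.uniform() - range / 2`, which draws uniformly from `[-range/2, range/2)`. A quick numeric check of that claim:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "trans_range = 2\n",
    "\n",
    "# Same expression as in transform_image, vectorized over many draws\n",
    "samples = rng.uniform(size=10_000) * trans_range - trans_range / 2\n",
    "```"
   ]
  },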
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Transformation testing"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "import matplotlib.gridspec as gridspec\n",
    "import matplotlib.image as mpimg\n",
    "\n",
    "print(\"Original:\")\n",
    "plt.imshow(image, cmap=\"gray\");\n",
    "plt.axis('off');\n",
    "plt.show()\n",
    "\n",
    "gs1 = gridspec.GridSpec(10, 10)\n",
    "gs1.update(wspace=0.01, hspace=0.02)\n",
    "plt.figure(figsize=(12,12))\n",
    "\n",
    "print(\"Generated images:\")\n",
    "for i in range(10):\n",
    "    ax1 = plt.subplot(gs1[i])\n",
    "    ax1.set_xticklabels([])\n",
    "    ax1.set_yticklabels([])\n",
    "    ax1.set_aspect('equal')\n",
    "    img = transform_image(image,30,4,4)\n",
    "\n",
    "    plt.subplot(10,10,i+1)\n",
    "    plt.imshow(img, cmap=\"gray\")\n",
    "    plt.axis('off')\n",
    "\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "-----\n",
    "## Image Generation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "\"\"\"\n",
    "\n",
    "This loop looks at each label in the data set and generates images up to the feature_count_goal.\n",
    "\n",
    "\"\"\"\n",
    "\n",
    "goal_number_of_features = 7500   # features per label\n",
    "ang_range = 15   # Range of angles for rotation\n",
    "shear_range = 2   # Range of values to apply affine transform to\n",
    "trans_range = 2   # Range of values to apply translations over.\n",
    "\n",
    "print(\"Generating additional features.\")\n",
    "print(\"Bins\", np.bincount(y_train))\n",
    "\n",
    "from pandas.io.parsers import read_csv\n",
    "signnames = read_csv(\"signnames.csv\").values[:, 1]\n",
    "unique_labels = np.unique(y_train)\n",
    "\n",
    "print(len(X_train))\n",
    "    \n",
    "for label_id in range(len(unique_labels)):\n",
    "    unique_labels = np.unique(y_train)\n",
    "    #Print update to feature tracking.\n",
    "    print(\"Current label name: \", signnames[label_id])\n",
    "    print(\"Current label id: \", label_id)\n",
    "    \n",
    "    #Print feature currently being generate    \n",
    "    y_labels = np.where(y_train == label_id)\n",
    "    \n",
    "    number_of_features = len(X_train[y_labels])\n",
    "    print(\"Number of features: \", number_of_features)\n",
    "    feature_difference = goal_number_of_features - number_of_features\n",
    "    \n",
    "    # Set features to generate to 0 if less than 0\n",
    "    if feature_difference > 0:\n",
    "        features_to_be_generated = feature_difference\n",
    "    else:\n",
    "        features_to_be_generated = 0\n",
    "    print(\"features_to_be_generated: \", features_to_be_generated)\n",
    "    \n",
    "    # Graceful handling if no features to be generated\n",
    "    if features_to_be_generated > 0:\n",
    "        \n",
    "        print(\"Generating images for \", signnames[label_id])\n",
    "        new_features = []\n",
    "        new_labels = []\n",
    "        \n",
    "        # Start actually generated features while there are features to be generated\n",
    "        while i <= features_to_be_generated:\n",
    "            for feature in X_train[y_labels]:\n",
    "                \n",
    "                # Graceful stopping if > 1 passes through loop\n",
    "                if features_to_be_generated == 0: \n",
    "                    break\n",
    "                \n",
    "                else:\n",
    "                    # generate image\n",
    "                    new_image = transform_image(feature,ang_range,shear_range,trans_range)\n",
    "                    \n",
    "                    new_features.append(new_image)\n",
    "                    new_labels.append(label_id)\n",
    "                    \n",
    "                    features_to_be_generated = features_to_be_generated - 1\n",
    "        i = i + 1\n",
    "\n",
    "        # Append image to data\n",
    "        # IMPORTANT axis=0 must be set or strange issues even though supposedly default is axis=0\n",
    "        X_train = np.append(X_train, new_features, axis=0)\n",
    "        y_train = np.append(y_train, new_labels, axis=0)\n",
    "        \n",
    "    else:\n",
    "        print(\"Passing, no images to generate\")\n",
    "        \n",
    "    # update y labels\n",
    "    y_labels = np.where(y_train == label_id)\n",
    "    x = np.array(y_labels)\n",
    "    x_min = x[0, -200]\n",
    "    x_max = x[0, -1]\n",
    "    random_index = random.sample(range(x_min, x_max), 10)\n",
    "    \n",
    "    # graphing function concepts from http://navoshta.com/traffic-signs-classification/\n",
    "    fig = plt.figure(figsize = (6, 1))\n",
    "    fig.subplots_adjust(left = 0, right = 1, bottom = 0, top = 1, hspace = 0.05, wspace = 0.05)\n",
    "    \n",
    "    for i in range(10):\n",
    "        axis = fig.add_subplot(1, 10, i + 1, xticks=[], yticks=[])\n",
    "        axis.imshow(X_train[random_index[i]], cmap=\"gray\")\n",
    "    plt.show()\n",
    "    print(\"-----------------------------------------------------\\n\")\n",
    "\n",
    "\n",
    "bins = np.bincount(y_train)\n",
    "print(\"Bins\", bins)\n",
    "print(\"Complete\")"
   ]
  },
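  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The per-class bookkeeping in the loop above can be condensed: the shortfall for every class relative to the target is the clipped difference between the goal and `np.bincount`. A minimal sketch (`deficits` is a hypothetical helper, shown with toy labels):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def deficits(labels, goal):\n",
    "    # Per-class shortfall relative to the target count (0 when already at goal)\n",
    "    counts = np.bincount(labels, minlength=int(labels.max()) + 1)\n",
    "    return np.maximum(goal - counts, 0)\n",
    "\n",
    "labels = np.array([0, 0, 0, 1, 2, 2])\n",
    "gaps = deficits(labels, 3)   # class 0 needs nothing, class 1 needs 2, class 2 needs 1\n",
    "```"
   ]
  },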
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "plt.bar( np.arange( 43 ), bins, align='center' )\n",
    "plt.xlabel('Class')\n",
    "plt.ylabel('Number of training examples')\n",
    "plt.xlim([-1, 43])\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Normalize features\n",
    "print('Scale features to be in 0 to 1')\n",
    "X_train = (X_train / 255.).astype(np.float32)\n",
    "\n",
    "inputs_per_class = np.bincount(y_train)\n",
    "print(inputs_per_class)\n",
    "n_train = len(X_train)\n",
    "\n",
    "print(\"Number of training examples =\", n_train)\n",
    "\n",
    "# TODO store data in a pickle file"
   ]
  },
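  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Dividing by 255 is min-max scaling into [0, 1]; a common alternative is zero-mean/unit-variance standardization. A small sketch contrasting the two on toy pixel values:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "x = np.array([0.0, 127.5, 255.0], dtype=np.float32)\n",
    "\n",
    "scaled = x / 255.0                        # min-max to [0, 1], as in the cell above\n",
    "standardized = (x - x.mean()) / x.std()   # zero-mean / unit-variance alternative\n",
    "```"
   ]
  },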
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "from sklearn.utils import shuffle\n",
    "X_train, y_train = shuffle(X_train, y_train)\n",
    "print(\"Complete\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Validation set creation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "# split into 80% for train and 20% for validation\n",
    "\n",
    "seed = 54645\n",
    "from sklearn.cross_validation import train_test_split\n",
    "X_train, X_validation, y_train, y_validation = train_test_split(X_train, y_train, test_size=0.20, random_state=seed, stratify=y_train)\n",
    "# Credit to someone in slack channel I think helped with sk.learn and to use stratify=y_train  80/20 split is common\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "n_train_new = len(X_train)\n",
    "print(\"Number of training examples =\", n_train_new)\n",
    "\n",
    "n_validation = len(X_validation)\n",
    "print(\"Number of validation examples =\", n_validation)\n",
    "\n",
    "print(\"Complete\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "#save data\n",
    "import h5py\n",
    "def savedata():\n",
    "    f = h5py.File('Train_and_Validation3.h5','w')   #创建一个h5文件，文件指针是f  \n",
    "    f['X_train'] = X_train                 #将数据写入文件的主键data下面  \n",
    "    f['X_validation'] = X_validation           #将数据写入文件的主键labels下面  \n",
    "    f[\"y_train\"]=y_train\n",
    "    f['y_validation']=y_validation\n",
    "    f.close()\n",
    "# savedata()  \n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "C:\\Program Files\\Anaconda3\\envs\\tf\\lib\\site-packages\\h5py\\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\n",
      "  from ._conv import register_converters as _register_converters\n"
     ]
    }
   ],
   "source": [
    "import h5py\n",
    "f = h5py.File('C://Train_and_Validation3.h5','r')   #打开h5文件\n",
    "X_train = list(f['X_train'])\n",
    "X_validation=list(f['X_validation'])\n",
    "y_train=list(f['y_train'])\n",
    "y_validation=list(f['y_validation'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[40, 39, 24, 0, 42, 30, 32, 18, 18]"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "y_train[1:10]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Question 1  and  Question 2\n",
    "\n",
    "_Describe how you preprocessed the data. Why did you choose that technique?_\n",
    "\n",
    "_Describe how you set up the training, validation and testing data for your model. **Optional**: If you generated additional data, how did you generate the data? Why did you generate the data? What are the differences in the new dataset (with generated data) from the original dataset?_"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Answer for 1 and 2:**\n",
    "\n",
    "From reading LeCun's paper on traffic sign recongition he suggested generating additional data, ie by rotating, scaling, and shifting data. I then researched options to do so.\n",
    "\n",
    "I had been used to using a cross validation set to test the network's performance prior to using the real test data. I researched ways to implmenet a validation set.\n",
    "\n",
    "I used 5000 images per label, I split the data as described here:\n",
    "\n",
    "    Original number of training examples = 39,209\n",
    "\n",
    "    Number of training examples = 258,000\n",
    "    Number of validation examples = 64,500\n",
    "\n",
    "    Total generated images: 283,291\n",
    "\n",
    "It seemed like a good idea to try and balance out the images. I feel further research is required on this however. For example, if a sign is less likely to appear in real life, perhaps it's ok if the network has a lower bias against it.\n",
    "\n",
    "Based on suggestion from LeCun's paper and Udacity, I tried converting to grascale. Again, I'm not convinced this is actually better. I feel like there is an oppportunity for the network to learn colour - especially as many signs use colour to communicate meaning.\n",
    "\n",
    "There is a feature normalization. Essentially instead of the workspace being in the 1-255 range, this brings it to the 0-1 range.\n",
    "\n",
    "I'm also using a random seed and shuffule functions to help it start in a decent place for both training and validation set creation.\n",
    "\n",
    "For generating the images I spent a lot of time on the loop functions. Essentially the key control is the target number of features per class. With this single control, the system builds images up to your desired amount, regardless of number of existing images in class. It perfectly balances it. But again, to above, I'm still not sure if that's the right path.\n",
    "\n",
    "The code could use re-factoring. However, I think it's the start of a fairly flexible model. I could easily add in other functions to each image. I haven't benchmarked performance difference between this and batching, but intuitively I prefer the concept of running a while loop for this type of stuff then having to worry about batching. There are many things I would like to build into this, including checkpointing, dumping into a pickle file, tracking/estimating time to run, etc. \n",
    ".\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "----\n",
    "\n",
    "# Step 2: Model Architecture\n",
    "\n",
    "Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the [German Traffic Sign Dataset](http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "comopleted\n"
     ]
    }
   ],
   "source": [
    "import tensorflow as tf\n",
    "\n",
    "def separable_Conv_Pooling(inputs,filters,name):\n",
    "    x = tf.layers.separable_conv2d(inputs,filters=filters,kernel_size=[3,3],padding='SAME',name=name+\"separable_conv2d1\")\n",
    "    x = tf.layers.batch_normalization(x,name=name+\"batch_norm1\")\n",
    "    x = tf.nn.relu(x,name=name+\"relu2\")\n",
    "    x = tf.layers.separable_conv2d(inputs=x,filters=filters,kernel_size=[3,3],padding='SAME',name=name+\"separable_conv2d2\")\n",
    "    x = tf.layers.batch_normalization(x,name=name+\"batch_norm2\")\n",
    "    x = tf.layers.max_pooling2d(inputs=x,pool_size=[3,3],strides=[2,2],name=\"maxpooling2d1\",padding=\"SAME\")\n",
    "    return x\n",
    "    \n",
    "    \n",
    "def separable_batch_norm(inputs,filters,kernel_size=[3,3],padding=\"VAlID\",name=\"\"):\n",
    "    x = tf.layers.separable_conv2d(inputs,filters=filters,kernel_size=kernel_size,name=name+\"separable_conv2d1\",padding=padding)\n",
    "    x = tf.layers.batch_normalization(x,name=name+\"batch1\")\n",
    "    return x\n",
    "\n",
    "def max_pooling(x,pool_size=[3,3],strides=[2,2],name=\"\"):\n",
    "    return tf.layers.max_pooling2d(inputs=x,pool_size=pool_size,strides=strides,name=name+\"maxpooling2d\")\n",
    "\n",
    "\n",
    "# \n",
    "def middle_flow_one_time(inputs,name):\n",
    "    net=tf.nn.relu(inputs,name=name+\"relu1\")\n",
    "    net=tf.layers.separable_conv2d(inputs=net,filters=728,kernel_size=[3,3],name=name+\"separable_conv2d1\",padding=\"SAME\")\n",
    "    net = tf.layers.batch_normalization(net,name=name+\"batch_norm1\")\n",
    "    \n",
    "    net=tf.nn.relu(net,name=name+\"relu2\")\n",
    "    net=tf.layers.separable_conv2d(inputs=net,filters=728,kernel_size=[3,3],name=name+\"separable_conv2d2\",padding=\"SAME\")\n",
    "    net = tf.layers.batch_normalization(net,name=name+\"batch_norm2\")\n",
    "    \n",
    "    net=tf.nn.relu(net,name=name+\"relu3\")\n",
    "    net=tf.layers.separable_conv2d(inputs=net,filters=728,kernel_size=[3,3],name=name+\"separable_conv2d3\",padding=\"SAME\")\n",
    "    net = tf.layers.batch_normalization(net,name=name+\"batch_norm3\")\n",
    "    \n",
    "    middle_flow_output=tf.add(inputs,net,name=name+\"add1\")\n",
    "    return middle_flow_output\n",
    "\n",
    "def Xception(x):\n",
    "    \n",
    "    \n",
    "    with tf.name_scope('Xception'):\n",
    "    #preprocess image  input shape should 299*299*3\n",
    "    \n",
    "    #Todo： reshape inputs to  299*299*3\n",
    "        input_layer=tf.reshape(x, [-1,32,32,1])\n",
    "        \n",
    "        mu = 0\n",
    "        sigma = 0.1\n",
    "\n",
    "    \n",
    "    #Entry flow\n",
    "    \n",
    "        #block 1\n",
    "        net =tf.layers.conv2d(inputs=input_layer,filters=32,kernel_size=[3,3],strides=[2,2],padding='VALID',name=\"block1_conv1\")\n",
    "        net = tf.layers.batch_normalization(net,name=\"block1_batch_norm1\")\n",
    "        net = tf.nn.relu(net,name='block1_relu1')\n",
    "            \n",
    "        \n",
    "        net = tf.layers.conv2d(inputs=net,filters=64,kernel_size=[3,3],name=\"block1_conv2\",padding='VALID')\n",
    "        net = tf.layers.batch_normalization(net,name=\"block1_batch_norm2\")\n",
    "        net = tf.nn.relu(net,name=\"block1_relu2\")\n",
    "        \n",
    "        \n",
    "          #block2\n",
    "        res = tf.layers.conv2d(inputs=net,filters=128,kernel_size=[1,1],strides=[2,2],name=\"block2_res1\",padding=\"VALID\")\n",
    "        net=separable_Conv_Pooling(inputs=net,filters=128,name=\"block2_\")\n",
    "        net=tf.add(res,net,name=\"block2_add1\")\n",
    "        \n",
    "        \n",
    "            #block3\n",
    "        res=tf.layers.conv2d(inputs=net,filters=256,kernel_size=[1,1],strides=[2,2],name=\"block3_res1\")\n",
    "        res=tf.layers.batch_normalization(res,name=\"block3_res1_batch_norm1\")\n",
    "        net = tf.nn.relu(net,name=\"block3_relu1\")\n",
    "        net= separable_Conv_Pooling(inputs=net,filters=256,name=\"block3_\")    \n",
    "        net=tf.add(res,net,name=\"block2_add1\")\n",
    "        \n",
    "        #block4\n",
    "        res=tf.layers.conv2d(inputs=net,filters=728,kernel_size=[1,1],strides=[2,2],name=\"block4_res\")\n",
    "        res=tf.layers.batch_normalization(res,name=\"block4_res1_batch_norm1\")\n",
    "        net=tf.nn.relu(net,name=\"block4_relu1\")\n",
    "        net=separable_Conv_Pooling(inputs=net,filters=728,name=\"block4_\")    \n",
    "        net=tf.add(res,net,name=\"block4_add1\")\n",
    "        \n",
    "        #output feature 19*19*128\n",
    "        \n",
    "        #middle flow\n",
    "        for i in range(8):\n",
    "            block_prefix=\"block\"+str(i+5)+\"_\"\n",
    "            net=middle_flow_one_time(net,name=block_prefix)\n",
    "      \n",
    "        \n",
    "        #output feature 19*19*128\n",
    "        \n",
    "        #exit flow\n",
    "        res=tf.layers.conv2d(inputs=net,filters=1024,kernel_size=[1,1],strides=[2,2],name=\"block12_res1\")\n",
    "        res=tf.layers.batch_normalization(res,name=\"block12_res1_batch_norm1\")\n",
    "        \n",
    "        net=tf.nn.relu(net,name=\"block12_relu1\")\n",
    "        net=separable_batch_norm(net,filters=728,kernel_size=[3,3],name=\"block12_1\",padding=\"SAME\")\n",
    "        \n",
    "        net = tf.nn.relu(net,name=\"block12_relu2\")\n",
    "        net = separable_batch_norm(net,filters=1024,kernel_size=[3,3],name=\"block12_2\",padding=\"SAME\")\n",
    "        net = tf.layers.max_pooling2d(inputs=net,pool_size=[3,3],strides=[2,2],name=\"block12_maxpooling2d\",padding=\"SAME\")\n",
    "        net= tf.add(res,net,name=\"block12_add\")\n",
    "        \n",
    "        #block13\n",
    "        net=separable_batch_norm(net,filters=1536,kernel_size=[3,3],name=\"block13_1\",padding=\"SAME\")\n",
    "        net = tf.nn.relu(net,name=\"block13_relu1\")\n",
    "        \n",
    "        net=separable_batch_norm(net,filters=2048,kernel_size=[3,3],name=\"block13_2\",padding=\"SAME\")\n",
    "        net = tf.nn.relu(net,name=\"block13_relu2\")\n",
    "        \n",
    "    #     globalAvaragePooling\n",
    "        net= tf.reduce_mean(net, [1,2])\n",
    "        \n",
    "        #output 2048-dimensional vectors\n",
    "        \n",
    "        #  Fully Connected. Input = 2048. Output = 43.\n",
    "        fully_connected_weights = tf.Variable(tf.truncated_normal(shape=(2048,43), mean=mu, stddev=sigma))\n",
    "        fully_connected_bias = tf.Variable(tf.zeros(43))\n",
    "        logits = tf.matmul(net, fully_connected_weights) + fully_connected_bias\n",
    "        \n",
    "\n",
    "        return logits\n",
    "\n",
    "print(\"comopleted\")"
   ]
  },
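  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The spatial sizes flowing through the entry blocks follow TensorFlow's standard output-size rules for 'VALID' and 'SAME' padding. A small sketch tracing the first two convolutions of `Xception()` on a 32x32 input (`conv_out` is a hypothetical helper implementing those rules):\n",
    "\n",
    "```python\n",
    "def conv_out(size, kernel, stride, padding):\n",
    "    # TF rules: SAME -> ceil(size / stride); VALID -> floor((size - kernel) / stride) + 1\n",
    "    if padding == 'SAME':\n",
    "        return -(-size // stride)          # ceiling division\n",
    "    return (size - kernel) // stride + 1\n",
    "\n",
    "s1 = conv_out(32, 3, 2, 'VALID')   # block1_conv1: 3x3, stride 2\n",
    "s2 = conv_out(s1, 3, 1, 'VALID')   # block1_conv2: 3x3, stride 1\n",
    "```"
   ]
  },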
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "258000"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "len(X_train)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Question 3\n",
    "\n",
    "What does your final architecture look like? (Type of model, layers, sizes, connectivity, etc.) \n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Answer:**\n",
    "\n",
    "I primarily used the LetNet Architecture.\n",
    "\n",
    "I also added drop_out as implemented by TensorFlow. I added dropout only to the last fully connected layer.\n",
    "\n",
    "\n",
    "    The input to the model is the 32x32 images.\n",
    "\n",
    "As they are pre-processed to grayscale, there is only a single colour channel.\n",
    "\n",
    "    The filter size in the first convolutional layer is 5x5.\n",
    "\n",
    "I used valid padding as it was the clearest to me.\n",
    "\n",
    "    Max pooling strides are 2x2.\n",
    "\n",
    "    I'm using built-in TensorFlow ops to flatten from 4D to 2D.\n",
    "\n",
    "I plan to experiment much further with various architectures, but felt I should start by getting a firm handle on how this one works. I have commented more below on next steps.\n"
   ]
  },
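  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The layer sizes above follow from the valid-padding arithmetic. A minimal sketch of that arithmetic (the function names are illustrative, not part of the project code):\n",
    "\n",
    "```python\n",
    "def valid_conv_out(in_size, filter_size, stride=1):\n",
    "    # VALID padding adds no border, so the output shrinks.\n",
    "    return (in_size - filter_size) // stride + 1\n",
    "\n",
    "def max_pool_out(in_size, pool=2, stride=2):\n",
    "    return (in_size - pool) // stride + 1\n",
    "\n",
    "conv1 = valid_conv_out(32, 5)  # 32x32 input, 5x5 filter -> 28\n",
    "pool1 = max_pool_out(conv1)    # 2x2 pooling, stride 2 -> 14\n",
    "```\n"
   ]
  },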
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Complete\n"
     ]
    }
   ],
   "source": [
    "EPOCHS = 20\n",
    "BATCH_SIZE = 216\n",
    "LEARNING_RATE = 0.001\n",
    "\n",
    "### Train your model here.\n",
    "tf.reset_default_graph()\n",
    "x = tf.placeholder(tf.float32, (None, 32, 32))\n",
    "# x = tf.placeholder(tf.float32, (None, 299, 299))\n",
    "\n",
    "y = tf.placeholder(tf.int32, (None))\n",
    "keep_prob = tf.placeholder(tf.float32)\n",
    "\n",
    "# added this to fix the CUDA_ERROR_ILLEGAL_ADDRESS bug / kernel crash\n",
    "# with tf.device('/cpu:0'):\n",
    "#     one_hot_y = tf.one_hot(y, 43)\n",
    "# with tf.device('/gpu:0'):\n",
    "one_hot_y = tf.one_hot(y, 43)    \n",
    "\n",
    "# logits = LeNet(x, keep_prob)\n",
    "logits=Xception(x)\n",
    "cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits,labels=one_hot_y)\n",
    "loss_operation = tf.reduce_mean(cross_entropy)\n",
    "optimizer = tf.train.RMSPropOptimizer(learning_rate=LEARNING_RATE,momentum=0.9,decay=0.9)\n",
    "\n",
    "training_operation = optimizer.minimize(loss_operation)\n",
    "\n",
    "print(\"Complete\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Complete\n"
     ]
    }
   ],
   "source": [
    "correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))\n",
    "accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n",
    "saver = tf.train.Saver()\n",
    "\n",
    "\n",
    "def evaluate(X_data, y_data):\n",
    "    num_examples = len(X_data)\n",
    "    total_accuracy = 0\n",
    "    sess = tf.get_default_session()\n",
    "    for offset in range(0, num_examples, BATCH_SIZE):\n",
    "        batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]\n",
    "        accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 1.0})\n",
    "        total_accuracy += (accuracy * len(batch_x))\n",
    "    return total_accuracy / num_examples\n",
    "\n",
    "print(\"Complete\")"
   ]
  },
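  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `evaluate` helper weights each batch's accuracy by its batch size before dividing by the total, so a short final batch does not skew the overall figure. A minimal NumPy sketch of the same bookkeeping (the function name and numbers are illustrative):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def weighted_accuracy(batch_accuracies, batch_sizes):\n",
    "    # Sum of (accuracy * batch size) divided by total examples,\n",
    "    # matching the accumulation done inside evaluate().\n",
    "    a = np.asarray(batch_accuracies, dtype=np.float64)\n",
    "    n = np.asarray(batch_sizes, dtype=np.float64)\n",
    "    return float((a * n).sum() / n.sum())\n",
    "\n",
    "# Three full batches of 216 and a final partial batch of 52.\n",
    "overall = weighted_accuracy([0.90, 0.95, 0.85, 1.00], [216, 216, 216, 52])\n",
    "```\n"
   ]
  },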
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Training"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "from sklearn.utils import shuffle \n",
    "tf.device('/cpu:0')\n",
    "\n",
    "with tf.Session() as sess:\n",
    "    sess.run(tf.global_variables_initializer())\n",
    "    writer = tf.summary.FileWriter(\"./summary/\", sess.graph)\n",
    "    writer.close()\n",
    "    num_examples = len(X_train)\n",
    "    \n",
    "    print(\"Training...\")\n",
    "    print()\n",
    "    for i in range(EPOCHS):\n",
    "        X_train, y_train = shuffle(X_train, y_train)\n",
    "#         for offset in range(0, num_examples, BATCH_SIZE):\n",
    "        for offset in range(0, 2000, BATCH_SIZE):\n",
    "\n",
    "            print(\"offset\"+str(offset)+\"/\"+str(2000))\n",
    "            end = offset + BATCH_SIZE\n",
    "            batch_x, batch_y = X_train[offset:end], y_train[offset:end]\n",
    "            \n",
    "            sess.run(training_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: .5})\n",
    "            \n",
    "#         writer.add_summary(summary, i)\n",
    "        validation_accuracy = evaluate(X_validation, y_validation)\n",
    "        print(\"EPOCH {} ...\".format(i+1))\n",
    "        print(\"Validation Accuracy = {:.3f}\".format(validation_accuracy))\n",
    "        \n",
    "    saver.save(sess, './Xception')  # save beside the notebook so latest_checkpoint('.') can find it\n",
    "    print(\"Model saved\")\n",
    "    \n",
    "print(\"Complete\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Evaluation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# pre processing\n",
    "# TODO abstract this better...\n",
    "\n",
    "X_test_gray = np.zeros([X_test.shape[0], X_test.shape[1], X_test.shape[2]])\n",
    "\n",
    "for feature in range(len(X_test)):\n",
    "    #print(X_train[feature].dtype)\n",
    "    X_test_gray[feature] = to_gray_scale(X_test[feature])\n",
    "X_test = X_test_gray\n",
    "X_test = (X_test / 255.).astype(np.float32)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "#Test data\n",
    "\n",
    "with tf.Session() as sess:\n",
    "    saver.restore(sess, tf.train.latest_checkpoint('.'))\n",
    "\n",
    "    test_accuracy = evaluate(X_test, y_test)\n",
    "    print(\"Test Accuracy = {:.3f}\".format(test_accuracy))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Question 4\n",
    "\n",
    "_How did you train your model? (Type of optimizer, batch size, epochs, hyperparameters, etc.)_\n",
    "\n",
    "**Answer:**\n",
    "\n",
    "I used the AdamOptimizer from the previous lesson.\n",
    "\n",
    "I generally found anything above .0001 was too fast; in general, it appears the higher the learning rate, the greater the risk of overfitting.\n",
    "\n",
    "    LEARNING_RATE = 0.00005\n",
    "\n",
    "    Dropout is set to 50% as recommended by Hinton et al.'s dropout paper.\n",
    "\n",
    "    One-hot encoding is set at 43 as there are 43 classes.\n",
    "\n",
    "    There are 150 epochs with a batch size of 256.\n",
    "\n",
    "I believe the model is overfitting the data, given the ~5% delta between validation and test accuracy.\n",
    "\n",
    "Lots of opportunity to improve here."
   ]
  },
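  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The one-hot encoding mentioned above turns each integer class label into a 43-dimensional indicator vector, as `tf.one_hot(y, 43)` does in the training cell. A minimal NumPy equivalent (the helper name is illustrative):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def one_hot(labels, num_classes=43):\n",
    "    # Each label index gets a single 1.0; everything else stays 0.0.\n",
    "    labels = np.asarray(labels)\n",
    "    out = np.zeros((len(labels), num_classes), dtype=np.float32)\n",
    "    out[np.arange(len(labels)), labels] = 1.0\n",
    "    return out\n",
    "\n",
    "encoded = one_hot([0, 42, 7])  # shape (3, 43), one 1.0 per row\n",
    "```\n"
   ]
  },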
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Question 5\n",
    "\n",
    "\n",
    "_What approach did you take in coming up with a solution to this problem? It may have been a process of trial and error, in which case, outline the steps you took to get to the final solution and why you chose those steps. Perhaps your solution involved an already well known implementation or architecture. In this case, discuss why you think this is suitable for the current problem._"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Answer:**\n",
    "\n",
    "\n",
    "I started with LeNet, added dropout, added a validation set, added binning combined with jitter functions, and generated extra data. At each step I generally trained for at least 20 epochs to see results, and mapped out the results.\n",
    "\n",
    "Beyond the apparently decent accuracy results, I don't know if it's suitable or not; I simply don't know enough yet to comment effectively here.\n",
    "\n",
    "The network has both convolutional and fully connected layers. The convolutional layers \"scan\" the image and help reduce duplication of features. As an example, consider a red bike, a blue car, and a pink bunny: the network would try to learn the concept of the colours separately from the concepts of the items, so as to have 6 features instead of 9.\n",
    "\n",
    "LeNet includes ReLU units as activations to create non-linearity in the model.\n",
    "\n",
    "To be clear, this is the LeNet model, with dropout added, and various settings and filters tuned to the task at hand.\n",
    "\n",
    "    Learn more about LeNet here: http://yann.lecun.com/exdb/lenet/\n",
    "\n",
    "Generally I followed an iterative process, adding features, then running the network to look at results."
   ]
  },
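  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The weight-sharing point above can be made concrete by counting parameters: a convolutional filter is reused at every image position, so its parameter count is independent of the image size, unlike a fully connected layer. A small sketch with illustrative numbers:\n",
    "\n",
    "```python\n",
    "def conv_params(kernel, in_ch, out_ch):\n",
    "    # Weights for each filter plus one bias per filter.\n",
    "    return kernel * kernel * in_ch * out_ch + out_ch\n",
    "\n",
    "def dense_params(n_in, n_out):\n",
    "    return n_in * n_out + n_out\n",
    "\n",
    "conv = conv_params(5, 1, 6)       # six 5x5 filters on grayscale -> 156\n",
    "dense = dense_params(32 * 32, 6)  # a dense layer over the same 32x32 input -> 6150\n",
    "```\n"
   ]
  },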
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "\n",
    "# Step 3: Test on new input\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## New input evaluation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "from scipy.misc import imresize\n",
    "new_label_sign_ids = read_csv(\"new_labels.csv\").values[:, 0]\n",
    "print(new_label_sign_ids)\n",
    "\n",
    "imgs = ['1.jpg', '2.jpg', '3.jpg', '4.jpg', '5.jpg']\n",
    "new_input = []\n",
    "\n",
    "for imgname in imgs:\n",
    "    image = mpimg.imread('extra-images/' + imgname)\n",
    "    image = imresize(image, (32,32))\n",
    "    new_input.append(image)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Combine images into pickle data format following help from forums\n",
    "\n",
    "imagesTogether = []\n",
    "new_labels = []\n",
    "\n",
    "for image in new_input:\n",
    "    image = cv2.resize(image, (32, 32))\n",
    "    print(image.shape)\n",
    "    \n",
    "    imagesTogether.append(image) \n",
    "    #print(len(imagesTogether))\n",
    "    \n",
    "    imagesTogetherNP=np.asarray(imagesTogether)\n",
    "    #print(imagesTogetherNP.shape)\n",
    "    \n",
    "for i in new_label_sign_ids:\n",
    "    \n",
    "    new_labels.append(i)\n",
    "    #print(len(new_labels))\n",
    "    \n",
    "    new_labelsNP=np.asarray(new_labels)\n",
    "    print(new_labelsNP.shape)\n",
    "\n",
    "print(new_labels)\n",
    "\n",
    "for image in imagesTogetherNP:\n",
    "    plt.figure(figsize=(1,1))\n",
    "    plt.imshow(image)\n",
    "    plt.show()\n",
    "\n",
    "print(\"Number of training examples =\", len(imagesTogetherNP))\n",
    "print(\"Number of labels =\", len(new_labels))\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "X_input_gray = np.zeros([imagesTogetherNP.shape[0], imagesTogetherNP.shape[1], imagesTogetherNP.shape[2]])\n",
    "\n",
    "for feature in range(len(imagesTogetherNP)):\n",
    "    #print(X_train[feature].dtype)\n",
    "    X_input_gray[feature] = to_gray_scale(imagesTogetherNP[feature])\n",
    "    \n",
    "imagesTogetherNP = X_input_gray"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "with tf.Session() as sess:\n",
    "    saver.restore(sess, tf.train.latest_checkpoint('.'))\n",
    "    test_accuracy = evaluate(imagesTogetherNP, new_labels)\n",
    "    print(\"Test Accuracy = {:.3f}\".format(test_accuracy))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Question 6\n",
    "\n",
    "Choose five candidate images of traffic signs and provide them in the report. Are there any particular qualities of the image(s) that might make classification difficult?"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Answer:**\n",
    "\n",
    "The stop sign is taken from Google Street View near my house. It's low resolution to start with, which may make it challenging.\n",
    "\n",
    "The children-walking and man-working signs are photographed at angles, which may make them harder.\n",
    "\n",
    "The no-entry sign is a small part of the overall image and is in a slightly different style from the training data.\n",
    "\n",
    "The one that will likely be difficult is the animal crossing. It's a different shape, colour, and overall style from the training data. The only similar feature is the animal image in the centre. I suspect it will be quite difficult for the system to classify.\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Question 7\n",
    "\n",
    "Is your model able to perform equally well on captured pictures when compared to testing on the dataset?"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Answer:**\n",
    "\n",
    "My model scored 60% on the limited new input set. A previous run with a faster learning rate scored 80%. I believe it held up fairly well overall. I plan to continue testing on more images as I make the code more robust.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "new_input_32 = np.float32(imagesTogetherNP)\n",
    "\n",
    "#print(X_test.dtype)\n",
    "#print(imagesTogetherNP.dtype)\n",
    "#print(new_input_32.dtype)\n",
    "\n",
    "predictSoftmax = tf.nn.softmax(logits)\n",
    "\n",
    "with tf.Session() as sess:\n",
    "    saver.restore(sess, tf.train.latest_checkpoint('.'))\n",
    "    print(\"Model restored\")\n",
    "    \n",
    "    softmaxProb  = sess.run(predictSoftmax, feed_dict={x: new_input_32, keep_prob: 1.0})\n",
    "         \n",
    "    top5 = sess.run(tf.nn.top_k(tf.constant(softmaxProb), k=5, sorted=True))\n",
    "    \n",
    "    # calculate certainty of the top prediction vs the next best one, per image\n",
    "    difference = top5.values[:, 0] - top5.values[:, 1]\n",
    "    \n",
    "    print(difference)\n",
    "    print(top5)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "new_label_sign_names = read_csv(\"new_labels.csv\").values[:, 1]\n",
    "\n",
    "\n",
    "# Credit https://github.com/navoshta/traffic-signs/blob/master/Traffic_Signs_Recognition.ipynb\n",
    "def plot_image_statistics(predictions, index):\n",
    "    plt.subplot2grid((2, 2), (0, 1), colspan=1, rowspan=2)\n",
    "    plt.barh(np.arange(5)+.5, predictions[0][index], align='center')\n",
    "    plt.yticks(np.arange(5)+.5, signnames[predictions[1][index].astype(int)])\n",
    "    plt.tick_params(axis='both', which='both', labelleft='off', labelright='on', labeltop='off', labelbottom='off')\n",
    "    plt.show()\n",
    "    \n",
    "    \n",
    "for i in range(5):\n",
    "    print(\"Actual class: \", new_label_sign_names[i])\n",
    "    plot_image_statistics(top5, i)\n",
    "    print(\"----------------------------------------------------------\\n\")\n",
    "    "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Question 8\n",
    "\n",
    "*Use the model's softmax probabilities to visualize the **certainty** of its predictions, [`tf.nn.top_k`](https://www.tensorflow.org/versions/r0.12/api_docs/python/nn.html#top_k) could prove helpful here. Which predictions is the model certain of? Uncertain? If the model was incorrect in its initial prediction, does the correct prediction appear in the top k? (k should be 5 at most)*\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Answer:**\n",
    "\n",
    "My system seems to be indicating, with near-complete certainty, that the category selected is correct.\n",
    "\n",
    "I believe this is further indication that the model is overfitting.\n",
    "\n",
    "I plan to continue doing further research in this area.\n"
   ]
  },
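  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "That near-complete certainty can be quantified as the gap between the top two softmax probabilities. A minimal NumPy sketch (the function names are illustrative):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def softmax(logits):\n",
    "    # Numerically stable softmax: subtract the max before exponentiating.\n",
    "    z = logits - np.max(logits, axis=-1, keepdims=True)\n",
    "    e = np.exp(z)\n",
    "    return e / e.sum(axis=-1, keepdims=True)\n",
    "\n",
    "def certainty_margin(logits):\n",
    "    # Difference between the largest and second-largest probabilities;\n",
    "    # a margin near 1.0 is the overconfidence described above.\n",
    "    p = np.sort(softmax(logits), axis=-1)\n",
    "    return p[..., -1] - p[..., -2]\n",
    "\n",
    "margin = certainty_margin(np.array([[10.0, 1.0, 0.5]]))  # close to 1.0\n",
    "```\n"
   ]
  },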
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "-----\n",
    "## Reflections"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    ""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Overall this has been a very good learning experience.\n",
    "\n",
    "My research direction on the applied side will be focusing on:\n",
    "\n",
    "- [ ] deconvolutions and ways to better understand hidden layers\n",
    "- [ ] better understanding of plotting options and data visualization\n",
    "- [ ] general refactoring to make the code here more robust, i.e. the pre-processing pipeline and new image input\n",
    "- [ ] deeper research into some of the functions I feel less comfortable with\n",
    "\n",
    "I think the best moment in the project was when it correctly classified the stop sign I uploaded. That was the \"WOW, this works!\" moment.\n"
   ]
  }
 ],
 "metadata": {
  "anaconda-cloud": {},
  "kernelspec": {
   "display_name": "Python [conda env:tf]",
   "language": "python",
   "name": "conda-env-tf-py"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.5.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
