{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "name": "RC Car End-to-End Image Regression with CNNs (RGB camera).ipynb",
      "provenance": [],
      "collapsed_sections": [],
      "toc_visible": true,
      "include_colab_link": true
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "accelerator": "GPU"
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/wilselby/diy_driverless_car_ROS/blob/ml-model/rover_ml/colab/RC_Car_End_to_End_Image_Regression_with_CNNs_(RGB_camera).ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7Ob-YPAvx7Nr",
        "colab_type": "text"
      },
      "source": [
        "# Development of an End-to-End ML Model for Navigating an RC car with a Camera"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "uPcv0EFav6LH",
        "colab_type": "text"
      },
      "source": [
        "<table class=\"tfo-notebook-buttons\" align=\"left\">\n",
        "  <td>\n",
        "    <a target=\"_blank\" href=\"https://colab.research.google.com/github/wilselby/diy_driverless_car_ROS/blob/ml-model/RC_Car_End_to_End_Image_Regression_with_CNNs_(RGB_camera).ipynb\">\n",
        "    <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />\n",
        "    Run in Google Colab</a>\n",
        "  </td>\n",
        "  <td>\n",
        "    <a target=\"_blank\" href=\"https://github.com/wilselby/diy_driverless_car_ROS/blob/ml-model/rover_ml/colab/RC_Car_End_to_End_Image_Regression_with_CNNs_(RGB_camera).ipynb\">\n",
        "    <img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />\n",
        "    View source on GitHub</a>\n",
        "  </td>\n",
        "</table>"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "EPxyR5SargY9",
        "colab_type": "text"
      },
      "source": [
        "# Environment Setup\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "lZbZCuOD2JLH",
        "colab_type": "text"
      },
      "source": [
        "## Import Dependencies"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "AqUpKnu52L5S",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import os\n",
        "import csv\n",
        "import cv2\n",
        "import matplotlib.pyplot as plt\n",
        "import random\n",
        "import pprint\n",
        "\n",
        "import numpy as np\n",
        "from numpy import expand_dims\n",
        "\n",
        "%tensorflow_version 1.x\n",
        "import tensorflow as tf\n",
        "tf.logging.set_verbosity(tf.logging.ERROR)\n",
        "\n",
        "from keras import backend as K\n",
        "from keras.models import Model, Sequential\n",
        "from keras.models import load_model\n",
        "from keras.layers import Dense, GlobalAveragePooling2D, MaxPooling2D, Lambda, Cropping2D\n",
        "from keras.layers.convolutional import Convolution2D\n",
        "from keras.layers.core import Flatten, Dense, Dropout, SpatialDropout2D\n",
        "from keras.optimizers import Adam\n",
        "from keras.callbacks import ModelCheckpoint, TensorBoard\n",
        "from keras.callbacks import EarlyStopping, ReduceLROnPlateau\n",
        "from keras.preprocessing.image import ImageDataGenerator\n",
        "from keras.preprocessing.image import load_img\n",
        "from keras.preprocessing.image import img_to_array \n",
        "   \n",
        "from google.colab.patches import cv2_imshow\n",
        "  \n",
        "import sklearn\n",
        "from sklearn.model_selection import train_test_split\n",
        "import pandas as pd\n",
        "\n",
        "print(\"Tensorflow Version:\",tf.__version__)\n",
        "print(\"Tensorflow Keras Version:\",tf.keras.__version__)\n",
        "print(\"Eager mode: \", tf.executing_eagerly())\n"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "7vy5iiwR19nJ",
        "colab_type": "text"
      },
      "source": [
        "## Confirm TensorFlow can see the GPU \n",
        "\n",
        "Simply select \"GPU\" in the Accelerator drop-down in Notebook Settings (either through the Edit menu or the command palette at cmd/ctrl-shift-P)."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "_0h6Afcy2E9l",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "device_name = tf.test.gpu_device_name()\n",
        "\n",
        "if device_name != '/device:GPU:0':\n",
        "  #raise SystemError('GPU device not found')\n",
        "  print('GPU device not found')\n",
        "else:\n",
        "  print('Found GPU at: {}'.format(device_name))\n",
        "  \n",
        "  #GPU count and name\n",
        "  !nvidia-smi -L"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "H2sPUxOR3FlN",
        "colab_type": "text"
      },
      "source": [
        "# Load the Dataset"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "8YsJN12w3aug",
        "colab_type": "text"
      },
      "source": [
        "## Download and Extract the Dataset"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "SfRnrsuuRXBL",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Download the dataset\n",
        "!curl -O https://selbystorage.s3-us-west-2.amazonaws.com/research/office_2/office_2.tar.gz"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "HV6z9-T4hd-G",
        "colab_type": "code",
        "cellView": "both",
        "colab": {}
      },
      "source": [
        "data_set = 'office_2'\n",
        "tar_file = data_set + '.tar.gz'\n",
        "\n",
        "# Extract the .tar.gz file\n",
        "# -x for extract\n",
        "# -v for verbose \n",
        "# -z for gzip\n",
        "# -f for file (should come at last just before file name)\n",
        "# -C to extract the zipped contents to a different directory\n",
        "!tar -xvzf $tar_file"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "9FogCRzr3sUF",
        "colab_type": "text"
      },
      "source": [
        "## Parse the CSV File"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "KeQ8c-9s3v8y",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Define path to csv file\n",
        "csv_path = data_set + '/interpolated.csv'\n",
        "\n",
        "# Load the CSV file into a pandas dataframe\n",
        "df = pd.read_csv(csv_path, sep=\",\")\n",
        "\n",
        "# Print the dimensions\n",
        "print(\"Dataset Dimensions:\")\n",
        "print(df.shape)\n",
        "\n",
        "# Print the first 5 lines of the dataframe for review\n",
        "print(\"\\nDataset Summary:\")\n",
        "df.head(5)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "3BnBpo6-F2oR",
        "colab_type": "text"
      },
      "source": [
        "# Clean and Pre-process the Dataset"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ub8CkShSJEkV",
        "colab_type": "text"
      },
      "source": [
        "## Remove Unnecessary Columns"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "PDth_K-3JINP",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Remove 'index' and 'frame_id' columns \n",
        "df.drop(['index','frame_id'],axis=1,inplace=True)\n",
        "\n",
        "# Verify new dataframe dimensions\n",
        "print(\"Dataset Dimensions:\")\n",
        "print(df.shape)\n",
        "\n",
        "# Print the first 5 lines of the new dataframe for review\n",
        "print(\"\\nDataset Summary:\")\n",
        "df.head(5)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "o-zof8fUDz2C",
        "colab_type": "text"
      },
      "source": [
        "## Detect Missing Data"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ffOXGmmQD2om",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Detect Missing Values\n",
        "print(\"Any Missing Values?: {}\".format(df.isnull().values.any()))\n",
        "\n",
        "# Total Sum\n",
        "print(\"\\nTotal Number of Missing Values: {}\".format(df.isnull().sum().sum()))\n",
        "\n",
        "# Sum Per Column\n",
        "print(\"\\nTotal Number of Missing Values per Column:\")\n",
        "print(df.isnull().sum())"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "cKBJ4sODIFOC",
        "colab_type": "text"
      },
      "source": [
        "## Remove Zero Throttle Values"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "QAk-fsbkIJrh",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Determine if any throttle values are zeroes\n",
        "print(\"Any 0 throttle values?: {}\".format(df['speed'].eq(0).any()))\n",
        "\n",
        "# Determine number of 0 throttle values:\n",
        "print(\"\\nNumber of 0 throttle values: {}\".format(df['speed'].eq(0).sum()))\n",
        "\n",
        "# Remove rows with 0 throttle values\n",
        "if df['speed'].eq(0).any():\n",
        "  df = df.query('speed != 0')\n",
        "  \n",
        "  # Reset the index\n",
        "  df.reset_index(inplace=True,drop=True)\n",
        "  \n",
        "# Verify new dataframe dimensions\n",
        "print(\"\\nNew Dataset Dimensions:\")\n",
        "print(df.shape)\n",
        "df.head(5)"
      ],
      "execution_count": 0,
      "outputs": []
    },
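    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Xq1ThrFil0md",
        "colab_type": "text"
      },
      "source": [
        "As a quick illustration, here is a self-contained sketch of the same zero-throttle filter on a toy DataFrame (the values below are hypothetical, not from the dataset):"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Xq1ThrFil0cd",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import pandas as pd\n",
        "\n",
        "# Toy frame standing in for the telemetry log (hypothetical values)\n",
        "toy = pd.DataFrame({'angle': [0.1, -0.2, 0.0, 0.3],\n",
        "                    'speed': [1.2, 0.0, 0.9, 0.0]})\n",
        "\n",
        "# Keep only rows with non-zero throttle, then reset the index\n",
        "toy = toy.query('speed != 0').reset_index(drop=True)\n",
        "print(toy.shape)  # (2, 2)\n"
      ],
      "execution_count": 0,
      "outputs": []
    },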
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "6XZJLwqKCbE7",
        "colab_type": "text"
      },
      "source": [
        "## View Label Statistics"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "AgOG94fnCeDB",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Steering Command Statistics\n",
        "print(\"\\nSteering Command Statistics:\")\n",
        "print(df['angle'].describe())\n",
        "\n",
        "print(\"\\nThrottle Command Statistics:\")\n",
        "# Throttle Command Statistics\n",
        "print(df['speed'].describe())"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "nae5TmmFFJ5T",
        "colab_type": "text"
      },
      "source": [
        "## View Histogram of Steering Commands"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Bh3DSasZQKCi",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "#@title Select the number of histogram bins\n",
        "\n",
        "num_bins = 25 #@param {type:\"slider\", min:5, max:50, step:1}\n",
        "\n",
        "hist, bins = np.histogram(df['angle'], num_bins)\n",
        "center = (bins[:-1]+ bins[1:]) * 0.5\n",
        "plt.bar(center, hist, width=0.05)\n",
        "#plt.plot((np.min(df['angle']), np.max(df['angle'])), (samples_per_bin, samples_per_bin))"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "cwunrGrDQweC",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "# Cap the number of samples per bin (150-300 works well for RGB data)\n",
        "#@title Normalize the Histogram { run: \"auto\" }\n",
        "hist = True #@param {type:\"boolean\"}\n",
        "\n",
        "remove_list = []\n",
        "samples_per_bin = 200\n",
        "\n",
        "if hist:\n",
        "  for j in range(num_bins):\n",
        "    list_ = []\n",
        "    for i in range(len(df['angle'])):\n",
        "      # half-open interval so edge values are not counted in two bins\n",
        "      if bins[j] <= df.loc[i,'angle'] < bins[j+1] or (j == num_bins-1 and df.loc[i,'angle'] == bins[-1]):\n",
        "        list_.append(i)\n",
        "    random.shuffle(list_)\n",
        "    list_ = list_[samples_per_bin:]\n",
        "    remove_list.extend(list_)\n",
        "\n",
        "  print('removed:', len(remove_list))\n",
        "  df.drop(df.index[remove_list], inplace=True)\n",
        "  df.reset_index(inplace=True)\n",
        "  df.drop(['index'],axis=1,inplace=True)\n",
        "  print('remaining:', len(df))\n",
        "  \n",
        "  hist, _ = np.histogram(df['angle'], (num_bins))\n",
        "  plt.bar(center, hist, width=0.05)\n",
        "  plt.plot((np.min(df['angle']), np.max(df['angle'])), (samples_per_bin, samples_per_bin))"
      ],
      "execution_count": 0,
      "outputs": []
    },
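    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Xq2BinCap0md",
        "colab_type": "text"
      },
      "source": [
        "The capping logic above can be sketched on synthetic angles. This standalone example truncates each bin deterministically for clarity; the real cell shuffles before truncating so samples are dropped at random:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Xq2BinCap0cd",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import numpy as np\n",
        "\n",
        "angles = np.array([0.0, 0.0, 0.0, 0.5, 0.5, 1.0])  # toy steering angles\n",
        "toy_bins = np.array([0.0, 0.5, 1.0])                # two bins\n",
        "cap = 2                                             # samples_per_bin\n",
        "\n",
        "keep = []\n",
        "for j in range(len(toy_bins) - 1):\n",
        "    # half-open interval, except the last bin which includes the right edge\n",
        "    if j == len(toy_bins) - 2:\n",
        "        in_bin = np.where((angles >= toy_bins[j]) & (angles <= toy_bins[j + 1]))[0]\n",
        "    else:\n",
        "        in_bin = np.where((angles >= toy_bins[j]) & (angles < toy_bins[j + 1]))[0]\n",
        "    keep.extend(in_bin[:cap].tolist())  # keep at most `cap` per bin\n",
        "print(sorted(keep))  # [0, 1, 3, 4]\n"
      ],
      "execution_count": 0,
      "outputs": []
    },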
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "1uqfxZ4uGoNX",
        "colab_type": "text"
      },
      "source": [
        "## View a Sample Image"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "L2nwHnC4Gq1m",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# View a Single Image \n",
        "index = random.randint(0,df.shape[0]-1)\n",
        "\n",
        "img_name = data_set + '/' + df.loc[index,'filename']\n",
        "angle = df.loc[index,'angle']\n",
        "\n",
        "center_image = cv2.imread(img_name)\n",
        "center_image_mod = cv2.resize(center_image, (320,180))\n",
        "# OpenCV loads images as BGR; convert to RGB for matplotlib\n",
        "center_image_mod = cv2.cvtColor(center_image_mod,cv2.COLOR_BGR2RGB)\n",
        "\n",
        "# Crop the image\n",
        "height_min = 75 \n",
        "height_max = center_image_mod.shape[0]\n",
        "width_min = 0\n",
        "width_max = center_image_mod.shape[1]\n",
        "\n",
        "crop_img = center_image_mod[height_min:height_max, width_min:width_max]\n",
        "\n",
        "plt.subplot(2,1,1)\n",
        "plt.imshow(center_image_mod)\n",
        "plt.grid(False)\n",
        "plt.xlabel('angle: {:.2}'.format(angle))\n",
        "plt.show() \n",
        "\n",
        "plt.subplot(2,1,2)\n",
        "plt.imshow(crop_img)\n",
        "plt.grid(False)\n",
        "plt.xlabel('angle: {:.2}'.format(angle))\n",
        "plt.show() "
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "gTQywtOyGvLv",
        "colab_type": "text"
      },
      "source": [
        "## View Multiple Images"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ODmdWWpsGxK2",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Number of Images to Display\n",
        "num_images = 4\n",
        "\n",
        "# Display the images\n",
        "for i in range(num_images):\n",
        "    index = random.randint(0,df.shape[0]-1)\n",
        "    image_path = df.loc[index,'filename']\n",
        "    angle = df.loc[index,'angle']\n",
        "    img_name = data_set + '/' + image_path\n",
        "    image = cv2.imread(img_name)\n",
        "    image = cv2.resize(image, (320,180))\n",
        "    # OpenCV loads images as BGR; convert to RGB for matplotlib\n",
        "    image = cv2.cvtColor(image,cv2.COLOR_BGR2RGB)\n",
        "    plt.subplot(num_images//2,num_images//2,i+1)\n",
        "    plt.xticks([])\n",
        "    plt.yticks([])\n",
        "    plt.grid(False)\n",
        "    plt.imshow(image, cmap=plt.cm.binary)\n",
        "    plt.xlabel('angle: {:.3}'.format(angle))"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "BgFvPAZl9vfP",
        "colab_type": "text"
      },
      "source": [
        "# Split the Dataset"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "clqhrZpXtYQy",
        "colab_type": "text"
      },
      "source": [
        "## Define an ImageDataGenerator to Augment Images\n",
        "\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "FcAb3NLiten6",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Create image data augmentation generator and choose augmentation types\n",
        "datagen = ImageDataGenerator(\n",
        "                             #rotation_range=20,\n",
        "                             zoom_range=0.15,\n",
        "                             #width_shift_range=0.1,\n",
        "                             #height_shift_range=0.2,\n",
        "                             #shear_range=10,\n",
        "                             brightness_range=[0.5,1.0],\n",
        "                             #horizontal_flip=True,\n",
        "                             #vertical_flip=True,\n",
        "                             #channel_shift_range=100.0,\n",
        "                             fill_mode=\"reflect\")"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "m0iJWp1YE-u5",
        "colab_type": "text"
      },
      "source": [
        "## View Image Augmentation Examples"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "fxKea6xhVO2T",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# load the image\n",
        "index = random.randint(0,df.shape[0]-1)\n",
        "\n",
        "img_name = data_set + '/' + df.loc[index,'filename']\n",
        "original_image = cv2.imread(img_name)\n",
        "# OpenCV loads images as BGR; convert to RGB for matplotlib\n",
        "original_image = cv2.cvtColor(original_image,cv2.COLOR_BGR2RGB)\n",
        "original_image = cv2.resize(original_image, (320,180))\n",
        "label = df.loc[index,'angle']\n",
        "\n",
        "# convert to numpy array\n",
        "data = img_to_array(original_image)\n",
        "\n",
        "# expand dimension to one sample\n",
        "test = expand_dims(data, 0)\n",
        "\n",
        "# prepare iterator\n",
        "it = datagen.flow(test, batch_size=1)\n",
        "\n",
        "# generate batch of images\n",
        "batch = it.next()\n",
        "\n",
        "# convert to unsigned integers for viewing\n",
        "image_aug = batch[0].astype('uint8')\n",
        "\n",
        "print(\"Augmenting a Single Image: \\n\")\n",
        "\n",
        "plt.subplot(2,1,1)\n",
        "plt.imshow(original_image)\n",
        "plt.grid(False)\n",
        "plt.xlabel('angle: {:.2}'.format(label))\n",
        "plt.show() \n",
        "\n",
        "plt.subplot(2,1,2)\n",
        "plt.imshow(image_aug)\n",
        "plt.grid(False)\n",
        "plt.xlabel('angle: {:.2}'.format(label))\n",
        "plt.show() \n",
        "\n",
        "print(\"Multiple Augmentations: \\n\")\n",
        "# generate samples and plot\n",
        "for i in range(num_images):\n",
        "    # define subplot\n",
        "    plt.subplot(num_images//2,num_images//2,i+1)\n",
        "    # generate batch of images\n",
        "    batch = it.next()\n",
        "    # convert to unsigned integers for viewing\n",
        "    image = batch[0].astype('uint8')\n",
        "    # plot raw pixel data\n",
        "    plt.imshow(image)\n",
        "# show the figure\n",
        "plt.show()\n"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "LUE19jPetzBs",
        "colab_type": "text"
      },
      "source": [
        "## Define a Data Generator"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "TgtIMpoSt1LW",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "def generator(samples, batch_size=32, aug=0):\n",
        "    num_samples = len(samples)\n",
        "\n",
        "    while 1:  # Loop forever so the generator never terminates\n",
        "        for offset in range(0, num_samples, batch_size):\n",
        "            batch_samples = samples[offset:offset + batch_size]\n",
        "\n",
        "            #print(batch_samples)\n",
        "            images = []\n",
        "            angles = []\n",
        "            for batch_sample in batch_samples:\n",
        "                if batch_sample[5] != \"filename\":  # skip any stray header row\n",
        "                    name = data_set + '/' + batch_sample[3]\n",
        "                    center_image = cv2.imread(name)\n",
        "                    # OpenCV loads images as BGR; convert to RGB\n",
        "                    center_image = cv2.cvtColor(center_image,cv2.COLOR_BGR2RGB)\n",
        "                    # resize from 1280x720 to 320x180 (cv2.resize takes (width, height))\n",
        "                    center_image = cv2.resize(center_image, (320, 180))\n",
        "                    angle = float(batch_sample[4])\n",
        "                    if not aug:\n",
        "                      images.append(center_image)\n",
        "                      angles.append(angle)\n",
        "                    else:\n",
        "                        data = img_to_array(center_image)\n",
        "                        sample = expand_dims(data, 0)\n",
        "                        it = datagen.flow(sample, batch_size=1)\n",
        "                        batch = it.next()\n",
        "                        image_aug = batch[0].astype('uint8')\n",
        "                        if random.random() < .5:\n",
        "                          image_aug = np.fliplr(image_aug)\n",
        "                          angle = -1 * angle\n",
        "                        images.append(image_aug)\n",
        "                        angles.append(angle)\n",
        "\n",
        "            X_train = np.array(images)\n",
        "            y_train = np.array(angles)\n",
        "\n",
        "            yield sklearn.utils.shuffle(X_train, y_train)"
      ],
      "execution_count": 0,
      "outputs": []
    },
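    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Xq3FlipAug0md",
        "colab_type": "text"
      },
      "source": [
        "The generator's horizontal-flip augmentation pairs every mirrored frame with a negated steering angle. A minimal NumPy sketch of that invariant, using a hypothetical toy image:"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Xq3FlipAug0cd",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "import numpy as np\n",
        "\n",
        "# Toy 2x3 'image' and steering angle (hypothetical values)\n",
        "img = np.arange(6).reshape(2, 3)\n",
        "angle = 0.25\n",
        "\n",
        "# Mirroring the frame left-right must also negate the steering angle,\n",
        "# otherwise the label no longer matches the image\n",
        "flipped = np.fliplr(img)\n",
        "flipped_angle = -1 * angle\n",
        "\n",
        "print(flipped[0])     # [2 1 0]\n",
        "print(flipped_angle)  # -0.25\n"
      ],
      "execution_count": 0,
      "outputs": []
    },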
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "PAVmOpT8HEg0",
        "colab_type": "text"
      },
      "source": [
        "## Split the Dataset"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "lZWDPKBnvGZI",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "samples = []\n",
        "\n",
        "samples = df.values.tolist()\n",
        "\n",
        "sklearn.utils.shuffle(samples)\n",
        "train_samples, validation_samples = train_test_split(samples, test_size=0.2)\n",
        "\n",
        "print(\"Number of training samples: \", len(train_samples))\n",
        "print(\"Number of validation samples: \", len(validation_samples))"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "g5u_LimyvOkv",
        "colab_type": "text"
      },
      "source": [
        "## Define Training and Validation Data Generators"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "ux8mS7YRpQaX",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "batch_size_value = 32\n",
        "img_aug = 0\n",
        "\n",
        "train_generator = generator(train_samples, batch_size=batch_size_value, aug=img_aug)\n",
        "validation_generator = generator(\n",
        "    validation_samples, batch_size=batch_size_value, aug=0)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-_WW4C27_4HO",
        "colab_type": "text"
      },
      "source": [
        "# Compile and Train the Model"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Z62Wkaj4ADbB",
        "colab_type": "text"
      },
      "source": [
        "## Build the Model"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "AK578kaYAE1_",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Initialize the model\n",
        "model = Sequential()\n",
        "\n",
        "# trim image to only see section with road\n",
        "# (top_crop, bottom_crop), (left_crop, right_crop)\n",
        "model.add(Cropping2D(cropping=((height_min,0), (width_min,0)), input_shape=(180,320,3)))\n",
        "\n",
        "# Preprocess incoming data, centered around zero with small standard deviation\n",
        "model.add(Lambda(lambda x: (x / 255.0) - 0.5))\n",
        "\n",
        "# Nvidia model\n",
        "model.add(Convolution2D(24, (5, 5), activation=\"relu\", name=\"conv_1\", strides=(2, 2)))\n",
        "model.add(Convolution2D(36, (5, 5), activation=\"relu\", name=\"conv_2\", strides=(2, 2)))\n",
        "model.add(Convolution2D(48, (5, 5), activation=\"relu\", name=\"conv_3\", strides=(2, 2)))\n",
        "model.add(SpatialDropout2D(0.5))\n",
        "\n",
        "model.add(Convolution2D(64, (3, 3), activation=\"relu\", name=\"conv_4\", strides=(1, 1)))\n",
        "model.add(Convolution2D(64, (3, 3), activation=\"relu\", name=\"conv_5\", strides=(1, 1)))\n",
        "\n",
        "model.add(Flatten())\n",
        "\n",
        "model.add(Dense(1164))\n",
        "model.add(Dropout(.5))\n",
        "model.add(Dense(100, activation='relu'))\n",
        "model.add(Dropout(.5))\n",
        "model.add(Dense(50, activation='relu'))\n",
        "model.add(Dropout(.5))\n",
        "model.add(Dense(10, activation='relu'))\n",
        "model.add(Dropout(.5))\n",
        "model.add(Dense(1))\n",
        "\n",
        "model.compile(loss='mse', optimizer=Adam(lr=0.001), metrics=['mse','mae','mape','cosine'])\n",
        "\n",
        "# Print model summary\n",
        "model.summary()"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "USKQYVmhAMaJ",
        "colab_type": "text"
      },
      "source": [
        "## Setup Checkpoints"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "BS6R3FVoAOPc",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# checkpoint\n",
        "model_path = './model'\n",
        "\n",
        "!if [ -d $model_path ]; then echo 'Directory Exists'; else mkdir $model_path; fi\n",
        "\n",
        "filepath = model_path + \"/weights-improvement-{epoch:02d}-{val_loss:.2f}.hdf5\"\n",
        "checkpoint = ModelCheckpoint(filepath, monitor='val_loss', verbose=1, save_best_only=True, mode='auto', period=1)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ImbsdKhSOKw3",
        "colab_type": "text"
      },
      "source": [
        "## Setup Early Stopping to Prevent Overfitting"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "Rhi1wb2yOLvY",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# The patience parameter is the number of epochs to wait for improvement before stopping\n",
        "early_stop = EarlyStopping(monitor='val_loss', patience=10)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ZDwl4ZB9boi2",
        "colab_type": "text"
      },
      "source": [
        "## Reduce Learning Rate When a Metric has Stopped Improving"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "m1xTmpV-bywp",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Reduce the LR by `factor` when val_loss plateaus for `patience` epochs;\n",
        "# min_lr must sit below the initial rate (0.001) or no reduction can occur\n",
        "reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2,\n",
        "                              patience=5, min_lr=1e-5)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "BVIGONgjSy7V",
        "colab_type": "text"
      },
      "source": [
        "## Setup Tensorboard"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "-YWTAEeyS0tw",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Clear any logs from previous runs\n",
        "!rm -rf ./Graph/ \n",
        "\n",
        "# Launch Tensorboard\n",
        "!pip install -U tensorboardcolab\n",
        "\n",
        "from tensorboardcolab import *\n",
        "\n",
        "tbc = TensorBoardColab()\n",
        "\n",
        "# Configure the Tensorboard Callback\n",
        "tbCallBack = TensorBoard(log_dir='./Graph', \n",
        "                        histogram_freq=1,\n",
        "                        write_graph=True,\n",
        "                        write_grads=True,\n",
        "                        write_images=True,\n",
        "                        batch_size=batch_size_value,\n",
        "                        update_freq='epoch')\n"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "v83cZRQMxBQi",
        "colab_type": "text"
      },
      "source": [
        "## Load Existing Model"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "acaXlXpUxDxM",
        "colab_type": "code",
        "cellView": "form",
        "colab": {}
      },
      "source": [
        "load = True #@param {type:\"boolean\"}\n",
        "\n",
        "if load:\n",
        "  # Returns a compiled model identical to the previous one\n",
        "  !curl -O https://selbystorage.s3-us-west-2.amazonaws.com/research/office_2/model.h5\n",
        "  !mv model.h5 model/\n",
        "  model_path_full = model_path + '/' + 'model.h5'\n",
        "  model = load_model(model_path_full)\n",
        "  print(\"Loaded previous model: {} \\n\".format(model_path_full))\n",
        "else:\n",
        "  print(\"No previous model loaded \\n\")"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "71O-pWk9AQy3",
        "colab_type": "text"
      },
      "source": [
        "## Train the Model"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "3nkossmrAUo2",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Define step sizes (round up so every sample is seen each epoch)\n",
        "STEP_SIZE_TRAIN = int(np.ceil(len(train_samples) / batch_size_value))\n",
        "STEP_SIZE_VALID = int(np.ceil(len(validation_samples) / batch_size_value))\n",
        "\n",
        "# Define number of epochs\n",
        "n_epoch = 5\n",
        "\n",
        "# Define callbacks\n",
        "# callbacks_list = [TensorBoardColabCallback(tbc)]\n",
        "# callbacks_list = [TensorBoardColabCallback(tbc), early_stop]\n",
        "# callbacks_list = [TensorBoardColabCallback(tbc), early_stop, checkpoint]\n",
        "callbacks_list = [TensorBoardColabCallback(tbc), early_stop, checkpoint, reduce_lr]\n",
        "\n",
        "# Fit the model\n",
        "history_object = model.fit_generator(\n",
        "    generator=train_generator,\n",
        "    steps_per_epoch=STEP_SIZE_TRAIN,\n",
        "    validation_data=validation_generator,\n",
        "    validation_steps=STEP_SIZE_VALID,\n",
        "    callbacks=callbacks_list,\n",
        "    use_multiprocessing=True,\n",
        "    epochs=n_epoch)\n"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "OdaGfWNxBT4T",
        "colab_type": "text"
      },
      "source": [
        "## Save the Model"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "jWE--9r4BZEk",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Save model\n",
        "model_path_full = model_path + '/'\n",
        "\n",
        "model.save(model_path_full + 'model.h5')\n",
        "with open(model_path_full + 'model.json', 'w') as output_json:\n",
        "    output_json.write(model.to_json())"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Y-pCmYO1A89_",
        "colab_type": "text"
      },
      "source": [
        "# Evaluate the Model"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "O7o-6SbBD9zx",
        "colab_type": "text"
      },
      "source": [
        "## Plot the Training Results"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "t-M9fEvWD_ZJ",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Plot the training and validation loss for each epoch\n",
        "print('Generating loss chart...')\n",
        "plt.plot(history_object.history['loss'])\n",
        "plt.plot(history_object.history['val_loss'])\n",
        "plt.title('model mean squared error loss')\n",
        "plt.ylabel('mean squared error loss')\n",
        "plt.xlabel('epoch')\n",
        "plt.legend(['training set', 'validation set'], loc='upper right')\n",
        "plt.savefig(model_path + '/model.png')\n",
        "\n",
        "# Done\n",
        "print('Done.')"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "fl5JppdwCRWD",
        "colab_type": "text"
      },
      "source": [
        "## Print Performance Metrics"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "podNBS9cBRNW",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "scores = model.evaluate_generator(validation_generator, STEP_SIZE_VALID, use_multiprocessing=True)\n",
        "\n",
        "metrics_names = model.metrics_names\n",
        "\n",
        "for i in range(len(model.metrics_names)):\n",
        "  print(\"Metric: {} - {}\".format(metrics_names[i],scores[i]))\n"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "OHL1otavPhX0",
        "colab_type": "text"
      },
      "source": [
        "## Compute Prediction Statistics"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "xlv7PUnPeSYd",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Define image loading function\n",
        "def load_images(dataframe):\n",
        "  \n",
        "  # initialize images array\n",
        "  images = []\n",
        "  \n",
        "  for i in dataframe.index.values:\n",
        "    name = data_set + '/' + dataframe.loc[i,'filename']\n",
        "    center_image = cv2.imread(name)\n",
        "    center_image = cv2.resize(center_image, (320,180))\n",
        "    images.append(center_image)\n",
        "    \n",
        "  return np.array(images)\n",
        "  \n",
        "# Load images \n",
        "test_size = 200\n",
        "df_test = df.sample(frac=1).reset_index(drop=True)\n",
        "df_test = df_test.head(test_size)\n",
        "\n",
        "test_images = load_images(df_test)\n",
        "\n",
        "batch_size = 32\n",
        "preds = model.predict(test_images, batch_size=batch_size, verbose=1)\n",
        "\n",
        "#print(\"Preds: {} \\n\".format(preds))\n",
        "\n",
        "testY = df_test['angle'].values\n",
        "\n",
        "#print(\"Labels: {} \\n\".format(testY))\n",
        "\n",
        "df_testY = pd.Series(testY)\n",
        "df_preds = pd.Series(preds.flatten())\n",
        "\n",
        "# Replace 0 angle values to avoid division by zero below\n",
        "if df_testY.eq(0).any():\n",
        "  df_testY.replace(0, 0.0001, inplace=True)\n",
        "\n",
        "# Calculate the difference\n",
        "diff = preds.flatten() - df_testY\n",
        "percentDiff = (diff / df_testY) * 100\n",
        "absPercentDiff = np.abs(percentDiff)\n",
        "\n",
        "# compute the mean and standard deviation of the absolute percentage\n",
        "# difference\n",
        "mean = np.mean(absPercentDiff)\n",
        "std = np.std(absPercentDiff)\n",
        "print(\"[INFO] mean: {:.2f}%, std: {:.2f}%\".format(mean, std))\n",
        "\n",
        "# Compute the mean and standard deviation of the difference\n",
        "print(diff.describe())\n",
        "\n",
        "# Plot a histogram of the prediction errors\n",
        "num_bins = 25\n",
        "hist, bins = np.histogram(diff, num_bins)\n",
        "center = (bins[:-1] + bins[1:]) * 0.5\n",
        "plt.bar(center, hist, width=0.05)\n",
        "plt.title('Histogram of Prediction Error')\n",
        "plt.xlabel('Steering Angle')\n",
        "plt.ylabel('Number of predictions')\n",
        "plt.xlim(-2.0, 2.0)"
      ],
      "execution_count": 0,
      "outputs": []
    },
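    {
      "cell_type": "markdown",
      "metadata": {
        "id": "added-mae-r2-note",
        "colab_type": "text"
      },
      "source": [
        "As an additional sanity check (a sketch, not part of the original pipeline), the cell below computes mean absolute error and an R^2 score for the same predictions; it assumes `preds` and `testY` already exist from the prediction-statistics cell above."
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "added-mae-r2-code",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Sketch: scale-aware error metrics for the steering-angle regression.\n",
        "# MAE is in steering-angle units; R^2 compares the model against a\n",
        "# constant mean-angle predictor (1.0 is perfect, 0.0 is no better).\n",
        "errors = preds.flatten() - testY\n",
        "mae = np.mean(np.abs(errors))\n",
        "ss_res = np.sum(errors ** 2)\n",
        "ss_tot = np.sum((testY - np.mean(testY)) ** 2)\n",
        "r2 = 1.0 - ss_res / ss_tot\n",
        "print(\"MAE: {:.4f}  R^2: {:.4f}\".format(mae, r2))"
      ],
      "execution_count": 0,
      "outputs": []
    },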
    {
      "cell_type": "code",
      "metadata": {
        "id": "pyalDh4uIvIT",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Plot a Scatter Plot of the Error\n",
        "plt.scatter(testY, preds)\n",
        "plt.xlabel('True Values ')\n",
        "plt.ylabel('Predictions ')\n",
        "plt.axis('equal')\n",
        "plt.axis('square')\n",
        "plt.xlim([-1.75,1.75])\n",
        "plt.ylim([-1.75,1.75])\n",
        "plt.plot([-1.75, 1.75], [-1.75, 1.75], color='k', linestyle='-', linewidth=.1)"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "5jVE6WfNGDEM",
        "colab_type": "text"
      },
      "source": [
        "## Plot a Prediction"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "GSJ08rg3QDP7",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Plot the image with the actual and predicted steering angle\n",
        "index = random.randint(0,df_test.shape[0]-1)\n",
        "img_name = data_set + '/' + df_test.loc[index,'filename']\n",
        "center_image = cv2.imread(img_name)\n",
        "center_image = cv2.cvtColor(center_image, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR; convert to RGB for display\n",
        "center_image_mod = cv2.resize(center_image, (320,180))  # resize from 720x1280 to 180x320\n",
        "plt.imshow(center_image_mod)\n",
        "plt.grid(False)\n",
        "plt.xlabel('Actual: {:.2f} Predicted: {:.2f}'.format(df_test.loc[index,'angle'],float(preds[index])))\n",
        "plt.show() \n"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Nw3xUyrWU0_Q",
        "colab_type": "text"
      },
      "source": [
        "#Visualize the Network\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "IDmn0c0AU3w_",
        "colab_type": "text"
      },
      "source": [
        "##Show the Model Summary"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "8VTkX6ofU3Ko",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "model.summary()"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "KsDr5kvMVfut",
        "colab_type": "text"
      },
      "source": [
        "##Access Individual Layers"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "4b70eI0mVnH3",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Create a mapping of layer name to layer details\n",
        "# The layers_info dictionary maps each layer name to its characteristics\n",
        "layers_info = {}\n",
        "for i in model.layers:\n",
        "    layers_info[i.name] = i.get_config()\n",
        "\n",
        "# Here the layer_weights dictionary will map every layer_name to its corresponding weights\n",
        "layer_weights = {}\n",
        "for i in model.layers:\n",
        "    layer_weights[i.name] = i.get_weights()\n",
        "\n",
        "pprint.pprint(layers_info['conv_5'])"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "zrqmP6unW99W",
        "colab_type": "text"
      },
      "source": [
        "##Visualize the filters"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "lgACkEzLXAw3",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# Visualize the first filter of each convolution layer\n",
        "layers = model.layers\n",
        "layer_ids = [2,3,4,6,7]\n",
        "\n",
        "#plot the filters\n",
        "fig,ax = plt.subplots(nrows=1,ncols=5)\n",
        "for i in range(5):\n",
        "    ax[i].imshow(layers[layer_ids[i]].get_weights()[0][:,:,:,0][:,:,0],cmap='gray')\n",
        "    ax[i].set_title('Conv'+str(i+1))\n",
        "    ax[i].set_xticks([])\n",
        "    ax[i].set_yticks([])"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "R1SFhmyiuh29",
        "colab_type": "text"
      },
      "source": [
        "##Visualize the Saliency Map\n"
      ]
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "sQmQVfJoVfTK",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "!pip install -I scipy==1.2.*\n",
        "!pip install git+https://github.com/raghakot/keras-vis.git -U\n",
        "\n",
        "# import specific functions from keras-vis package\n",
        "from vis.utils import utils\n",
        "from vis.visualization import visualize_saliency, visualize_cam, overlay"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "code",
      "metadata": {
        "id": "cWwL58SWup5M",
        "colab_type": "code",
        "colab": {}
      },
      "source": [
        "# View a Single Image \n",
        "index = random.randint(0,df.shape[0]-1)\n",
        "img_name = data_set + '/' + df.loc[index,'filename']\n",
        "\n",
        "sample_image = cv2.imread(img_name)\n",
        "sample_image = cv2.cvtColor(sample_image, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR; convert to RGB for display\n",
        "sample_image_mod = cv2.resize(sample_image, (320,180))\n",
        "plt.imshow(sample_image_mod)\n",
        " \n",
        "layer_idx = utils.find_layer_idx(model, 'conv_5')\n",
        "\n",
        "grads = visualize_saliency(model, \n",
        "                           layer_idx, \n",
        "                           filter_indices=None, \n",
        "                           seed_input=sample_image_mod,\n",
        "                           grad_modifier='absolute',\n",
        "                           backprop_modifier='guided')\n",
        "\n",
        "plt.imshow(grads, alpha = 0.6)\n"
      ],
      "execution_count": 0,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "QMGEoYbVHnaJ",
        "colab_type": "text"
      },
      "source": [
        "# References:\n",
        "[Keras, Regression, and CNNs](https://www.pyimagesearch.com/2019/01/28/keras-regression-and-cnns/)\n",
        "\n",
        "[Regression with Keras](https://www.pyimagesearch.com/2019/01/21/regression-with-keras/)\n",
        "\n",
        "[How to use Keras fit and fit_generator](https://www.pyimagesearch.com/2018/12/24/how-to-use-keras-fit-and-fit_generator-a-hands-on-tutorial/)\n",
        "\n",
        "[Image Classification with Convolutional Neural Networks](https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l04c01_image_classification_with_cnns.ipynb#scrollTo=7MqDQO0KCaWS)\n",
        "\n",
        "[Keras Image Processing Documentation](https://keras.io/preprocessing/image/)\n",
        "\n",
        "[Attribution.ipynb](https://colab.research.google.com/github/idealo/cnn-exposed/blob/master/notebooks/Attribution.ipynb#scrollTo=jqSOW0pQniCw)\n",
        "\n",
        "[A Guide to Understanding Convolutional Neural Networks (CNNs) using Visualization](https://www.analyticsvidhya.com/blog/2019/05/understanding-visualizing-neural-networks/)\n",
        "\n",
        "[Visualizing attention on self driving car](https://github.com/raghakot/keras-vis/blob/master/applications/self_driving/visualize_attention.ipynb)\n",
        "\n",
        "[Exploring Image Data Augmentation with Keras and Tensorflow](https://towardsdatascience.com/exploring-image-data-augmentation-with-eras-and-tensorflow-a8162d89b844)\n",
        "\n",
        "[Tensorboard Documentation](https://keras.io/callbacks/#tensorboard)"
      ]
    }
  ]
}