{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Ridecell: Camera and LIDAR Calibration and Visualization in ROS\n",
    "#### By Munir Jojo-Verge (June 2018 )\n",
    "---\n",
    "\n",
    "## Assignment Description\n",
    "This assignment tests your skills in ROS, PCL, OpenCV, etc.\n",
    "\n",
    "There are 2 tasks to perform:\n",
    "* Task 1: Calculate (using code/script) the camera calibration, and use it to rectify the image as shown here: http://wiki.ros.org/image_proc \n",
    "\n",
    "* Task 2: Calculate (using code/script) the translation and rotation offset between the camera and the lidar, wire a static transform accordingly, and show the overlay in rviz.\n",
    "\n",
    "\n",
    "Submit screen recordings or pictures, plus the code (as zip files or a GitHub link).\n",
    "\n",
    "Link to the ROS bag file: http://gofile.me/6qNOh/5XdKNtJ5n\n",
    "\n",
    "***The checkerboard pattern has __5x7 inside corners__ and each square measures 5 cm.***\n",
    "\n",
    "\n",
    "## Goals\n",
    "\n",
    "The goals / steps of this project are the following:\n",
    "* Inspect & play the bag file\n",
    "* Compute the camera calibration matrix and distortion coefficients given:\n",
    "    * The ROS bag, and\n",
    "    * a set of images (in this case extracted from the bag)\n",
    "* If time permits, compare the 2 sets of calibration values and show that both methods agree (less than 5% difference).\n",
    "* Apply a distortion correction to the raw images: create a \"corrected\" ROS bag\n",
    "---\n",
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# ROS Nano-introduction\n",
    "\n",
    "ROS provides a powerful build and package management system called Catkin.\n",
    "A Catkin workspace is essentially a directory where Catkin packages are built, modified and installed.\n",
    "\n",
    "Typically when you're developing a ROS based robot or project, you will be working out of a single workspace.\n",
    "\n",
    "This singular workspace will hold a wide variety of Catkin packages.\n",
    "\n",
    "All ROS software components are organized into and distributed as Catkin packages.\n",
    "Similar to workspaces, Catkin packages are nothing more than directories containing a variety of resources which,\n",
    "when considered together constitute some sort of useful module.\n",
    "\n",
    "Catkin packages may contain source code for nodes, useful scripts, configuration files and more.\n",
    "\n",
    "We will start by creating a new catkin workspace, fetching all necessary packages, resolving all dependencies, and in general getting everything ready for this assignment.\n",
    "\n",
    "My Virtual Machine wasn't ready for a 3.1 GB ROS bag, so I had to extend the physical and logical drives and partitions and spend some time getting everything ready to work.\n",
    "\n",
    "Our \"workspace\" and all the assignment files are located in the \"ridecell\" folder (the catkin workspace) at:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "cd \"/media/robond/e2507505-dfde-40e2-9c5d-a7ecc505e0f0/ridecell\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This folder was initialized as our catkin workspace using the following command:\n",
    "\n",
    "```shell\n",
    "$ catkin_init_workspace\n",
    "```\n",
    "and built with\n",
    "\n",
    "```shell\n",
    "$ catkin_make\n",
    "```\n",
    "\n",
    "The entire workspace structure looks like:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "!ls"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A ROS system usually consists of many running nodes.\n",
    "\n",
    "Running all of the nodes by hand though can be torturous.\n",
    "\n",
    "This is where the roslaunch command comes to save the day.\n",
    "\n",
    "roslaunch allows you to:\n",
    "* launch multiple nodes with one simple command,\n",
    "* set default parameters on the parameter server,\n",
    "* automatically respawn processes that have died, and\n",
    "* much more.\n",
    "\n",
    "To use roslaunch, you must first make sure that your __workspace has been built and sourced.__\n",
    "\n",
    "```shell\n",
    "$ source devel/setup.bash\n",
    "```\n",
    "With our workspace built and sourced, we can now start solving this task by creating the necessary scripts and launching all the required nodes."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Inspecting & Playing the bag file\n",
    "\n",
    "#### What does the bag file contain?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "scrolled": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "path:        2016-11-22-14-32-13_test.bag\r\n",
      "version:     2.0\r\n",
      "duration:    1:53s (113s)\r\n",
      "start:       Nov 22 2016 14:32:14.41 (1479853934.41)\r\n",
      "end:         Nov 22 2016 14:34:07.88 (1479854047.88)\r\n",
      "size:        3.1 GB\r\n",
      "messages:    5975\r\n",
      "compression: none [1233/1233 chunks]\r\n",
      "types:       sensor_msgs/CameraInfo  [c9a58c1b0b154e0e6da7578cb991d214]\r\n",
      "             sensor_msgs/Image       [060021388200f6f0f447d0fcd9c64743]\r\n",
      "             sensor_msgs/PointCloud2 [1158d486dd51d683ce2f1be655c3c181]\r\n",
      "topics:      /sensors/camera/camera_info   2500 msgs    : sensor_msgs/CameraInfo \r\n",
      "             /sensors/camera/image_color   1206 msgs    : sensor_msgs/Image      \r\n",
      "             /sensors/velodyne_points      2269 msgs    : sensor_msgs/PointCloud2\r\n"
     ]
    }
   ],
   "source": [
    "!rosbag info 2016-11-22-14-32-13_test.bag"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you run this command successfully, you should get something like:\n",
    "\n",
    "```shell\n",
    "path:        2016-11-22-14-32-13_test.bag\n",
    "version:     2.0\n",
    "duration:    1:53s (113s)\n",
    "start:       Nov 22 2016 14:32:14.41 (1479853934.41)\n",
    "end:         Nov 22 2016 14:34:07.88 (1479854047.88)\n",
    "size:        3.1 GB\n",
    "messages:    5975\n",
    "compression: none [1233/1233 chunks]\n",
    "types:       sensor_msgs/CameraInfo  [c9a58c1b0b154e0e6da7578cb991d214]\n",
    "             sensor_msgs/Image       [060021388200f6f0f447d0fcd9c64743]\n",
    "             sensor_msgs/PointCloud2 [1158d486dd51d683ce2f1be655c3c181]\n",
    "topics:      /sensors/camera/camera_info   2500 msgs    : sensor_msgs/CameraInfo \n",
    "             /sensors/camera/image_color   1206 msgs    : sensor_msgs/Image      \n",
    "             /sensors/velodyne_points      2269 msgs    : sensor_msgs/PointCloud2\n",
    "```"
   ]
  },
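  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check, the per-topic message counts reported above should add up to the total message count, and dividing by the 113 s duration gives each topic's approximate publishing rate. A minimal Python sketch (all numbers copied from the `rosbag info` output above):\n",
    "\n",
    "```python\n",
    "# Per-topic message counts from the rosbag info output above\n",
    "topic_counts = {\n",
    "    '/sensors/camera/camera_info': 2500,\n",
    "    '/sensors/camera/image_color': 1206,\n",
    "    '/sensors/velodyne_points': 2269,\n",
    "}\n",
    "\n",
    "total = sum(topic_counts.values())\n",
    "assert total == 5975  # matches the reported 'messages: 5975'\n",
    "\n",
    "duration_s = 113.0\n",
    "for topic, n in sorted(topic_counts.items()):\n",
    "    print('{:<30s} ~{:.1f} Hz'.format(topic, n / duration_s))\n",
    "```\n",
    "\n",
    "The camera images arrive at roughly 10.7 Hz and the LIDAR clouds at roughly 20 Hz, which is worth keeping in mind when deciding how fast to replay the bag during calibration."
   ]
  },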
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### To play the bag we can use the \"play\" command as follows (the second form replays at half speed):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "!rosbag play 2016-11-22-14-32-13_test.bag"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "!rosbag play -r 0.5 2016-11-22-14-32-13_test.bag"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Task 1: Camera Calibration (`cameracalibrator.py`)\n",
    "\n",
    "The first step will be to read in calibration images of a chessboard. During my Self-Driving Car Nanodegree lectures, it was recommended to use at least 20 images to get a reliable calibration. Since I didn't get hold of the ROS bag file immediately, I used a different set of images for illustration & research purposes, although the distortion correction was performed with the calibration obtained from the data file provided. My own set of chessboard images is located in the \"myChessboard\" folder and each chessboard image has nine by six inside corners to detect.\n",
    "\n",
    "After I got the ROS bag, the first step was to inspect it and see what it contained.\n",
    "\n",
    "The camera calibration can be done in 2 different ways: \n",
    "\n",
    "**Note:** As mentioned on the assignment description, the checker board pattern used 5 x 7 inside corners and size of each square 5 cm.\n",
    "\n",
    "## Calibration through a Video (ROS bag)\n",
    "\n",
    "\n",
    "The following series of commands will \"play\" the bag file (run in one terminal) and run the \"camera_calibration\" ROS node in a separate terminal to collect enough images to cover the X, Y, Size, and Skew parameter spaces needed for correction.\n",
    "I tried to play the bag file at different speeds (1, 0.5 and 0.2 of the normal speed) to see if I could collect more images by going slower and thereby improve the correction values. The results were exactly the same: I collected 23 images.\n",
    "\n",
    "To learn how to use \"camera_calibration\" the perfect tutorial is the [ROS Wiki](http://wiki.ros.org/camera_calibration/Tutorials/MonocularCalibration), which I relied on heavily to develop this task.\n",
    "\n",
    "Successful installation of \"camera_calibration\" required, on my setup, the installation of ROS Kinetic (http://wiki.ros.org/kinetic/Installation/Ubuntu).\n",
    "\n",
    "After the entire ROS Kinetic library was installed, I proceeded to install and compile the \"camera_calibration\" dependencies.\n",
    "\n",
    "This method, as mentioned before, requires 2 terminals: one playing the bag and another one capturing and gathering the calibration parameters.\n",
    "\n",
    "Here's the last set of commands needed to perform this Calibration:\n",
    "\n",
    "``` shell\n",
    "$ rosdep install camera_calibration\n",
    "\n",
    "$ rosmake camera_calibration\n",
    "\n",
    "$ rosbag play -r 0.5 2016-11-22-14-32-13_test.bag\n",
    "\n",
    "```\n",
    "and in a separate terminal you should run:\n",
    "\n",
    "``` shell\n",
    "$ rosrun camera_calibration cameracalibrator.py --size=5x7 --square=0.050 image:=/sensors/camera/image_color camera:=/sensors/camera/camera_info  --no-service-check\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As the tutorial states, while the video plays and the checkerboard moves around, you will see three bars on the calibration sidebar increase in length. When the CALIBRATE button lights up, you have enough data for calibration and can click CALIBRATE to see the results.\n",
    "After running the entire bag and pressing \"Calibrate\", you will see the calibration results in the terminal and the calibrated image in the calibration window.\n",
    "\n",
    "When you click the \"Save\" button after a successful calibration, the data (calibration data and images used for calibration) will be written to __/tmp/calibrationdata.tar.gz__. Below, when using the second method, we will see that the calibration also gets saved in the same place and with the same file name.\n",
    "\n",
    "### The Results\n",
    "\n",
    "``` shell\n",
    "('D = ', [-0.20046456284402592, 0.06947530966095249, 0.003302010137310338, 0.00021698698103442295, 0.0])\n",
    "('K = ', [485.07003816979477, 0.0, 457.19389875599717, 0.0, 485.4215104101991, 365.2938207194185, 0.0, 0.0, 1.0])\n",
    "('R = ', [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0])\n",
    "('P = ', [427.07855224609375, 0.0, 461.2431237551573, 0.0, 0.0, 433.6468505859375, 369.92239138540754, 0.0, 0.0, 0.0, 1.0, 0.0])\n",
    "None\n",
    "#oST version 5.0 parameters\n",
    "\n",
    "[image]\n",
    "\n",
    "width\n",
    "964\n",
    "\n",
    "height\n",
    "724\n",
    "\n",
    "[narrow_stereo]\n",
    "\n",
    "camera matrix\n",
    "485.070038 0.000000 457.193899\n",
    "0.000000 485.421510 365.293821\n",
    "0.000000 0.000000 1.000000\n",
    "\n",
    "distortion\n",
    "-0.200465 0.069475 0.003302 0.000217 0.000000\n",
    "\n",
    "rectification\n",
    "1.000000 0.000000 0.000000\n",
    "0.000000 1.000000 0.000000\n",
    "0.000000 0.000000 1.000000\n",
    "\n",
    "projection\n",
    "427.078552 0.000000 461.243124 0.000000\n",
    "0.000000 433.646851 369.922391 0.000000\n",
    "0.000000 0.000000 1.000000 0.000000\n",
    "```\n",
    "\n",
    "Let's move __/tmp/calibrationdata.tar.gz__ into 'ridecell/Results'.\n",
    "\n",
    "Let's now open the archive, extract \"ost.yaml\" and, for clarity, rename this file \"calibrationdata1.yaml\"."
   ]
  },
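  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `distortion` block above lists the five `plumb_bob` coefficients (k1, k2, t1, t2, k3). To make these numbers a bit more tangible, here is a small pure-Python sketch (no ROS required) of the standard plumb_bob radial-tangential model as used by OpenCV/ROS, plugged with the D and camera-matrix values from the result above:\n",
    "\n",
    "```python\n",
    "# plumb_bob coefficients (k1, k2, t1, t2, k3) from the calibration result above\n",
    "k1, k2, t1, t2, k3 = -0.200465, 0.069475, 0.003302, 0.000217, 0.0\n",
    "# Camera matrix entries (fx, fy, cx, cy) from the calibration result above\n",
    "fx, fy, cx, cy = 485.070038, 485.421510, 457.193899, 365.293821\n",
    "\n",
    "def distort_and_project(x, y):\n",
    "    \"\"\"Map a normalized camera-frame point (x, y) to distorted pixel coords.\"\"\"\n",
    "    r2 = x * x + y * y\n",
    "    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3\n",
    "    x_d = x * radial + 2.0 * t1 * x * y + t2 * (r2 + 2.0 * x * x)\n",
    "    y_d = y * radial + t1 * (r2 + 2.0 * y * y) + 2.0 * t2 * x * y\n",
    "    return fx * x_d + cx, fy * y_d + cy\n",
    "\n",
    "# The principal point is unaffected by distortion:\n",
    "print(distort_and_project(0.0, 0.0))\n",
    "# A point away from the center is pulled inward (k1 < 0, barrel distortion):\n",
    "print(distort_and_project(0.5, 0.0))\n",
    "```\n",
    "\n",
    "Since k1 is negative, off-center points are pulled toward the image center; that is the barrel distortion which rectification undoes."
   ]
  },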
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Calibration through a set of images\n",
    "\n",
    "To perform a calibration using a set of images there are 2 steps:\n",
    "* Extract and store images from the ROS bag video that contain the chessboard in a variety of locations\n",
    "* Run the calibrator over those images in the same fashion as we ran it directly over the bag file."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Extracting images from the video\n",
    "\n",
    "One of the best ways to do this is to use `image_view`  & right click to save screenshot on the desired spots.\n",
    "To do that we have to, in a similar fashion as before, run 2 terminals: one playing the ROS bag and the other running the `image_view` node as follows:\n",
    "\n",
    "```shell\n",
    "$ rosrun image_view image_view image:=/sensors/camera/image_color\n",
    "```\n",
    "\n",
    "A great resource for this is:\n",
    "\n",
    "https://coderwall.com/p/qewf6g/how-to-extract-images-from-a-rosbag-file-and-convert-them-to-video\n",
    "\n",
    "Once we capture at least 20 \"good\" images, we can proceed to the next step. I captured 30 images, located in /cal_images.\n",
    "\n",
    "### Run the calibrator\n",
    "\n",
    "The script that will go through all 30 images and use them to obtain the camera calibration parameters is located in:\n",
    "\n",
    "/ridecell/scripts\n",
    "\n",
    "and it's called \"calibrate_using_imgs.py\"\n",
    "\n",
    "``` python\n",
    "import cv2\n",
    "from camera_calibration.calibrator import MonoCalibrator, ChessboardInfo\n",
    "\n",
    "numImages = 30\n",
    "\n",
    "images = [ cv2.imread( 'cal_images/frame{:04d}.jpg'.format( i ) ) for i in range( numImages ) ]\n",
    "\n",
    "board = ChessboardInfo()\n",
    "board.n_cols = 7\n",
    "board.n_rows = 5\n",
    "board.dim = 0.050\n",
    "\n",
    "mc = MonoCalibrator( [ board ], cv2.CALIB_FIX_K3 )\n",
    "mc.cal( images )\n",
    "print( mc.as_message() )\n",
    "\n",
    "mc.do_save()\n",
    "```\n",
    "\n",
    "On a terminal, navigate to __\"/ridecell/scripts\"__ and execute it. Make sure the folder \"cal_images\" exists and contains 30 images.\n",
    "\n",
    "``` shell\n",
    "$ python calibrate_using_imgs.py\n",
    "```\n",
    "\n",
    "__You should get the following result__\n",
    "\n",
    "``` shell\n",
    "header: \n",
    "  seq: 0\n",
    "  stamp: \n",
    "    secs: 0\n",
    "    nsecs:         0\n",
    "  frame_id: ''\n",
    "height: 724\n",
    "width: 964\n",
    "distortion_model: \"plumb_bob\"\n",
    "D: [-0.1960379472535176, 0.062400458910675256, 0.0021788417878449524, 0.0003577732109733861, 0.0]\n",
    "K: [485.7634663808253, 0.0, 457.009020484456, 0.0, 485.24260310773263, 369.0660063296169, 0.0, 0.0, 1.0]\n",
    "R: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]\n",
    "P: [419.1184387207031, 0.0, 460.51112901293527, 0.0, 0.0, 432.627685546875, 372.659509382589, 0.0, 0.0, 0.0, 1.0, 0.0]\n",
    "binning_x: 0\n",
    "binning_y: 0\n",
    "roi: \n",
    "  x_offset: 0\n",
    "  y_offset: 0\n",
    "  height: 0\n",
    "  width: 0\n",
    "  do_rectify: False\n",
    "('Wrote calibration data to', '/tmp/calibrationdata.tar.gz')\n",
    "```\n",
    "\n",
    "As you can see in the last line of your result, the calibration data is located in:\n",
    "\n",
    "__/tmp/calibrationdata.tar.gz__\n",
    "\n",
    "Let's move this file into __'ridecell/Results'__\n",
    "\n",
    "Let's now open the archive, extract \"ost.yaml\" and, for clarity, rename this file __\"calibrationdata2.yaml\"__ \n",
    "\n",
    "Just by looking at both shell results in the terminal and comparing a few of the calibration values, we can see that the differences are only a few percent, well within the 5% target. "
   ]
  },
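  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "That claim can be checked numerically. A short sketch comparing the fx, fy, cx and cy entries of the two camera matrices (values copied from the two calibration outputs above):\n",
    "\n",
    "```python\n",
    "# fx, fy, cx, cy from the bag-based calibration (method 1)...\n",
    "bag_cal = {'fx': 485.070038, 'fy': 485.421510, 'cx': 457.193899, 'cy': 365.293821}\n",
    "# ...and from the image-set calibration (method 2)\n",
    "img_cal = {'fx': 485.763466, 'fy': 485.242603, 'cx': 457.009020, 'cy': 369.066006}\n",
    "\n",
    "for name in ('fx', 'fy', 'cx', 'cy'):\n",
    "    pct = 100.0 * abs(bag_cal[name] - img_cal[name]) / bag_cal[name]\n",
    "    print('{}: {:.2f}% difference'.format(name, pct))\n",
    "    assert pct < 5.0  # both methods agree well within the 5% target\n",
    "```\n",
    "\n",
    "The intrinsic entries agree to within about 1%; the larger gaps show up in the projection matrices (e.g. 427.1 vs 419.1 for the first entry of P, roughly a 2% difference), which matches the few-percent estimate."
   ]
  },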
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Apply a distortion correction\n",
    "\n",
    "### Adding calibration information to bag files\n",
    "To apply a distortion correction to the ROS bag, we can use \"change_camera_info.py\", included as part of \"bag_tools\":\n",
    "http://wiki.ros.org/bag_tools\n",
    "\n",
    "It turns out that installing \"bag_tools\" is plagued by issues, since the released package is broken: it lacks the executables that need to be compiled from C++. So you need to build and install it from source [https://github.com/srv/srv_tools/tree/kinetic/bag_tools], which worked without problems.\n",
    "\n",
    "Looking at the tutorial, we noticed that we are only interested in \"change_camera_info.py\". For this reason, and for clarity and modularity, I decided to copy the latest \"change_camera_info.py\" into __\"ridecell/scripts\"__.\n",
    "\n",
    "For presentation purposes, and since this script is short and simple to understand, the entire script is shown below:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#!/usr/bin/python\n",
    "\"\"\"\n",
    "Copyright (c) 2012,\n",
    "Systems, Robotics and Vision Group\n",
    "University of the Balearican Islands\n",
    "All rights reserved.\n",
    "Redistribution and use in source and binary forms, with or without\n",
    "modification, are permitted provided that the following conditions are met:\n",
    "    * Redistributions of source code must retain the above copyright\n",
    "      notice, this list of conditions and the following disclaimer.\n",
    "    * Redistributions in binary form must reproduce the above copyright\n",
    "      notice, this list of conditions and the following disclaimer in the\n",
    "      documentation and/or other materials provided with the distribution.\n",
    "    * Neither the name of Systems, Robotics and Vision Group, University of\n",
    "      the Balearican Islands nor the names of its contributors may be used to\n",
    "      endorse or promote products derived from this software without specific\n",
    "      prior written permission.\n",
    "THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND\n",
    "ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\n",
    "WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n",
    "DISCLAIMED. IN NO EVENT SHALL <COPYRIGHT HOLDER> BE LIABLE FOR ANY\n",
    "DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n",
    "(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n",
    "LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND\n",
    "ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n",
    "(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n",
    "SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n",
    "\"\"\"\n",
    "\n",
    "\n",
    "PKG = 'bag_tools' # this package name\n",
    "\n",
    "import roslib; roslib.load_manifest(PKG)\n",
    "import rospy\n",
    "import rosbag\n",
    "import os\n",
    "import sys\n",
    "import argparse\n",
    "import yaml\n",
    "import sensor_msgs.msg\n",
    "\n",
    "def change_camera_info(inbag,outbag,replacements):\n",
    "  rospy.loginfo('      Processing input bagfile: %s', inbag)\n",
    "  rospy.loginfo('     Writing to output bagfile: %s', outbag)\n",
    "  # parse the replacements\n",
    "  maps = {}\n",
    "  for k, v in replacements.items():\n",
    "    rospy.loginfo('Changing topic %s to contain following info (header will not be changed):\\n%s',k,v)\n",
    "\n",
    "  outbag = rosbag.Bag(outbag,'w')\n",
    "  for topic, msg, t in rosbag.Bag(inbag,'r').read_messages():\n",
    "    if topic in replacements:\n",
    "      new_msg = replacements[topic]\n",
    "      new_msg.header = msg.header\n",
    "      msg = new_msg\n",
    "    outbag.write(topic, msg, t)\n",
    "  rospy.loginfo('Closing output bagfile and exit...')\n",
    "  outbag.close();\n",
    "\n",
    "def replacement(replace_string):\n",
    "  pair = replace_string.split('=', 1)\n",
    "  if len(pair) != 2:\n",
    "    raise argparse.ArgumentTypeError(\"Replace string must have the form /topic=calib_file.yaml\")\n",
    "  if pair[0][0] != '/':\n",
    "    pair[0] = '/'+pair[0]\n",
    "  stream = file(pair[1], 'r')\n",
    "  calib_data = yaml.load(stream)\n",
    "  cam_info = sensor_msgs.msg.CameraInfo()\n",
    "  cam_info.width = calib_data['image_width']\n",
    "  cam_info.height = calib_data['image_height']\n",
    "  cam_info.K = calib_data['camera_matrix']['data']\n",
    "  cam_info.D = calib_data['distortion_coefficients']['data']\n",
    "  cam_info.R = calib_data['rectification_matrix']['data']\n",
    "  cam_info.P = calib_data['projection_matrix']['data']\n",
    "  cam_info.distortion_model = calib_data['distortion_model']\n",
    "  return pair[0], cam_info\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "  rospy.init_node('change_camera_info')\n",
    "  parser = argparse.ArgumentParser(description='Change camera info messages in a bagfile.')\n",
    "  parser.add_argument('inbag', help='input bagfile')\n",
    "  parser.add_argument('outbag', help='output bagfile')\n",
    "  parser.add_argument('replacement', type=replacement, nargs='+', help='replacement in form \"TOPIC=CAMERA_INFO_FILE\", e.g. /stereo/left/camera_info=my_new_info.yaml')\n",
    "  args = parser.parse_args()\n",
    "  replacements = {}\n",
    "  for topic, calib_file in args.replacement:\n",
    "    replacements[topic] = calib_file\n",
    "  try:\n",
    "    change_camera_info(args.inbag, args.outbag, replacements)\n",
    "  except Exception:\n",
    "    import traceback\n",
    "    traceback.print_exc()"
   ]
  },
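  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note how the `replacement()` helper above normalizes the topic name (prefixing a `/` if it is missing) before loading the calibration YAML. A tiny pure-Python sketch of just that argument-parsing step (the YAML loading is omitted so it runs without ROS):\n",
    "\n",
    "```python\n",
    "def split_replace_string(replace_string):\n",
    "    \"\"\"Mimic the topic/file parsing done by replacement() in change_camera_info.py.\"\"\"\n",
    "    pair = replace_string.split('=', 1)\n",
    "    if len(pair) != 2:\n",
    "        raise ValueError('Replace string must have the form /topic=calib_file.yaml')\n",
    "    if not pair[0].startswith('/'):\n",
    "        pair[0] = '/' + pair[0]  # topics are absolute, so prefix the slash\n",
    "    return pair[0], pair[1]\n",
    "\n",
    "print(split_replace_string('sensors/camera/camera_info=../Results/calibrationdata.yaml'))\n",
    "# -> ('/sensors/camera/camera_info', '../Results/calibrationdata.yaml')\n",
    "```"
   ]
  },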
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "__Now, on your terminal, you can run the script with the following parameters:__\n",
    "\n",
    "**Note: Make sure you are in /ridecell/scripts**\n",
    "\n",
    "***Format: change_camera_info.py INBAG OUTBAG TOPIC=CALIBRATION_YAML***\n",
    "\n",
    "``` shell\n",
    "$ python change_camera_info.py ../2016-11-22-14-32-13_test.bag ../2016-11-22-14-32-13_test.task1.bag /sensors/camera/camera_info=../Results/calibrationdata.yaml\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "!python change_camera_info.py ../2016-11-22-14-32-13_test.bag ../2016-11-22-14-32-13_test.task1.bag /sensors/camera/camera_info=../Results/calibrationdata.yaml"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Once it finishes rewriting the camera info throughout the entire ROS bag, you should see output like:\n",
    "\n",
    "``` shell\n",
    "[INFO] [1529788406.779433]:       Processing input bagfile: ../2016-11-22-14-32-13_test.bag\n",
    "[INFO] [1529788406.779660]:      Writing to output bagfile: ../2016-11-22-14-32-13_test.task1.bag\n",
    "[INFO] [1529788406.780132]: Changing topic /sensors/camera/camera_info to contain following info (header will not be changed):\n",
    "header: \n",
    "  seq: 0\n",
    "  stamp: \n",
    "    secs: 0\n",
    "    nsecs:         0\n",
    "  frame_id: ''\n",
    "height: 724\n",
    "width: 964\n",
    "distortion_model: \"plumb_bob\"\n",
    "D: [-0.196038, 0.0624, 0.002179, 0.000358, 0.0]\n",
    "K: [485.763466, 0.0, 457.00902, 0.0, 485.242603, 369.066006, 0.0, 0.0, 1.0]\n",
    "R: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]\n",
    "P: [419.118439, 0.0, 460.511129, 0.0, 0.0, 432.627686, 372.659509, 0.0, 0.0, 0.0, 1.0, 0.0]\n",
    "binning_x: 0\n",
    "binning_y: 0\n",
    "roi: \n",
    "  x_offset: 0\n",
    "  y_offset: 0\n",
    "  height: 0\n",
    "  width: 0\n",
    "  do_rectify: False\n",
    "[INFO] [1529788670.096924]: Closing output bagfile and exit...\n",
    "```\n",
    "and you should have a new ROS bag, __\"2016-11-22-14-32-13_test.task1.bag\"__.\n",
    "\n",
    "To check that everything looks OK, you can play the new ROS bag."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Rectifying the images\n",
    "\n",
    "Up to this point we have managed to find the \"calibration\" parameters from the recorded video (given to us as a ROS bag) and add/change these calibration parameters in the ROS bag. But we haven't rectified the images yet.\n",
    "\n",
    "To do so, we need to keep looking at [http://wiki.ros.org/image_proc] and specifically at the `image_proc` nodelets.\n",
    "\n",
    "The main idea behind the following process is to:\n",
    "* play the bag file with the \"raw\" images,\n",
    "* rectify them, and\n",
    "* save the result in a separate video.\n",
    "\n",
    "The `.launch` file to do this will contain the following script: \n",
    "\n",
    "```xml\n",
    "<launch>\n",
    "\t<node name=\"rosbag\" pkg=\"rosbag\" type=\"play\" args=\"../2016-11-22-14-32-13_test.task1.bag\"/>\n",
    "\t<node name=\"image_proc\" pkg=\"image_proc\" type=\"image_proc\" respawn=\"false\" ns=\"/sensors/camera\">\n",
    "\t\t<remap from=\"image_raw\" to=\"image_color\"/>\n",
    "\t</node>\n",
    "\t<node name=\"rect_video_recorder\" pkg=\"image_view\" type=\"video_recorder\" respawn=\"false\">\n",
    "\t\t<remap from=\"image\" to=\"/sensors/camera/image_rect_color\"/>\n",
    "\t</node>\n",
    "</launch>\n",
    "```\n",
    "In general, processes launched with roslaunch have a working directory of $ROS_HOME (default ~/.ros), so we need to make sure to pass a __full path__ for roslaunch to be able to find the bag file.\n",
    "\n",
    "By default, `video_recorder` creates `output.avi` in `~/.ros`, which takes care of our last bullet point above.\n",
    "After running this launch file, the resulting `output.avi` was moved to the `/results/videos` directory and renamed `rectified.avi`.\n",
    "\n",
    "The result after executing this command is:\n",
    "\n",
    "``` shell\n",
    "roslaunch task1-cameracalibrator-recordvideo.launch\n",
    "... logging to /home/robond/.ros/log/f210a1d4-7725-11e8-9fd4-000c294d9802/roslaunch-udacity-12906.log\n",
    "Checking log directory for disk usage. This may take awhile.\n",
    "Press Ctrl-C to interrupt\n",
    "Done checking log file disk usage. Usage is <1GB.\n",
    "\n",
    "started roslaunch server http://root:36987/\n",
    "\n",
    "SUMMARY\n",
    "========\n",
    "\n",
    "PARAMETERS\n",
    " * /rosdistro: kinetic\n",
    " * /rosversion: 1.12.13\n",
    "\n",
    "NODES\n",
    "  /sensors/camera/\n",
    "    image_proc (image_proc/image_proc)\n",
    "  /\n",
    "    rect_video_recorder (image_view/video_recorder)\n",
    "    rosbag (rosbag/play)\n",
    "\n",
    "ROS_MASTER_URI=http://localhost:11311\n",
    "\n",
    "process[rosbag-1]: started with pid [12923]\n",
    "process[sensors/camera/image_proc-2]: started with pid [12924]\n",
    "process[rect_video_recorder-3]: started with pid [12930]\n",
    "[rosbag-1] process has finished cleanly\n",
    "log file: /home/robond/.ros/log/f210a1d4-7725-11e8-9fd4-000c294d9802/rosbag-1*.log\n",
    "``` \n",
    "\n",
    "\n",
    "### Compare Calibration Results\n",
    "\n",
    "To compare \"un-calibrated\" images or videos (as in this case) with their \"calibrated\" counterparts, it is ideal to \"stitch\" them side-by-side. To do so, we can create a launch file identical to the one shown above for the original raw images (from the original ROS bag) and simply omit any rectification, by removing the `image_proc` node from the launch file. The output in this case is also moved to the `/results/videos` directory and renamed `original.avi`.\n",
    "\n",
    "Then the 2 videos can be placed side by side using `ffmpeg` following this format:\n",
    "\n",
    "```shell\n",
    "ffmpeg \\\n",
    "  -i input1.mp4 \\\n",
    "  -i input2.mp4 \\\n",
    "  -filter_complex '[0:v]pad=iw*2:ih[int];[int][1:v]overlay=W/2:0[vid]' \\\n",
    "  -map [vid] \\\n",
    "  -c:v libx264 \\\n",
    "  -crf 23 \\\n",
    "  -preset veryfast \\\n",
    "  output.mp4\n",
    "```\n",
    "In our case the command (for two inputs, with a text label drawn over each half) should be:\n",
    "``` shell\n",
    "$ ffmpeg -i original.avi -i rectified.avi -filter_complex '[0:v]pad=iw*2:ih[int];[int][1:v]overlay=W/2:0,drawtext=fontsize=60:fontcolor=#095C8D:fontfile=/usr/share/fonts/truetype/freefont/FreeSans.ttf:text=Original:x=W/4-text_w/2:y=25,drawtext=fontsize=60:fontcolor=#095C8D:fontfile=/usr/share/fonts/truetype/freefont/FreeSans.ttf:text=Rectified:x=3*W/4-text_w/2:y=25[vid]' -map '[vid]' -c:v libx264 -crf 23 -preset veryfast task1-compare.mp4\n",
    "```\n",
    "\n",
    "___The command I originally ran kept \"hanging\" and never produced the desired \"side-by-side\" video: its filter graph referenced a third input (`[2:v]`) that doesn't exist and mapped a label (`[vid]`) that was never defined. Due to time constraints I decided to just include both the original.avi and rectified.avi files and leave this obstacle to be solved later.___\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Task 2: Camera to LIDAR Offset Calculation\n",
    "\n",
    "## The Steps:\n",
    "\n",
    "The process to solve this task is:\n",
    "* Create a ROS package that holds all the scripts to run the transformations (translations and rotations).\n",
    "* Use the `scipy.optimize.minimize` function to find the optimal translation and rotation between the camera frame and the LIDAR frame. This function takes a cost function representing both the rotation and translation errors/differences and tries to minimize these errors by choosing optimal rotation angles and translation parameters.\n",
    "* Create a composite OPTICAL-LIDAR image.\n",
    "\n",
    "## 1. Creating a new ROS Package and the .launch files required\n",
    "\n",
    "First we created a \"ridecell_pkg\" package to hold the scripts used to run the offset calculation. To use them, first add the `ridecell_pkg` folder to `ROS_PACKAGE_PATH`:\n",
    "\n",
    "```shell\n",
    "$ export ROS_PACKAGE_PATH=/media/robond/e2507505-dfde-40e2-9c5d-a7ecc505e0f0/ridecell/src/ridecell_pkg:$ROS_PACKAGE_PATH\n",
    "```\n",
    "In this folder we will create a 'ridecell_pkg/launch' folder to hold all the launch files.\n",
    "\n",
    "The first launch file runs 'lidar_camera_offset.py', which is the core of this task, since it calculates the minimum rotation angles and minimum translation required to \"fit\" both \"images\", which previously needed to be:\n",
    "1) put in the same reference frame, and\n",
    "2) converted from 3D to 2D (a pinhole camera model was used to project the rotated 3D points into image coordinates).\n",
    "\n",
    "```shell\n",
    "$ roslaunch launch/task2-cameralidar-offset.launch\n",
    "```\n",
    "\n",
    "```xml\n",
    "<launch>\n",
    "\t<node name=\"rosbag\" pkg=\"rosbag\" type=\"play\" args=\"/media/robond/e2507505-dfde-40e2-9c5d-a7ecc505e0f0/ridecell/2016-11-22-14-32-13_test.task1.bag\"/>\n",
    "\t<node name=\"lidar_camera_offset\" pkg=\"ridecell_pkg\" type=\"lidar_camera_offset.py\" args=\"/media/robond/e2507505-dfde-40e2-9c5d-a7ecc505e0f0/ridecell/data/lidar_camera_calibration_data.json /media/robond/e2507505-dfde-40e2-9c5d-a7ecc505e0f0/ridecell/cal_images/lidar_offset_frame.jpg /media/robond/e2507505-dfde-40e2-9c5d-a7ecc505e0f0/ridecell/Results/Images/lidar_offset_output.jpg \" output=\"screen\">\n",
    "\t\t<remap from=\"camera\" to=\"/sensors/camera/camera_info\"/>\n",
    "\t</node>\n",
    "</launch>\n",
    "```\n",
    "\n",
    "The python script `ridecell/src/ridecell_pkg/lidar_camera_offset.py` requires a `.json` file containing point correspondences between 3D Points and 2D image coordinates. The point correspondences used to generate the results below can be found in `data/lidar_camera_calibration_data.json`. Optional parameters can be included to generate an image using the expected and generated image coordinates for the provided 3D points.\n",
    "\n",
    "```json\n",
    "{\n",
    "\t\"points\": [ \n",
    "\t\t[ 1.568, 0.159, -0.082, 1.0 ], // top left corner of grid\n",
    "\t\t[ 1.733, 0.194, -0.403, 1.0 ], // bottom left corner of grid\n",
    "\t\t[ 1.595, -0.375, -0.378, 1.0 ], // bottom right corner of grid\n",
    "\t\t[ 1.542, -0.379, -0.083, 1.0 ], // top right corner of grid\n",
    "\t\t[ 1.729, -0.173, 0.152, 1.0 ], // middle of face\n",
    "\t\t[ 3.276, 0.876, -0.178, 1.0 ] // corner of static object\n",
    "\t],\n",
    "\t\"uvs\": [\n",
    "\t\t[ 309, 315 ],\n",
    "\t\t[ 304, 433 ],\n",
    "\t\t[ 491, 436 ],\n",
    "\t\t[ 490, 321 ],\n",
    "\t\t[ 426, 286 ],\n",
    "\t\t[ 253, 401 ]\n",
    "\t],\n",
    "\t\"initialTransform\": [ 0.0, 0.0, 0.0, 0.0, 0.0, 0.0 ],\n",
    "\t\"bounds\": [\n",
    "\t\t[ -5, 5 ],\n",
    "\t\t[ -5, 5 ],\n",
    "\t\t[ -5, 5 ],\n",
    "\t\t[ 0, 6.28318530718 ], // 2 * pi\n",
    "\t\t[ 0, 6.28318530718 ], // 2 * pi\n",
    "\t\t[ 0, 6.28318530718 ] // 2 * pi\n",
    "\t]\n",
    "}\n",
    "```\n",
    "\n",
    "## Understanding how to find the best translation and rotation parameters\n",
    "\n",
    "The optimal offset calculation script relies on the `scipy.optimize.minimize` function to find the translation and rotation between the camera frame and LIDAR frame. `minimize` can perform bounded optimization to limit the state parameters. The translation along each axis is limited to ± 5.0 meters. The rotation angles are limited between 0 and 360 degrees (2 pi radians).\n",
    "\n",
    "The cost function to be minimized is the sum of the magnitudes of the error between expected Ego coordinates and those obtained by the state parameters at each step of the optimization. \n",
    "\n",
    "Some initial state vectors, including `[ 0, 0, 0, 0, 0, 0 ]`, have a positive gradient in the surrounding neighborhood, which results in unsuccessful optimization. To counteract this, a new initial state vector is picked randomly within the bounds of each parameter. To find a minimum closer to the unknown global minimum, new initial state vectors are also picked at random until a successful optimization yields an error of less than 50 pixels. \n",
    "\n",
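    "The bounded optimization with random restarts described above can be sketched as follows. This is a minimal illustration, not the actual script; the `project` callable (mapping a state vector and 3D points to pixel coordinates) is a hypothetical stand-in for the pinhole projection used in `lidar_camera_offset.py`.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from scipy.optimize import minimize\n",
    "\n",
    "def make_cost(points_3d, uvs, project):\n",
    "    # Sum of the magnitudes of the pixel errors between the expected\n",
    "    # image coordinates and those produced by the candidate state\n",
    "    def cost(state):\n",
    "        projected = project(state, points_3d)  # (N, 2) pixel coordinates\n",
    "        return np.linalg.norm(projected - uvs, axis=1).sum()\n",
    "    return cost\n",
    "\n",
    "def optimize_offset(points_3d, uvs, project, max_error=50.0, seed=0):\n",
    "    rng = np.random.default_rng(seed)\n",
    "    # Translation bounded to +/- 5 m, rotation angles to [0, 2*pi]\n",
    "    bounds = [(-5.0, 5.0)] * 3 + [(0.0, 2.0 * np.pi)] * 3\n",
    "    cost = make_cost(points_3d, uvs, project)\n",
    "    while True:\n",
    "        # Random initial state inside the bounds: the zero vector can sit\n",
    "        # in a region of positive gradient and stall the optimizer\n",
    "        x0 = np.array([rng.uniform(lo, hi) for lo, hi in bounds])\n",
    "        result = minimize(cost, x0, bounds=bounds)\n",
    "        if result.success and result.fun < max_error:\n",
    "            return result\n",
    "```\n",
    "\n",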
    "\n",
    "## Creating the composite Camera-LIDAR image\n",
    "\n",
    "Once the optimized state parameters are found in the previous step, the state vector can be added to the `static_transform_publisher` node inside `Launch/task2-cameralidar.launch`.\n",
    "\n",
    "```xml\n",
    "<launch>\n",
    "\t<param name=\"use_sim_time\" value=\"true\" />\n",
    "\t<node name=\"rosbag\" pkg=\"rosbag\" type=\"play\" args=\"-r 0.25 --clock /media/robond/e2507505-dfde-40e2-9c5d-a7ecc505e0f0/ridecell/2016-11-22-14-32-13_test.task1.bag\"/>\n",
    "\t<node name=\"image_proc\" pkg=\"image_proc\" type=\"image_proc\" respawn=\"false\" ns=\"/sensors/camera\">\n",
    "\t\t<remap from=\"image_raw\" to=\"image_color\"/>\n",
    "\t</node>\n",
    "\t<node name=\"tf\" pkg=\"tf\" type=\"static_transform_publisher\" args=\"-0.05937507 -0.48187289 -0.26464405  5.41868013  4.49854285 2.46979746 world velodyne 10\"/>\n",
    "\t<node name=\"lidar_camera\" pkg=\"ridecell\" type=\"lidar_camera.py\" args=\"\">\n",
    "\t\t<remap from=\"image\" to=\"/sensors/camera/image_rect_color\"/>\n",
    "\t\t<remap from=\"image_lidar\" to=\"/sensors/camera/image_lidar\"/>\n",
    "\t\t<remap from=\"camera\" to=\"/sensors/camera/camera_info\"/>\n",
    "\t\t<remap from=\"velodyne\" to=\"/sensors/velodyne_points\"/>\n",
    "\t</node>\n",
    "\t<!--<node name=\"image_view\" pkg=\"image_view\" type=\"image_view\" args=\"\">\n",
    "\t\t<remap from=\"image\" to=\"/sensors/camera/image_lidar\"/>\n",
    "\t</node>-->\n",
    "\t<node name=\"rect_video_recorder\" pkg=\"image_view\" type=\"video_recorder\" respawn=\"false\">\n",
    "\t\t<remap from=\"image\" to=\"/sensors/camera/image_lidar\"/>\n",
    "\t</node>\n",
    "</launch>\n",
    "```\n",
    "\n",
    "This launch file provides the option to view the composite image in real-time through `image_view` or to record a video containing the images for the entire data stream. \n",
    "\n",
    "The image below shows an example of the composite image. \n",
    "\n",
    "![Camera LIDAR Composite Image](Results/Images/lidar_result.jpg)\n",
    "\n",
    "### How it works\n",
    "\n",
    "`lidar_camera.py` subscribes to the following data sources:\n",
    "\n",
    "* The rectified camera image: `/sensors/camera/image_rect_color`\n",
    "* The calibration transform: `/world/velodyne`\n",
    "* The camera calibration information for projecting the LIDAR points: `/sensors/camera/camera_info`\n",
    "* The Velodyne data scan: `/sensors/velodyne_points`\n",
    "\n",
    "As each LIDAR scan is received, the scan data is unpacked from the message structure using `struct.unpack`. Each scan point contains the x, y, and z coordinates in meters, and the intensity of the reflected laser beam.\n",
    "\n",
    "```python\n",
    "formatString = 'ffff'\n",
    "if data.is_bigendian:\n",
    "  formatString = '>' + formatString\n",
    "else:\n",
    "  formatString = '<' + formatString\n",
    "\n",
    "points = []\n",
    "for index in range( 0, len( data.data ), 16 ):\n",
    "  points.append( struct.unpack( formatString, data.data[ index:index + 16 ] ) )\n",
    "```\n",
    "\n",
    "This is needed because there is no officially supported Python binding for the Point Cloud Library. The `python-pcl` package is available [here](http://strawlab.github.io/python-pcl/); while that module was compiled and tested, the simplicity of unpacking the structure manually won out over importing an external module.\n",
    "\n",
    "As each image is received, `cv_bridge` is used to convert the ROS Image sensor message to an OpenCV compatible format. \n",
    "\n",
    "The `/world/velodyne` transform is looked up every frame, which proved useful during an attempt at manual calibration. It is converted into an affine transformation matrix containing the rotation and translation between the frames. \n",
    "\n",
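    "As a rough sketch of that conversion (assuming a z-y-x Euler rotation order, which may not match the exact convention used in the script), the six-element state vector can be turned into a 4x4 matrix like so:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def state_to_matrix(state):\n",
    "    # Build a 4x4 affine transform from (tx, ty, tz, yaw, pitch, roll)\n",
    "    tx, ty, tz, yaw, pitch, roll = state\n",
    "    cy, sy = np.cos(yaw), np.sin(yaw)\n",
    "    cp, sp = np.cos(pitch), np.sin(pitch)\n",
    "    cr, sr = np.cos(roll), np.sin(roll)\n",
    "    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])  # yaw about z\n",
    "    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])  # pitch about y\n",
    "    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])  # roll about x\n",
    "    T = np.eye(4)\n",
    "    T[:3, :3] = Rz @ Ry @ Rx\n",
    "    T[:3, 3] = [tx, ty, tz]\n",
    "    return T\n",
    "```\n",
    "\n",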
    "Each point of the laser scan is then transformed into the camera frame. Points more than 4.0 meters away from the camera are discarded to help declutter the composite image, and points with a negative z value are discarded as well, since they lie behind the camera's field of view.\n",
    "\n",
    "Red circles are rendered for each point that projects inside the image bounds.\n",
    "\n",
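    "The filtering and projection steps can be sketched as below; a simplified illustration assuming the points are already expressed in the camera frame and that `K` is the 3x3 intrinsic matrix from `camera_info` (the actual script then draws a red circle at each surviving point with `cv2.circle`):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def project_points(points_cam, K, image_shape, max_range=4.0):\n",
    "    # Keep points in front of the camera and within range, project them\n",
    "    # through the pinhole model K, and keep those inside the image bounds\n",
    "    pts = np.asarray(points_cam, dtype=float)\n",
    "    keep = (pts[:, 2] > 0) & (np.linalg.norm(pts, axis=1) <= max_range)\n",
    "    pts = pts[keep]\n",
    "    uv = (K @ pts.T).T\n",
    "    uv = uv[:, :2] / uv[:, 2:3]  # perspective divide\n",
    "    h, w = image_shape[:2]\n",
    "    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)\n",
    "    return uv[inside]\n",
    "```\n",
    "\n",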
    "## Results\n",
    "\n",
    "Six points were picked for image calibration using `rviz`:\n",
    "\n",
    "1. Top left corner of calibration grid\n",
    "2. Bottom left corner of calibration grid\n",
    "3. Bottom right corner of calibration grid\n",
    "4. Top right corner of calibration grid\n",
    "5. The center of the face of the person holding the calibration grid\n",
    "6. The corner of the static object on the left side of the image\n",
    "\n",
    "The optimized transform obtained was:\n",
    "\n",
    "```python\n",
    "# Position in meters, angles in radians\n",
    "( offsetX, offsetY, offsetZ, yaw, pitch, roll ) = [ -0.05937507, -0.48187289, -0.26464405, 5.41868013, 4.49854285, 2.46979746 ]\n",
    "\n",
    "# Angles in degrees\n",
    "( yawDeg, pitchDeg, rollDeg ) = [ 310.4675019812, 257.7475192644, 141.5089707105 ]\n",
    "``` \n",
    "\n",
    "The image below shows the expected image coordinates in blue and the points created by the optimized transform in red.\n",
    "\n",
    "![Camera LIDAR Calibration Comparison](Results/Images/lidar_offset_output.jpg)\n",
    "\n",
    "As you can see, most of the error comes from the point on the face and the points on the right side of the calibration grid. However, the total error obtained is only about 35 pixels.\n",
    "\n",
    "Using this transform, a video was created to show how well all of the LIDAR points in the bagfile align to the image. Because this code is running in a virtual machine and the LIDAR scans at a higher frequency, the image and LIDAR scans are not in sync; however, when the person in the image stops for a moment, you can see how well the calibration worked. \n",
    "\n",
    "**Note:** This video was sped up to 2x speed to account for the slower rate the bagfile was played.\n",
    "\n",
    "```shell\n",
    "$ ffmpeg -i Results/Videos/task2-lidar-image.avi -filter:v \"setpts=0.5*PTS\" -c:v libx264 -crf 23 -preset veryfast output.mp4\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# APPENDIX\n",
    "### Some work done before I got the ROS bag and other image correction studies done in the past. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "import os\n",
    "import pickle\n",
    "import cv2\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "import matplotlib.image as mpimg\n",
    "import glob\n",
    "%matplotlib qt\n",
    "\n",
    "from scipy.signal import find_peaks_cwt\n",
    "%matplotlib inline\n",
    "\n",
    "# Read in and make a list of the calibration images provided\n",
    "images = glob.glob('../myGoProCalibration/GOPR0*.jpg')\n",
    "#images = glob.glob('../camera_cal/calibration*.jpg')\n",
    "NumCalibrationImages = len(images)\n",
    "   \n",
    "if NumCalibrationImages > 0 :\n",
    "    print('Number of calibration images: ', NumCalibrationImages)    \n",
    "    print(\" ******* For the sake of the exercise, let's print them all  ******* \")\n",
    "    plt.figure(figsize=(30,200))\n",
    "    for i in range(NumCalibrationImages):    \n",
    "        img = cv2.imread(images[i])    \n",
    "        plt.subplot(NumCalibrationImages,4,i+1)    \n",
    "        plt.xticks([])\n",
    "        plt.yticks([])\n",
    "        plt.imshow(img)    \n",
    "        plt.title(images[i])\n",
    "        #plt.show()\n",
    "        #plt.title(\"Chessboard image without the corners detected\")\n",
    "        #plt.show()\n",
    "else:\n",
    "    print('No calibration images were found!!!')\n",
    "    "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We will start by finding and plotting the inside corners of a randomly chosen chessboard image using the OpenCV function cv2.findChessboardCorners(), which must be fed grayscale images. Therefore, we will first convert the image to grayscale using the appropriate conversion (RGB -> GRAY or BGR -> GRAY, depending on the format in which we read the image)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Set the number of inside corners\n",
    "#__________________________________________\n",
    "nx = 8 # The number of inside corners in x direction\n",
    "ny = 6 # The number of inside corners in y direction\n",
    "#__________________________________________\n",
    "\n",
    "# For the sake of this test, we load image 3, which is \"pretty\" to show once the corners have been found\n",
    "img = cv2.imread(images[3])\n",
    "\n",
    "# Convert to grayscale\n",
    "gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\n",
    "\n",
    "# Find the chessboard corners\n",
    "ret, corners = cv2.findChessboardCorners(gray, (nx, ny), None)\n",
    "\n",
    "# If corners were found, draw corners on the image.\n",
    "if ret == True:\n",
    "    print('Num corners found: ', len(corners))\n",
    "    \n",
    "    # Visualize the original before we draw the corners\n",
    "    f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10))\n",
    "    ax1.imshow(img)\n",
    "    ax1.set_title('Original Image', fontsize=15)\n",
    "    \n",
    "    # Draw and display the corners\n",
    "    cv2.drawChessboardCorners(img, (nx, ny), corners, ret)\n",
    "        \n",
    "    ax2.imshow(img)\n",
    "    ax2.set_title('Chessboard image with corners', fontsize=15)\n",
    "\n",
    "else:\n",
    "    print('No corners were found!!!')\n",
    "    "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We will map the coordinates of the corners in the 2D image (image points) to the 3D coordinates of the real, undistorted chessboard corners (object points).\n",
    "We will start by setting up 2 empty arrays that will hold the image points and object points for all our 20 images.\n",
    "In the real world, the object points of a chessboard are all equally spaced and flat. For simplicity, we will assume the chessboard is fixed on the (x, y) plane at z=0, so that the object points are the same for each calibration image. Therefore, we will create a template of these object points for one image/board and add it as the object points for every image we read and process.\n",
    "The next step is to do the same for all the calibration images so we can feed these values to the OpenCV calibration function."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "# Arrays to store object points and image points from all the images.\n",
    "objpoints = [] # 3D points in real-world space for each image we process. Since they are all images of the same chessboard, the values are identical for every image\n",
    "imgpoints = [] # 2d points in image plane.\n",
    "\n",
    "# prepare the object points template for all images of the same chessboard: (0,0,0), (1,0,0), (2,0,0) ....,(7,5,0)\n",
    "objp = np.zeros((nx*ny,3), np.float32)\n",
    "objp[:,:2] = np.mgrid[0:nx, 0:ny].T.reshape(-1,2)\n",
    "\n",
    "# Step through the list and search for chessboard corners\n",
    "for fname in images:    \n",
    "    img = cv2.imread(fname)\n",
    "    gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)\n",
    "\n",
    "    # Find the chessboard corners\n",
    "    ret, corners = cv2.findChessboardCorners(gray, (nx,ny), None)\n",
    "\n",
    "    # If corners were found, add object points (ALWAYS THE SAME) and the image points\n",
    "    if ret == True:        \n",
    "        objpoints.append(objp)\n",
    "        imgpoints.append(corners)             \n",
    "    else:    \n",
    "        print('No corners were found on image: ', fname)\n",
    "        "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's now use the output objpoints and imgpoints to compute the camera calibration and distortion\n",
    "coefficients using the cv2.calibrateCamera() function. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "# Perform the camera calibration given object points and image points\n",
    "ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)  # imageSize is (width, height)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "# 2. Image Distortion Correction\n",
    "\n",
    "We will apply this distortion correction to a test image using the cv2.undistort() function and show the result."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "test_img = cv2.imread('../myGoProCalibration/test_image.jpg')\n",
    "dst = cv2.undistort(test_img, mtx, dist, None, mtx)\n",
    "cv2.imwrite('../myGoProCalibration/test_image_undist.jpg',dst)\n",
    "\n",
    "# Save the camera calibration result for later use (we won't worry about rvecs / tvecs)\n",
    "dist_pickle = {}\n",
    "dist_pickle[\"mtx\"] = mtx\n",
    "dist_pickle[\"dist\"] = dist\n",
    "pickle.dump( dist_pickle, open( \"camera_calibration.p\", \"wb\" ) )\n",
    "\n",
    "# Visualize undistortion\n",
    "f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10))\n",
    "ax1.imshow(test_img)\n",
    "ax1.set_title('Original Image', fontsize=15)\n",
    "ax2.imshow(dst)\n",
    "ax2.set_title('Undistorted Image', fontsize=15)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "---\n",
    "# 3. Color and Gradient Transformations\n",
    "### In the following scripts we will define the methods reviewed in the lectures (color transforms, gradients, etc.) to create a thresholded binary image that will be used to enhance lane detection under different conditions.\n",
    "### We will present the effects of each method separately by interactively tuning the photos provided for this purpose, with the objective of narrowing down the threshold ranges that produce the best outcome for this application.\n",
    "### After testing and tuning each transformation, we will combine all of them over single images and observe the effects. Would the different transformations combined help each other produce a better outcome, or would they somehow interfere with and counteract each other? \n",
    "#### The transformation methods that will be studied are:\n",
    "* Gaussian blurring to reduce noise\n",
    "* Sobel operator: the gradient (conceptually, the difference in grayscale intensity - value - between neighboring pixels):\n",
    "    * Absolute value of the gradient on the x-direction or y-direction\n",
    "    * Magnitude of the Gradient as a combination of the gradient in both directions \n",
    "    * Direction of the Gradient as a combination of the gradient in both directions. (We are interested, mostly, in semi-vertical lines for lane detection) \n",
    "* Binary Noise Reduction: We will explore OpenCV filter function \"cv2.filter2D\" to filter out color tones\n",
    "* HLS Color Threshold: Using the HLS color space, we will explore the positive effect in lane detection of the S Channel."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Define a function to threshold (binary) a specific channel (you pass, for example, the S channel = hls[:,:,2] and the threshold values)\n",
    "def binary_thresh(img_ch, thresh=(0, 100)):    \n",
    "    binary_output = np.zeros_like(img_ch)\n",
    "    binary_output[(img_ch > thresh[0]) & (img_ch <= thresh[1])] = 1    \n",
    "    # Return the binary image\n",
    "    return binary_output\n",
    "\n",
    "# Define a function to threshold (just threshold, without converting to binary) a specific channel (you pass, for example, the S channel = hls[:,:,2] and the threshold values)\n",
    "def color_thresh(img_ch, thresh=(0, 100)):    \n",
    "    binary_output = binary_thresh(img_ch, thresh)    \n",
    "    filtered_img = binary_output * img_ch\n",
    "    # Return a color image\n",
    "    return filtered_img\n",
    "\n",
    "# Define a function that applies Gaussian smoothing/blurring to an image (1 to 3 channels)\n",
    "def gaussian_blur(img, kernel_size):\n",
    "    \"\"\"Applies a Gaussian Noise kernel\"\"\"\n",
    "    return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)\n",
    "\n",
    "# Define a function that takes a grayscale image (converted beforehand, to avoid applying the wrong conversion),\n",
    "# gradient orientation (x or y), the sobel kernel (max 31, min 3, only odd numbers) and threshold (min, max values).\n",
    "def abs_sobel_thresh(gray, orient='x', sobel_kernel=3, thresh=(0, 255)):\n",
    "    # Apply x or y gradient with the OpenCV Sobel() function\n",
    "    # and take the absolute value\n",
    "    if orient == 'x':\n",
    "        abs_sobel = np.absolute(cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=sobel_kernel))\n",
    "    if orient == 'y':\n",
    "        abs_sobel = np.absolute(cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=sobel_kernel))\n",
    "    # Rescale back to 8 bit integer\n",
    "    scaled_sobel = np.uint8(255*abs_sobel/np.max(abs_sobel))\n",
    "    \n",
    "    # Create the binary filtered image\n",
    "    binary_output = binary_thresh(scaled_sobel, thresh=thresh) \n",
    "\n",
    "    # Return the result\n",
    "    return binary_output\n",
    "\n",
    "# Define a function to return the magnitude of the gradient\n",
    "# for a given sobel kernel size and threshold values.\n",
    "# As before, the image passed should already be grayscale to avoid applying the wrong conversion.\n",
    "# (Note: this is not quite cv2.Laplacian - it combines first-order Sobel responses - and we can specify the kernel size.)\n",
    "def mag_thresh(gray, sobel_kernel=3, thresh=(0, 255)):\n",
    "    # Take both Sobel x and y gradients\n",
    "    sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=sobel_kernel)\n",
    "    sobely = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=sobel_kernel)\n",
    "    # Calculate the gradient magnitude\n",
    "    gradmag = np.sqrt(sobelx**2 + sobely**2)\n",
    "    # Rescale to 8 bit\n",
    "    scale_factor = np.max(gradmag)/255 \n",
    "    gradmag = (gradmag/scale_factor).astype(np.uint8) \n",
    "    \n",
    "    # Create the binary filtered image\n",
    "    binary_output = binary_thresh(gradmag, thresh=thresh)\n",
    "\n",
    "    # Return the binary image\n",
    "    return binary_output\n",
    "\n",
    "# Define a function to threshold an image for a given range and Sobel kernel\n",
    "def dir_thresh(gray, sobel_kernel=3, thresh=(0, np.pi/2)):\n",
    "    # Calculate the x and y gradients\n",
    "    sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=sobel_kernel)\n",
    "    sobely = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=sobel_kernel)\n",
    "    # Take the absolute value of the gradient direction, \n",
    "    # apply a threshold, and create a binary image result\n",
    "    absgraddir = np.arctan2(np.absolute(sobely), np.absolute(sobelx))\n",
    "    \n",
    "    # Create the binary filtered image\n",
    "    binary_output = binary_thresh(absgraddir, thresh=thresh) \n",
    "\n",
    "    # Return the binary image\n",
    "    return binary_output"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now that we have defined the main techniques described in the lectures, it is time to use the various aspects of the __gradient measurements (x, y, magnitude, and direction)__ and also the __color transforms__ to isolate lane-line pixels. We will research how we can combine thresholds of the x and y gradients, the overall gradient magnitude, and the gradient direction, as well as the HLS and color thresholds, to focus on pixels that are likely to be part of the lane lines.\n",
    "\n",
    "We will start with just the __gradient measurements (x, y, magnitude, direction)__"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Sobel Threshold and Color Threshold Tuning \n",
    "The entire set of Sobel functions defined above is based on \"gray\" images, since they calculate gradients. But what is a grayscale image, really, but some sort of averaging of the \"color\" channels we use?\n",
    "If we are using the RGB format, each color pixel is described by a triple (R, G, B) of intensities for red, green, and blue, and the conversion to \"gray\" can \"average\" these channels in different ways. The most common methods are:\n",
    "* The lightness method averages the most prominent and least prominent colors: (max(R, G, B) + min(R, G, B)) / 2.\n",
    "* The average method simply averages the values: (R + G + B) / 3.\n",
    "* The luminosity method is a more sophisticated version of the average method. It also averages the values, but it forms a weighted average to account for human perception. We’re more sensitive to green than other colors, so green is weighted most heavily. The formula for luminosity is 0.21 R + 0.72 G + 0.07 B.\n",
    "\n",
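    "The three conversion methods above can be written directly (a small sketch; the channel order is assumed to be RGB):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def to_gray(rgb, method='luminosity'):\n",
    "    # Convert an RGB image to grayscale with one of the three averaging schemes\n",
    "    rgb = rgb.astype(np.float64)\n",
    "    if method == 'lightness':\n",
    "        return (rgb.max(axis=2) + rgb.min(axis=2)) / 2\n",
    "    if method == 'average':\n",
    "        return rgb.mean(axis=2)\n",
    "    # luminosity: perception-weighted average, green weighted most heavily\n",
    "    return rgb @ np.array([0.21, 0.72, 0.07])\n",
    "```\n",
    "\n",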
    "But what if we stack all the **\"relevant\"** (I will address this later) color-space channels, instead of just R, G and B, and apply a simple average or some other mathematical averaging method to obtain our own grayscale image? Or what if we choose these **\"relevant\"** channels and apply the Sobel functions to them individually instead of averaging them? All these questions will be addressed next.\n",
    "\n",
    "What are the **\"relevant\"** channels for this application?\n",
    "\n",
    "**From the lectures we know that the R and G channels are the most useful channels in the RGB stack for detecting white and yellow lines**, although their performance can suffer under different light and brightness conditions. R and G values drop in shadow and don't consistently pick out the lane lines under extreme brightness.\n",
    "We also know that in the \"HLS\" color space, the H and S channels stay fairly consistent in shadow or excessive brightness, so we should be able to detect different lane lines (usually yellow and white) more reliably than in the RGB color space. \n",
    "This section is meant to investigate which combination of all these color channels - and possibly others, like YUV - produces the best outcome in different conditions (different test images) when the suite of Sobel functions described above is applied.\n",
    "\n",
    "**---> NOTE <---**\n",
    "\n",
    "While developing the \".py\" files for the video generation, I came across this paper:\n",
    "\n",
    "ROBUST AND REAL TIME DETECTION OF CURVY LANES (CURVES) WITH DESIRED SLOPES FOR DRIVING ASSISTANCE AND AUTONOMOUS VEHICLES\n",
    "by Amartansh Dubey and K. M. Bhurchandi\n",
    "\n",
    "In this paper, the authors argue that one of the biggest hurdles for new autonomous vehicles is detecting curvy lanes, multiple lanes, and lanes with a lot of discontinuity and noise. The paper presents a very efficient and advanced algorithm for detecting curves with desired slopes (especially curvy lanes, in real time) and curves (lanes) with a lot of noise, discontinuity and disturbances. The overall aim is to develop a robust method that is applicable even in adverse conditions. \n",
    "They point out that even in some of the most famous and useful libraries, like OpenCV and Matlab, there is no function available for detecting curves with desired slopes, shapes, or discontinuities; only a few predefined shapes, like circles, ellipses, etc., can be detected with the presently available functions. \n",
    "They also argue that the proposed algorithm can not only detect curves with discontinuity, noise, and a desired slope, but can also perform shadow and illumination correction and detect/differentiate between different curves.\n",
    "\n",
    "**How?, you may be wondering**\n",
    "\n",
    "In this algorithm, two very small Hough lines are taken on the curve, then weighted centroids of these Hough lines are calculated.\n",
    "\n",
    "I would have loved to try to replicate their results here but I really don't have the time!! :-(\n",
    "___________________________\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "##### Interactive Tool\n",
    "After some research we discovered a great suite of interactive tools that will allow us to try different configurations (active channels, ranges/thresholds, and Sobel processing) in a fast and efficient way, to extract yellow and white lane lines under a multitude of light and road conditions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from ipywidgets import widgets, interactive, FloatSlider, IntSlider, IntRangeSlider, FloatRangeSlider, RadioButtons, Select\n",
    " \n",
    "def combineAll(image_idx, use_sobelXY, sobel_kernel, sobelX_thresh, sobelY_thresh, use_MagDir_thresh, mag_thresh_range, dir_thresh_range, R,R_thresh, G,B, H,L,S, S_thresh, Y,U,V, blur):\n",
    "    # Assign the image from the already loaded images\n",
    "    RGB_img = RGB_images[image_idx]\n",
    "    HLS_img = cv2.cvtColor(RGB_img, cv2.COLOR_RGB2HLS)\n",
    "    YUV_img = cv2.cvtColor(RGB_img, cv2.COLOR_RGB2YUV)\n",
    "    YUV_img = 255 - YUV_img\n",
    "    \n",
    "    num_ch = sum([R,G,B,H,L,S,Y,U,V])\n",
    "    if num_ch == 0:\n",
    "        raise ValueError('You have to select at least one color channel')\n",
    "        \n",
    "    # This will be the image (width and height) with all the selected channels stacked in layers\n",
    "    img_stacked = np.zeros((*RGB_img.shape[:-1], num_ch))\n",
    "    ch_layer = 0 # <- at least one color channel. This is the first layer\n",
    "    \n",
    "    # Stacking RGB channels as selected\n",
    "    if R:\n",
    "        ch_filtered = color_thresh(RGB_img[:,:,0],R_thresh)\n",
    "        img_stacked[:,:,ch_layer] = ch_filtered \n",
    "        ch_layer += 1\n",
    "    if G:\n",
    "        img_stacked[:,:,ch_layer] = RGB_img[:,:,1]\n",
    "        ch_layer += 1\n",
    "    if B:\n",
    "        img_stacked[:,:,ch_layer] = RGB_img[:,:,2]\n",
    "        ch_layer += 1\n",
    "        \n",
    "    # Stacking HLS channels as selected\n",
    "    if H:        \n",
    "        img_stacked[:,:,ch_layer] = HLS_img[:,:,0] \n",
    "        ch_layer += 1\n",
    "    if L:\n",
    "        img_stacked[:,:,ch_layer] = HLS_img[:,:,1]\n",
    "        ch_layer += 1\n",
    "    if S:\n",
    "        ch_filtered = color_thresh(HLS_img[:,:,2],S_thresh)\n",
    "        img_stacked[:,:,ch_layer] = ch_filtered\n",
    "        ch_layer += 1\n",
    "        \n",
    "    # Stacking YUV channels as selected\n",
    "    if Y:        \n",
    "        img_stacked[:,:,ch_layer] = YUV_img[:,:,0] \n",
    "        ch_layer += 1\n",
    "    if U:\n",
    "        img_stacked[:,:,ch_layer] = YUV_img[:,:,1]\n",
    "        ch_layer += 1\n",
    "    if V:\n",
    "        img_stacked[:,:,ch_layer] = YUV_img[:,:,2]\n",
    "        ch_layer += 1\n",
    "    \n",
    "        \n",
    "    # Grayscale image needed for Sobel    \n",
    "    #gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)\n",
    "    # For simplicity, let's just take the average of all the channel values; the HLS values are also normalized between 0 and 255\n",
    "    gray = np.mean(img_stacked,2).astype(np.float32)/255    \n",
    "    \n",
    "    #Gaussian Blur to smooth the image         \n",
    "    gray = gaussian_blur(gray, blur)    \n",
    "    \n",
    "    # Apply each of the thresholding functions\n",
    "    gradx = abs_sobel_thresh(gray, orient='x', sobel_kernel=sobel_kernel, thresh=sobelX_thresh)\n",
    "    grady = abs_sobel_thresh(gray, orient='y', sobel_kernel=sobel_kernel, thresh=sobelY_thresh)\n",
    "    mag_binary = mag_thresh(gray, sobel_kernel=sobel_kernel, thresh=mag_thresh_range)\n",
    "    dir_binary = dir_thresh(gray, sobel_kernel=sobel_kernel, thresh=dir_thresh_range) # (.65, 1.05))\n",
    "    \n",
    "    # Combine all the thresholding information\n",
    "    combined = np.zeros_like(dir_binary)\n",
    "    combined[((gradx == 1) & (grady == 1))*use_sobelXY | ((mag_binary == 1) & (dir_binary == 1))*use_MagDir_thresh] = 1\n",
    "    \n",
    "    if np.all(combined == 0):\n",
    "        print('Image NOT Sobel processed or combined Sobel processing provided a black image!')\n",
    "        combined = gray\n",
    "    else:\n",
    "        print('Using Sobel processed image')\n",
    "\n",
    "    # Visualize \n",
    "    f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10))\n",
    "    ax1.imshow(RGB_img)\n",
    "    ax1.set_title('Original Image', fontsize=15)\n",
    "    \n",
    "    ax2.imshow(combined, cmap='gray')\n",
    "    ax2.set_title('Processed Image', fontsize=15)\n",
    "    \n",
    "    #return combined\n",
    "\n",
    "def crop_area_interest(img):\n",
    "    # Defining vertices for marked area\n",
    "    imshape = img.shape\n",
    "    left_bottom = (100, imshape[0])\n",
    "    right_bottom = (imshape[1]-20, imshape[0])\n",
    "    apex1 = (610, 410)\n",
    "    apex2 = (680, 410)\n",
    "    inner_left_bottom = (310, imshape[0])\n",
    "    inner_right_bottom = (1150, imshape[0])\n",
    "    inner_apex1 = (700,480)\n",
    "    inner_apex2 = (650,480)\n",
    "    vertices = np.array([[left_bottom, apex1, apex2, \\\n",
    "                          right_bottom, inner_right_bottom, \\\n",
    "                          inner_apex1, inner_apex2, inner_left_bottom]], dtype=np.int32)\n",
    "    # Masked area\n",
    "    are_interest = region_of_interest(img, vertices)\n",
    "    #return are_interest\n",
    "    \n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Let's test this!!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "# Read in and make a list of the test images provided\n",
    "images_paths = glob.glob('../test_images/test*.jpg')\n",
    "RGB_images = []\n",
    "# Step through the list and load each image\n",
    "for fname in images_paths:    \n",
    "    RGB_images.append(mpimg.imread(fname))\n",
    "    \n",
    "print('We have loaded', len(RGB_images))\n",
    "print('Image shape:',RGB_images[0].shape)\n",
    "\n",
    "# Parameters to feed the interactive tool\n",
    "#(image_idx, use_sobelXY, sobel_kernel, sobelX_thresh, sobelY_thresh, use_MagDir_thresh, mag_thresh_range, dir_thresh_range, \n",
    "# R,R_thresh, G,B, H,L,S, S_thresh, Y,U,V, blur):\n",
    "\n",
    "interactive(combineAll,\n",
    "            image_idx = IntSlider(min=1, max=len(RGB_images)-1, step=1, value=12),\n",
    "            use_sobelXY = True,\n",
    "            sobel_kernel=IntSlider(min=1, max=31, step=2, value=31),            \n",
    "            sobelX_thresh=IntRangeSlider(min=0, max=255, step=1,value=[5, 100]),            \n",
    "            sobelY_thresh=IntRangeSlider(min=0, max=255, step=1,value=[0, 255]),\n",
    "            use_MagDir_thresh = True,\n",
    "            mag_thresh_range=IntRangeSlider(min=0, max=255, step=1,value=[50, 200]),            \n",
    "            dir_thresh_range=FloatRangeSlider(min=0, max=np.pi / 2, step=0.01,value=[0.65, 1.05]),\n",
    "            R=False, \n",
    "            R_thresh=IntRangeSlider(min=0, max=255, step=1,value=[200, 255]),\n",
    "            G=False, B=False, H=False, L=False, \n",
    "            S=True,\n",
    "            S_thresh=IntRangeSlider(min=0, max=255, step=1,value=[170, 255]),\n",
    "            Y=True, U=True, V=False,\n",
    "            blur=IntSlider(min=1, max=37, step=2, value=1))            "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Color and Gradient Transformations Summary\n",
    "After playing for a while we can conclude that:\n",
    "* There is __NO single combination__ that works perfectly for all scenarios.\n",
    "* Since we are focused on detecting 2 main types of lane lines (yellow and white) under as many different conditions as we can, the two approaches that come to mind are:\n",
    "    * Create specific functions to extract/detect yellow and white lines on images under as many conditions as possible. Something very similar was done on assignment 1 (lane detection), in a function I called colorFilter().\n",
    "    * Create a tool that pre-processes the image and somehow classifies which set of thresholds (color and gradient) suits those conditions best, then applies them, hopefully extracting the correct lines with a higher probability. As we saw above, shadows, brightness, and even night images each have clearly different ranges that work best.\n",
    "* Another approach is not to look at absolute values for the ranges but at the top x% of the channel values. For instance, in shadowed areas a yellow line might actually be almost gray, but it can still be considered lighter/a highlight compared to its neighbours. We know the Red channel is good for detecting yellow and white, and we also played with the S channel. We can research this path further.\n",
    "    \n",
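    "A minimal numpy sketch of the relative-threshold idea (the helper name and numbers are illustrative, not part of the pipeline): instead of a fixed 0-255 range, keep only the brightest x% of a channel, so the cutoff adapts to each image's own brightness distribution.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def relative_threshold(channel, keep_percent=1.0):\n",
    "    # Cut at the (100 - keep_percent) percentile of this channel, so a\n",
    "    # shadowed frame gets a lower absolute cutoff than a bright one\n",
    "    cutoff = np.percentile(channel, 100.0 - keep_percent)\n",
    "    return np.where(channel >= cutoff, 255, 0).astype(np.uint8)\n",
    "\n",
    "# Synthetic channel: mostly dark with one bright 'lane' column\n",
    "channel = np.zeros((10, 10), dtype=np.uint8)\n",
    "channel[:, 4] = 200\n",
    "mask = relative_threshold(channel, keep_percent=10.0)\n",
    "```\n",
    "\n",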
    "We will have to explore these possibilities more deeply to decide what to use for the final video lane-detection problem."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Let's start with Option 1 (this is an exact copy from Assignment 1)\n",
    "'''\n",
    "input:\n",
    "   image: in any format (RGB, HLS, YUV, or a single channel)\n",
    "   colorBoundaries: a list of (lower, upper) range pairs. Example in RGB:\n",
    "   colorBoundaries = [\n",
    "       ([174, 131, 0], [255, 255, 255])\n",
    "       ]\n",
    "Output:\n",
    "    filtered image, bitwise masked, i.e. grayscale\n",
    "'''\n",
    "def colorFilter(image, colorBoundaries, blur=1):\n",
    "    img = gaussian_blur(image, blur)\n",
    "    combined_mask = None\n",
    "    # loop over the boundaries, OR-ing the masks together so that\n",
    "    # every (lower, upper) pair contributes to the output\n",
    "    for (lower, upper) in colorBoundaries:\n",
    "        # create NumPy arrays from the boundaries\n",
    "        lower = np.array(lower, dtype = \"uint8\")\n",
    "        upper = np.array(upper, dtype = \"uint8\")\n",
    "\n",
    "        # find the colors within the specified boundaries\n",
    "        # (on the blurred image, not the raw input)\n",
    "        mask = cv2.inRange(img, lower, upper)\n",
    "        combined_mask = mask if combined_mask is None else cv2.bitwise_or(combined_mask, mask)\n",
    "\n",
    "    return combined_mask\n",
    "        \n",
    "        \n",
    "def colorFilterInteractive(image_idx, img_format='RGB', Ch1=(174, 255), Ch2=(131, 255), Ch3=(0, 255), blur=1):    \n",
    "        # Assign the image from the already loaded images\n",
    "        Original = RGB_images[image_idx]\n",
    "        img = Original\n",
    "        \n",
    "        # let's load the best channel boundaries for each image type we have found so far\n",
    "        if img_format == 'RGB':\n",
    "            img = Original\n",
    "            #Ch1=(174, 255)\n",
    "            #Ch2=(131, 255)\n",
    "            #Ch3=(0, 255)\n",
    "        elif img_format == 'HLS':\n",
    "            img = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)\n",
    "            #Ch1=(0, 25)\n",
    "            #Ch2=(100, 255)\n",
    "            #Ch3=(150, 255)\n",
    "        elif img_format == 'YUV':\n",
    "            img = cv2.cvtColor(img, cv2.COLOR_RGB2YUV)\n",
    "            img = 255 - img\n",
    "        else:\n",
    "            raise ValueError('You have to select an Image format from the list!')\n",
    "        \n",
    "        img = gaussian_blur(img, blur)\n",
    "        \n",
    "        lower = np.array((Ch1[0], Ch2[0], Ch3[0]), dtype = \"uint8\")\n",
    "        upper =  np.array((Ch1[1], Ch2[1], Ch3[1]), dtype = \"uint8\")\n",
    "\n",
    "        # find the colors within the specified boundaries and apply\n",
    "        # the mask\n",
    "        mask = cv2.inRange(img, lower, upper)\n",
    "        output = cv2.bitwise_and(img, img, mask = mask)              \n",
    "              \n",
    "        # Visualize \n",
    "        f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10))\n",
    "        ax1.imshow(Original)\n",
    "        ax1.set_title('Original Image', fontsize=15)\n",
    "\n",
    "        ax2.imshow(output, cmap='gray')\n",
    "        ax2.set_title('Processed Image', fontsize=15)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "__To test this function and find the right boundary values for yellow and white, let's use the interactive tool again to facilitate this task__"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "from ipywidgets import widgets, interactive, FloatSlider, IntSlider, IntRangeSlider, FloatRangeSlider, RadioButtons, Select\n",
    "\n",
    "# We should have the images already loaded from the cells above, but just to let us test this cell in isolation, let's\n",
    "# reload the images\n",
    "\n",
    "# Read in and make a list of the test images provided\n",
    "images_paths = glob.glob('../test_images/test*.jpg')\n",
    "RGB_images = []\n",
    "# Step through the list and load each image\n",
    "for fname in images_paths:    \n",
    "    RGB_images.append(mpimg.imread(fname))\n",
    "    \n",
    "print('We have loaded', len(RGB_images))\n",
    "print('Image shape:',RGB_images[0].shape)\n",
    "\n",
    "# Parameters to feed the interactive tool\n",
    "# (image_idx, img_format='RGB', Ch1=(174, 255), Ch2=(131, 255), Ch3=(0, 255)):\n",
    "interactive(colorFilterInteractive,\n",
    "            image_idx = IntSlider(min=1, max=len(RGB_images)-1, step=1, value=1),\n",
    "            img_format = Select(\n",
    "            options=['RGB', 'HLS', 'YUV'],\n",
    "            value='HLS',\n",
    "            description='Image Format:',\n",
    "            disabled=False),\n",
    "            Ch1=IntRangeSlider(min=0, max=255, step=1,value=[18, 40]),\n",
    "            Ch2=IntRangeSlider(min=0, max=255, step=1,value=[45, 255]),\n",
    "            Ch3=IntRangeSlider(min=0, max=255, step=1,value=[150, 255]),\n",
    "            blur=IntSlider(min=1, max=37, step=2, value=1))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "After some playing around we have found that HLS is definitely more robust and the thresholds found for consistent yellow lane detection are:\n",
    "* Ch1 = H = (18, 40)\n",
    "* Ch2 = L = (45, 255)\n",
    "* Ch3 = S = (150, 255) and (45, 160)\n",
    "\n",
    "For shadow in front of the car:\n",
    "* Ch1 = H = (110, 140)\n",
    "* Ch2 = L = (0, 70)\n",
    "* Ch3 = S = (0, 30)\n",
    "\n",
    "For the white lanes:\n",
    "* Ch1 = H = (0, 40)\n",
    "* Ch2 = L = (100, 255)\n",
    "* Ch3 = S = (150, 255)\n",
    "\n",
    "For yellow lanes, YUV worked fine in this range, but really badly in very dark shadowed areas:\n",
    "* Ch1 = Y = (0, 255)\n",
    "* Ch2 = U = (0, 255)\n",
    "* Ch3 = V = (144, 255)\n",
    "\n",
    "Too much time consumed!\n",
    "\n",
    "Let's try to combine both again as we did above\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "WhiteYellowColorBoundaries = [\n",
    "       ([15, 45, 150], [40, 255, 255]),\n",
    "       ([110, 0, 0], [140, 70, 255]),\n",
    "       ([0, 100, 150], [40, 255, 255])  \n",
    "    ]    \n",
    "#WhiteYellowColorBoundaries = [([0, 45, 150], [40, 255, 255])]    \n",
    "    \n",
    "# Read in and make a list of the test images provided\n",
    "images_paths = glob.glob('../test_images/test*.jpg')\n",
    "# Step through the list and process each image\n",
    "for fname in images_paths:\n",
    "    Org = mpimg.imread(fname)\n",
    "    img = cv2.cvtColor(Org, cv2.COLOR_RGB2HLS)\n",
    "    WhiteYellow_Img = colorFilter(img, WhiteYellowColorBoundaries)    \n",
    "    # Visualize \n",
    "    f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10))\n",
    "    ax1.imshow(Org)\n",
    "    ax1.set_title('Original Image', fontsize=15)\n",
    "\n",
    "    ax2.imshow(WhiteYellow_Img, cmap='gray')\n",
    "    ax2.set_title('Processed Image', fontsize=15) "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "These results are not bad, but they could definitely be better. "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's now try the last bullet point mentioned above and try to extract \"highlights\" in different channels by thresholding at a certain percentile of the values in that channel."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy\n",
    "\n",
    "def extract_highlights(img, per=99.9):\n",
    "    \"\"\"\n",
    "    Generates an image mask selecting highlights.\n",
    "    Input Parameters:\n",
    "        img: single-channel image with pixels in range 0-255\n",
    "        per: percentile for highlight selection (default 99.9)\n",
    "        \n",
    "    :return: 255 for highlight pixels, 0 otherwise\n",
    "    \"\"\"\n",
    "    # threshold slightly (30) below the requested percentile to keep nearby bright pixels\n",
    "    p = int(np.percentile(img, per) - 30)\n",
    "    mask = cv2.inRange(img, p, 255)\n",
    "    ##output = cv2.bitwise_and(img, img, mask = mask)\n",
    "    return mask\n",
    "\n",
    "def extract_highlightsInteractive(image_idx, Ch, Percent, NegativeImg=False):\n",
    "    rgb = RGB_images[image_idx]\n",
    "    yuv = cv2.cvtColor(rgb, cv2.COLOR_RGB2YUV)\n",
    "    yuv = 255 - yuv\n",
    "    hls = cv2.cvtColor(rgb, cv2.COLOR_RGB2HLS)\n",
    "    \n",
    "    if Ch=='R':\n",
    "        Img_Ch = rgb[:,:,0]\n",
    "    if Ch=='G':\n",
    "        Img_Ch = rgb[:,:,1]\n",
    "    if Ch=='B':\n",
    "        Img_Ch = rgb[:,:,2]\n",
    "        \n",
    "    if Ch=='Y':\n",
    "        Img_Ch = yuv[:,:,0]        \n",
    "    if Ch=='U':\n",
    "        Img_Ch = yuv[:,:,1]\n",
    "    if Ch=='V':\n",
    "        Img_Ch = yuv[:,:,2]\n",
    "        \n",
    "    if Ch=='H':\n",
    "        Img_Ch = hls[:,:,0]        \n",
    "    if Ch=='L':\n",
    "        Img_Ch = hls[:,:,1]\n",
    "    if Ch=='S':\n",
    "        Img_Ch = hls[:,:,2]\n",
    "        \n",
    "    Highlights = extract_highlights(img=Img_Ch, per=Percent)\n",
    "    \n",
    "    if NegativeImg:\n",
    "        Highlights = numpy.invert(Highlights) \n",
    "    \n",
    "    # Visualize \n",
    "    f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10))\n",
    "    ax1.imshow(rgb)\n",
    "    ax1.set_title('Original Image', fontsize=15)\n",
    "    \n",
    "    ax2.imshow(Highlights, cmap='gray')\n",
    "    ax2.set_title('Highlights Detected', fontsize=15)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Let's test the highlight extraction idea"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "# Read in and make a list of the test images provided\n",
    "images_paths = glob.glob('../test_images/test*.jpg')\n",
    "RGB_images = []\n",
    "# Step through the list and load each image\n",
    "for fname in images_paths:    \n",
    "    RGB_images.append(mpimg.imread(fname))\n",
    "    \n",
    "print('We have loaded', len(RGB_images))\n",
    "print('Image shape:',RGB_images[0].shape)\n",
    "\n",
    "# Parameters to feed the interactive tool\n",
    "# (image_idx, Ch, Percent, NegativeImg):\n",
    "\n",
    "interactive(extract_highlightsInteractive,\n",
    "            image_idx = IntSlider(min=1, max=len(RGB_images)-1, step=1, value=11),\n",
    "            Ch = RadioButtons(\n",
    "                    options=['R', 'G', 'B', 'H', 'L','S','Y','U','V'],\n",
    "                    value='R',\n",
    "                    description='Channel:',\n",
    "                    disabled=False\n",
    "            ),\n",
    "            Percent=FloatSlider(min=0, max=100, step=0.01,value=99.0),\n",
    "            NegativeImg = False,\n",
    "            )            "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The highlight extraction works remarkably well on the R channel at 99.0 in images with good light.\n",
    "The S channel also proved to highlight the yellow line in the shaded area of image 4, but it overwhelms the output on image 3, for instance, under the dark bridge.\n",
    "It's much better to output a dark/black image (so we can fall back to a different channel or a different threshold range) than to overwhelm the output with an almost all-white image.\n",
    "\n",
    "After looking at these results we conclude that the appropriate way to successfully extract the lane lines is to combine different extraction/filter thresholds and join their specific \"powers\" by bitwise OR-ing them at the end. Let's try that."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "YellowBoundary = [([20, 50, 150], [40, 255, 255])]\n",
    "WhiteBoundary = [([175, 150, 200], [255, 255, 255])]\n",
    "\n",
    "# Let's load the test images\n",
    "images_paths = glob.glob('../test_images/test*.jpg')\n",
    "# Step through the list and process each image\n",
    "for fname in images_paths:\n",
    "    Org = mpimg.imread(fname)\n",
    "    hls = cv2.cvtColor(Org, cv2.COLOR_RGB2HLS)\n",
    "    White_Highlights = colorFilter(Org, WhiteBoundary)\n",
    "    Yellow_Highlights = colorFilter(hls, YellowBoundary)\n",
    "    Highlights = extract_highlights(img=Org[:,:,0], per=99.0)\n",
    "    \n",
    "    out = np.zeros(Org.shape[:-1], dtype=np.uint8)\n",
    "\n",
    "    out[:, :][((White_Highlights==255) | (Yellow_Highlights==255) | (Highlights==255))] = 1\n",
    "\n",
    "    \n",
    "    # Visualize \n",
    "    f, (ax1, ax2, ax3, ax4, ax5) = plt.subplots(1, 5, figsize=(20,10))\n",
    "    ax1.imshow(Org)\n",
    "    ax1.set_title('Original Image', fontsize=15)\n",
    "\n",
    "    ax2.imshow(White_Highlights, cmap='gray')\n",
    "    ax2.set_title('White Extracted Image', fontsize=15)\n",
    "    \n",
    "    ax3.imshow(Yellow_Highlights, cmap='gray')\n",
    "    ax3.set_title('Yellow Extracted Image', fontsize=15)\n",
    "    \n",
    "    ax4.imshow(Highlights, cmap='gray')\n",
    "    ax4.set_title('Highlights', fontsize=15)\n",
    "    \n",
    "    ax5.imshow(out, cmap='gray')\n",
    "    ax5.set_title('Combined Image', fontsize=15)\n",
    "    "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As you can imagine, we tested many possible combinations and finally decided to extract the white-ish pixels using RGB, extract the yellow-ish pixels using the HLS color space, and finally use a \"highlight extractor\" that keeps pixels above a certain percentile of the Red channel. The results are good enough for now.\n",
    "Also note that YUV (specifically the Y and U channels) produces a very good output when using the Sobel functions. Therefore, the idea is to use RGB to extract white, HLS to extract yellow, highlight extraction, and the R, S, Y and U channels to apply Sobel. "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "# 4. Perspective Transformations: Bird's-Eye View\n",
    "\n",
    "As soon as we think about perspective transformation, the first challenge that comes to mind is how to choose the source and destination points in a semi-automated way, or to use constant values, so there is less human intervention in the process. The intuition for the whole pipeline is:\n",
    "* Camera Calibration\n",
    "* Distortion Correction\n",
    "* Perspective Transformation\n",
    "\n",
    "Since the images used to calibrate the camera were not taken with the same camera as the road test images, we will apply only the perspective transformation to those, but we will show the whole process on the chessboard images. On those, it's very easy to choose the source points, since we have a function that gives us the detected inner corners (as we saw above) and we can use the 4 outermost corners as our source points. The destination points, as we saw in the lectures, will be chosen arbitrarily to be a nice fit for displaying our warped result. \n",
    "       \n",
    "Let's begin by defining the function we will use for the chessboard images \n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Define a function that takes an image, number of x and y inner corner points, \n",
    "# camera matrix and distortion coefficients from above\n",
    "def warpImg(img, nx, ny, mtx, dist):\n",
    "    # Use the OpenCV undistort() function to remove distortion\n",
    "    undist = cv2.undistort(img, mtx, dist, None, mtx)\n",
    "    # Convert undistorted image to grayscale\n",
    "    gray = cv2.cvtColor(undist, cv2.COLOR_BGR2GRAY)\n",
    "    # Search for corners in the grayscaled image\n",
    "    ret, corners = cv2.findChessboardCorners(gray, (nx, ny), None)\n",
    "\n",
    "    if ret == True:\n",
    "        # If we found corners, draw them! (just for fun)\n",
    "        cv2.drawChessboardCorners(undist, (nx, ny), corners, ret)\n",
    "        # Choose offset from image corners to plot detected corners\n",
    "        # This should be chosen to present the result at the proper aspect ratio\n",
    "        # My choice of 100 pixels is not exact, but close enough for our purpose here\n",
    "        offset = 100 # offset for dst points\n",
    "        # Grab the image shape\n",
    "        img_size = (gray.shape[1], gray.shape[0])\n",
    "\n",
    "        # For source points I'm grabbing the outer four detected corners\n",
    "        src = np.float32([corners[0], corners[nx-1], corners[-1], corners[-nx]])\n",
    "        # For destination points, I'm arbitrarily choosing some points to be\n",
    "        # a nice fit for displaying our warped result \n",
    "        # again, not exact, but close enough for our purposes\n",
    "        dst = np.float32([[offset, offset], [img_size[0]-offset, offset], \n",
    "                                     [img_size[0]-offset, img_size[1]-offset], \n",
    "                                     [offset, img_size[1]-offset]])\n",
    "        # Given src and dst points, calculate the perspective transform matrix\n",
    "        M = cv2.getPerspectiveTransform(src, dst)\n",
    "        # Warp the image using OpenCV warpPerspective()\n",
    "        warped = cv2.warpPerspective(undist, M, img_size)\n",
    "\n",
    "        # Return the resulting image and matrix\n",
    "        return warped, M\n",
    "    else:\n",
    "        return img, 0"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Read in the saved camera matrix and distortion coefficients\n",
    "# These are the arrays you calculated using cv2.calibrateCamera()\n",
    "dist_pickle = pickle.load( open( \"camera_calibration.p\", \"rb\" ) )\n",
    "mtx = dist_pickle[\"mtx\"]\n",
    "dist = dist_pickle[\"dist\"]\n",
    "\n",
    "# Let's load the chessboard images again\n",
    "# Read in and make a list of the calibration images provided\n",
    "images_paths = glob.glob('../myGoProCalibration/GOPR0*.jpg')\n",
    "#images = glob.glob('../camera_cal/calibration*.jpg')\n",
    "NumCalibrationImages = len(images_paths)\n",
    "\n",
    "\n",
    "for fname in images_paths:\n",
    "    img = mpimg.imread(fname)\n",
    "    \n",
    "    top_down, perspective_M = warpImg(img, nx, ny, mtx, dist)\n",
    "    # warpImg returns (img, 0) when no corners are found, so check for the scalar sentinel\n",
    "    if not np.isscalar(perspective_M):\n",
    "        f, (ax1, ax2) = plt.subplots(1, 2, figsize=(24, 9))\n",
    "        f.tight_layout()\n",
    "        ax1.imshow(img)\n",
    "        ax1.set_title('Original Image', fontsize=50)\n",
    "        ax2.imshow(top_down)\n",
    "        ax2.set_title('Undistorted and Warped Image', fontsize=50)\n",
    "        plt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.)\n",
    "    "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Perspective Transformations on Road Images\n",
    "Next, as mentioned before, we will perform a perspective transformation on the road test images. We will not correct for distortion, since we don't have chessboard images taken with the same camera."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Let's start by defining the class that will hold the transformations\n",
    "class perspective:\n",
    "    # Define the Properties and the Constructor\n",
    "    def __init__(self, src, dst):\n",
    "        self.src = src\n",
    "        self.dst = dst\n",
    "        self.M = cv2.getPerspectiveTransform(src, dst)\n",
    "        self.M_inv = cv2.getPerspectiveTransform(dst, src)\n",
    "    \n",
    "    # Methods\n",
    "    def warp(self, img):\n",
    "        img_size = (img.shape[1], img.shape[0])\n",
    "        return cv2.warpPerspective(img, self.M, img_size, flags=cv2.INTER_LINEAR)\n",
    "\n",
    "    def inv_warp(self, img):\n",
    "        img_size = (img.shape[1], img.shape[0])\n",
    "        return cv2.warpPerspective(img, self.M_inv, img_size, flags=cv2.INTER_LINEAR)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "After defining this simple but powerful class, we will define a function that uses it on every image in the test set to... test it. We will keep using the handy \"interactive\" tool for visual convenience.\n",
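    "As a quick sanity check of the class above, here is a minimal sketch (the corner values are illustrative picks for a 1280x720 frame, not a calibrated choice) showing that the computed matrix really maps the chosen source points onto the destination rectangle:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "import cv2\n",
    "\n",
    "# Illustrative corner picks for a 1280x720 frame (assumed values)\n",
    "src = np.float32([[132, 675], [1147, 675], [540, 450], [740, 450]])\n",
    "dst = np.float32([[307, 675], [972, 675], [307, 0], [972, 0]])\n",
    "\n",
    "M = cv2.getPerspectiveTransform(src, dst)\n",
    "M_inv = cv2.getPerspectiveTransform(dst, src)\n",
    "\n",
    "# Warping any image of that size uses the same matrix\n",
    "img = np.zeros((720, 1280), dtype=np.uint8)\n",
    "img[450:, 600] = 255  # a fake lane-pixel column\n",
    "warped = cv2.warpPerspective(img, M, (1280, 720), flags=cv2.INTER_LINEAR)\n",
    "\n",
    "# The source points should land exactly on the destination points\n",
    "mapped = cv2.perspectiveTransform(src.reshape(-1, 1, 2), M).reshape(-1, 2)\n",
    "```\n",
    "\n",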
    "One of the key elements of this transformation is, obviously, the selection of the source and destination points. This part, if kept simple, can be very similar to the \"region of interest\" in assignment 1. In our case, we can assume that the camera is always mounted facing forward on the car (usually on top of the car or on the rear-view mirror). So we can take 2 points from the bottom of the image at the same Y (height) to avoid the hood, and a few pixels in from each side to cover a wide area. The next two points are selected, in the same fashion, at the same height (Y) from the top - we will tune this value to avoid the sky - with X values that follow an inverted-V shape fitting our perspective.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "'''\n",
    "               \n",
    "Top Values:     X[0]    X[1]   \n",
    "    Y[0] >________/________\\_________\n",
    "                 /          \\\n",
    "                /            \\\n",
    "               /              \\\n",
    "              /                \\\n",
    "             /                  \\\n",
    "    Y[1] >__/____________________\\____\n",
    "Bot Values X[0]                 X[1]\n",
    "\n",
    "'''\n",
    "def birdsEyeView(image_idx, offset, Ys, topXs, botXs):\n",
    "    SRC = np.float32([\n",
    "    (botXs[0], Ys[1]),\n",
    "    (botXs[1], Ys[1]),    \n",
    "    (topXs[0], Ys[0]),\n",
    "    (topXs[1], Ys[0])])\n",
    "   \n",
    "    # Destination rectangle: bottom corners pulled in by offset, extended straight up\n",
    "    DST = np.float32([\n",
    "        (SRC[0][0] + offset, SRC[0][1]),        \n",
    "        (SRC[1][0] - offset, SRC[1][1]),\n",
    "        (SRC[0][0] + offset, 0),\n",
    "        (SRC[1][0] - offset, 0)])\n",
    "        \n",
    "\n",
    "    aPerspective = perspective(SRC, DST)\n",
    "    \n",
    "    # Assign the image from the already loaded images in RGB_images\n",
    "    Original = RGB_images[image_idx]\n",
    "    \n",
    "    # Let's take a look at the birds-eye view of the original (RGB) image\n",
    "    birdsEye_Org_Img = aPerspective.warp(Original)\n",
    "    \n",
    "    # let's apply the thresholds for the color spaces\n",
    "    YellowBoundary = [([20, 50, 150], [40, 255, 255])]\n",
    "    WhiteBoundary = [([175, 150, 200], [255, 255, 255])]\n",
    "\n",
    "    img = cv2.cvtColor(Original, cv2.COLOR_RGB2HLS)\n",
    "    White_Img_Binary = colorFilter(Original, WhiteBoundary)\n",
    "    Yellow_Img_Binary = colorFilter(img, YellowBoundary)\n",
    "    output = cv2.bitwise_or(White_Img_Binary, Yellow_Img_Binary)    \n",
    "    \n",
    "    birdsEye_Thr_Img = aPerspective.warp(output)\n",
    "    \n",
    "    \n",
    "    # Visualize \n",
    "    f, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(24, 9))\n",
    "    f.tight_layout()\n",
    "    ax1.imshow(Original)\n",
    "    ax1.set_title('Original Image', fontsize=15)\n",
    "\n",
    "    ax2.imshow(birdsEye_Org_Img)\n",
    "    ax2.set_title('Birds-Eye (Org) Image', fontsize=15)\n",
    "        \n",
    "    ax3.imshow(birdsEye_Thr_Img, cmap='gray')\n",
    "    ax3.set_title('Birds-Eye (Thr) Image', fontsize=15)\n",
    "        \n",
    "    "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Let's load again the test images\n",
    "images_paths = glob.glob('../test_images/test*.jpg')\n",
    "RGB_images = []\n",
    "# Step through the list and load each image\n",
    "for fname in images_paths:    \n",
    "    RGB_images.append(mpimg.imread(fname))\n",
    "    \n",
    "print('We have loaded', len(RGB_images))\n",
    "print('Image shape:',RGB_images[0].shape)\n",
    "    \n",
    "interactive(birdsEyeView,\n",
    "            image_idx = IntSlider(min=1, max=len(RGB_images)-1, step=1, value=5),\n",
    "            offset=IntSlider(min=0, max=1280//2, step=1,value=175),\n",
    "            Ys=IntRangeSlider(min=0, max=720, step=1,value=[450, 675]),\n",
    "            topXs=IntRangeSlider(min=0, max=1280, step=1,value=[540, 740]),\n",
    "            botXs=IntRangeSlider(min=0, max=1280, step=1,value=[132, 1147]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's now verify that the perspective transformation works as expected by drawing the src and dst points onto\n",
    "a test image and its warped counterpart, and checking that the lane lines appear parallel in the warped image.\n",
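    "A small sketch of that check (synthetic frames and hypothetical corner values, just to show the drawing mechanics, not the project's actual points):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "import cv2\n",
    "\n",
    "# Hypothetical src/dst corner picks on blank 1280x720 frames\n",
    "src = np.array([[132, 675], [540, 450], [740, 450], [1147, 675]], np.int32)\n",
    "dst = np.array([[307, 675], [307, 0], [972, 0], [972, 675]], np.int32)\n",
    "\n",
    "frame = np.zeros((720, 1280, 3), dtype=np.uint8)\n",
    "warped_view = np.zeros_like(frame)\n",
    "\n",
    "# Red source polygon on the original, green rectangle on the warped view;\n",
    "# in the warped view the left/right edges are vertical, i.e. parallel\n",
    "cv2.polylines(frame, [src.reshape(-1, 1, 2)], True, (255, 0, 0), 3)\n",
    "cv2.polylines(warped_view, [dst.reshape(-1, 1, 2)], True, (0, 255, 0), 3)\n",
    "```\n",
    "\n",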
    "__First__, we will take a look at what we did on Assignment 1 and see how well it performs on the test images (curves)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def weighted_img(img, initial_img, α=0.8, β=1., λ=0.):\n",
    "    \"\"\"\n",
    "    `img` is the output of hough_lines(): a blank (all black) image\n",
    "    with the detected lines drawn on it.\n",
    "    \n",
    "    `initial_img` should be the image before any processing.\n",
    "    \n",
    "    The result image is computed as follows:\n",
    "    \n",
    "    initial_img * α + img * β + λ\n",
    "    NOTE: initial_img and img must be the same shape!\n",
    "    \"\"\"\n",
    "    return cv2.addWeighted(initial_img, α, img, β, λ)\n",
    "\n",
    "def InterpolateLanes(lines, imgShape, order):\n",
    "\n",
    "    # Arrays where we will store the points(X,Y) for each lane to be fitted\n",
    "    x_LeftLane = []\n",
    "    y_LeftLane = []\n",
    "    \n",
    "    x_RightLane = []\n",
    "    y_RightLane = []\n",
    "    \n",
    "    for line in lines:\n",
    "        for x1,y1,x2,y2 in line:                       \n",
    "            # To interpolate/extrapolate each lane separately, we first\n",
    "            # group the points (x,y) by lane using the segment slope\n",
    "            if (x2-x1) != 0:\n",
    "                slope = ((y2-y1)/(x2-x1))\n",
    "                # left lane\n",
    "                if slope < 0:                 \n",
    "                    x_LeftLane.append(x1)\n",
    "                    y_LeftLane.append(y1)\n",
    "\n",
    "                    x_LeftLane.append(x2)\n",
    "                    y_LeftLane.append(y2)\n",
    "                elif slope > 0:\n",
    "                    x_RightLane.append(x1)\n",
    "                    y_RightLane.append(y1)\n",
    "\n",
    "                    x_RightLane.append(x2)\n",
    "                    y_RightLane.append(y2)\n",
    "\n",
    "    \n",
    "    # Interpolate the Left Lane\n",
    "    # 1) calculate the polynomial (it doesn't necessarily have to be a straight line)\n",
    "    z_LeftLane = np.polyfit(x_LeftLane, y_LeftLane, order)\n",
    "    f_LeftLane = np.poly1d(z_LeftLane)\n",
    "    # Where does this lane start\n",
    "    x_LeftLaneStart = min(x_LeftLane)\n",
    "    # Where does this lane finish\n",
    "    x_LeftLaneEnd = max(x_LeftLane)\n",
    "    \n",
    "    \n",
    "    # Interpolate the Right Lane\n",
    "    # 1) calculate the polynomial (it doesn't necessarily have to be a straight line)\n",
    "    z_RightLane = np.polyfit(x_RightLane, y_RightLane, order)\n",
    "    f_RightLane = np.poly1d(z_RightLane)\n",
    "    # Where does this lane start\n",
    "    x_RightLaneStart = min(x_RightLane)\n",
    "    # Where does this lane finish\n",
    "    x_RightLaneEnd = max(x_RightLane)\n",
    "    \n",
    "    \n",
    "    return f_LeftLane, f_RightLane, x_LeftLaneStart, x_RightLaneStart, x_LeftLaneEnd, x_RightLaneEnd\n",
    "    \n",
    "\n",
    "def draw_lanes(img, lines, color=[255, 0, 0], thickness=5):\n",
    "    \n",
    "    f_LeftLane, f_RightLane, x_LeftLaneStart, x_RightLaneStart, x_LeftLaneEnd, x_RightLaneEnd = InterpolateLanes(lines,img.shape,2) \n",
    "    for x in range(x_LeftLaneStart,x_LeftLaneEnd,10) :\n",
    "        cv2.line(img, (x, int(f_LeftLane(x))), (x+10, int(f_LeftLane(x+10))), color, thickness)\n",
    "            \n",
    "    for x in range(x_RightLaneStart,x_RightLaneEnd,10) :\n",
    "        cv2.line(img, (x, int(f_RightLane(x))), (x+10, int(f_RightLane(x+10))), color, thickness)\n",
    "        \n",
    "        \n",
    "def draw_lines(img, lines, color=[255, 0, 0], thickness=3):\n",
    "    \"\"\"\n",
    "    NOTE: this is the function you might want to use as a starting point once you want to \n",
    "    average/extrapolate the line segments you detect to map out the full\n",
    "    extent of the lane (going from the result shown in raw-lines-example.mp4\n",
    "    to that shown in P1_example.mp4).  \n",
    "    \n",
    "    Think about things like separating line segments by their \n",
    "    slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left\n",
    "    line vs. the right line.  Then, you can average the position of each of \n",
    "    the lines and extrapolate to the top and bottom of the lane.\n",
    "    \n",
    "    This function draws `lines` with `color` and `thickness`.    \n",
    "    Lines are drawn on the image inplace (mutates the image).\n",
    "    If you want to make the lines semi-transparent, think about combining\n",
    "    this function with the weighted_img() function above\n",
    "    \"\"\"\n",
    "  \n",
    "    for line in lines:\n",
    "        for x1,y1,x2,y2 in line:\n",
    "            cv2.line(img, (x1, y1), (x2, y2), color, thickness)\n",
    "            \n",
    "def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):\n",
    "    \"\"\"\n",
    "    `img` should be the output of a Canny transform.\n",
    "        \n",
    "    Returns an image with hough lines drawn.\n",
    "    \"\"\"\n",
    "    lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)\n",
    "    line_img = np.zeros((*img.shape, 3), dtype=np.uint8)\n",
    "    #draw_lines(line_img, lines)\n",
    "    draw_lanes(line_img, lines)\n",
    "    return line_img\n",
    "\n",
    "def canny(img, low_threshold, high_threshold):\n",
    "    \"\"\"Applies the Canny transform\"\"\"\n",
    "    return cv2.Canny(img, low_threshold, high_threshold)\n",
    "\n",
    "def region_of_interest(img, vertices):\n",
    "    \"\"\"\n",
    "    Applies an image mask.\n",
    "    \n",
    "    Only keeps the region of the image defined by the polygon\n",
    "    formed from `vertices`. The rest of the image is set to black.\n",
    "    \"\"\"\n",
    "    #defining a blank mask to start with\n",
    "    mask = np.zeros_like(img)   \n",
    "    \n",
    "    #defining a 3 channel or 1 channel color to fill the mask with depending on the input image\n",
    "    if len(img.shape) > 2:\n",
    "        channel_count = img.shape[2]  # i.e. 3 or 4 depending on your image\n",
    "        ignore_mask_color = (255,) * channel_count\n",
    "    else:\n",
    "        ignore_mask_color = 255\n",
    "        \n",
    "    #filling pixels inside the polygon defined by \"vertices\" with the fill color    \n",
    "    cv2.fillPoly(mask, vertices, ignore_mask_color)\n",
    "    \n",
    "    #returning the image only where mask pixels are nonzero\n",
    "    masked_image = cv2.bitwise_and(img, mask)\n",
    "    return masked_image\n",
    "\n",
    "def birdsEyeView(image, offset, Ys, topXs, botXs):\n",
    "    SRC = np.float32([\n",
    "    (botXs[0], Ys[1]),\n",
    "    (botXs[1], Ys[1]),    \n",
    "    (topXs[0], Ys[0]),\n",
    "    (topXs[1], Ys[0])])\n",
    "   \n",
    "    DST = np.float32([\n",
    "        (SRC[0][0] + offset, SRC[0][1]),        \n",
    "        (SRC[-1][0] - offset, SRC[0][1]),\n",
    "        (SRC[0][0] + offset, 0),\n",
    "        (SRC[-1][0] - offset, 0)])\n",
    "        \n",
    "\n",
    "    aPerspective = perspective(SRC, DST)\n",
    "    \n",
    "    # Let's take a look at the birds-eye view over the original (RGB) image\n",
    "    birdsEyeView_Org_Img = aPerspective.warp(image)\n",
    "    \n",
    "    \n",
    "    # Let's apply the thresholds for the color spaces\n",
    "    YellowBoundary = [([20, 50, 150], [40, 255, 255])]\n",
    "    WhiteBoundary = [([175, 150, 200], [255, 255, 255])]\n",
    "\n",
    "    img_HLS = cv2.cvtColor(image, cv2.COLOR_RGB2HLS)\n",
    "    White_Img_Binary = colorFilter(image, WhiteBoundary)\n",
    "    Yellow_Img_Binary = colorFilter(img_HLS, YellowBoundary)\n",
    "    output = cv2.bitwise_or(White_Img_Binary, Yellow_Img_Binary)\n",
    "    \n",
    "    birdsEyeView_Thr_Img = aPerspective.warp(output)\n",
    "    \n",
    "    return birdsEyeView_Org_Img, birdsEyeView_Thr_Img\n",
    "\n",
    "def laneDraw(image_idx):\n",
    "    # Let's first Load an example image    \n",
    "    Original = RGB_images[image_idx]\n",
    "    \n",
    "    # Let's now get the Region Of Interest\n",
    "    # This time we are defining a four sided polygon to mask\n",
    "    imshape = Original.shape\n",
    "    # vertices = np.array([[(0,imshape[0]),(abs(imshape[1]/2)-10, abs(imshape[0]/2)), (abs(imshape[1]/2)+10, abs(imshape[0]/2)), (imshape[1],imshape[0])]], dtype=np.int32)\n",
    "    vertices = np.array([[(0,imshape[0]),(abs(imshape[1]/2)-10, abs(imshape[0]/2)+45), (abs(imshape[1]/2)+10, abs(imshape[0]/2)+45), (imshape[1],imshape[0])]], dtype=np.int32)\n",
    "    img_region_of_interest = region_of_interest(Original,vertices)\n",
    "    \n",
    "    # Let's Filter/Extract/Find the white and the yellow lines\n",
    "    YellowBoundary = [([20, 50, 160], [40, 255, 255])]\n",
    "    WhiteBoundary = [([175, 150, 200], [255, 255, 255])]\n",
    "    \n",
    "    img = cv2.cvtColor(img_region_of_interest, cv2.COLOR_RGB2HLS)\n",
    "    White_Img_Binary = colorFilter(img_region_of_interest, WhiteBoundary)\n",
    "    Yellow_Img_Binary = colorFilter(img, YellowBoundary)\n",
    "    img_only_lane_lines = cv2.bitwise_or(White_Img_Binary, Yellow_Img_Binary)\n",
    "\n",
    "    # Define a kernel size and apply Gaussian smoothing\n",
    "    kernel_size = 1\n",
    "    img_blur = gaussian_blur(img_only_lane_lines,kernel_size)\n",
    "\n",
    "\n",
    "    # Define our parameters for Canny and apply\n",
    "    low_threshold = 1 # These values are for the high-contrast video problem\n",
    "    high_threshold = 250 #low_threshold * 3\n",
    "    canny_edges = canny(img_blur, low_threshold, high_threshold)\n",
    "\n",
    "    # Define the Hough transform parameters\n",
    "    # Make a blank the same size as our image to draw on\n",
    "    rho = 2 #distance resolution in pixels of the Hough grid\n",
    "    theta = np.pi/180 # angular resolution in radians of the Hough grid\n",
    "    threshold = 50    # minimum number of votes (intersections in Hough grid cell)\n",
    "    min_line_len = 7 #minimum number of pixels making up a line\n",
    "    max_line_gap = 15  # maximum gap in pixels between connectable line segments\n",
    "\n",
    "    # Run Hough on edge detected image\n",
    "    line_image = hough_lines(canny_edges, rho, theta, threshold, min_line_len, max_line_gap)\n",
    "    #plt.imshow(line_image)\n",
    "\n",
    "    # Draw the lines on the original image\n",
    "    lines_edges = weighted_img(Original, line_image, α=0.8, β=1., λ=0.)\n",
    "    \n",
    "    #Get the Birds-eye View\n",
    "    birsEyeView_Org_img, birsEyeView_Thr_img = birdsEyeView(lines_edges, offset=136, Ys=(475,720), topXs=(540,720), botXs=(132, 1147))\n",
    "    \n",
    "    # Visualize \n",
    "    f, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(20,10))\n",
    "    ax1.imshow(lines_edges)\n",
    "    ax1.set_title('Original Image', fontsize=15)\n",
    "\n",
    "    ax2.imshow(birsEyeView_Org_img)\n",
    "    ax2.set_title('Birds Eye View (Org) Image', fontsize=15)\n",
    "    \n",
    "    ax3.imshow(birsEyeView_Thr_img, cmap='gray')\n",
    "    ax3.set_title('Birds Eye View (Thr) Image', fontsize=15)\n",
    "    "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "# Let's load the test images again\n",
    "images_paths = glob.glob('../test_images/test*.jpg')\n",
    "RGB_images = []\n",
    "# Step through the list and load each image\n",
    "for fname in images_paths:    \n",
    "    RGB_images.append(mpimg.imread(fname))\n",
    "    \n",
    "print('We have loaded', len(RGB_images), 'images')\n",
    "print('Image shape:', RGB_images[0].shape)\n",
    "\n",
    "#try:\n",
    "interactive(laneDraw,\n",
    "            image_idx = IntSlider(min=1, max=len(RGB_images)-1, step=1, value=1))\n",
    "#except:\n",
    "#    pass # <- This will allow us to jump to another picture in case we had an error"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Above is what we did in Assignment 1, which, as you can see, doesn't perform well on curved roads.\n",
    "Let's use the new technique taught in class.\n",
    "\n",
    "### Line Finding Method Using Peaks in a Histogram\n",
    "    "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "image_idx = 5\n",
    "img = RGB_images[image_idx]\n",
    "\n",
    "#Get the Birds-eye View\n",
    "BEV_Org_img, BEV_Thr_img = birdsEyeView(img, offset=136, Ys=(475,720), topXs=(540,720), botXs=(132, 1147))\n",
    "    \n",
    "\n",
    "histogram = np.sum(BEV_Thr_img[BEV_Thr_img.shape[0]//2:,:], axis=0)\n",
    "plt.plot(histogram)\n",
    "\n",
    "# Visualize \n",
    "f, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(20,10))\n",
    "ax1.imshow(img)\n",
    "ax1.set_title('Original Image', fontsize=15)\n",
    "\n",
    "ax2.imshow(BEV_Org_img)\n",
    "ax2.set_title('Birds Eye View (Org) Image', fontsize=15)\n",
    "\n",
    "ax3.imshow(BEV_Thr_img, cmap='gray')\n",
    "ax3.set_title('Birds Eye View (Thr) Image', fontsize=15)\n"
   ]
  },
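  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The two peaks of the histogram above mark the columns where the lane lines meet the bottom of the image. Here is a minimal, self-contained sketch of turning those peaks into starting columns for a sliding-window search (the function name `find_lane_bases` is illustrative, not part of the submitted code):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def find_lane_bases(binary_warped):\n",
    "    # Sum the bottom half of the image column-wise: lane pixels pile up\n",
    "    # into two peaks, one per lane line.\n",
    "    histogram = np.sum(binary_warped[binary_warped.shape[0] // 2:, :], axis=0)\n",
    "    midpoint = histogram.shape[0] // 2\n",
    "    # The strongest column on each side of the midpoint is that lane's base.\n",
    "    left_base = int(np.argmax(histogram[:midpoint]))\n",
    "    right_base = int(np.argmax(histogram[midpoint:])) + midpoint\n",
    "    return left_base, right_base\n",
    "```"
   ]
  },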
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## For code organization and readability I decided to move to \".py\" files.\n",
    "##### The code files submitted in conjunction with this notebook are:\n",
    "* __CameraCalibration.py__: Defines the camera class and all its methods to obtain:\n",
    "    * 1) the camera matrix used for the perspective projection,\n",
    "    * 2) the distortion coefficients,\n",
    "    * 3) the rotation vectors, and\n",
    "    * 4) the translation vectors.\n",
    "* __ImageProcessingUtils.py__: Defines ALL the functions described in this notebook (Sobel operators, color thresholding, histogram-based line fitting, and others that support the lane detection).\n",
    "* __LaneDetector.py__: Defines the LaneDetector class and all the methods that support the lane detection discussed in class:\n",
    "    * 1) IsLane: checks whether two lines are likely to form a lane by comparing their curvature and separation.\n",
    "        Essentially: are they parallel, and if so, is the distance between them a reasonable lane width?\n",
    "    * 2) Drawing the lane overlay, curvature information, etc.\n",
    "* __Line.py__: Defines the Line class that tries to \"fit\" a line found in the image. I investigated and tried several features to improve the \"fitting\" performance. This definitely needs more work, but it is a good start.\n",
    "* __PerspectiveTrasformer.py__: Defines the Perspective class (properties and methods) exactly as we did above.\n",
    "* __VideoProcessing.py__: Defines the main driver that produces the video outputs required for this assignment. It goes through all the original videos and processes them to overlay the detected lanes frame by frame.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4. Describe how (and identify where in your code) you identified lane-line pixels and fit their positions with a polynomial?\n",
    "\n",
    "The entire process of identifying lane-line pixels and fitting their positions starts in __\"LaneFinder.py\"__. Under the class definition \"LaneFinder\" you'll see a method called __\"process_frame(self, frame)\"__, where everything starts. We begin by making a copy of the original image and undistorting it. We follow by applying all the image-processing techniques we learned (shown above) using __generate_lane_mask(frame, v_cutoff=400)__. We then \"warp\" the image and start the lane detection using __histogram_lane_detection(...)__. These two functions (and all the other image-processing ones) are defined in __ImageProcessingUtils.py__. After we have collected the coordinates of all the extracted pixels that we believe might belong to a lane line, we proceed by fitting them with a line and performing some checks to assess the likelihood of their actually being part of the lane lines. This fitting and checking happens in __\"LaneLine.py\"__. In this file, under the class definition \"LaneLine\", you'll see a method called __\"update(self, x, y)\"__. This method tries to fit, check, and compare lines from previous frames to increase the confidence in our \"finder\" results."
   ]
  },
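  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The per-frame pipeline described above, condensed into pseudocode (the helper names beyond process_frame, generate_lane_mask and histogram_lane_detection are placeholders for the real methods in the submitted files):\n",
    "\n",
    "```\n",
    "def process_frame(self, frame):                 # LaneFinder.py\n",
    "    undistorted = undistort(frame)              # camera calibration applied\n",
    "    mask = generate_lane_mask(undistorted, v_cutoff=400)\n",
    "    warped = warp(mask)                         # birds-eye view\n",
    "    left_px, right_px = histogram_lane_detection(warped)\n",
    "    self.left_line.update(*left_px)             # LaneLine.update(x, y):\n",
    "    self.right_line.update(*right_px)           # fit, sanity-check, smooth\n",
    "    return draw_overlay(frame)                  # lane area + curvature text\n",
    "```"
   ]
  },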
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 5. Describe how (and identify where in your code) you calculated the radius of curvature of the lane and the position of the vehicle with respect to center.\n",
    "The radius of curvature is calculated using the following equation (which can easily be derived), obtained from \"http://mathworld.wolfram.com/RadiusofCurvature.html\":\n",
    "<figure>\n",
    " <img src=\"NumberedEquation3.gif\" width=\"200\" alt=\"Combined Image\" />\n",
    " <figcaption>\n",
    " <p></p> \n",
    " <p style=\"text-align: center;\"> Radious Of Curvature Equation </p> \n",
    " </figcaption>\n",
    "</figure>\n",
    " <p></p> \n",
    "The function that performs this calculation is in __LaneLine.py__ and is called __calc_curvature(curve)__. This function is fed a collection of points that represent the center of the lane (take a look at line 225 of __LaneFinder.py__). We take these points, convert/scale them from pixels to meters, and fit the curve to a typical second-order equation: y(x) = Ax^2 + Bx + C. The first derivative of this curve is y'(x) = 2Ax + B and the second derivative is y''(x) = 2A, so the radius of curvature reduces to R = (1 + (2Ax + B)^2)^(3/2) / |2A|. After obtaining the coefficients A and B (using np.polyfit) we calculate the RoC.\n",
    "\n",
    "As an addition, we tried to calculate a comfortable speed through a curve using a paper that identifies the comfort threshold for lateral acceleration in a vehicle as 1.8 m/s², with medium-comfort and discomfort levels of 3.6 m/s² and 5 m/s², respectively:\n",
    "\"W. J. Cheng, Study on the evaluation method of highway alignment comfortableness [M.S. thesis],\n",
    "Hebei University of Technology, Tianjin, China, 2007.\"\n",
    "The process is very simple. The radial acceleration equation (also very easy to derive) is:\n",
    "<figure>\n",
    " <img src=\"circacceqn.GIF\" width=\"100\" alt=\"Combined Image\" />\n",
    " <figcaption>\n",
    " <p></p> \n",
    " <p style=\"text-align: center;\"> Radial Acceleration </p> \n",
    " </figcaption>\n",
    "</figure>\n",
    " <p></p>\n",
    " \n",
    " So, having defined a threshold for a comfortable radial acceleration and knowing the radius of curvature of the curve, we can easily calculate a desirable speed for the vehicle while taking the turn. Since the project video and the challenge video are mostly on a straight road, this feature has limited value at this point.\n",
    "The function that performs this calculation is in __LaneLine.py__ and is called __calc_desiredSpeed(roc)__.\n",
    "\n",
    "You can find how the overlay is drawn on the original image, and how this information is added to it, at the end of the above-mentioned __\"process_frame(self, frame)\"__ in the LaneFinder class (LaneFinder.py)."
   ]
  },
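  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the two formulas above concrete, here is a minimal, self-contained sketch (the function names are illustrative, not the submitted calc_curvature/calc_desiredSpeed, and the pixel-to-meter scaling is omitted):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def radius_of_curvature(xs, ys, x_eval):\n",
    "    # Fit y(x) = A*x^2 + B*x + C as described above, then apply\n",
    "    # R = (1 + (2*A*x + B)**2)**1.5 / |2*A| at the evaluation point.\n",
    "    A, B, _C = np.polyfit(xs, ys, 2)\n",
    "    return (1 + (2 * A * x_eval + B) ** 2) ** 1.5 / abs(2 * A)\n",
    "\n",
    "def comfortable_speed(radius, a_lat=1.8):\n",
    "    # a = v**2 / R  =>  v = sqrt(a * R); a_lat = 1.8 m/s² is the\n",
    "    # comfort threshold cited above. Returns m/s.\n",
    "    return (a_lat * radius) ** 0.5\n",
    "```"
   ]
  },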
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 6. Provide an example image of your result plotted back down onto the road such that the lane area is identified clearly.\n",
    "\n",
    "<figure>\n",
    " <img src=\"ProjectVideoFrame1.png\" width=\"600\" alt=\"Combined Image\" />\n",
    " <figcaption>\n",
    " <p></p> \n",
    " <p style=\"text-align: center;\"> Project Video 1 </p> \n",
    " </figcaption>\n",
    "</figure>\n",
    " <p></p>\n",
    "  _______________________\n",
    " <figure>\n",
    " <img src=\"ProjectVideoFrame3.png\" width=\"600\" alt=\"Combined Image\" />\n",
    " <figcaption>\n",
    " <p></p> \n",
    " <p style=\"text-align: center;\"> Project Video 2 </p> \n",
    " </figcaption>\n",
    "</figure>\n",
    " <p></p>\n",
    " _______________________\n",
    " <figure>\n",
    " <img src=\"ChallengeVideoFrame1.png\" width=\"600\" alt=\"Combined Image\" />\n",
    " <figcaption>\n",
    " <p></p> \n",
    " <p style=\"text-align: center;\"> Challenge Video </p> \n",
    " </figcaption>\n",
    "</figure>\n",
    " <p></p>\n",
    " _______________________\n",
    " <figure>\n",
    " <img src=\"HarderChallengeVideoFrame3.png\" width=\"600\" alt=\"Combined Image\" />\n",
    " <figcaption>\n",
    " <p></p> \n",
    " <p style=\"text-align: center;\"> Harder Challenge Video 1 </p> \n",
    " </figcaption>\n",
    "</figure>\n",
    " <p></p>\n",
    " _______________________\n",
    " <figure>\n",
    " <img src=\"HarderChallengeVideoFrame7.png\" width=\"600\" alt=\"Combined Image\" />\n",
    " <figcaption>\n",
    " <p></p> \n",
    " <p style=\"text-align: center;\"> Harder Challenge Video 2 </p> \n",
    " </figcaption>\n",
    "</figure>\n",
    " <p></p>\n",
    " _______________________\n",
    " <figure>\n",
    " <img src=\"HarderChallengeVideoFrame8.png\" width=\"600\" alt=\"Combined Image\" />\n",
    " <figcaption>\n",
    " <p></p> \n",
    " <p style=\"text-align: center;\"> Harder Challenge Video 3 </p> \n",
    " </figcaption>\n",
    "</figure>\n",
    " <p></p>\n",
    " _______________________"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "# Conclusion"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We produced 3 videos:\n",
    "* __project_video_MunirJojoVerge.mp4__: Good and acceptable results\n",
    "* __challenge_video_MunirJojoVerge.mp4__: Good and acceptable results\n",
    "* __harder_challenge_video_MunirJojoVerge.mp4__: Good try. Don't trust this algorithm!! :-)\n",
    "    \n",
    "## Challenges:\n",
    "All the challenges faced in this assignment were discussed in their respective sections; here is a summary for those without the time to go through them.\n",
    "* Camera Calibration: After testing the Udacity images, we did NOT find the inner corners on calibration images 1, 4 and 5. We decided to try a different set of images for illustration purposes, although the distortion correction on the road images was performed with the calibration obtained from the Udacity images.\n",
    "* Color and Gradient Transformations:\n",
    "    * Sobel Operator: For the 3 different Sobel functions (absolute, magnitude and direction) we found it challenging to determine which \"gray scale\" would produce the best outcomes. We tried isolated channels from different color spaces as well as averaging the channels (as in the usual gray scale). This created an incredibly wide range of choices, made it difficult to assess the quality of the outputs (how well or poorly a Sobel operator did compared with other channel combinations or other Sobel operators), and was very time consuming. The IPython \"interactive\" tool proved very useful for this task.\n",
    "    * Color: After playing with it for a while, we can conclude that:\n",
    "        * There is NO single combination that works perfectly for all scenarios. It seems that the right approach must involve dynamic adjustment (almost like a feedback loop that adjusts the color thresholding depending on light conditions and speed).\n",
    "        * Research and testing across the 3 main color spaces (RGB, HLS and YUV) proved very challenging because we don't really have a strict way to evaluate \"how well\" they perform at detecting the lane lines. We should focus on standardizing this evaluation, as well as on using some automatic/smart technique to explore all possible combinations of color spaces and thresholds to find the optimal one for this application. A CNN comes to mind, with a large set of images where the labels might be the 2nd-order coefficients of the lane lines (maybe?).\n",
    "        * Exploring other techniques that don't rely as much on color thresholding is probably a good idea. While exploring this path I found a paper for exactly that purpose; its details (authors, title, etc.) were presented above.\n",
    "\n",
    "* Perspective Transformations: Here, the main challenge was deciding on the source and destination points, because in the harder challenge video the lane lines are not always where we would like them to be for easy detection. Changing this source window produced much better results. From this improvement we can conclude that a dynamic selection of this window, probably based on IMU data (speed, angular values and rates), could dynamically improve the prediction.\n",
    "* Beyond the previous points, the rest of the assignment's challenges all fall into the \"programmatic\" bucket: how-do-I-do-this-in-Python types of issues.\n",
    "\n",
    "(Note: I used \"we\" in most of this notebook, but I am the only one working on this; just to make that specifically clear.)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "anaconda-cloud": {},
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.5.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
