{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Argoverse Stereo Competition"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To support our first-ever Stereo Competition on [EvalAI](https://eval.ai/web/challenges/challenge-page/917/overview), we have released ground-truth depth for [Argoverse v1.1](https://www.argoverse.org/data.html), derived from lidar point cloud accumulation. We used our recent [scene flow method](https://arxiv.org/pdf/2011.00320.pdf) to accumulate lidar points from 11 frames and adopted evaluation metrics from the great [KITTI stereo challenge](http://www.cvlibs.net/datasets/kitti/eval_scene_flow.php?benchmark=stereo). However, in comparison to KITTI, our stereo images have 10 times the resolution, and we have 16 times as many training frames — making it a much larger and more robust dataset.\n",
    "\n",
    "Argoverse Stereo consists of rectified stereo images and ground-truth disparity maps for 74 out of the 113 Argoverse 3D Tracking Sequences. The stereo images are 2056 x 2464 pixels, sampled at 5 Hz. The dataset contains a total of 6,624 stereo pairs with ground-truth depth, although we withhold the ground truth for the 15-sequence test set.\n",
    "\n",
    "So here is a notebook to get you started with our stereo dataset and stereo competition. Have fun!"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Data setup\n",
    "The Argoverse Stereo dataset can be downloaded from [here](https://www.argoverse.org/data.html). You will need to download two packages from the **Argoverse Stereo v1.1** section to get started with this tutorial:\n",
    "\n",
    "* Rectified stereo images ( train / val / test )\n",
    "* Disparity maps ( train / val )\n",
    "\n",
    "This tutorial assumes that you have already downloaded and extracted all necessary data into a specific folder and that you have the [Argoverse API](https://github.com/argoai/argoverse-api) up and running. For example, this is the directory structure you should have:\n",
    "\n",
    "```\n",
    "argoverse_stereo_v1.1\n",
    "└───disparity_maps_v1.1\n",
    "|   └───test\n",
    "|   └───train\n",
    "|   |    └───273c1883-673a-36bf-b124-88311b1a80be\n",
    "|   |        └───stereo_front_left_rect_disparity\n",
    "|   |        └───stereo_front_left_rect_objects_disparity\n",
    "|   └───val\n",
    "└───rectified_stereo_images_v1.1\n",
    "    └───test\n",
    "    └───train\n",
    "    |    └───273c1883-673a-36bf-b124-88311b1a80be\n",
    "    |        └───stereo_front_left_rect\n",
    "    |        └───stereo_front_right_rect\n",
    "    |            vehicle_calibration_stereo_info.json\n",
    "    └───val\n",
    "```"
   ]
  },
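  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before going further, it can help to sanity-check that the extracted data matches the layout above. The cell below is a minimal sketch; the root path is a placeholder, so change it to wherever you extracted the two packages."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from pathlib import Path\n",
    "\n",
    "# Placeholder root path; change to wherever you extracted the two packages.\n",
    "data_root = Path(\"/path/to/argoverse_stereo_v1.1\")\n",
    "\n",
    "# The two packages should unpack into these six split directories.\n",
    "expected = [\n",
    "    data_root / package / split\n",
    "    for package in (\"rectified_stereo_images_v1.1\", \"disparity_maps_v1.1\")\n",
    "    for split in (\"train\", \"val\", \"test\")\n",
    "]\n",
    "\n",
    "missing = [p for p in expected if not p.is_dir()]\n",
    "if missing:\n",
    "    print(\"Missing directories:\", *missing, sep=\"\\n  \")\n",
    "else:\n",
    "    print(\"Directory layout looks good.\")"
   ]
  },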
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Installing the dependencies\n",
    "\n",
    "You will need to install four dependencies to run this tutorial:\n",
    "\n",
    "* **Open3D**: See instructions on how to install [here](https://github.com/intel-isl/Open3D).\n",
    "\n",
    "* **OpenCV contrib**:\n",
    "See instructions on how to install [here](https://pypi.org/project/opencv-contrib-python).\n",
    "\n",
    "* **Plotly**:\n",
    "See instructions on how to install [here](https://github.com/plotly/plotly.py).\n",
    "\n",
    "* **Disparity interpolation**:\n",
    "The evaluation algorithm might need to interpolate the predicted disparity image if its density is less than 100% (please see the **Evaluating the results** cell for more details). Therefore, you will need to install the [numba](http://numba.pydata.org/) package for just-in-time compilation of the function `interpolate_disparity` using the command below.\n",
    "\n",
    "```\n",
    "$ pip install numba\n",
    "```\n",
    "\n",
    "Once the dataset and the dependencies are ready, you can run the cells below. Please make sure to change the path to the dataset accordingly."
   ]
  },
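  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To illustrate what such an interpolation does, here is a simplified NumPy-only sketch (an illustration, not the evaluator's actual `interpolate_disparity`): zero-valued (invalid) pixels are filled with the nearest valid disparity along each image row."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def fill_invalid_rowwise(disp):\n",
    "    \"\"\"Fill zero (invalid) disparities with the nearest valid value in each row.\"\"\"\n",
    "    out = disp.copy()\n",
    "    for row in out:\n",
    "        valid = np.flatnonzero(row > 0)\n",
    "        if valid.size == 0:\n",
    "            continue  # nothing to propagate in an all-invalid row\n",
    "        cols = np.arange(row.size)\n",
    "        # For every column, find the nearest valid column to its left and right.\n",
    "        idx = np.searchsorted(valid, cols)\n",
    "        left = valid[np.clip(idx - 1, 0, valid.size - 1)]\n",
    "        right = valid[np.clip(idx, 0, valid.size - 1)]\n",
    "        nearest = np.where(np.abs(cols - left) <= np.abs(right - cols), left, right)\n",
    "        filled = row[nearest]\n",
    "        row[row == 0] = filled[row == 0]\n",
    "    return out\n",
    "\n",
    "print(fill_invalid_rowwise(np.array([[0.0, 5.0, 0.0, 0.0, 9.0]])))"
   ]
  },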
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "%matplotlib notebook\n",
    "\n",
    "import copy\n",
    "import json\n",
    "import shutil\n",
    "from pathlib import Path\n",
    "\n",
    "import cv2\n",
    "import matplotlib.pyplot as plt\n",
    "import numpy as np\n",
    "import open3d as o3d\n",
    "import plotly.graph_objects as go\n",
    "\n",
    "from argoverse.data_loading.stereo_dataloader import ArgoverseStereoDataLoader\n",
    "from argoverse.evaluation.stereo.eval import StereoEvaluator\n",
    "from argoverse.utils.calibration import get_calibration_config\n",
    "from argoverse.utils.camera_stats import RECTIFIED_STEREO_CAMERA_LIST\n",
    "\n",
    "STEREO_FRONT_LEFT_RECT = RECTIFIED_STEREO_CAMERA_LIST[0]\n",
    "STEREO_FRONT_RIGHT_RECT = RECTIFIED_STEREO_CAMERA_LIST[1]\n",
    "\n",
    "\n",
    "# Path to the dataset (please change accordingly).\n",
    "data_dir = \"/media/jpontes/datasets/stereo/argoverse-stereo_v1.1/\"\n",
    "\n",
    "# Choosing the data split: train, val, or test (note that we do not provide ground truth for the test set).\n",
    "split_name = \"train\"\n",
    "\n",
    "# Choosing a specific log id. For example, 273c1883-673a-36bf-b124-88311b1a80be.\n",
    "log_id = \"273c1883-673a-36bf-b124-88311b1a80be\"\n",
    "\n",
    "# Choosing an index to select a specific stereo image pair. You can always modify this to loop over all data.\n",
    "idx = 34\n",
    "\n",
    "# Creating the Argoverse Stereo data loader.\n",
    "stereo_data_loader = ArgoverseStereoDataLoader(data_dir, split_name)\n",
    "\n",
    "# Loading the left rectified stereo image paths for the chosen log.\n",
    "left_stereo_img_fpaths = stereo_data_loader.get_ordered_log_stereo_image_fpaths(\n",
    "    log_id=log_id,\n",
    "    camera_name=STEREO_FRONT_LEFT_RECT,\n",
    ")\n",
    "\n",
    "# Loading the right rectified stereo image paths for the chosen log.\n",
    "right_stereo_img_fpaths = stereo_data_loader.get_ordered_log_stereo_image_fpaths(\n",
    "    log_id=log_id,\n",
    "    camera_name=STEREO_FRONT_RIGHT_RECT,\n",
    ")\n",
    "\n",
    "# Loading the disparity map paths for the chosen log.\n",
    "disparity_map_fpaths = stereo_data_loader.get_ordered_log_disparity_map_fpaths(\n",
    "    log_id=log_id,\n",
    "    disparity_name=\"stereo_front_left_rect_disparity\",\n",
    ")\n",
    "\n",
    "# Loading the disparity map paths for foreground objects for the chosen log.\n",
    "disparity_obj_map_fpaths = stereo_data_loader.get_ordered_log_disparity_map_fpaths(\n",
    "    log_id=log_id,\n",
    "    disparity_name=\"stereo_front_left_rect_objects_disparity\",\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Stereo images and ground-truth disparity loading\n",
    "\n",
    "We provide rectified stereo image pairs, disparity maps for the left stereo images, and also disparity maps for foreground objects only."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "# Loading the rectified stereo images.\n",
    "stereo_front_left_rect_image = stereo_data_loader.get_rectified_stereo_image(left_stereo_img_fpaths[idx])\n",
    "stereo_front_right_rect_image = stereo_data_loader.get_rectified_stereo_image(right_stereo_img_fpaths[idx])\n",
    "\n",
    "# Loading the ground-truth disparity maps. \n",
    "stereo_front_left_rect_disparity = stereo_data_loader.get_disparity_map(disparity_map_fpaths[idx])\n",
    "\n",
    "# Loading the ground-truth disparity maps for foreground objects only. \n",
    "stereo_front_left_rect_objects_disparity = stereo_data_loader.get_disparity_map(disparity_obj_map_fpaths[idx])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Visualization\n",
    "Let's visualize the stereo image pair and its ground-truth disparities."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "# Dilating the disparity maps for better visualization.\n",
    "stereo_front_left_rect_disparity_dil = cv2.dilate(\n",
    "    stereo_front_left_rect_disparity, \n",
    "    kernel=np.ones((2, 2), np.uint8), \n",
    "    iterations=7,\n",
    ")\n",
    "\n",
    "stereo_front_left_rect_objects_disparity_dil = cv2.dilate(\n",
    "    stereo_front_left_rect_objects_disparity,\n",
    "    kernel=np.ones((2, 2), np.uint8),\n",
    "    iterations=7,\n",
    ")\n",
    "\n",
    "plt.figure(figsize=(9, 9))\n",
    "plt.subplot(2, 2, 1)\n",
    "plt.title(\"Rectified left stereo image\")\n",
    "plt.imshow(stereo_front_left_rect_image)\n",
    "plt.axis(\"off\")\n",
    "plt.subplot(2, 2, 2)\n",
    "plt.title(\"Rectified right stereo image\")\n",
    "plt.imshow(stereo_front_right_rect_image)\n",
    "plt.axis(\"off\")\n",
    "plt.subplot(2, 2, 3)\n",
    "plt.title(\"Left disparity map\")\n",
    "plt.imshow(\n",
    "    stereo_front_left_rect_disparity_dil,\n",
    "    cmap=\"nipy_spectral\",\n",
    "    vmin=0,\n",
    "    vmax=192,\n",
    "    interpolation=\"none\",\n",
    ")\n",
    "plt.axis(\"off\")\n",
    "plt.subplot(2, 2, 4)\n",
    "plt.title(\"Left object disparity map\")\n",
    "plt.imshow(\n",
    "    stereo_front_left_rect_objects_disparity_dil,\n",
    "    cmap=\"nipy_spectral\",\n",
    "    vmin=0,\n",
    "    vmax=192,\n",
    "    interpolation=\"none\",\n",
    ")\n",
    "plt.axis(\"off\")\n",
    "plt.tight_layout()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Recovering and visualizing the true depth from the disparity map\n",
    "Here we use the following relationship to recover the depth from disparity: $z = \\frac{fB}{d}$, where $z$ is the depth in meters, $f$ is the focal length in pixels, $B$ is the baseline in meters, and $d$ is the disparity in pixels."
   ]
  },
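  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check of this relationship, with an illustrative focal length (the baseline is the Argoverse stereo value used later in this notebook):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Worked example of z = f * B / d with illustrative numbers.\n",
    "f = 3000.0  # focal length in pixels (illustrative value, not from the calibration)\n",
    "B = 0.2986  # Argoverse stereo baseline in meters\n",
    "d = 44.79   # disparity in pixels\n",
    "\n",
    "z = f * B / d\n",
    "print(f\"depth = {z:.2f} m\")  # a 44.79 px disparity corresponds to about 20 m\n",
    "\n",
    "# Depth is inversely proportional to disparity: halving d doubles z.\n",
    "print(f\"depth = {f * B / (d / 2):.2f} m\")"
   ]
  },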
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "# First, we need to load the camera calibration. Specifically, we want the camera intrinsic parameters.\n",
    "calib_data = stereo_data_loader.get_log_calibration_data(log_id=log_id)\n",
    "camera_config = get_calibration_config(calib_data, camera_name=STEREO_FRONT_LEFT_RECT)\n",
    "\n",
    "# Getting the focal length and baseline. Note that the baseline is constant for the Argoverse stereo rig setup.\n",
    "focal_lenght = camera_config.intrinsic[0, 0]  # Focal length in pixels.\n",
    "BASELINE = 0.2986  # Baseline in meters.\n",
    "\n",
    "# We consider disparities greater than zero to be valid disparities.\n",
    "# A zero disparity corresponds to an infinite distance.\n",
    "valid_pixels = stereo_front_left_rect_disparity > 0\n",
    "\n",
    "# Using the stereo relationship previously described, we can recover the depth map by:\n",
    "stereo_front_left_rect_depth = \\\n",
    "    np.float32((focal_lenght * BASELINE) / (stereo_front_left_rect_disparity + (1.0 - valid_pixels)))\n",
    "\n",
    "# Recovering the colorized point cloud using Open3D.\n",
    "left_image_o3d = o3d.geometry.Image(stereo_front_left_rect_image)\n",
    "depth_o3d = o3d.geometry.Image(stereo_front_left_rect_depth)\n",
    "rgbd_image_o3d = o3d.geometry.RGBDImage.create_from_color_and_depth(\n",
    "    left_image_o3d, \n",
    "    depth_o3d, \n",
    "    convert_rgb_to_intensity=False, \n",
    "    depth_scale=1.0, \n",
    "    depth_trunc=200,\n",
    ")\n",
    "pinhole_camera_intrinsic = o3d.camera.PinholeCameraIntrinsic()\n",
    "pinhole_camera_intrinsic.intrinsic_matrix = camera_config.intrinsic[:3, :3]\n",
    "pinhole_camera_intrinsic.height = camera_config.img_height\n",
    "pinhole_camera_intrinsic.width = camera_config.img_width\n",
    "pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd_image_o3d, pinhole_camera_intrinsic)\n",
    "\n",
    "# Showing the colorized point cloud using the interactive Plotly.\n",
    "points = np.asarray(pcd.points)\n",
    "colors = np.asarray(pcd.colors)\n",
    "\n",
    "fig = go.Figure(\n",
    "    data=[\n",
    "        go.Scatter3d(\n",
    "            x=points[:, 0],\n",
    "            y=points[:, 1],\n",
    "            z=points[:, 2],\n",
    "            mode=\"markers\",\n",
    "            marker=dict(size=1, color=colors),\n",
    "        )\n",
    "    ],\n",
    "    layout=dict(\n",
    "        scene=dict(\n",
    "            xaxis=dict(visible=False),\n",
    "            yaxis=dict(visible=False),\n",
    "            zaxis=dict(visible=False),\n",
    "            aspectmode=\"data\",\n",
    "        )\n",
    "    ),\n",
    ")\n",
    "fig.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Predicting the disparity map from a stereo pair image\n",
    "Here we provide a baseline to predict a disparity map given the left and right rectified stereo images.\n",
    "We choose the classic **Semi-Global Matching (SGM)** algorithm and use its OpenCV implementation.\n",
    "Please check the [OpenCV documentation](https://docs.opencv.org/3.4/d2/d85/classcv_1_1StereoSGBM.html) and the great [SGM paper](https://core.ac.uk/download/pdf/11134866.pdf) if you are interested in more details."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "# Defining the SGM parameters (please check the OpenCV documentation for details).\n",
    "# We found these parameters empirically using the Argoverse Stereo data.\n",
    "max_disp = 192\n",
    "win_size = 10\n",
    "uniqueness_ratio = 15\n",
    "speckle_window_size = 200\n",
    "speckle_range = 2\n",
    "block_size = 11\n",
    "P1 = 8 * 3 * win_size ** 2\n",
    "P2 = 32 * 3 * win_size ** 2\n",
    "\n",
    "# Defining the Weighted Least Squares (WLS) filter parameters.\n",
    "lmbda = 0.1\n",
    "sigma = 1.0\n",
    "\n",
    "# Defining the SGM left matcher.\n",
    "left_matcher = cv2.StereoSGBM_create(\n",
    "    minDisparity=0,\n",
    "    numDisparities=max_disp,\n",
    "    blockSize=block_size,\n",
    "    P1=P1,\n",
    "    P2=P2,\n",
    "    disp12MaxDiff=max_disp,\n",
    "    uniquenessRatio=uniqueness_ratio,\n",
    "    speckleWindowSize=speckle_window_size,\n",
    "    speckleRange=speckle_range,\n",
    "    mode=cv2.STEREO_SGBM_MODE_SGBM_3WAY,\n",
    ")\n",
    "\n",
    "# Defining the SGM right matcher needed for the left-right consistency check in the WLS filter.\n",
    "right_matcher = cv2.ximgproc.createRightMatcher(left_matcher)\n",
    "\n",
    "# Defining the WLS filter.\n",
    "wls_filter = cv2.ximgproc.createDisparityWLSFilter(matcher_left=left_matcher)\n",
    "wls_filter.setLambda(lmbda)\n",
    "wls_filter.setSigmaColor(sigma)\n",
    "\n",
    "# Computing the disparity maps.\n",
    "left_disparity = left_matcher.compute(stereo_front_left_rect_image, stereo_front_right_rect_image)\n",
    "right_disparity = right_matcher.compute(stereo_front_right_rect_image, stereo_front_left_rect_image)\n",
    "\n",
    "# Applying the WLS filter.\n",
    "left_disparity_pred = wls_filter.filter(left_disparity, stereo_front_left_rect_image, None, right_disparity)\n",
    "\n",
    "# Recovering the disparity map.\n",
    "# OpenCV produces a disparity map as a signed short obtained by multiplying subpixel shifts with 16.\n",
    "# To recover the true disparity values, we need to divide the output by 16 and convert to float.\n",
    "left_disparity_pred = np.float32(left_disparity_pred) / 16.0\n",
    "\n",
    "# OpenCV will also set negative values for invalid disparities where matches could not be found.\n",
    "# Here we set all invalid disparities to zero.\n",
    "left_disparity_pred[left_disparity_pred < 0] = 0"
   ]
  },
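  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The 16x fixed-point convention used by OpenCV above can be seen with a toy value: a subpixel disparity of 33.25 px is stored as the integer 532, and dividing by 16 recovers it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# OpenCV's SGBM stores disparities as int16 fixed-point with 4 fractional bits.\n",
    "raw = np.int16(33.25 * 16)  # what compute() would hold for a 33.25 px match\n",
    "true_disparity = np.float32(raw) / 16.0\n",
    "print(raw, true_disparity)  # 532 33.25"
   ]
  },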
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Visualizing the results\n",
    "Here we plot the stereo image pair, the ground truth disparity, and the estimated disparity by SGM."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "plt.figure(figsize=(9, 9))\n",
    "plt.subplot(2, 2, 1)\n",
    "plt.title(\"Rectified left stereo image\")\n",
    "plt.imshow(stereo_front_left_rect_image)\n",
    "plt.axis(\"off\")\n",
    "plt.subplot(2, 2, 2)\n",
    "plt.title(\"Rectified right stereo image\")\n",
    "plt.imshow(stereo_front_right_rect_image)\n",
    "plt.axis(\"off\")\n",
    "plt.subplot(2, 2, 3)\n",
    "plt.title(\"Ground-truth left disparity map\")\n",
    "plt.imshow(\n",
    "    stereo_front_left_rect_disparity_dil,\n",
    "    cmap=\"nipy_spectral\",\n",
    "    vmin=0,\n",
    "    vmax=192,\n",
    "    interpolation=\"none\",\n",
    ")\n",
    "plt.axis(\"off\")\n",
    "plt.subplot(2, 2, 4)\n",
    "plt.title(\"Estimated left disparity map\")\n",
    "plt.imshow(\n",
    "    left_disparity_pred, \n",
    "    cmap=\"nipy_spectral\", \n",
    "    vmin=0, \n",
    "    vmax=192, \n",
    "    interpolation=\"none\"\n",
    ")\n",
    "plt.axis(\"off\")\n",
    "plt.tight_layout()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Recovering and visualizing the predicted point cloud from the disparity map\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "# We consider disparities greater than zero to be valid disparities.\n",
    "# A zero disparity corresponds to an infinite distance.\n",
    "valid_pixels = left_disparity_pred > 0\n",
    "\n",
    "# Using the stereo relationship previously described, we can recover the predicted depth map by:\n",
    "left_depth_pred = \\\n",
    "    np.float32((focal_lenght * BASELINE) / (left_disparity_pred + (1.0 - valid_pixels)))\n",
    "\n",
    "# Recovering the colorized point cloud using Open3D.\n",
    "left_image_o3d = o3d.geometry.Image(stereo_front_left_rect_image)\n",
    "depth_o3d = o3d.geometry.Image(left_depth_pred)\n",
    "rgbd_image_o3d = o3d.geometry.RGBDImage.create_from_color_and_depth(\n",
    "    left_image_o3d, \n",
    "    depth_o3d, \n",
    "    convert_rgb_to_intensity=False, \n",
    "    depth_scale=1.0, \n",
    "    depth_trunc=200,\n",
    ")\n",
    "pinhole_camera_intrinsic = o3d.camera.PinholeCameraIntrinsic()\n",
    "pinhole_camera_intrinsic.intrinsic_matrix = camera_config.intrinsic[:3, :3]\n",
    "pinhole_camera_intrinsic.height = camera_config.img_height\n",
    "pinhole_camera_intrinsic.width = camera_config.img_width\n",
    "pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd_image_o3d, pinhole_camera_intrinsic)\n",
    "\n",
    "# Showing the colorized point cloud using the interactive Plotly.\n",
    "points = np.asarray(pcd.points)\n",
    "# Randomly sampling indices for faster rendering.\n",
    "indices = np.random.randint(len(points), size=100000)  \n",
    "points = points[indices]\n",
    "colors = np.asarray(pcd.colors)[indices]\n",
    "\n",
    "fig = go.Figure(\n",
    "    data=[\n",
    "        go.Scatter3d(\n",
    "            x=points[:, 0],\n",
    "            y=points[:, 1],\n",
    "            z=points[:, 2],\n",
    "            mode=\"markers\",\n",
    "            marker=dict(size=1, color=colors),\n",
    "        )\n",
    "    ],\n",
    "    layout=dict(\n",
    "        scene=dict(\n",
    "            xaxis=dict(visible=False),\n",
    "            yaxis=dict(visible=False),\n",
    "            zaxis=dict(visible=False),\n",
    "            aspectmode=\"data\",\n",
    "        ),\n",
    "    ),\n",
    ")\n",
    "fig.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Saving the predicted disparity map to disk\n",
    "We encode the disparity maps using the raster-graphics PNG file format for lossless data compression. The disparity images are saved as uint16, so the true disparity values lie in the range [0, 256).\n",
    "\n",
    "A zero \"0\" value indicates an invalid disparity/pixel. For a ground-truth disparity map, zero means that no ground truth is available for that pixel.\n",
    "\n",
    "To recover the real disparity value, we first convert the uint16 value to a float and then divide it by 256.0."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "# Encoding the real disparity values to an uint16 data format to save as an uint16 PNG file.\n",
    "left_disparity_pred = np.uint16(left_disparity_pred * 256.0)\n",
    "\n",
    "timestamp = int(Path(disparity_map_fpaths[idx]).stem.split(\"_\")[-1])\n",
    "\n",
    "# Change the path to the directory you would like to save the result.\n",
    "# The log id must be consistent with the stereo images' log id.\n",
    "save_dir_disp = f\"/tmp/results/sgm/stereo_output/{log_id}/\"\n",
    "Path(save_dir_disp).mkdir(parents=True, exist_ok=True)\n",
    "\n",
    "# The predicted disparity filename must have the format: 'disparity_[TIMESTAMP OF THE LEFT STEREO IMAGE].png' \n",
    "filename = f\"{save_dir_disp}/disparity_{timestamp}.png\"\n",
    "\n",
    "# Writing the PNG file to disk.\n",
    "cv2.imwrite(filename, left_disparity_pred)"
   ]
  },
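  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To check that the uint16 encoding round-trips, here is a small self-contained sketch that writes a synthetic disparity map with the same convention and reads it back (note the `cv2.IMREAD_UNCHANGED` flag, which preserves the 16-bit depth):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import cv2\n",
    "import numpy as np\n",
    "\n",
    "# Synthetic float disparities, including a zero (invalid) pixel.\n",
    "disp = np.array([[0.0, 12.5], [100.25, 255.0]], dtype=np.float32)\n",
    "\n",
    "# Encode to uint16 and write; decode by converting to float and dividing by 256.\n",
    "cv2.imwrite(\"/tmp/disparity_roundtrip.png\", np.uint16(disp * 256.0))\n",
    "decoded = np.float32(cv2.imread(\"/tmp/disparity_roundtrip.png\", cv2.IMREAD_UNCHANGED)) / 256.0\n",
    "\n",
    "print(np.array_equal(decoded, disp))  # these values are exact multiples of 1/256"
   ]
  },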
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Evaluating the results\n",
    "Our evaluation algorithm computes the percentage of bad pixels averaged over all ground-truth pixels, similar to the [KITTI Stereo 2015](http://www.cvlibs.net/datasets/kitti/eval_scene_flow.php?benchmark=stereo) benchmark. \n",
    "\n",
    "We consider the disparity of a pixel to be correctly estimated if the absolute disparity error is less than a threshold **or** its relative error is less than 10% of its true value. We define three disparity error thresholds: 3, 5, and 10 pixels.\n",
    "\n",
    "Our [EvalAI leaderboard](https://eval.ai/web/challenges/challenge-page/917/leaderboard) ranks all methods according to the number of bad pixels using a threshold of 10 pixels (i.e. **all:10**). Some stereo matching methods such as SGM might produce sparse disparity maps, meaning that some pixels will not have valid disparity values. In those cases, we interpolate the predicted disparity map using a simple nearest-neighbor interpolation, as in the [KITTI Stereo 2015](http://www.cvlibs.net/datasets/kitti/eval_scene_flow.php?benchmark=stereo) benchmark, to ensure we can compare it to our semi-dense ground-truth disparity map. Current deep stereo matching methods normally predict disparity maps with 100% density, so no interpolation step is needed for their evaluation.\n",
    "\n",
    "**The disparity errors metrics are the following:**\n",
    "\n",
    "* **all**: Percentage of stereo disparity errors averaged over all ground-truth pixels in the reference frame (left stereo image).\n",
    "* **bg**: Percentage of stereo disparity errors averaged only over background regions.\n",
    "* **fg**: Percentage of stereo disparity errors averaged only over foreground regions.\n",
    "\n",
    "The **$\\mathbf{*}$** (asterisk) means that the evaluation is performed using only the algorithm predicted disparities. Even though the disparities might be sparse, they are not interpolated.\n",
    "\n",
    "We evaluate all metrics using three error thresholds: 3, 5, or 10 pixels. \n",
    "The notation is then: **all:3**, **all:5**, **all:10**, **fg:3**, **fg:5**, **fg:10**, and so on."
   ]
  },
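  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The bad-pixel criterion above can be sketched in a few lines of NumPy. This is our illustration of the rule, not the evaluator's actual code: a pixel counts as bad only if its absolute error reaches the threshold **and** its relative error reaches 10%."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def bad_pixel_percentage(pred, gt, abs_thresh=10.0, rel_thresh=0.1):\n",
    "    \"\"\"Percentage of bad pixels over the valid (gt > 0) ground-truth pixels.\"\"\"\n",
    "    valid = gt > 0\n",
    "    abs_err = np.abs(pred[valid] - gt[valid])\n",
    "    rel_err = abs_err / gt[valid]\n",
    "    bad = (abs_err >= abs_thresh) & (rel_err >= rel_thresh)\n",
    "    return 100.0 * bad.mean()\n",
    "\n",
    "gt = np.array([50.0, 100.0, 0.0, 20.0])\n",
    "pred = np.array([55.0, 130.0, 7.0, 20.5])\n",
    "\n",
    "# Only the 100 -> 130 pixel is bad (error 30 px and 30%); the invalid gt pixel is ignored.\n",
    "print(bad_pixel_percentage(pred, gt))"
   ]
  },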
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "# Path to the predicted disparity maps.\n",
    "pred_dir = Path(save_dir_disp)\n",
    "\n",
    "# Path to the ground-truth disparity maps.\n",
    "gt_dir = Path(f\"{data_dir}/disparity_maps_v1.1/{split_name}/{log_id}\")\n",
    "\n",
    "# Path to save the disparity error image.\n",
    "save_figures_dir = Path(\"/tmp/results/sgm/figures/\")\n",
    "save_figures_dir.mkdir(parents=True, exist_ok=True)\n",
    "\n",
    "print(pred_dir)\n",
    "print(gt_dir)\n",
    "\n",
    "# Creating the stereo evaluator.\n",
    "evaluator = StereoEvaluator(\n",
    "    pred_dir,\n",
    "    gt_dir,\n",
    "    save_figures_dir,\n",
    "    save_disparity_error_image=True,\n",
    "    num_procs=-1,\n",
    ")\n",
    "\n",
    "# Running the stereo evaluation.\n",
    "metrics = evaluator.evaluate()\n",
    "\n",
    "# Printing the quantitative results (using json trick for organized printing).\n",
    "print(f\"{json.dumps(metrics, sort_keys=False, indent=4)}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Plotting the disparity error image\n",
    "We compute the disparity error image as in the [KITTI Stereo 2015](http://www.cvlibs.net/datasets/kitti/eval_scene_flow.php?benchmark=stereo) benchmark. The disparity error map uses a log colormap depicting correct estimates in blue and wrong estimates in red color tones. We consider a disparity estimate correct when the absolute disparity error is less than 10 pixels or the relative error is less than 10% of its true value."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "# Reading the PNG disparity error image and converting it to RGB.\n",
    "disparity_error_image_path = f\"{save_figures_dir}/{log_id}/disparity_error_{timestamp}.png\"\n",
    "disparity_error_image = cv2.cvtColor(cv2.imread(disparity_error_image_path), cv2.COLOR_BGR2RGB)\n",
    "\n",
    "# Showing the disparity error image.\n",
    "plt.figure(figsize=(9, 9))\n",
    "plt.imshow(disparity_error_image)\n",
    "plt.axis(\"off\")\n",
    "plt.tight_layout()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Creating the submission file\n",
    "To submit the results from your stereo matching method for evaluation in our EvalAI server, you will need to run your method on the entire test set (15 sequences).\n",
    "\n",
    "This is the directory structure you should have:\n",
    "\n",
    "```\n",
    "stereo_output\n",
    "└───0f0d7759-fa6e-3296-b528-6c862d061bdd\n",
    "|       disparity_315974292602180504.png\n",
    "|       disparity_315974292801980264.png\n",
    "|                     .\n",
    "|                     .\n",
    "|                     .\n",
    "|  \n",
    "└───673e200e-944d-3b40-a447-f83353bd85ed\n",
    "└───764abf69-c7a0-32c3-97f5-330de68e13af\n",
    "                      .\n",
    "                      .\n",
    "                      .\n",
    "    \n",
    "```\n",
    "For each sequence log from the test set, you will save the disparity maps using the previously described PNG format and naming.\n",
    "\n",
    "Once you have all the generated output in the proper directory structure, you can pack the **stereo_output** directory into a **.zip** package. The submission file **stereo_output.zip** can then be submitted for evaluation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "# Example of a command to create the zip submission file.\n",
    "# Note: no trailing slash, so the archive is created as stereo_output.zip next to the directory.\n",
    "output_dir = \"/tmp/results/sgm/stereo_output\"\n",
    "shutil.make_archive(output_dir, \"zip\", output_dir)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Submitting results to EvalAI\n",
    "Here are some directions to submit your results to the EvalAI server.\n",
    "\n",
    "Your zip file will likely be large, on the order of gigabytes; the SGM baseline zip file, for example, is about 2 GB. You will need to use the EvalAI command-line interface (EvalAI-CLI) to submit such large files. Please follow the instructions described [here](https://github.com/Cloud-CV/evalai-cli) to install it. In addition, you can follow the submission instructions in our [EvalAI Stereo Competition page](https://eval.ai/web/challenges/challenge-page/917/submission).\n",
    "\n",
    "Once you have the EvalAI-CLI up and running, you can submit your results using the following command. Please be sure to add as much detail as possible about your method (e.g. a brief description, link to paper, link to code, etc.). **Note that you can only submit to EvalAI once a day**.\n",
    "\n",
    "```    \n",
    "$ evalai challenge 917 phase 1894 submit --file /tmp/results/sgm/stereo_output.zip --large\n",
    "```\n",
    "\n",
    "If everything goes well, you can check the status of your submission using the command:\n",
    "\n",
    "```\n",
    "$ evalai submission 'YOUR SUBMISSION ID'\n",
    "```\n",
    "\n",
    "**The evaluation normally takes about 10 minutes to complete.** Once completed, you can check the results in the [My Submissions](https://eval.ai/web/challenges/challenge-page/917/my-submission) section of the EvalAI web interface. Then, you can choose to show your method in our [leaderboard](https://eval.ai/web/challenges/challenge-page/917/evaluation) and check how it compares against our baselines and others!\n",
    "\n",
    "**Well done completing this tutorial and good luck!**"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
