{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "cWt3ZGmf8zSv"
   },
   "source": [
     "## Learn About The Execution Environment\n",
     "\n",
     "Author : Lei Wang\n",
     "\n",
     "Date : May 20th, 2019\n",
     "\n",
     "\n",
     "\n",
     "We can use Jupyter as an interface to communicate with a remote machine by executing shell commands. Check $PROJECT_ROOT/Readme.md (defined below) to see how we prepare our execution environment.\n",
     "\n",
     "\n",
     "In Colab, your content is not persistent on the default HDD. Hence you need external storage to persist your content. Here we use Google Drive to do that. \n",
     "\n",
     "Mask_RCNN, trained on the many-class COCO dataset, is well suited to person detection. In the code, we limit detection to the 'person' class and output the detection results as a video. \n",
     "\n",
     "We clone a popular Mask_RCNN implementation to our Colab HDD and write Python code to run inference on each frame of an uploaded video.\n",
     "\n",
     "Examples of using OpenCV to generate detection results and a video are provided. We also show you how to use FFmpeg to generate a video from the generated images.\n",
     "\n",
     "Jupyter also provides us with UI components for further demonstration. Rendering point clouds using Three.js powered by WebGL, or adding an HTML5 component to show media, could not be simpler.\n",
    "\n",
    "\n"
   ]
  },
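  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a preview of the FFmpeg step mentioned above, the snippet below builds the frames-to-video command in Python. The frame pattern `frame_%04d.png` and output name `detections.mp4` are assumptions; adjust them to your actual output directory before running the command with `!{cmd}` in a cell.\n",
    "\n",
    "```python\n",
    "def ffmpeg_cmd(pattern=\"frame_%04d.png\", fps=30, out=\"detections.mp4\"):\n",
    "  # -pix_fmt yuv420p keeps the encoded video playable in browsers\n",
    "  return (\"ffmpeg -y -framerate {fps} -i {pattern} \"\n",
    "          \"-c:v libx264 -pix_fmt yuv420p {out}\").format(\n",
    "              fps=fps, pattern=pattern, out=out)\n",
    "\n",
    "cmd = ffmpeg_cmd()\n",
    "print(cmd)\n",
    "# in a notebook cell: !{cmd}\n",
    "```\n"
   ]
  },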
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "rQRC0G7bBFeA"
   },
   "source": [
     "##### Check the virtual machine we are running on"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 539
    },
    "colab_type": "code",
    "id": "nHrjbxz1UlQi",
    "outputId": "461bb82d-1071-4cc7-cd61-763b9d7b17ac"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[]\n",
      "Tue Mar 31 16:44:30 2020       \r\n",
      "+-----------------------------------------------------------------------------+\r\n",
      "| NVIDIA-SMI 440.64.00    Driver Version: 440.64.00    CUDA Version: 10.2     |\r\n",
      "|-------------------------------+----------------------+----------------------+\r\n",
      "| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |\r\n",
      "| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |\r\n",
      "|===============================+======================+======================|\r\n",
      "|   0  GeForce RTX 2070    On   | 00000000:01:00.0  On |                  N/A |\r\n",
      "| 43%   36C    P8    21W / 175W |    592MiB /  7959MiB |     22%      Default |\r\n",
      "+-------------------------------+----------------------+----------------------+\r\n",
      "                                                                               \r\n",
      "+-----------------------------------------------------------------------------+\r\n",
      "| Processes:                                                       GPU Memory |\r\n",
      "|  GPU       PID   Type   Process name                             Usage      |\r\n",
      "|=============================================================================|\r\n",
      "|    0       999      G   /usr/lib/xorg/Xorg                            38MiB |\r\n",
      "|    0      1063      G   /usr/bin/gnome-shell                          50MiB |\r\n",
      "|    0      1265      G   /usr/lib/xorg/Xorg                           328MiB |\r\n",
      "|    0      1394      G   /usr/bin/gnome-shell                          87MiB |\r\n",
      "|    0      5274      G   ...AgAAAAAAAAYAAAAAAAEAAAAAAAAAAAAAAAAAAAA    51MiB |\r\n",
      "+-----------------------------------------------------------------------------+\r\n"
     ]
    }
   ],
   "source": [
    "# switch to cpu device\n",
    "import os\n",
    "os.environ['CUDA_VISIBLE_DEVICES'] = '-1'\n",
    "\n",
    "# https://stackoverflow.com/questions/38559755/how-to-get-current-available-gpus-in-tensorflow\n",
    "import tensorflow as tf\n",
    "from tensorflow.python.client import device_lib\n",
    "\n",
    "def get_available_gpus():\n",
    "  local_device_protos = device_lib.list_local_devices()\n",
    "  return [device_proto for device_proto in local_device_protos if device_proto.device_type == 'GPU']\n",
    "\n",
    "print(get_available_gpus())\n",
    "\n",
    "!nvidia-smi"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 2094
    },
    "colab_type": "code",
    "id": "JkXW3nmZRXl8",
    "outputId": "c6e7b19a-91c7-4672-b0b3-1f48b4d11b0a"
   },
   "outputs": [],
   "source": [
    "!cat /etc/os-release\n",
    "!cat /proc/cpuinfo\n",
    "!cat /proc/meminfo\n",
    "!echo \"check disk usage ...\"\n",
    "!df -h"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "m7rVcQ4B44Fr"
   },
   "source": [
     "##### Make our notebook an active page\n",
     "\n",
     "Open the development console, type the following JS code into the console, and press \"Enter\":\n",
     "\n",
     "```js\n",
     "function RenewConn(Selector){\n",
     "  document.querySelector(Selector).click()\n",
     "}\n",
     "var query = \"colab-connect-button\"\n",
     "// fire roughly every 10 minutes, with random jitter, in case a risk-control detector against bot behavior is deployed\n",
     "var click_mock = 1000 * 60 * 10 + Math.random() * 1000 * 60\n",
     "\n",
     "global = this // window object\n",
     "// pass a function, not its return value, so the click repeats on every tick\n",
     "global.prog_id = setInterval(function(){ RenewConn(query) }, click_mock)\n",
     "// keep prog_id in case you want to stop the timer later with clearInterval(prog_id)\n",
     "// The code above either increases your activity score or renews the connection by sending signals to the remote\n",
     "```\n",
     "\n",
     "The code runs asynchronously and prevents the Google Colab page from closing the connection to the remote, so your session won't be interrupted.\n",
     "\n",
     "See the [Colaboratory FAQ](https://research.google.com/colaboratory/faq.html#idle-timeouts) for details.\n",
     "\n",
     "If you really need stronger guarantees, instead of abusing free public resources, I recommend trying [Colab Pro](https://colab.research.google.com/signup?utm_source=faq&utm_medium=link&utm_campaign=how_long_can_nbs_run)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "PkOYVNsRBXR9"
   },
   "source": [
     "##### Mount Google Drive as an external HDD"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 222
    },
    "colab_type": "code",
    "id": "O66HT0hHMTcs",
    "outputId": "da66f272-16af-488c-e87b-1707183b03be"
   },
   "outputs": [],
   "source": [
     "# mount google drive\n",
    "# see https://colab.research.google.com/notebooks/io.ipynb#scrollTo=c2W5A2px3doP\n",
    "\n",
    "!apt-get install -y -qq software-properties-common python-software-properties module-init-tools\n",
    "!add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null\n",
    "!apt-get update -qq 2>&1 > /dev/null\n",
    "!apt-get -y install -qq google-drive-ocamlfuse fuse\n",
    "\n",
    "from google.colab import auth\n",
    "auth.authenticate_user()\n",
    "from oauth2client.client import GoogleCredentials\n",
    "creds = GoogleCredentials.get_application_default()\n",
    "import getpass\n",
    "\n",
    "!google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL\n",
    "\n",
    "vcode = getpass.getpass()\n",
    "\n",
    "!echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}\n",
    "!mkdir -p drive\n",
    "!google-drive-ocamlfuse drive"
   ]
  },
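  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Aside: the ocamlfuse route above still works, but Colab also ships a built-in helper for mounting Drive. The sketch below guards the import so it degrades gracefully outside a Colab runtime; the mount point `/content/drive` follows Colab's convention.\n",
    "\n",
    "```python\n",
    "try:\n",
    "  # available only inside a Colab runtime\n",
    "  from google.colab import drive\n",
    "  drive.mount('/content/drive')  # triggers an interactive OAuth prompt\n",
    "  on_colab = True\n",
    "except ImportError:\n",
    "  on_colab = False\n",
    "print('running on Colab:', on_colab)\n",
    "```\n"
   ]
  },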
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "lRR05RU3R2xn"
   },
   "outputs": [],
   "source": [
    "HDD=\"/home/yiakwy\"\n",
    "ROOT=\"{hdd}/WorkSpace\".format(hdd=HDD)\n",
    "REPO=\"SEMANTIC_SLAM\"\n",
    "PROJECT_ROOT=\"{root}/Github/{repo}\".format(root=ROOT, repo=REPO)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 67
    },
    "colab_type": "code",
    "id": "kzhGYITuW684",
    "outputId": "23af6ddd-a4d5-4908-d18d-d468776cdcb5"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "cmake version 3.9.0\n",
      "\n",
      "CMake suite maintained and supported by Kitware (kitware.com/cmake).\n",
      "Python 3.6.10 :: Anaconda, Inc.\n",
      "tensorflow ver:  2.1.0\n",
      "WARNING:tensorflow:From /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages/tensorflow_core/python/compat/v2_compat.py:88: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "non-resource variables are not supported in the long term\n"
     ]
    }
   ],
   "source": [
    "!cmake --version\n",
    "!python --version\n",
    "\n",
    "import tensorflow as tf \n",
    "print(\"tensorflow ver: \", tf.__version__)\n",
    "\n",
    "import tensorflow.compat.v1 as tf\n",
    "tf.disable_v2_behavior()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "g3_asbAI29f8"
   },
   "source": [
    "## Install BA Solver\n",
    "\n",
     "Note: I ran into problems building a g2o-like solver (mostly implemented in C++) in Colab; the build takes a long time to return."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 34
    },
    "colab_type": "code",
    "id": "4NQA6ceb3B_E",
    "outputId": "bceb638d-7971-46af-f721-d5bbf8ecdefd"
   },
   "outputs": [],
   "source": [
    "%%bash\n",
    "ROOT=\"/home/yiakwy/WorkSpace/Github\"\n",
    "Repo=\"g2opy\"\n",
    "if [ ! -d ${ROOT}/${Repo} ]; then\n",
    "git clone https://github.com/uoip/g2opy.git ${ROOT}/${Repo}\n",
    "fi\n",
    "\n",
    "cd ${ROOT}/${Repo}\n",
    "ls -h\n",
    "mkdir -p build\n",
    "cd build\n",
    "cmake -G\"Unix Makefiles\" .. #> ${ROOT}/${Repo}/out.log\n",
    "make -j8\n",
    "cd ..\n",
    "python setup.py install"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "h5XnGvn_bWpX"
   },
   "outputs": [],
   "source": [
    "%%bash\n",
    "ROOT=\"/content/drive/Workspace/Github\"\n",
    "Repo=\"SpatialPerceptron\"\n",
    "if [ ! -d ${ROOT}/${Repo} ]; then\n",
    "git clone https://github.com/yiakwy/SpatialPerceptron.git ${ROOT}/${Repo}\n",
    "fi\n",
    "\n",
    "notebooks_dir=\"notebooks\"\n",
    "if [ ! -d \"${ROOT}/${Repo}/${notebooks_dir}\" ]; then\n",
    "echo \"Making ${ROOT}/${Repo}/${notebooks_dir}\"\n",
    "mkdir -p ${ROOT}/${Repo}/${notebooks_dir}\n",
    "fi"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 34
    },
    "colab_type": "code",
    "id": "s2hgloSMuC8S",
    "outputId": "55534557-745b-43e8-d6be-7f946e94871a"
   },
   "outputs": [],
   "source": [
    "%%bash\n",
    "ROOT=\"/home/yiakwy/WorkSpace/Github\"\n",
    "if [ ! -d ${ROOT} ]; then\n",
    "echo \"Making ${ROOT}\"\n",
    "mkdir -p ${ROOT};\n",
    "fi\n",
    "\n",
    "Repo=\"Mask_RCNN\"\n",
    "if [ ! -d \"${ROOT}/${Repo}\" ]; then\n",
    "git clone https://github.com/matterport/Mask_RCNN.git ${ROOT}/${Repo}\n",
    "fi\n",
    "\n",
    "pip install Cython\n",
    "\n",
    "Repo_Coco=\"coco\"\n",
    "if [ ! -d \"${ROOT}/${Repo_Coco}\" ]; then\n",
    "git clone https://github.com/waleedka/coco ${ROOT}/${Repo_Coco}\n",
    "fi"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 1000
    },
    "colab_type": "code",
    "id": "epCSzmr3M8Rm",
    "outputId": "8a22a37e-5c06-4362-c809-3630b929ce8d"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Requirement already satisfied: folium==0.2.1 in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (0.2.1)\n",
      "Requirement already satisfied: Jinja2 in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from folium==0.2.1) (2.11.1)\n",
      "Requirement already satisfied: MarkupSafe>=0.23 in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from Jinja2->folium==0.2.1) (1.1.1)\n",
      "Requirement already up-to-date: setuptools in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (46.1.3)\n",
      "Requirement already up-to-date: wheel in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (0.34.2)\n"
     ]
    }
   ],
   "source": [
    "# update on Feb 26 2020\n",
    "!pip install folium==0.2.1 # downgrade from 0.8.3 to 0.2.1\n",
    "!pip install -U setuptools\n",
    "!pip install -U wheel\n",
    "# !make install -C \"$ROOT/Github/coco/PythonAPI\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 54
    },
    "colab_type": "code",
    "id": "hT0NmvFeiLdg",
    "outputId": "09c562a7-7f7a-4338-fc97-7a22844da9fa"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Requirement already satisfied: PyYAML in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (5.3.1)\n",
      "Requirement already satisfied: keras in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (2.3.1)\n",
      "Requirement already satisfied: pyyaml in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from keras) (5.3.1)\n",
      "Requirement already satisfied: numpy>=1.9.1 in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from keras) (1.18.1)\n",
      "Requirement already satisfied: scipy>=0.14 in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from keras) (1.4.1)\n",
      "Requirement already satisfied: six>=1.9.0 in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from keras) (1.14.0)\n",
      "Requirement already satisfied: keras-preprocessing>=1.0.5 in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from keras) (1.1.0)\n",
      "Requirement already satisfied: h5py in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from keras) (2.10.0)\n",
      "Requirement already satisfied: keras-applications>=1.0.6 in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from keras) (1.0.8)\n",
      "Requirement already satisfied: scikit-image in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (0.16.2)\n",
      "Requirement already satisfied: scipy>=0.19.0 in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from scikit-image) (1.4.1)\n",
      "Requirement already satisfied: imageio>=2.3.0 in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from scikit-image) (2.8.0)\n",
      "Requirement already satisfied: pillow>=4.3.0 in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from scikit-image) (7.0.0)\n",
      "Requirement already satisfied: PyWavelets>=0.4.0 in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from scikit-image) (1.1.1)\n",
      "Requirement already satisfied: matplotlib!=3.0.0,>=2.0.0 in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from scikit-image) (3.2.1)\n",
      "Requirement already satisfied: networkx>=2.0 in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from scikit-image) (2.4)\n",
      "Requirement already satisfied: numpy>=1.13.3 in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from scipy>=0.19.0->scikit-image) (1.18.1)\n",
      "Requirement already satisfied: cycler>=0.10 in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image) (0.10.0)\n",
      "Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image) (2.4.6)\n",
      "Requirement already satisfied: kiwisolver>=1.0.1 in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image) (1.1.0)\n",
      "Requirement already satisfied: python-dateutil>=2.1 in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image) (2.8.1)\n",
      "Requirement already satisfied: decorator>=4.3.0 in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from networkx>=2.0->scikit-image) (4.4.2)\n",
      "Requirement already satisfied: six in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from cycler>=0.10->matplotlib!=3.0.0,>=2.0.0->scikit-image) (1.14.0)\n",
      "Requirement already satisfied: setuptools in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from kiwisolver>=1.0.1->matplotlib!=3.0.0,>=2.0.0->scikit-image) (46.1.3)\n",
      "Requirement already satisfied: imgaug in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (0.4.0)\n",
      "Requirement already satisfied: numpy>=1.15 in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from imgaug) (1.18.1)\n",
      "Requirement already satisfied: matplotlib in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from imgaug) (3.2.1)\n",
      "Requirement already satisfied: six in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from imgaug) (1.14.0)\n",
      "Requirement already satisfied: Pillow in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from imgaug) (7.0.0)\n",
      "Requirement already satisfied: imageio in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from imgaug) (2.8.0)\n",
      "Requirement already satisfied: scikit-image>=0.14.2 in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from imgaug) (0.16.2)\n",
      "Requirement already satisfied: opencv-python in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from imgaug) (4.2.0.32)\n",
      "Requirement already satisfied: scipy in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from imgaug) (1.4.1)\n",
      "Requirement already satisfied: Shapely in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from imgaug) (1.7.0)\n",
      "Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from matplotlib->imgaug) (2.4.6)\n",
      "Requirement already satisfied: python-dateutil>=2.1 in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from matplotlib->imgaug) (2.8.1)\n",
      "Requirement already satisfied: cycler>=0.10 in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from matplotlib->imgaug) (0.10.0)\n",
      "Requirement already satisfied: kiwisolver>=1.0.1 in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from matplotlib->imgaug) (1.1.0)\n",
      "Requirement already satisfied: networkx>=2.0 in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from scikit-image>=0.14.2->imgaug) (2.4)\n",
      "Requirement already satisfied: PyWavelets>=0.4.0 in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from scikit-image>=0.14.2->imgaug) (1.1.1)\n",
      "Requirement already satisfied: setuptools in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from kiwisolver>=1.0.1->matplotlib->imgaug) (46.1.3)\n",
      "Requirement already satisfied: decorator>=4.3.0 in /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages (from networkx>=2.0->scikit-image>=0.14.2->imgaug) (4.4.2)\n"
     ]
    }
   ],
   "source": [
    "!pip install PyYAML\n",
    "!pip install keras\n",
    "!pip install scikit-image\n",
    "!pip install imgaug"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 34
    },
    "colab_type": "code",
    "id": "9ZHYPO8iNS19",
    "outputId": "003232d8-0335-484f-c325-65178a25ff37"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "loading /home/yiakwy/WorkSpace/Github/Mask_RCNN ...\n",
      "loading /home/yiakwy/WorkSpace/Github/Mask_RCNN/mrcnn ...\n"
     ]
    }
   ],
   "source": [
     "# alternatively, you could add the path to sys.path without entering the directory and\n",
     "# build your own software system. See the Semantic Vision Supported Tracker implementation\n",
     "# for an example\n",
    "# %cd \"$ROOT/Github/Mask_RCNN\"\n",
    "import os\n",
    "import sys\n",
    "\n",
    "def add_path(path):\n",
    "  path = os.path.abspath(path)\n",
    "  if path not in sys.path:\n",
    "    print(\"loading %s ...\" % path)\n",
    "    sys.path.insert(0, path)\n",
    "  else:\n",
     "    print(\"%s already exists!\" % path)\n",
    "\n",
    "add_path(\"{}/Github/Mask_RCNN\".format(ROOT))\n",
    "add_path(\"{}/Github/Mask_RCNN/mrcnn\".format(ROOT))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 221
    },
    "colab_type": "code",
    "id": "DNt3kXe3XyT_",
    "outputId": "a26f6163-3f1b-4457-c815-cc1d02457b2e"
   },
   "outputs": [],
   "source": [
    "%%bash\n",
    "File=\"mask_rcnn_coco.h5\"\n",
    "ROOT=\"/home/yiakwy/WorkSpace/Github\"\n",
    "PROJECT_ROOT=\"${ROOT}/SEMANTIC_SLAM\"\n",
    "mkdir -p $PROJECT_ROOT/data/models/coco\n",
    "if [ ! -f $PROJECT_ROOT/data/models/coco/${File} ]; then\n",
    "# wget https://github.com/matterport/Mask_RCNN/releases/download/v2.0/${File} -P $PROJECT_ROOT/data/models/coco\n",
    "echo ${PROJECT_ROOT}\n",
    "fi\n",
    "ls -h"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "Xil_lMXKA8XZ"
   },
   "source": [
    "## Load Mask_RCNN model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 96
    },
    "colab_type": "code",
    "id": "zvEl4LulX8G5",
    "outputId": "8bdac1b3-ff8e-4940-82af-abd282b1dc19"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "/home/yiakwy/WorkSpace/Github/Mask_RCNN already exists!\n",
      "loading /home/yiakwy/WorkSpace/Github/Mask_RCNN/samples/coco ...\n",
      "loading /home/yiakwy/WorkSpace/Github/SEMANTIC_SLAM ...\n",
      "WARNING:tensorflow:From /home/yiakwy/WorkSpace/Github/Mask_RCNN/mrcnn/model.py:33: The name tf.keras.backend.set_session is deprecated. Please use tf.compat.v1.keras.backend.set_session instead.\n",
      "\n",
      "Device mapping:\n",
      "/job:localhost/replica:0/task:0/device:XLA_CPU:0 -> device: XLA_CPU device\n",
      "\n"
     ]
    }
   ],
   "source": [
    "import os\n",
    "import sys\n",
    "import random\n",
    "import math\n",
    "import numpy as np\n",
    "import skimage.io\n",
    "import matplotlib\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "%matplotlib inline\n",
    "\n",
    "Project_base = PROJECT_ROOT\n",
    "\n",
     "# Note: the Mask_RCNN source code has been changed to use the tf2.x-to-tf1.x compatibility interface\n",
    "# see discussion https://github.com/matterport/Mask_RCNN/issues/1797\n",
    "\n",
    "# Import Mask RCNN\n",
    "MASK_RCNN_ROOT=\"{root}/Github/Mask_RCNN\".format(root=ROOT)\n",
    "MASK_RCNN_Dataset_Coco=\"{mask_rcnn_root}/samples/coco\".format(mask_rcnn_root=MASK_RCNN_ROOT)\n",
    "\n",
    "add_path(MASK_RCNN_ROOT)\n",
    "add_path(MASK_RCNN_Dataset_Coco)\n",
    "\n",
    "# Import WorkDir\n",
    "add_path(Project_base)\n",
    "\n",
    "from mrcnn import utils\n",
    "import mrcnn.model as Model\n",
    "from mrcnn import visualize\n",
    "\n",
    "import coco\n",
    "\n",
    "MODEL_DIR = os.path.join(Project_base, \"logs\")\n",
    "\n",
    "# Local path to trained weights file\n",
    "COCO_MODEL_PATH = os.path.join(Project_base, \"data/models/coco\", \"mask_rcnn_coco.h5\")\n",
    "if not os.path.exists(COCO_MODEL_PATH):\n",
    "  utils.download_trained_weights(COCO_MODEL_PATH)\n",
    "  \n",
    "IMAGE_DIR = os.path.join(Project_base, \"images\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "piFt2d1KuVWr"
   },
   "source": [
    "##### Model Configurations\n",
    "\n",
    "We'll be using a model trained on the MS-COCO dataset. The configurations of this model are in the CocoConfig class in coco.py.\n",
    "\n",
     "For inference, modify the configuration a bit to fit the task: sub-class the CocoConfig class and override the attributes you need to change."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 910
    },
    "colab_type": "code",
    "id": "XqwLnfb_t65n",
    "outputId": "1f64a8af-1599-4c87-c870-5780b88527b3"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "Configurations:\n",
      "BACKBONE                       resnet101\n",
      "BACKBONE_STRIDES               [4, 8, 16, 32, 64]\n",
      "BATCH_SIZE                     1\n",
      "BBOX_STD_DEV                   [0.1 0.1 0.2 0.2]\n",
      "COMPUTE_BACKBONE_SHAPE         None\n",
      "DETECTION_MAX_INSTANCES        100\n",
      "DETECTION_MIN_CONFIDENCE       0.7\n",
      "DETECTION_NMS_THRESHOLD        0.3\n",
      "FPN_CLASSIF_FC_LAYERS_SIZE     1024\n",
      "GPU_COUNT                      1\n",
      "GRADIENT_CLIP_NORM             5.0\n",
      "IMAGES_PER_GPU                 1\n",
      "IMAGE_CHANNEL_COUNT            3\n",
      "IMAGE_MAX_DIM                  1024\n",
      "IMAGE_META_SIZE                93\n",
      "IMAGE_MIN_DIM                  800\n",
      "IMAGE_MIN_SCALE                0\n",
      "IMAGE_RESIZE_MODE              square\n",
      "IMAGE_SHAPE                    [1024 1024    3]\n",
      "LEARNING_MOMENTUM              0.9\n",
      "LEARNING_RATE                  0.001\n",
      "LOSS_WEIGHTS                   {'rpn_class_loss': 1.0, 'rpn_bbox_loss': 1.0, 'mrcnn_class_loss': 1.0, 'mrcnn_bbox_loss': 1.0, 'mrcnn_mask_loss': 1.0}\n",
      "MASK_POOL_SIZE                 14\n",
      "MASK_SHAPE                     [28, 28]\n",
      "MAX_GT_INSTANCES               100\n",
      "MEAN_PIXEL                     [123.7 116.8 103.9]\n",
      "MINI_MASK_SHAPE                (56, 56)\n",
      "NAME                           coco\n",
      "NUM_CLASSES                    81\n",
      "POOL_SIZE                      7\n",
      "POST_NMS_ROIS_INFERENCE        1000\n",
      "POST_NMS_ROIS_TRAINING         2000\n",
      "PRE_NMS_LIMIT                  6000\n",
      "ROI_POSITIVE_RATIO             0.33\n",
      "RPN_ANCHOR_RATIOS              [0.5, 1, 2]\n",
      "RPN_ANCHOR_SCALES              (32, 64, 128, 256, 512)\n",
      "RPN_ANCHOR_STRIDE              1\n",
      "RPN_BBOX_STD_DEV               [0.1 0.1 0.2 0.2]\n",
      "RPN_NMS_THRESHOLD              0.7\n",
      "RPN_TRAIN_ANCHORS_PER_IMAGE    256\n",
      "STEPS_PER_EPOCH                1000\n",
      "TOP_DOWN_PYRAMID_SIZE          256\n",
      "TRAIN_BN                       False\n",
      "TRAIN_ROIS_PER_IMAGE           200\n",
      "USE_MINI_MASK                  True\n",
      "USE_RPN_ROIS                   True\n",
      "VALIDATION_STEPS               50\n",
      "WEIGHT_DECAY                   0.0001\n",
      "\n",
      "\n"
     ]
    }
   ],
   "source": [
    "class InferenceConfig(coco.CocoConfig):\n",
    "    # Set batch size to 1 since we'll be running inference on\n",
    "    # one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU\n",
    "    GPU_COUNT = 1\n",
    "    IMAGES_PER_GPU = 1\n",
    "\n",
     "class InferenceRTX2070Config(coco.CocoConfig):\n",
     "    pass\n",
    "    \n",
    "config = InferenceConfig()\n",
    "config.display()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "gybvUop2vy24"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Device mapping:\n",
      "/job:localhost/replica:0/task:0/device:XLA_CPU:0 -> device: XLA_CPU device\n",
      "\n",
      "WARNING:tensorflow:From /home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages/tensorflow_core/python/ops/resource_variable_ops.py:1635: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "If using Keras pass *_constraint arguments to layers.\n",
      "WARNING:tensorflow:From /home/yiakwy/WorkSpace/Github/Mask_RCNN/mrcnn/model.py:438: calling crop_and_resize_v1 (from tensorflow.python.ops.image_ops_impl) with box_ind is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "box_ind is deprecated, use box_indices instead\n",
      "WARNING:tensorflow:From /home/yiakwy/WorkSpace/Github/Mask_RCNN/mrcnn/model.py:787: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Use `tf.cast` instead.\n"
     ]
    }
   ],
   "source": [
    "# import keras\n",
    "# import keras.backend as K\n",
    "\n",
    "tf_config = tf.ConfigProto(device_count={'CPU': 1, 'GPU': 0})\n",
    "\n",
    "GPU_FRACTION = 0.5\n",
    "# config.gpu_options.per_process_gpu_memory_fraction = GPU_FRACTION\n",
    "# K.tensorflow_backend.set_session(tf.Session(config=config))\n",
    "\n",
     "# see the issue at https://github.com/tensorflow/tensorflow/issues/24496\n",
    "tf_config.log_device_placement = True\n",
    "tf_config.gpu_options.allow_growth = True\n",
    "tf_config.gpu_options.per_process_gpu_memory_fraction = GPU_FRACTION\n",
    "tf.keras.backend.set_session(tf.Session(config=tf_config))\n",
    "\n",
    "# another approach\n",
    "gpus = tf.config.experimental.list_physical_devices('GPU')\n",
    "if gpus:\n",
    "  try:\n",
    "    for gpu in gpus:\n",
    "      tf.config.experimental.set_memory_growth(gpu, True)\n",
    "  except RuntimeError as e:\n",
    "    print(e)\n",
    "    \n",
    "\n",
    "# Create model object in inference mode.\n",
    "model = Model.MaskRCNN(mode=\"inference\", model_dir=MODEL_DIR, config=config)\n",
    "\n",
    "# Load weights trained on MS-COCO\n",
    "model.load_weights(COCO_MODEL_PATH, by_name=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "Ul534KHox46a"
   },
   "source": [
    "##### Coco Dataset Class Names\n",
    "\n",
     "We don't want to require you to download the COCO dataset just to run this demo, so we're including the list of class names below. The index of the class name in the list represents its ID (first class is 0, second is 1, third is 2, etc.)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "YSmle6ZcwxRF"
   },
   "outputs": [],
   "source": [
    "# COCO Class names\n",
    "# Index of the class in the list is its ID. For example, to get ID of\n",
    "# the teddy bear class, use: class_names.index('teddy bear')\n",
    "class_names = ['BG', 'person', 'bicycle', 'car', 'motorcycle', 'airplane',\n",
    "               'bus', 'train', 'truck', 'boat', 'traffic light',\n",
    "               'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird',\n",
    "               'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear',\n",
    "               'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie',\n",
    "               'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball',\n",
    "               'kite', 'baseball bat', 'baseball glove', 'skateboard',\n",
    "               'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup',\n",
    "               'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',\n",
    "               'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',\n",
    "               'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed',\n",
    "               'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote',\n",
    "               'keyboard', 'cell phone', 'microwave', 'oven', 'toaster',\n",
    "               'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors',\n",
    "               'teddy bear', 'hair drier', 'toothbrush']"
   ]
  },
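  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Quick check of the ID convention above: an ID is simply the index in the\n",
    "# list. Shown on a small subset of the names so the cell is self-contained.\n",
    "names = ['BG', 'person', 'bicycle', 'car']\n",
    "ids = {name: i for i, name in enumerate(names)}\n",
    "print(ids['person'])  # 1\n",
    "print(ids['car'])     # 3\n"
   ]
  },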
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "_2gqsJmhBrGR"
   },
   "source": [
    "## Process the uploaded video"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 51
    },
    "colab_type": "code",
    "id": "0n0ktuC_yGts",
    "outputId": "1726d7b2-254e-4f48-ae6d-5ce4661caa15"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "semantic_tracker.ipynb\twidgets\r\n"
     ]
    }
   ],
   "source": [
    "!mkdir -p $Project_base/log/video\n",
    "!mkdir -p $Project_base/log/video/saver\n",
    "!ls"
   ]
  },
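  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Assemble per-frame detection images into a video with FFmpeg, as mentioned\n",
    "# in the introduction. The frame pattern and output name below are\n",
    "# placeholders for illustration; adjust them to your actual file names.\n",
    "!ffmpeg -y -framerate 24 -i $Project_base/log/video/frame_%05d.png -c:v libx264 -pix_fmt yuv420p $Project_base/log/video/out.mp4\n"
   ]
  },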
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "jK94wbBb4FwZ"
   },
   "source": [
    "## Semantic Tracker\n",
    "\n",
    "A tracker that mixes optical flow with a Kalman Filter predictor and a Hungarian algorithm (KM algorithm) matcher\n",
    "\n",
    "Author: LEI WANG (yiak.wy@gmail.com)\n",
    "Date: Feb 28 2020\n",
    "\n",
    "In this section I am going to implement a tracker used for relocalization. With reference to the implementation of the Re-Id person project Simple Online Realtime Tracking (SORT), and instead of the traditional SIFT and ORB features widely used by vision-only SLAM systems (ORB-SLAM, for example), I learn a frequency decomposition of image-domain quantities (i.e. CNN backbone layers) to provide a robust semantic multi-obstacle tracker for vision-supported odometry (VSO, such as VIO or GPS-IMU-Vision odometry). See our pending work, Semantic Vision Supported Odometry (SVSO), for further references.\n",
    "\n",
    "The vision-supported odometry program implements part of the logic of an embedded high-definition localization software, which is an important component of HDMap and an entry point to end devices."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "GZOqF1xCkKZ1"
   },
   "source": [
    "##### Predictor"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "PiLo_KNy4rCb"
   },
   "outputs": [],
   "source": [
    "# The KalmanFilter works as a predictor. I also demonstrate that optical flow provides a better\n",
    "# clue of obstacle movement.\n",
    "import cv2\n",
    "import numpy as np\n",
    "import logging\n",
    "from logging.config import dictConfig\n",
    "logging.basicConfig(level=logging.INFO, filemode='w', format=u\"%(asctime)s [%(levelname)s]:%(filename)s, %(name)s, in line %(lineno)s >> %(message)s\")\n",
    "# logging.basicConfig(level=logging.INFO)\n",
    "_logger = logging.getLogger(\"predictor\")\n",
    "\n",
    "class NotConfigured(Exception):pass\n",
    "\n",
    "class LoggerAdaptor(logging.LoggerAdapter):\n",
    "\n",
    "    def __init__(self, prefix, logger):\n",
    "        # super(self, App_LoggerAdaptor).__init__(logger, {})\n",
    "        logging.LoggerAdapter.__init__(self, logger, {})\n",
    "        self.prefix = prefix\n",
    "\n",
    "    def process(self, msg, kwargs):\n",
    "        return \"%s %s\" % (self.prefix, msg), kwargs\n",
    "\n",
    "def configure_loggings(config):\n",
    "    if config:\n",
    "        dictConfig(config)\n",
    "    else:\n",
    "        raise NotConfigured(\"passing a null config!\")\n",
    "\n",
    "\n",
    "# This class implements a general Kalman Filter used by BBoxKalmanFilter.\n",
    "# BBoxKalmanFilter provides the specific setup values for this general state predictor,\n",
    "# tailored to the video tracking task.\n",
    "#\n",
    "# Author: Lei Wang\n",
    "# Date: Feb 20, 2020\n",
    "# reference of the implementation : \n",
    "# 1. http://ros-developer.com/2019/04/11/extended-kalman-filter-explained-with-python-code/\n",
    "# 2. Simple Online Realtime Tracking project.\n",
    "#\n",
    "# Credits to the relevant authors\n",
    "class ExtendedKalmanFilter(object):\n",
    "\n",
    "  logger = LoggerAdaptor(\"ExtendedKalmanFilter\", _logger)\n",
    "\n",
    "  def __init__(self, states_dim, measures_dim):\n",
    "    \"\"\"\n",
    "    @param states_dim: int\n",
    "    @param measures_dim: int\n",
    "    \"\"\"\n",
    "    self.states_dim = states_dim\n",
    "    self.measures_dim = measures_dim\n",
    "    # states to inspect, bound to the symbol `x` during computation;\n",
    "    # the time series of states is maintained by a tracker with an array of observations\n",
    "    self._states = np.zeros((states_dim,1))\n",
    "\n",
    "    # usually I use 1/f, where f is the estimated frequency of the video or observation sequence\n",
    "    self.dt = 1.0 / 24\n",
    "    # Constant Acceleration physics model; as long as dt is small enough, this approximation holds\n",
    "    self._A = self.populate_physics_constrain(self.dt)\n",
    "    self._B = None\n",
    "    self._u = None\n",
    "    # See usage examples from https://github.com/balzer82/Kalman. Note that\n",
    "    # my implementation is generalized to arbitrary one-dimensional observations\n",
    "    self.P = np.eye(states_dim)\n",
    "    # process noise covariance matrix\n",
    "    self.Q = self.populate_states_variance_constrain()\n",
    "    # measurement matrix\n",
    "    self.H = self.populate_measures_constrain()\n",
    "    # measurement noise covariance matrix\n",
    "    self.R = self.populate_priors_variance_constrain()\n",
    "\n",
    "  def Init(self):pass\n",
    "\n",
    "  ## helper funcs to initiate the Kalman Filter computing routines.\n",
    "\n",
    "  # The routines are also helpful when we implement a multi-sensor fusion strategy:\n",
    "  # since sensor frequencies vary dramatically, we can apply different measures when\n",
    "  # observation data from each sensor arrives.\n",
    "\n",
    "  # @todo : TODO impl\n",
    "  def populate_physics_constrain(self, dt):\n",
    "    \"\"\"\n",
    "    @return A : np.array with shape of (states_dim, states_dim)\n",
    "    \"\"\"\n",
    "    # for each observed position x, we have\n",
    "    # x_{k+1} = x_{k} + v_{k} * dt + 1/2 * a_{k} * dt^2\n",
    "    states_dim = self.states_dim\n",
    "\n",
    "    A = np.eye(states_dim)\n",
    "    factors = [dt, 0.5*dt*dt]\n",
    "    assert(self.states_dim % 3 == 0)\n",
    "    first_order_observation_dim = self.states_dim // 3\n",
    "    for i in range(states_dim):\n",
    "      # couple each state with its first (k=1) and second (k=2) derivative\n",
    "      for k in (1, 2):\n",
    "        j = i + first_order_observation_dim * k\n",
    "        if j < states_dim:\n",
    "          A[i, j] = factors[k-1]\n",
    "    return A\n",
    "\n",
    "  # @todo : TODO impl\n",
    "  def populate_measures_constrain(self, measures_indice=None):\n",
    "    \"\"\"\n",
    "    @return H : np.array with shape of (measures_dim, states_dim)\n",
    "    \"\"\"\n",
    "    H = np.zeros((self.measures_dim, self.states_dim))\n",
    "    if measures_indice is not None and isinstance(measures_indice, (list, tuple)):\n",
    "      assert(len(measures_indice) == self.measures_dim)\n",
    "      # update H\n",
    "      for i, idx in enumerate(measures_indice):\n",
    "        H[i][idx] = 1\n",
    "    else:\n",
    "      if measures_indice is None:\n",
    "        logging.warning(\"measures_indice is None; please set it later via 'KalmanFilter.populate_measures_constrain(indice)'.\")\n",
    "      else:\n",
    "        raise Exception(\"expect list or tuple, but encounter %s for measures_indice\" % str(type(measures_indice)))\n",
    "    return H\n",
    "\n",
    "  def populate_priors_variance_constrain(self, measures_indice=None, measures_noises=None):\n",
    "    \"\"\"\n",
    "    @return R : np.array with shape of (measures_dim, measures_dim)\n",
    "    \"\"\"\n",
    "    R = np.eye(self.measures_dim)\n",
    "    if measures_indice is not None and isinstance(measures_indice, (list, tuple)):\n",
    "      if measures_noises is None: \n",
    "        measures_noises = np.ones((self.measures_dim, 1))\n",
    "      elif hasattr(measures_noises, \"__len__\"):\n",
    "        # implements list interface\n",
    "        pass\n",
    "      else:\n",
    "        raise Exception(\"expect `measures_noises` to be an array-like object, but encountered %s\" % str(type(measures_noises)))\n",
    "      assert(len(measures_indice) == self.measures_dim)\n",
    "      # update R; ravel so a (measures_dim, 1) column vector also works with np.diag\n",
    "      R = np.diag(np.asarray(measures_noises).ravel())\n",
    "    else:\n",
    "      if measures_indice is None:\n",
    "        logging.warning(\"measures_indice is None; please set it later via 'KalmanFilter.populate_measures_constrain(indice)'.\")\n",
    "      else:\n",
    "        raise Exception(\"expect list or tuple, but encounter %s for measures_indice\" % str(type(measures_indice)))\n",
    "    return R\n",
    "\n",
    "  def populate_states_variance_constrain(self, factors=None):\n",
    "    \"\"\"\n",
    "    @return Q : np.array with shape of (states_dim, states_dim)\n",
    "    \"\"\"\n",
    "    if factors is None:\n",
    "      factors = np.zeros((self.states_dim,1))\n",
    "      factors[0,0] = 1.0\n",
    "    assert(factors.shape == (self.states_dim, 1))\n",
    "    Q = factors * factors.T\n",
    "    return Q\n",
    "\n",
    "  ## Interfaces to modify the public attributes\n",
    "\n",
    "  def set_dt(self, dt):\n",
    "    self.dt = dt\n",
    "    return self\n",
    "  \n",
    "  def set_P(self, P):\n",
    "    self.P = P\n",
    "    return self\n",
    "\n",
    "  def set_Q(self, Q):\n",
    "    self.Q = Q\n",
    "    return self\n",
    "\n",
    "  # @todo : TODO impl\n",
    "  def predict(self, states):\n",
    "    P = self.P\n",
    "    A = self._A\n",
    "    Q = self.Q\n",
    "\n",
    "    # perform state prediction using a pure physics model.\n",
    "    # I am going to embed opticalFlow control here, where the magic happens:\n",
    "    # suppose our movement equation does not obey `rigid body movement` but is estimated from\n",
    "    # opticalFlow? That's it!\n",
    "    self._states = A.dot(states)\n",
    "\n",
    "    self.P = A.dot(P.dot(A.T)) + Q\n",
    "    return self._states\n",
    "\n",
    "  # @todo : TODO impl\n",
    "  # Note we can change measures dynamically for multi-sensor fusion\n",
    "  def update(self, observed_states):\n",
    "    states = self._states\n",
    "    P = self.P\n",
    "    H = self.H\n",
    "    R = self.R\n",
    "    z = observed_states\n",
    "    I = np.eye(self.states_dim)\n",
    "\n",
    "    def validate_states(states):\n",
    "      # check whether states is an object implementing the python array-like protocol\n",
    "      if not hasattr(states, \"__len__\"):\n",
    "        raise TypeError(\"states should be an array like object!\")\n",
    "      assert(len(states) == self.measures_dim)\n",
    "\n",
    "    validate_states(z)\n",
    "\n",
    "    # perform states update\n",
    "    # Kalman Gain from standard EKF theory\n",
    "    K = (P.dot(H.T)).dot(np.linalg.pinv(H.dot(P.dot(H.T)) + R))\n",
    "    # Update estimates\n",
    "    self._states = states + K.dot(z-H.dot(states))\n",
    "    # Update states error covariance\n",
    "    self.P = (I - K.dot(H)).dot(P)\n",
    "    return self._states\n",
    "\n",
    "class BBoxKalmanFilter(ExtendedKalmanFilter):\n",
    "  # specify states to observe\n",
    "  # bounding box : (x, y, w, h) with shape equal to (4,1)\n",
    "  observation_dim = 4\n",
    "  states_dim = observation_dim * 3\n",
    "    \n",
    "  logger = LoggerAdaptor(\"BBoxKalmanFilter\", _logger)\n",
    "\n",
    "  def __init__(self):\n",
    "\n",
    "    # we don't know the velocity and acceleration of frame changes\n",
    "    super().__init__(BBoxKalmanFilter.states_dim, BBoxKalmanFilter.observation_dim)\n",
    "\n",
    "    self.bbox = np.zeros((4,1))\n",
    "    self._states = np.zeros((12,1))\n",
    "\n",
    "    self._Init()\n",
    "    \n",
    "  def _Init(self):\n",
    "    # set up covariance matrices, borrowing parameters from \"Simple Online Realtime Tracking\" directly,\n",
    "    # since this is extremely hacky for different tasks\n",
    "    self.setup_P()\n",
    "    self.setup_Q()\n",
    "\n",
    "    # initalize H and R\n",
    "    self.H = self.populate_measures_constrain([0,1,2,3])\n",
    "\n",
    "    self.R = self.populate_priors_variance_constrain([0,1,2,3])\n",
    "\n",
    "  def setup_P(self, factors=None):\n",
    "    if factors is None:\n",
    "      transition_weight = 1. / 20\n",
    "      velocity_weight = 1. / 160\n",
    "      # freshly added\n",
    "      acceleration_weight = 1. / 256\n",
    "      factors_raw = np.array([transition_weight * 2, velocity_weight * 10, acceleration_weight * 10])\n",
    "      factors = np.repeat(factors_raw, 4)  # position x4, then velocity x4, then acceleration x4\n",
    "      \n",
    "    self.P = np.diag(np.square(factors))\n",
    "    \n",
    "  def setup_Q(self, factors=None):\n",
    "    if factors is None:\n",
    "      transition_weight = 1. / 20\n",
    "      velocity_weight = 1. / 160\n",
    "      # freshly added\n",
    "      acceleration_weight = 1. / 256\n",
    "      factors_raw = np.array([transition_weight, velocity_weight, acceleration_weight])\n",
    "      factors = np.repeat(factors_raw, 4)  # position x4, then velocity x4, then acceleration x4\n",
    "\n",
    "    # also, you can use `populate_states_variance_constrain` method I provided\n",
    "    # to populate the variance matrix \n",
    "    self.Q = np.diag(np.square(factors))\n",
    "\n",
    "  def predict(self, bbox):\n",
    "    self.bbox = bbox\n",
    "    self._states[:4,0] = bbox\n",
    "    ret = super().predict(self._states)\n",
    "    return ret\n",
    "\n",
    "  def update(self, observed_states):\n",
    "    return super().update(np.array(observed_states).reshape((BBoxKalmanFilter.observation_dim,1)))\n",
    "\n",
    "# From the partial differential equation of image motion (the optical flow constraint), we have\n",
    "#   u*f_x + v*f_y + f_t = 0\n",
    "# u and v are unknown variables while (f_x, f_y) are the computed numeric gradients of the image. This is a linear equation, hence\n",
    "# we can sample gradients to estimate the movement of the image. This gives a strong estimate of the movement\n",
    "# of detected objects at the pixel level.\n",
    "# Note, this differs from the primitive KalmanFilter, which applies physics assumptions to\n",
    "# objects projected onto images.\n",
    "#\n",
    "# The algorithm only applies to grey-level images.\n",
    "#\n",
    "# @todo : TODO impl\n",
    "class Grid(object):\n",
    "  pass\n",
    "\n",
    "class OpticalFlowBBoxPredictor(object):\n",
    "\n",
    "  logger = LoggerAdaptor(\"OpticalFlowBBoxPredictor\", _logger)\n",
    "\n",
    "  def __init__(self):\n",
    "    #\n",
    "    self._impl = None\n",
    "    \n",
    "    #\n",
    "    self._cur_img = None \n",
    "    \n",
    "    #\n",
    "    self._pre_img = None\n",
    "\n",
    "    #\n",
    "    self.IMAGE_SHAPE = (None, None)\n",
    "\n",
    "    # dense optical flow for the image\n",
    "    self._flow = None\n",
    "     \n",
    "  def Init(self):\n",
    "    self._impl = cv2.calcOpticalFlowFarneback\n",
    "    return self\n",
    "\n",
    "  def set_FromImg(self, img):\n",
    "    self._cur_img = img\n",
    "    self.IMAGE_SHAPE = img.shape[0:2]\n",
    "    return self\n",
    "\n",
    "  def get_flow(self):\n",
    "    return self._flow\n",
    "\n",
    "  def set_flow(self, flow):\n",
    "    self._flow = flow\n",
    "    return self\n",
    "\n",
    "  def predict(self, states, observed_img=None):\n",
    "    assert(self._cur_img is not None)\n",
    "    flow = self._flow\n",
    "    if flow is None:\n",
    "      assert (observed_img is not None)\n",
    "      params = {\n",
    "          'pyr_scale': 0.8,\n",
    "          'levels': 3,\n",
    "          'winsize': 11,\n",
    "          'iterations': 5,\n",
    "          'poly_n': 7,\n",
    "          'poly_sigma': 1.1,\n",
    "          'flags': 0\n",
    "      }\n",
    "      self._pre_img = self._cur_img\n",
    "      self._flow = self._impl(self._pre_img, observed_img, None, \n",
    "                              params['pyr_scale'],\n",
    "                              params['levels'],\n",
    "                              params['winsize'],\n",
    "                              params['iterations'],\n",
    "                              params['poly_n'],\n",
    "                              params['poly_sigma'],\n",
    "                              params['flags'])\n",
    "      self._cur_img = observed_img\n",
    "      flow = self._flow\n",
    "    \n",
    "    dx, dy = flow[:,:,0], flow[:,:,1]\n",
    "    x1, y1, w, h = states\n",
    "    x2, y2 = x1 + w, y1 + h\n",
    "\n",
    "    H, W = self.IMAGE_SHAPE\n",
    "    if x2 >= W:\n",
    "      x2 = W - 1\n",
    "    if y2 >= H:\n",
    "      y2 = H - 1\n",
    "\n",
    "    ret = np.array([x1 + dx[int(y1), int(x1)],\n",
    "                    y1 + dy[int(y1), int(x1)],\n",
    "                    w + dx[int(y2), int(x2)] - dx[int(y1), int(x1)],\n",
    "                    h + dy[int(y2), int(x2)] - dy[int(y1), int(x1)]])\n",
    "    return ret\n",
    "\n",
    "  def update(self, measures):\n",
    "    import sys\n",
    "    func_name = sys._getframe().f_code.co_name\n",
    "    raise NotImplementedError(\"the method %s is not implemented!\" % func_name)\n",
    " \n",
    "\n",
    "class OpticalFlowKPntPredictor(OpticalFlowBBoxPredictor):\n",
    "\n",
    "  logger = LoggerAdaptor(\"OpticalFlowKPntPredictor\", _logger)\n",
    "\n",
    "  def __init__(self):\n",
    "    super().__init__()  \n",
    "\n",
    "  def predict(self, kps, observed_img=None):\n",
    "    assert(self._cur_img is not None)\n",
    "    flow = self._flow\n",
    "    if flow is None:\n",
    "      assert (observed_img is not None)\n",
    "      params = {\n",
    "          'pyr_scale': 0.8,\n",
    "          'levels': 3,\n",
    "          'winsize': 11,\n",
    "          'iterations': 5,\n",
    "          'poly_n': 7,\n",
    "          'poly_sigma': 1.1,\n",
    "          'flags': 0\n",
    "      }\n",
    "      self._pre_img = self._cur_img\n",
    "      self._flow = self._impl(self._pre_img, observed_img, None, \n",
    "                              params['pyr_scale'],\n",
    "                              params['levels'],\n",
    "                              params['winsize'],\n",
    "                              params['iterations'],\n",
    "                              params['poly_n'],\n",
    "                              params['poly_sigma'],\n",
    "                              params['flags'])\n",
    "      self._cur_img = observed_img\n",
    "      flow = self._flow\n",
    "    \n",
    "    # see the helper script adapted from opencv4_source_code/samples/python/opt_flow.py\n",
    "    # about how to use `flow` object\n",
    "    dx, dy = flow[:,:,0], flow[:,:,1]\n",
    "\n",
    "    l = len(kps)\n",
    "    new_pixels = np.zeros((l,2))\n",
    "    # @todo : TODO impl\n",
    "    try:\n",
    "      for i, kp in enumerate(kps):\n",
    "        x1, y1 = kp\n",
    "        x2, y2 = x1 + dx[int(y1), int(x1)], y1 + dy[int(y1), int(x1)]\n",
    "        new_pixels[i,0] = x2\n",
    "        new_pixels[i,1] = y2\n",
    "    except Exception as e:\n",
    "      print(e)\n",
    "      print(\"kp\", kp)\n",
    "      raise(e)\n",
    "\n",
    "    return new_pixels\n",
    "\n",
    "# @todo : TODO impl\n",
    "class HybridOpticalFlowFilter(object):\n",
    "  \n",
    "  def __init__(self):\n",
    "    pass\n"
   ]
  },
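  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A tiny self-contained sanity check of the predict/update equations used by\n",
    "# ExtendedKalmanFilter above, on a 1-D constant-velocity state x = [pos, vel].\n",
    "# All values are illustrative only.\n",
    "import numpy as np\n",
    "\n",
    "dt = 1.0 / 24\n",
    "A = np.array([[1.0, dt], [0.0, 1.0]])  # state transition (physics constraint)\n",
    "H = np.array([[1.0, 0.0]])             # we only measure the position\n",
    "Q = np.eye(2) * 1e-4                   # process noise covariance\n",
    "R = np.array([[1e-2]])                 # measurement noise covariance\n",
    "\n",
    "x = np.array([[0.0], [24.0]])          # start at 0, moving 24 px/s\n",
    "P = np.eye(2)\n",
    "\n",
    "# predict step (mirrors ExtendedKalmanFilter.predict)\n",
    "x = A.dot(x)\n",
    "P = A.dot(P).dot(A.T) + Q\n",
    "\n",
    "# update step with a position measurement of 1.1 px (mirrors .update)\n",
    "z = np.array([[1.1]])\n",
    "K = P.dot(H.T).dot(np.linalg.pinv(H.dot(P).dot(H.T) + R))\n",
    "x = x + K.dot(z - H.dot(x))\n",
    "P = (np.eye(2) - K.dot(H)).dot(P)\n",
    "\n",
    "print(float(x[0, 0]))  # close to the 1.1 measurement, since R is small\n"
   ]
  },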
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "g1HF4TVb7wMq"
   },
   "source": [
    "##### CUDA enabled Opencv checking"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 104
    },
    "colab_type": "code",
    "id": "eXQmY1xC46Ap",
    "outputId": "dc65474e-4384-45b3-d918-d3d75707758d"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "4.2.0\n",
      "OpenCV(4.2.0) /io/opencv/modules/core/include/opencv2/core/private.cuda.hpp:109: error: (-216:No CUDA support) The library is compiled without CUDA support in function 'throw_no_cuda'\n",
      "\n",
      "You have to compile CUDA manually.\n"
     ]
    }
   ],
   "source": [
    "import cv2\n",
    "import numpy as np\n",
    "\n",
    "CV_CUDA = False\n",
    "\n",
    "print(cv2.__version__)\n",
    "\n",
    "VERSION = cv2.__version__.split('.')\n",
    "CV_MAJOR_VERSION = int(VERSION[0])\n",
    "\n",
    "# print(cv2.CV_AA)\n",
    "\n",
    "# probe CUDA support directly: upload a matrix to the GPU and see whether it works\n",
    "try:\n",
    "  mat_cpu = (np.random.random((128, 128, 3)) * 255).astype(np.uint8)\n",
    "  mat_gpu = cv2.cuda_GpuMat()\n",
    "  mat_gpu.upload(mat_cpu)\n",
    "  CV_CUDA = True\n",
    "  print(\"cuda is enabled for opencv\")\n",
    "except Exception as e:\n",
    "  print(e)\n",
    "  print(\"You have to compile CUDA manually.\")\n",
    "  # @todo : TODO add cuda support"
   ]
  },
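  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrates how OpticalFlowBBoxPredictor.predict moves a box with a dense\n",
    "# flow field: sample (dx, dy) at the box corners and shift accordingly.\n",
    "# A toy flow (a uniform 2-pixel shift to the right) keeps the cell self-contained.\n",
    "import numpy as np\n",
    "\n",
    "H, W = 100, 100\n",
    "flow = np.zeros((H, W, 2))\n",
    "flow[:, :, 0] = 2.0  # dx everywhere; dy stays zero\n",
    "\n",
    "dx, dy = flow[:, :, 0], flow[:, :, 1]\n",
    "x1, y1, w, h = 10, 20, 30, 40\n",
    "x2, y2 = min(x1 + w, W - 1), min(y1 + h, H - 1)\n",
    "\n",
    "new_box = np.array([\n",
    "    x1 + dx[y1, x1],\n",
    "    y1 + dy[y1, x1],\n",
    "    w + dx[y2, x2] - dx[y1, x1],\n",
    "    h + dy[y2, x2] - dy[y1, x1],\n",
    "])\n",
    "print(new_box)  # translated box; size unchanged under a uniform flow\n"
   ]
  },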
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "UQlLddF4fLFx"
   },
   "source": [
    "##### Basic Data Structure for Map Block\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 152,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "xjQNHnugeXvj"
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import threading\n",
    "\n",
    "class AtomicCounter(object):\n",
    "\n",
    "  def __init__(self):\n",
    "    self._counter = 0\n",
    "    self.lock = threading.Lock()\n",
    "\n",
    "  def incr(self):\n",
    "    with self.lock:\n",
    "      self._counter += 1\n",
    "      return self._counter\n",
    "\n",
    "  def __call__(self):\n",
    "    return self.incr()\n",
    "\n",
    "# helper classes\n",
    "class Point3D:\n",
    "\n",
    "  # Simulate an atomic 64-bit integer as an index. Note python provides no\n",
    "  # atomic primitives (unlike c++), so a lock guards the counter against\n",
    "  # interleaved thread execution.\n",
    "  Seq = AtomicCounter()\n",
    "\n",
    "  def __init__(self, x, y, z=0):\n",
    "    self.x = x\n",
    "    self.y = y\n",
    "    self.z = z # depth value if it can be viewed by a camera\n",
    "    \n",
    "    # might not be selected for triangulation\n",
    "    self.triangulated = False\n",
    "    \n",
    "    self.type = \"local\"\n",
    "    \n",
    "    ## Covisibility Graph Topology\n",
    "    \n",
    "    # source\n",
    "    self.world = None\n",
    "    \n",
    "    # If point is upgraded as keypoint, the attribute is used to trace back\n",
    "    self.parent = None\n",
    "    \n",
    "    # projected pixel\n",
    "    self.px = None\n",
    "    \n",
    "    # used if the point is a world point\n",
    "    self.frames = {}\n",
    "    \n",
    "    ## Identity\n",
    "    \n",
    "    # id\n",
    "    self.id = None\n",
    "    self.seq = self.Seq()\n",
    "    # generate uuid using uuid algorithm\n",
    "    self.uuid = \"\"\n",
    "    \n",
    "  def __getitem__(self, i):\n",
    "    # note i might be a tuple to be processed by a slicer, hence I use numpy for sanity checking\n",
    "    data = np.array([self.x, self.y, self.z])\n",
    "    return data[i]\n",
    "  \n",
    "  def data(self):\n",
    "    data = np.array([self.x, self.y, self.z])\n",
    "    return data\n",
    "  \n",
    "  def __setitem__(self, k, v):\n",
    "    # note i might be a tuple to be processed by a slicer, hence I use numpy for sanity checking\n",
    "    data = np.array([self.x, self.y, self.z])\n",
    "    data[k] = v\n",
    "    self.x = data[0]\n",
    "    self.y = data[1]\n",
    "    self.z = data[2]\n",
    "    return self\n",
    "\n",
    "  def set_FromWorld(self, world):\n",
    "    if not isinstance(world, self.__class__):\n",
    "      raise ValueError(\"Expect world to be type %s but find %s\\n\" % (str(self.__class__), str(type(world))))\n",
    "    self.world = world\n",
    "\n",
    "  def associate_with(self, frame, pos):\n",
    "    if self.frames.get(frame, None) is None:\n",
    "      self.frames[frame] = pos\n",
    "    else:\n",
    "      last_pos = self.frames[frame]\n",
    "      if last_pos is not pos:\n",
    "        raise Exception(\"The point %s has already been mapped to a different place\" % self)\n",
    "    return self\n",
    "\n",
    "  def __str__(self):\n",
    "    return \"<Point3D %d>\" % self.seq\n",
    "\n",
    "\n",
    "class Pixel2D:\n",
    "\n",
    "  Seq = AtomicCounter()\n",
    "\n",
    "  # implements STL iterators\n",
    "  class VectorIterator(object):\n",
    "    def __init__(self, vec):\n",
    "      self._vec = vec \n",
    "      self.counter = self.__counter__()\n",
    "\n",
    "    def __iter__(self):\n",
    "      return self\n",
    "\n",
    "    def __counter__(self):\n",
    "      l = len(self._vec)\n",
    "      # one dimension index\n",
    "      ind = 0\n",
    "      while True:\n",
    "        yield ind\n",
    "        ind += 1\n",
    "        if ind >= l:\n",
    "          break\n",
    "\n",
    "    def __next__(self):\n",
    "      try:\n",
    "        ind = next(self.counter)\n",
    "        return self._vec[ind]\n",
    "      except StopIteration:\n",
    "        raise StopIteration()\n",
    "\n",
    "    def __str__(self):\n",
    "      return \"Vector iterator\"\n",
    "\n",
    "  def __init__(self, r, c, val=0):\n",
    "    self.r = r\n",
    "    self.c = c\n",
    "    self.val = val\n",
    "\n",
    "    ## Covisibility Graph Topology\n",
    "\n",
    "    # reproj 3d points in camera space\n",
    "    self.sources = []\n",
    "    # a weak frame reference\n",
    "    self.frame = None\n",
    "    # a weak roi reference\n",
    "    self.roi = None\n",
    "   \n",
    "    ## Identity \n",
    "   \n",
    "    # id\n",
    "    self.id = None\n",
    "    self.seq = self.Seq()\n",
    "    # generate uuid using uuid algorithm\n",
    "    self.uuid = None\n",
    "    self.key = None\n",
    "\n",
    "  def set_FromFrame(self, frame):\n",
    "    self.frame = frame\n",
    "    H, W = frame.img.shape[:2]\n",
    "    idx = int(self.r*W + self.c)\n",
    "    # add strong connection\n",
    "    frame.pixels[idx] = self\n",
    "    return self\n",
    "\n",
    "  def set_FromROI(self, landmark):\n",
    "    self.roi = landmark\n",
    "    H, W = self.frame.img.shape[:2]\n",
    "    # add strong connection\n",
    "    idx = int(self.r*W + self.c)\n",
    "    landmark.pixels[idx] = self\n",
    "    return self\n",
    "  \n",
    "  def add_camera_point(self, p3d):\n",
    "    # if not present\n",
    "    self.sources.append(p3d)\n",
    "    return self\n",
    "  \n",
    "  def __getitem__(self, i):\n",
    "    data = np.array([self.c, self.r])\n",
    "    return data[i]\n",
    "\n",
    "  @property\n",
    "  def x(self):\n",
    "    return self.c\n",
    "  \n",
    "  @property\n",
    "  def y(self):\n",
    "    return self.r\n",
    "  \n",
    "  def __setitem__(self, k, v):\n",
    "    data = np.array([self.c, self.r])\n",
    "    data[k] = v\n",
    "    self.r = data[1]\n",
    "    self.c = data[0]\n",
    "    return self\n",
    "\n",
    "  def __iter__(self):\n",
    "    data = np.array([self.c, self.r])\n",
    "    return self.VectorIterator(data)\n"
   ]
  },
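  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Quick check that the lock-guarded AtomicCounter above loses no updates when\n",
    "# many threads increment it concurrently. (The counter is re-declared here so\n",
    "# the cell is self-contained.)\n",
    "import threading\n",
    "\n",
    "class AtomicCounter(object):\n",
    "  def __init__(self):\n",
    "    self._counter = 0\n",
    "    self.lock = threading.Lock()\n",
    "\n",
    "  def incr(self):\n",
    "    with self.lock:\n",
    "      self._counter += 1\n",
    "      return self._counter\n",
    "\n",
    "counter = AtomicCounter()\n",
    "threads = [threading.Thread(target=lambda: [counter.incr() for _ in range(1000)])\n",
    "           for _ in range(8)]\n",
    "for t in threads: t.start()\n",
    "for t in threads: t.join()\n",
    "print(counter._counter)  # 8 threads x 1000 increments = 8000\n"
   ]
  },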
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "Cg5MdSp8f4T-"
   },
   "source": [
    "##### Utilities of images as matrix\n",
    "\n",
    "Here are some useful utilities I developed for common usage. See details from https://github.com/yiakwy/SpatialPerceptron/blob/master/notebooks/ModelArts-Improvement_One-Stage-Detectron.ipynb"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "iggbX5FqgJSb"
   },
   "outputs": [],
   "source": [
    "'''\n",
    "Created on 15 Jul, 2019\n",
    "\n",
    "@author: wangyi\n",
    "'''\n",
    "\n",
    "import os\n",
    "import sys\n",
    "import random\n",
    "import math\n",
    "import colorsys\n",
    "import numpy as np\n",
    "import cv2\n",
    "import matplotlib\n",
    "import matplotlib.pyplot as plt\n",
    "import matplotlib.gridspec as gridspec\n",
    "\n",
    "# numpy array utils. \n",
    "# warning: the utilities should accept tensors as input.\n",
    "\n",
    "def read_img(file_path):\n",
    "    if not os.path.exists(file_path):\n",
    "        raise ValueError(\"Image path [%s] does not exist.\" % (file_path))\n",
    "    im = cv2.imread(file_path)\n",
    "    im = im.astype(np.float32, copy=False)\n",
    "    im = cv2.resize(im, (config.WIDTH, config.HEIGHT), interpolation=cv2.INTER_CUBIC)  # cv2.resize expects (width, height)\n",
    "    return im\n",
    "\n",
    "def load_images(files):\n",
    "    count = len(files)\n",
    "    X = np.ndarray((count, config.HEIGHT, config.WIDTH, config.CHANNEL), dtype=np.uint8)\n",
    "    for i, image_file in enumerate(files):\n",
    "        image = read_img(image_file)\n",
    "        X[i] = image\n",
    "    return X\n",
    "\n",
    "def random_colors(N, bright=True):\n",
    "    \"\"\"\n",
    "    Generate random colors.\n",
    "    To get visually distinct colors, generate them in HSV space then\n",
    "    convert to RGB.\n",
    "    \"\"\"\n",
    "    brightness = 1.0 if bright else 0.7\n",
    "    hsv = [(i / N, 1, brightness) for i in range(N)]\n",
    "    colors = list(map(lambda c: colorsys.hsv_to_rgb(*c), hsv))\n",
    "    random.shuffle(colors)\n",
    "    return colors\n",
    "\n",
    "def apply_mask(image, mask, color, alpha=0.5):\n",
    "    \"\"\"Apply the given mask to the image.\n",
    "    \"\"\"\n",
    "    for c in range(3):\n",
    "        image[:, :, c] = np.where(mask == 1,\n",
    "                                  image[:, :, c] * (1 - alpha) + alpha * color[c] * 255,\n",
    "                                  image[:, :, c])\n",
    "    return image\n",
    "\n",
    "def IoU_numeric(left_box, right_box, left_area, right_area):\n",
    "    # Compute intersection areas\n",
    "    x1 = max(left_box[0], right_box[0])\n",
    "    y1 = max(left_box[1], right_box[1])\n",
    "    x2 = min(left_box[2], right_box[2])\n",
    "    y2 = min(left_box[3], right_box[3])\n",
    "    \n",
    "    h = max(0, y2 - y1)\n",
    "    w = max(0, x2 - x1)\n",
    "    \n",
    "    overlap = float(w * h)\n",
    "    union = left_area + right_area - overlap\n",
    "    iou = overlap / union\n",
    "    return iou\n",
    "\n",
    "# Simple implementation of my UIoUs, i.e, universal IoUs\n",
    "def UIoU_numeric(left_box, right_box, left_area, right_area):\n",
    "    # Compute intersection areas\n",
    "    x1 = max(left_box[0], right_box[0])\n",
    "    y1 = max(left_box[1], right_box[1])\n",
    "    x2 = min(left_box[2], right_box[2])\n",
    "    y2 = min(left_box[3], right_box[3])\n",
    "    \n",
    "    h = max(0, y2 - y1)\n",
    "    w = max(0, x2 - x1)\n",
    "    \n",
    "    overlap = float(w * h)\n",
    "    union = left_area + right_area - overlap\n",
    "    iou = overlap / union\n",
    "\n",
    "    x1 = min(left_box[0], right_box[0])\n",
    "    y1 = min(left_box[1], right_box[1])\n",
    "    x2 = max(left_box[2], right_box[2])\n",
    "    y2 = max(left_box[2], right_box[2])\n",
    "\n",
    "    h = max(0, y2 - y1)\n",
    "    w = max(0, x2 - x1)\n",
    "\n",
    "    chul_area = float(w * h)\n",
    "    x = 0.0 if abs(iou) < 1e-09 else 1.0\n",
    "    uious = (1-x) * iou - x * ((chul_area - union + overlap) / chul_area)\n",
    "    return uious\n",
    "\n",
    "\n",
    "# Non-Max Suppression: simliar to tf.image.non_max_suppression for non-symbolic computation\n",
    "# see https://github.com/matterport/Mask_RCNN/blob/master/mrcnn/utils.py\n",
    "def nms(boxes, scores=None, threshold=0.3):\n",
    "    \"\"\"\n",
    "    @param boxes: np.array with standard tensorflow box order, [x1, y1, x2, y2]\n",
    "    @param scores: np.array\n",
    "    @param threshold: float32\n",
    "    \"\"\"\n",
    "    if len(boxes) == 0:\n",
    "        return 0\n",
    "    \n",
    "    # Compute box area: (x2 - x1) * (y2 - y1)\n",
    "    area = (boxes[:,2] - boxes[:,0]) * (boxes[:,3] - boxes[:,1])\n",
    "    \n",
    "    if scores is not None:\n",
    "        # Sort boxes indices by box scores\n",
    "        idx = scores.argsort()[::-1]\n",
    "    else:\n",
    "        # see https://www.pyimagesearch.com/2014/11/17/non-maximum-suppression-object-detection-python/\n",
    "        # Sort boxes indices by bottom-right y-coordinates of bounding box\n",
    "        idx = np.argsort(boxes[:,3])\n",
    "        \n",
    "    picked = []\n",
    "    \n",
    "    while len(idx) > 0:\n",
    "        # Pick one to the list\n",
    "        i = idx[0]\n",
    "        picked.append(i)\n",
    "        # Compute IoU of the picked box with the rest\n",
    "        ious = np.array([IoU_numeric(boxes[i], boxes[j], area[i], area[j]) for j in idx[1:]])\n",
    "        remove_idx = np.where(ious > threshold)[0] + 1\n",
    "        # Remove indices of overlapped boxes\n",
    "        idx = np.delete(idx, remove_idx)\n",
    "        idx = np.delete(idx, 0)\n",
    "    return np.array(picked, dtype=np.int32)\n",
    "\n",
    "# We will use KL distance to compare two pdf\n",
    "def logloss(p, q):\n",
    "    return np.sum(cross_entropy(p, q), axis=0) \n",
    "    \n",
    "def cross_entropy(p, q):\n",
    "    # print(\"q(sigmoid): %s\" % np.array2string(q, formatter={'float_kind':'{0:6.3f}'.format}))\n",
    "    # print(\"p(y): %s\" % np.array2string(p, formatter={'float_kind':'{0:6.3f}'.format}))\n",
    "    # print(\"entropy: %s\" % np.array2string(-p * np.log(q) - (1-p) * np.log(1-q), formatter={'float_kind':'{0:6.3f}'.format}))\n",
    "    EPILON = 1e-6\n",
    "    p = np.asarray(p) \n",
    "    q = np.asarray(q)\n",
    "\n",
    "    # make it probability distribution\n",
    "    p = (p - np.min(p)) / (np.max(p) - np.min(p)) * (1 - 1e-6)\n",
    "\n",
    "    q = (q - np.min(q)) / (np.max(q) - np.min(q)) * (1 - 1e-6)\n",
    "\n",
    "    return -p * np.log(q+EPILON) - (1-p) * np.log(1-q)\n",
    "\n",
    "def cosine_dist(p, q):\n",
    "    p = (np.asarray(p)-np.mean(p))\n",
    "    p =  p / np.linalg.norm(p)\n",
    "    q = (np.asarray(q)-np.mean(q))\n",
    "    q =  q / np.linalg.norm(q)\n",
    "    return p.dot(q)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "QBUz7jCk8xo0"
   },
   "outputs": [],
   "source": [
    "# Additional helper classes to match implementation in c++ side\n",
    "import ctypes\n",
    "\n",
    "class Vector3D(ctypes.Array):\n",
    "  _type_ = ctypes.c_double\n",
    "  _length_ = 3\n",
    "\n",
    "class SE3(ctypes.Union):\n",
    "  _fields_ = (\"x\", ctypes.c_double), (\"y\", ctypes.c_double), (\"z\", ctypes.c_double), (\"eye\", Vector3D)\n",
    "\n",
    "class Pose(ctypes.Union):\n",
    "  _fields_ = (\"azimuth\", ctypes.c_double), (\"pitch\", ctypes.c_double), (\"roll\", ctypes.c_double), (\"pose\", Vector3D)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "cuhx-hGWl72O"
   },
   "source": [
    "##### Tracker for SVSO"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 150,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "9Qprk4T70L-H"
   },
   "outputs": [],
   "source": [
    "# OpticalFlow mixed with Kalman Filter predictor and Hungarian algorithm (KM algorithm) matcher based info tracker\n",
    "# Author: LEI WANG (yiak.wy@gmail.com)\n",
    "# Date: Feb 28th, 2020\n",
    "#\n",
    "# In this section, I am going to implement a tracker used for the relocalization problems i.e. when a robot changes its situation \n",
    "# significantly(or our system instrinsic parameters change significantly), we want to generate a map by registion of observations at different angles. With refernces to \n",
    "# the implementation of Re-Id Person project: Simple Online Realtime Tracking, --instead of using\n",
    "# traditional SIFT and ORB features widely used by vison only SLAM system (ORB SLam for example), by learning frequencies decomposition of image domain quantities,\n",
    "# (i.e. CNN backbone layers) I provided a robust version of semantic \n",
    "# multi obstacles tracker for vision supported odemetry (VSO, such as VIO, GPS-IMU-Visual Odemetry). See our pending work (Semantic Visual Supported Odemetry (SVSO)) for further \n",
    "# references. \n",
    "\n",
    "# The VSO programe implements part of logics from an embedded, high \n",
    "# definition localization software. The high definition localization software is an important\n",
    "# component of HDMap, an entry to end devices.\n",
    "\n",
    "# This version of tracker does not fully employ the concurency and parallelism of the machine\n",
    "# and work as an example to demonstrate how a prototype in which we implemented asynchornous and\n",
    "# multi threads tracker based on Pub/Sub network model or producer/consuter pair for inter threads communication(ITC) [1] \n",
    "# (contract to ITC, see gpc(c++ server side, c++/python/golang client sides) pubsub for IPC)\n",
    "\n",
    "import threading\n",
    "try:\n",
    "  import queue\n",
    "except:\n",
    "  import Queue as queue\n",
    "import cv2\n",
    "# we will use linear_assignment to quickly write experiments,\n",
    "# later a customerized KM algorithms with various optimization in c++ is employed\n",
    "# see https://github.com/berhane/LAP-solvers\n",
    "\n",
    "# This is used for \"Complete Matching\" and we can remove unreasonable \"workers\" first and then apply it \n",
    "import scipy.optimize as Optimizer\n",
    "\n",
    "# This is used for \"Maximum Matching\". There is a desired algorithm implementation for our references\n",
    "import scipy.sparse.csgraph as Graph \n",
    "from skimage.measure import find_contours\n",
    "\n",
    "import keras\n",
    "import keras.backend as K\n",
    "import keras.layers as KL\n",
    "import keras.engine as KE\n",
    "import keras.models as KM\n",
    "\n",
    "K.set_image_data_format('channels_last')\n",
    "\n",
    "import logging\n",
    "_logger = logging.getLogger(\"tracker\")\n",
    "import numpy as np\n",
    "from enum import Enum\n",
    "import uuid\n",
    "\n",
    "DEBUG = True\n",
    "\n",
    "def display(im, ax=None):\n",
    "  figsize = (16, 16)\n",
    "  if ax is None:\n",
    "    _, ax = plt.subplots(1, figsize=figsize)\n",
    "  height, width = im.shape[:2]\n",
    "  size=(width, height)\n",
    "  ax.set_ylim(height + 10, -10)\n",
    "  ax.set_xlim(-10, width + 10)\n",
    "  ax.axis('off')\n",
    "  ax.imshow(im.astype(np.uint8))\n",
    "\n",
    "class SemanticFeatureExtractor:\n",
    "  \"\"\"\n",
    "  Author: LEI WANG (yiak.wy@gmail.com)\n",
    "  Date: March 1, 2020\n",
    "\n",
    "  \"\"\"\n",
    "\n",
    "  logger = LoggerAdaptor(\"SemanticFeatureExtractor\", _logger)\n",
    "\n",
    "  # shared encoder among all SemanticFeatureExtractor instances\n",
    "  oneHotEncoder = None\n",
    "\n",
    "  def __init__(self, model):\n",
    "    # pretrained model with trained parameters\n",
    "    self._base_model = model\n",
    "    self._model = None\n",
    "    # input tensor to the model\n",
    "    self.inp_tensor = None\n",
    "    # output features tensor from model model\n",
    "    self.features_tensor = None\n",
    "    # pool_size\n",
    "    self.POOL_SIZE = model.config.POOL_SIZE\n",
    "    # pool channel\n",
    "    self.POOL_CHANNEL = None\n",
    "\n",
    "    # weak ref to frame attached to\n",
    "    self._frame = None\n",
    "\n",
    "    # dataset labels\n",
    "    self.LABELS_SET = class_names\n",
    "\n",
    "    self.USE_BBOX_AS_KEY_POINTS = False\n",
    "\n",
    "    self.USE_ROI_LEVEL_ORB = True\n",
    "\n",
    "    # choosed subset of detection labels\n",
    "    self.TRACK_LABELS_SET = ['BG', 'person', \n",
    "                             # 'bicycle', 'car', 'motorcycle', 'airplane','bus', 'train', 'truck', 'boat', 'traffic light',\n",
    "                             'cat', 'dog',\n",
    "                             'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', \n",
    "                             'sports ball',\n",
    "                             # 'kite',\n",
    "                             'bottle', 'wine glass', 'cup',\n",
    "                             'fork', 'knife', 'spoon', 'bowl', \n",
    "                             'banana', 'apple',\n",
    "                             'sandwich', 'hot dog', 'pizza', 'donut', 'cake',\n",
    "                             'orange', 'broccoli', 'carrot', \n",
    "                             'chair', 'couch', \n",
    "                             'potted plant', \n",
    "                             'bed', 'dining table', 'toilet', 'tv', 'laptop', \n",
    "                             'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', 'oven', 'toaster',\n",
    "                             'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors',\n",
    "                             'teddy bear', 'hair drier', 'toothbrush']\n",
    "\n",
    "  def attach_to(self, frame):\n",
    "    self._frame = frame\n",
    "\n",
    "  def get_base_model(self):\n",
    "\n",
    "    def init_base_model(base_model):\n",
    "        for layer in base_model.layers:\n",
    "            layer.trainable = False\n",
    "        return base_model\n",
    "\n",
    "    self._base_model.keras_model = init_base_model(self._base_model.keras_model)\n",
    "    return self._base_model\n",
    "\n",
    "  def get_model(self):\n",
    "    if self._model is None:\n",
    "      # restructure the model\n",
    "      base_model = self.get_base_model()\n",
    "      \n",
    "      # see Keras implementation of MaskRCNN for inference mode\n",
    "      \n",
    "      # MaskRCNN accepts input image, generated \n",
    "      self.inp_tensor = base_model.keras_model.inputs\n",
    "      \n",
    "      # @todo : TODO important!\n",
    "\n",
    "      # RPN compute (dx, dy, log(dh), log(dw)) and ProposalLayer generates filtered bbox\n",
    "      # of ROIs with topk and Non-Maximal-Suppression algorithm. Then ROIAlign layer aligns ROI with \n",
    "      # Pyramid Network Features (generarted in the last step).\n",
    "      #\n",
    "      #  x = PyramidROIAlign([pool_size, pool_size], name=\"{Task_Name}\")([rois, image_meta] + fpn_feature_maps)\n",
    "      #\n",
    "      # Features nd vector shape : (batch, size of diferent ratios, vectorized image cropped by bbox(using interpolation algorihtms), )\n",
    "      # Note MaskRCNN only implements resized 244*244 Roi(w x h, on the input image)-FPN mapping (sampling pixel to the level of feature map)\n",
    "      # ROI level for Pyramid Network head is computed using\n",
    "      #\n",
    "      #   RoI_level = Tensor.round(4+log2(sqrt(w*h)/(244/sqrt(IMAGE_WIDTH x IMAGE_HEIGHT))))\n",
    "      #\n",
    "      # Note most of 'famous' implementation just \"crop and resize by binlinar interpolation\". \n",
    "      # You don't know how a \"statement\" is implemented until you see it (feel sad)\n",
    "      #  \n",
    "      from mrcnn.model import PyramidROIAlign, norm_boxes_graph\n",
    "\n",
    "      config = self._base_model.config\n",
    "      inputs = self.inp_tensor\n",
    "\n",
    "      input_image = inputs[0]\n",
    "      image_meta = inputs[1]\n",
    "      rois_inp = KL.Input(shape=[None, 4], name=\"rois_inp\")\n",
    "      rois_inp1 = KL.Lambda(lambda x: norm_boxes_graph(x, K.shape(input_image)[1:3]))(rois_inp)\n",
    "\n",
    "      P2 = self._base_model.keras_model.get_layer('fpn_p2').output\n",
    "      P3 = self._base_model.keras_model.get_layer('fpn_p3').output\n",
    "      P4 = self._base_model.keras_model.get_layer('fpn_p4').output\n",
    "      P5 = self._base_model.keras_model.get_layer('fpn_p5').output\n",
    "      feature_maps = [P2, P3, P4, P5]\n",
    "      self._feature_maps_tensor = feature_maps\n",
    "\n",
    "      x = PyramidROIAlign((config.POOL_SIZE, config.POOL_SIZE), name=\"features_extractor\")([rois_inp1, image_meta] + feature_maps)\n",
    "      self.features_tensor = x\n",
    "      \n",
    "      self.logger.info(\"Constructing deep feature extration model ...\")\n",
    "\n",
    "      class ModelWrapper:\n",
    "\n",
    "        def __init__(self, model, base_model):\n",
    "          self._model = model\n",
    "          self._base_model = base_model\n",
    "        \n",
    "        def detect(self, img, bboxes):\n",
    "          # mold images\n",
    "          molded_images, image_metas, windows = self._base_model.mold_inputs([img])\n",
    "          \n",
    "          # get anchors\n",
    "          config  = self._base_model.config\n",
    "          anchors = self._base_model.get_anchors(molded_images[0].shape)\n",
    "          anchors = np.broadcast_to(anchors, (config.BATCH_SIZE,) + anchors.shape)\n",
    "          \n",
    "          # reshape bbox\n",
    "          bboxes  = np.broadcast_to( bboxes, (config.BATCH_SIZE,) +  bboxes.shape)\n",
    "\n",
    "          features = self._model.predict([bboxes, molded_images, image_metas, anchors], verbose=0)\n",
    "          return features\n",
    "        \n",
    "      self._model = ModelWrapper(KM.Model(inputs=[rois_inp,] + inputs, outputs=self.features_tensor, name=\"SemanticFeatureExtractor\"), self._base_model)\n",
    "      \n",
    "      self.logger.info(\"Construction of deep feature extraction model complete.\")\n",
    "\n",
    "    return self._model\n",
    "\n",
    "  # @todo : TODO\n",
    "  def encodeDeepFeatures(self, boxes, masks, roi_features, class_ids, scores):\n",
    "    keypoints = []\n",
    "    features = []\n",
    "\n",
    "    n_instances = boxes.shape[0]\n",
    "    for i in range(n_instances):\n",
    "      keypoints_per_box = []\n",
    "      feature = {}\n",
    "\n",
    "      if not np.any(boxes[i]):\n",
    "        # Skip this instance. Has no bbox. Likely lost in image cropping.\n",
    "        continue\n",
    "\n",
    "      y1, x1, y2, x2 = boxes[i]\n",
    "\n",
    "      class_id = class_ids[i]\n",
    "      score = scores[i]\n",
    "      label = self.LABELS_SET[class_id]\n",
    "\n",
    "      # our landmark idx starts from 1\n",
    "      print(\"#%d type(%s), score:%f, bbox:\" % (i+1, label, score), (x1, y1, x2, y2))\n",
    "\n",
    "      if label not in self.TRACK_LABELS_SET:\n",
    "        # self.logger.info(\"Found label unexpected label %s for track, ignore ...\" % label)\n",
    "        print(\"Found label unexpected label %s for track, ignore ...\" % label)\n",
    "        continue\n",
    "\n",
    "      Thr = 0.7\n",
    "      if score < Thr:\n",
    "        print(\"detected %s score is less than %f, ignore ...\"  % Thr)\n",
    "        continue\n",
    "\n",
    "      if self.USE_BBOX_AS_KEY_POINTS:\n",
    "        keypoints_per_box.append(Pixel2D(y1, x1).set_FromFrame(self._frame))\n",
    "        keypoints_per_box.append(Pixel2D(y1, x1).set_FromFrame(self._frame))\n",
    "\n",
    "      if self.USE_ROI_LEVEL_ORB:\n",
    "        kp, des = self._frame.ExtractORB(bbox=boxes[i])\n",
    "        for p in kp:\n",
    "          keypoints_per_box.append(Pixel2D(p.pt[1],p.pt[0]).set_FromFrame(self._frame))\n",
    "\n",
    "        # keep a referene to key points associated with the descriptor\n",
    "        feature['roi_orb'] = (des, kp)\n",
    "        \n",
    "        self.logger.info(\"extracting orb key points and features for detection\")\n",
    "\n",
    "      feature['box'] = boxes[i]\n",
    "      feature['mask'] = masks[:,:,i]\n",
    "      \n",
    "      # for vocabulary database of large size, please use one-hot encoding + embedding instead.\n",
    "      # encode it to category value vector\n",
    "      if SemanticFeatureExtractor.oneHotEncoder is None:\n",
    "        from sklearn.preprocessing import LabelEncoder\n",
    "        from sklearn.preprocessing import OneHotEncoder\n",
    "        \n",
    "        label_encoder = LabelEncoder()\n",
    "        indice = label_encoder.fit_transform(self.TRACK_LABELS_SET)\n",
    "        categorical_features_encoder = OneHotEncoder(handle_unknown='ignore')\n",
    "\n",
    "        inp = list(zip(self.TRACK_LABELS_SET, indice))\n",
    "        print(\"categorical_features shp:\", np.array(inp).shape)\n",
    "        import pandas as pd\n",
    "        df = pd.DataFrame({\n",
    "            'LABEL': self.TRACK_LABELS_SET,\n",
    "            'int': indice\n",
    "        })\n",
    "        print(df.head(10))\n",
    "        categorical_features_encoder.fit(inp)\n",
    "        encoded_features = categorical_features_encoder.transform(inp).toarray()\n",
    "        \n",
    "        def encoder(label):\n",
    "          new_class_id = self.TRACK_LABELS_SET.index(label)\n",
    "          return encoded_features[new_class_id, :]\n",
    "\n",
    "        SemanticFeatureExtractor.oneHotEncoder = encoder\n",
    "\n",
    "      feature['roi_feature'] = roi_features[i]\n",
    "      feature['class_id'] = SemanticFeatureExtractor.oneHotEncoder(label)\n",
    "      feature['score'] = score\n",
    "      # feature['keypoints_per_box'] = keypoints_per_box\n",
    "\n",
    "      # used for constructin of observation\n",
    "      feature['label'] = label\n",
    "\n",
    "      # add to features list\n",
    "      features.append(feature)\n",
    "      keypoints.append(keypoints_per_box)\n",
    "\n",
    "    return (keypoints, features)\n",
    "\n",
    "  def detect(self, img):\n",
    "    base_model = self.get_base_model()\n",
    "\n",
    "    ret = base_model.detect([img], verbose=1)[0]\n",
    "    return ret\n",
    "\n",
    "  # @todo : TODO\n",
    "  def compute(self, img, detection):\n",
    "    # get keras model\n",
    "    model = self.get_model()\n",
    "    \n",
    "    # BATCH_SIZE is set to 1\n",
    "    roi_features = model.detect(img, detection['rois'])[0]\n",
    "    print(\"roi_features shape: \", roi_features.shape)\n",
    "    \n",
    "    rois, masks = detection['rois'], detection['masks']\n",
    "    assert(len(rois) == roi_features.shape[0])\n",
    "    assert(self.POOL_SIZE == roi_features.shape[1] == roi_features.shape[2])\n",
    "\n",
    "    self.POOL_CHANNEL = roi_features.shape[3]\n",
    "    shp = roi_features.shape\n",
    "    roi_features = np.reshape(roi_features, (shp[0], shp[1]*shp[2]*shp[3]))\n",
    "\n",
    "    print(\"=== Detection Results ===\")\n",
    "\n",
    "    keypoints, features = self.encodeDeepFeatures(rois, masks, roi_features, detection['class_ids'], detection['scores'])\n",
    "    return (keypoints, features)\n",
    "\n",
    "class Frame:\n",
    "\n",
    "  Seq = AtomicCounter()\n",
    "  logger = LoggerAdaptor(\"Frame\", _logger)\n",
    "\n",
    "  def __init__(self):\n",
    "    self.id = None\n",
    "    self.seq = self.Seq()\n",
    "\n",
    "    ## Content\n",
    "\n",
    "    # might be a image path or url read by a asynchronous reader \n",
    "    self._img_src = None\n",
    "    # color img\n",
    "    self.img = None\n",
    "\n",
    "    # computed grey img or collected grey img directly from a camera\n",
    "    self._img_grey = None\n",
    "\n",
    "    # a camera instance to performance MVP computation or other image related computation\n",
    "    self.camera = None\n",
    "\n",
    "    # group of map tiles, where we storage 3d points and frames\n",
    "    # Note: this should be a weak reference to the original data representation\n",
    "    self.runtimeBlock = None\n",
    "\n",
    "    ## Rigid object movements\n",
    "    # later we will move these attributes to Object3D as common practice\n",
    "    # in game development area, i.e, class Frame -> class Frame: public Object3D\n",
    "    \n",
    "    # rotation and translation relative to origins\n",
    "    self.R0 = np.eye(3)\n",
    "    self.t0 = np.zeros((3,1))\n",
    "    \n",
    "    # rotation and translation relative to the last frame, updated in each frame\n",
    "    self.R1 = np.eye(3)\n",
    "    self.t1 = np.zeros((3,1))\n",
    "    \n",
    "    ## Covisibility Graph Topology\n",
    "\n",
    "    # \n",
    "    self.pixels = {}\n",
    "\n",
    "    ## Features Expression Layer\n",
    "\n",
    "    # extracted features\n",
    "    self.kps = None\n",
    "    self.kps_feats = None\n",
    "\n",
    "    # extracted roi features\n",
    "    self.roi_kps = None\n",
    "    self.roi_feats = None\n",
    "\n",
    "    # meta data\n",
    "    self._detections = {}\n",
    "\n",
    "    self.extractors = {}\n",
    "\n",
    "    self.is_First = False\n",
    "\n",
    "    ### Main Logics Executor ###\n",
    "\n",
    "    #\n",
    "    self.predictors = {\n",
    "        'OpticalFlow': OpticalFlowBBoxPredictor(),\n",
    "        'OpticalFlowKPnt': OpticalFlowKPntPredictor()\n",
    "    }\n",
    "\n",
    "    #\n",
    "    self.matchers = {}\n",
    "\n",
    "    #\n",
    "    self.USE_IMAGE_LEVEL_ORB = False # =>\n",
    "\n",
    "    #\n",
    "    self.SHOW_ROI = False\n",
    "\n",
    "    #\n",
    "    self.sample_size = 7\n",
    "\n",
    "  def set_camera(self, camera):\n",
    "    self.camera = camera\n",
    "    return self\n",
    "\n",
    "  # @todo : TODO\n",
    "  def set_FromImg(self, img):\n",
    "    self.img = img\n",
    "    \n",
    "    self.predictors['OpticalFlow'].set_FromImg(self.img_grey()).Init()\n",
    "    self.predictors['OpticalFlowKPnt'].set_FromImg(self.img_grey()).Init()\n",
    "    return self\n",
    "\n",
    "  # getter of self._grey_img\n",
    "  def img_grey(self):\n",
    "    img_grey = self._img_grey \n",
    "    if img_grey is None:\n",
    "      img_grey = cv2.cvtColor(self.img, cv2.COLOR_BGR2GRAY)\n",
    "      # suppose the image is undistorted\n",
    "      self._img_grey = img_grey\n",
    "    return img_grey\n",
    "\n",
    "  # @todo : TODO\n",
    "  def extract(self):\n",
    "    orb_kp, orb_desc = (None, None) \n",
    "    if self.USE_IMAGE_LEVEL_ORB:\n",
    "      # @todo : REFACTOR the tasks should be running in pararllel\n",
    "      # self.logger.info(\"%s, Extracting Image ORB features ...\" % self)\n",
    "      print(\"%s, Extracting Image ORB features ...\" % self)\n",
    "      orb_kp, orb_desc = self.ExtractORB()\n",
    "\n",
    "      # self.logger.info(\"Type of image orb key points : %s, size %d\" % (type(orb_kp), len(orb_kp)))\n",
    "      print(\"Type of image orb key points : %s, size %d\" % (type(orb_kp), len(orb_kp)))\n",
    "      # self.logger.info(\"Type of image orb descriptors : %s, shape %s\" % (type(orb_desc), orb_desc.shape))\n",
    "      print(\"Type of image orb descriptors : %s, shape %s\" % (type(orb_desc), orb_desc.shape))\n",
    "\n",
    "    # extract deep features or ROI\n",
    "    # self.logger.info(\"%s, Extracting ROI features ...\" % self)\n",
    "    print(\"%s, Extracting ROI features ...\" % self)\n",
    "    roi_kp, roi_features = self.ExtractROI()\n",
    "    \n",
    "    kps = []\n",
    "    kps_feats = []\n",
    "\n",
    "    if self.USE_IMAGE_LEVEL_ORB:\n",
    "      kps.extend(orb_kp)\n",
    "      kps_feats.extend(orb_desc)\n",
    "    # catenate orb keypoints and features, see opencv docs for definition \n",
    "    # of returned key points and descriptors\n",
    "    if self.extractors['sfe'].USE_ROI_LEVEL_ORB:\n",
    "      for i, roi_feat_per_box in enumerate(roi_features):\n",
    "        desc_per_box, kps_per_box = roi_feat_per_box['roi_orb']\n",
    "        if len(kps_per_box) is 0:\n",
    "          print('bbox: ', roi_feat_per_box['box'])\n",
    "          # raise ValueError(\"'kps_per_box' should not be an empty list!\")\n",
    "        kps.extend(kps_per_box)\n",
    "        kps_feats.extend(desc_per_box)\n",
    "\n",
    "    self.kps = kps\n",
    "    self.kps_feats = kps_feats\n",
    "    self.roi_kp = roi_kp\n",
    "    self.roi_features = roi_features\n",
    "\n",
    "    return (kps, kps_feats, roi_kp, roi_features)\n",
    "\n",
    "  def ExtractORB(self, bbox=None, mask=None):\n",
    "    # using opencv ORB extractor\n",
    "    orb = self.extractors.get('orb', None)\n",
    "    if orb is None:\n",
    "      orb = cv2.ORB_create(edgeThreshold=15, \n",
    "                          patchSize=31, \n",
    "                          nlevels=8, \n",
    "                          fastThreshold=20, \n",
    "                          scaleFactor=1.2, \n",
    "                          WTA_K=2,\n",
    "                          scoreType=cv2.ORB_HARRIS_SCORE, \n",
    "                          firstLevel=0, \n",
    "                          nfeatures=500)\n",
    "      self.extractors['orb'] = orb\n",
    "\n",
    "    img_grey = self.img_grey()\n",
    "    shp = img_grey.shape\n",
    "    if bbox is not None:\n",
    "      y1, x1, y2, x2 = bbox\n",
    "      # crop image\n",
    "    \n",
    "      new_img_grey = np.zeros(shp)\n",
    "      new_img_grey[y1:y2, x1:x2] = img_grey[y1:y2, x1:x2]\n",
    "      if self.SHOW_ROI:\n",
    "        display(new_img_grey)\n",
    "      img_grey = img_grey[y1:y2, x1:x2]\n",
    "      img_grey = cv2.resize(img_grey,(shp[0], shp[1]))\n",
    "      # img_grey = cv2.cvtColor(new_img_grey.astype('uint8'), cv2.COLOR_GRAY2BGR)\n",
    "  \n",
    "    # compute key points vector\n",
    "    kp = orb.detect(img_grey, None)\n",
    "\n",
    "    # compute the descriptors with ORB\n",
    "    kp, des = orb.compute(img_grey, kp)\n",
    "\n",
    "    if bbox is not None:\n",
    "      y1, x1, y2, x2 = bbox\n",
    "      h = y2 - y1\n",
    "      w = x2 - x1\n",
    "      shp0 = img_grey.shape\n",
    "\n",
    "      def _mapping(keypoint):\n",
    "        x = keypoint.pt[0] * w / shp0[1]+ x1 \n",
    "        y = keypoint.pt[1] * h / shp0[0]+ y1\n",
    "        keypoint.pt = (x, y)\n",
    "        return keypoint\n",
    "\n",
    "      kp = list(map(lambda p: _mapping(p), \n",
    "                    kp))\n",
    "      # kp = list(map(lambda idx: cv2.KeyPoint(kp[idx].x + x1, kp[idx].y + y1), indice))\n",
    "    if bbox is not None and len(kp) > self.sample_size and self.sample_size is not -1:\n",
    "      indice = np.random.choice(len(kp), self.sample_size)\n",
    "\n",
    "      kp = list(map(lambda idx: kp[idx], indice))\n",
    "      des = list(map(lambda idx: des[idx], indice))\n",
    "\n",
    "    # filter out kp, des with mask\n",
    "    # @todo : TODO\n",
    "\n",
    "    # assert(len(kp) > 0)\n",
    "    if len(kp) == 0:\n",
    "      return [], []\n",
    "\n",
    "    return kp, des\n",
    "  \n",
    "  def ExtractROI(self):\n",
    "    # using our semantic features extractor\n",
    "    sfe = self.extractors.get('sfe', None)\n",
    "    if sfe is None:\n",
    "      sfe = SemanticFeatureExtractor(model)\n",
    "      self.extractors['sfe'] = sfe\n",
    "      sfe.attach_to(self)\n",
    "\n",
    "    # defaults to opencv channel last format\n",
    "    img = self.img\n",
    "    \n",
    "    detections = sfe.detect(img)\n",
    "    self._detections = detections\n",
    "\n",
    "    # compute the descriptors with our SemanticFeaturesExtractor.encodeDeepFeatures\n",
    "    kp, des = sfe.compute(img, detections)\n",
    "    return kp, des\n",
    "\n",
    "  def mark_as_first(self):\n",
    "    self.is_First = True\n",
    "    return self\n",
    "\n",
    "  def __str__(self):\n",
    "    return \"<Frame %d>\" % self.seq \n",
    "\n",
    "\n",
    "# use to compute perspective camera MVP projections with intrinsic parameters and distortion recover (using OpenCV4)\n",
    "# the reading loop is implemented using CV2 camera.\n",
    "class Camera:\n",
    "\n",
    "  from enum import Enum\n",
    "  class Status(Enum):\n",
    "    MONOCULAR = 1\n",
    "\n",
    "  class ProjectionType(Enum):\n",
    "    PERSPECTIVE = 1\n",
    "    UNSOPPROTED = -1\n",
    "\n",
    "  def __init__(self, device, R, t, anchor_point):\n",
    "    # default mode is monocular\n",
    "    self.mode = Camera.Status.MONOCULAR\n",
    "    self.type = Camera.ProjectionType.PERSPECTIVE\n",
    "\n",
    "    #\n",
    "    self.device = device\n",
    "\n",
    "    # extrinsic parameters of a camera, see TUM vision group dataset format\n",
    "    self.K = device.K\n",
    "\n",
    "    # eye and pose\n",
    "    self.R = R\n",
    "    self.t = t\n",
    "\n",
    "    self.anchor_point = anchor_point\n",
    "\n",
    "  def Init(self):\n",
    "    self.K = self.device.K\n",
    "    # update other computed properties\n",
    "    return self\n",
    "\n",
    "  # @todo : TODO\n",
    "  def t_SE3ToR3(self):\n",
    "    t = self.t\n",
    "    return np.array([\n",
    "      [ 0.  , -t[2],  t[1]],\n",
    "      [ t[2], 0.   , -t[0]],\n",
    "      [-t[1],  t[0],  0.   ]\n",
    "    ])\n",
    "\n",
    "  # @tdo : TODO\n",
    "  def view(self, point3d):\n",
    "    raise Exception(\"Not Implemented!\")\n",
    "  \n",
    "  def reproj(self, pixel2d):\n",
    "    px = pixel2d\n",
    "    K = self.K\n",
    "    # compute normalized point in camera space\n",
    "    if isinstance(px, cv2.KeyPoint):\n",
    "      px = Pixel2D(px.pt[1], px.pt[0])\n",
    "    \n",
    "    if isinstance(px, tuple):\n",
    "      px = Pixel2D(px[1], px[0])\n",
    "    \n",
    "    return Point3D(\n",
    "      (px.x - K[0,2]) / K[0,0],\n",
    "      (px.y - K[1,2]) / K[1,1],\n",
    "      1\n",
    "    )\n",
    "\n",
    "# Utilities\n",
    "def push(stack, e):\n",
    "  stack.append(e)\n",
    "\n",
    "def pop(stack):\n",
    "  return stack.pop()\n",
    "\n",
    "class Device:\n",
    "  def __init__(self):\n",
    "    self.fx = None\n",
    "    self.cx = None\n",
    "    self.fy = None\n",
    "    self.cy = None\n",
    "    \n",
    "    self.distortion = None\n",
    "    self.image_size = None\n",
    "\n",
    "  @property\n",
    "  def K(self):\n",
    "    return np.array([\n",
    "     [self.fx, 0.,      self.cx],\n",
    "     [0.,      self.fy, self.cy],\n",
    "     [0.,      0.,      1.     ]                \n",
    "    ])\n",
    "\n",
    "class RuntimeBlock:\n",
    "\n",
    "  logger = LoggerAdaptor(\"RuntimeBlock\", _logger)\n",
    "\n",
    "  def __init__(self):\n",
    "    self._frames = []\n",
    "    self.device = Device()\n",
    "    # detected landmarks from sfe \n",
    "    self.landmarks = {}\n",
    "    # key points\n",
    "    self.keypointCloud = {}\n",
    "    # active frames selected from dynamic sliding window\n",
    "    self.slidingWindow = 10\n",
    "    # active frames stack: see discussion https://github.com/raulmur/ORB_SLAM2/issues/872\n",
    "    self.active_frames = []\n",
    "    \n",
    "  def load_device(self, device_path):\n",
    "    \n",
    "    def parseCalibratedDevice(fn_yaml):\n",
    "      import yaml\n",
    "      with open(fn_yaml) as f:\n",
    "        # skip the first line (e.g. a %YAML directive in OpenCV calibration files)\n",
    "        f.readline()\n",
    "        content = f.read()\n",
    "        return yaml.load(content, Loader=yaml.FullLoader)\n",
    "    \n",
    "    parsed = parseCalibratedDevice(device_path)\n",
    "    parsed_camera = parsed[\"Camera\"]\n",
    "    \n",
    "    self.device.fx = parsed_camera[\"fx\"]\n",
    "    self.device.fy = parsed_camera[\"fy\"]\n",
    "    self.device.cx = parsed_camera[\"cx\"]\n",
    "    self.device.cy = parsed_camera[\"cy\"]\n",
    "    self.device.distortion = parsed_camera[\"distortion\"]\n",
    "    return self\n",
    "\n",
    "  def add(self, entity):\n",
    "    if isinstance(entity, Frame):\n",
    "      if len(self._frames) == 0:\n",
    "        entity.mark_as_first()\n",
    "        # self.add_new_key_frame(entity)\n",
    "      self._frames.append(entity)\n",
    "    else:\n",
    "      raise ValueError(\"expect entity to be type of %s but found %s\" % (str(self.__class__), str(type(entity))))\n",
    "\n",
    "  def get_frames(self):\n",
    "    return self._frames\n",
    "\n",
    "  # @todo : TODO\n",
    "  def get_active_frames(self):\n",
    "    # @todo : TODO modify active frames list\n",
    "\n",
    "    return self.active_frames\n",
    "\n",
    "  # @todo : TODO\n",
    "  def add_new_key_frame(self, entity):\n",
    "    if isinstance(entity, Frame):\n",
    "      push(self.active_frames, entity)\n",
    "    else:\n",
    "      raise TypeError(\"expect type of entity to be Frame, but found %s\" % type(entity))\n",
    "\n",
    "  def register(self, detection):\n",
    "    self.landmarks[detection.seq] = detection\n",
    "\n",
    "  # keypoints registration\n",
    "  def registerKeyPoints(self, cur_frame, kps1, last_frame, kps2, lpoints, rpoints, points, mtched, mask):\n",
    "    H, W = cur_frame.img.shape[0:2]\n",
    "    \n",
    "    for i, mtch in enumerate(mtched):\n",
    "      left_idx = mtch.queryIdx\n",
    "      right_idx = mtch.trainIdx\n",
    "\n",
    "      # x - columns\n",
    "      # y - rows\n",
    "      (x1,y1) = kps1[left_idx].pt\n",
    "      (x2,y2) = kps2[right_idx].pt\n",
    "\n",
    "      left_px_idx  = int(y1 * W + x1)\n",
    "      right_px_idx = int(y2 * W + x2)\n",
    "\n",
    "      # retrieve stored key points\n",
    "      left_px = cur_frame.pixels[left_px_idx]\n",
    "      right_px = last_frame.pixels[right_px_idx]\n",
    "\n",
    "      #\n",
    "      left_px.add_camera_point(lpoints[i])\n",
    "      lpoints[i].px = left_px\n",
    "      right_px.add_camera_point(rpoints[i])\n",
    "      rpoints[i].px = right_px\n",
    "      \n",
    "      # upgrade from camera point to map key point for registration\n",
    "      kp = points[i]\n",
    "      lpoints[i].world = kp\n",
    "      rpoints[i].world = kp\n",
    "      \n",
    "      kp.associate_with(cur_frame, left_px_idx)\n",
    "      kp.associate_with(last_frame, right_px_idx)\n",
    "\n",
    "      # compute an uuid key for registration\n",
    "      # This varies from one program to another. For example, if a program\n",
    "      # retrieves it from tiles, a key is computed by uint64(tileid) << 32 | seq_id\n",
    "      \n",
    "      key = self.get_point3d_key(None, kp.seq)\n",
    "      kp.key = key\n",
    "      # store it in the map instance\n",
    "      if self.keypointCloud.get(key, None) is not None:\n",
    "        raise Exception(\"key point %s has already been registered\" % str(key))\n",
    "        \n",
    "      # octree check\n",
    "        \n",
    "      \n",
    "      # store the point into a concurrent hash map. Note that since python dict\n",
    "      # operations are serialized by the GIL, we can use a plain dict directly,\n",
    "      # at the cost of some performance.\n",
    "      self.keypointCloud[key] = kp\n",
    "\n",
    "  def get_point3d_key(self, tile_id, seq_id):\n",
    "    if tile_id is None:\n",
    "      tile_id = 0\n",
    "    return tile_id << 32 | seq_id\n",
    "\n",
    "  # @todo : TODO\n",
    "  def track(self, frame, detected_objects):\n",
    "    if frame.is_First:\n",
    "      self.logger.info(\"Add %d detected objects to initialize landmarks\" % len(detected_objects))\n",
    "      for ob in detected_objects:\n",
    "        self.landmarks[ob.seq] = ob\n",
    "      return tuple()\n",
    "    else:\n",
    "      trackList = self.trackList()\n",
    "\n",
    "      for landmark in trackList:\n",
    "        landmark.predict(frame)\n",
    "\n",
    "      matcher = frame.matchers.get('ROIMatcher', None)\n",
    "      if matcher is None:\n",
    "        matcher = ROIMatcher()\n",
    "        frame.matchers['ROIMatcher'] = matcher\n",
    "\n",
    "      N = len(trackList)\n",
    "      M = len(detected_objects)\n",
    "\n",
    "      # solve the N x M assignment matrix with the KM (Kuhn-Munkres) algorithm\n",
    "      mtched_indice, unmtched_landmarks_indice, unmtched_detections_indice = matcher.mtch(trackList, detected_objects)\n",
    "      \n",
    "      print(\"%d mtches, %d unmtched landmarks, %d unmtched detections\" % \n",
    "                   (len(mtched_indice), len(unmtched_landmarks_indice), len(unmtched_detections_indice)))\n",
    "\n",
    "      mtched, unmtched_landmarks, unmtched_detections = [], [], []\n",
    "      # mtched List<Tuple> of (row,col,weights[row,col],distance[row,col]))\n",
    "      for match in mtched_indice:\n",
    "        landmark = trackList[match[0]]\n",
    "        detection = detected_objects[match[1]]\n",
    "        \n",
    "        detection.parent = landmark\n",
    "        landmark.records.append((detection, detection.frame.seq, landmark.predicted_states))\n",
    "        landmark.update(detection)\n",
    "        \n",
    "        mtched.append((landmark, detection))\n",
    "        \n",
    "      # mark unmtched_landmarks\n",
    "      # @todo : TODO\n",
    "      for idx in unmtched_landmarks_indice:\n",
    "        landmark = trackList[idx]\n",
    "        unmtched_landmarks.append(landmark)\n",
    "\n",
    "      # add unmtched_detections to landmarks\n",
    "      l = len(unmtched_detections_indice)\n",
    "      if l > 0:\n",
    "        self.logger.info(\"Adding %d new landmarks\" % l)\n",
    "        for j in unmtched_detections_indice:\n",
    "          detection = detected_objects[j]\n",
    "          if self.landmarks.get(detection.seq, None) is not None:\n",
    "            raise ValueError(\"The detection has already been registered\")\n",
    "          self.landmarks[detection.seq] = detection\n",
    "\n",
    "          unmtched_detections.append(detection)\n",
    "      else:\n",
    "        self.logger.info(\"No new landmarks found.\")\n",
    "        \n",
    "      # do something with the mtched\n",
    "      if DEBUG:\n",
    "        # rendering the matches\n",
    "        pass\n",
    "\n",
    "      return mtched, unmtched_landmarks, unmtched_detections\n",
    "    \n",
    "  def trackList(self):\n",
    "    _trackList = []\n",
    "    for key, landmark in self.landmarks.items():\n",
    "      if landmark.is_active() or landmark.viewable():\n",
    "        _trackList.append(landmark)\n",
    "    self.logger.info(\"Retrieve %d active and viewable landmarks\" % len(_trackList))\n",
    "    if DEBUG:\n",
    "      print(self.landmarks)\n",
    "    return _trackList\n",
    "\n",
    "  def trackKeyPoints(self, frame, kps, kps_features, last_frame=None, matches1to2=None):\n",
    "    if frame.is_First:\n",
    "      self.logger.info(\"Add %d extracted keypoints to initialize key points cloud\" % len(kps))\n",
    "      H, W = frame.img.shape[0:2]\n",
    "      for i, kp in enumerate(kps):\n",
    "        # create a world point, though we don't know depth information or camera poses yet\n",
    "        world = Point3D(-1, -1, -1)\n",
    "        # just for demo, don't need to worry about it\n",
    "        world.type = \"world\"\n",
    "        # see cv2::KeyPoint for details\n",
    "        x, y = kp.pt\n",
    "        px_idx = int(y*W + x)\n",
    "        px = frame.pixels[px_idx]\n",
    "        world.associate_with(frame, px_idx)\n",
    "        \n",
    "        local = Point3D(-1, -1, -1)\n",
    "        local.world = world\n",
    "        \n",
    "        local.px = px\n",
    "        px.add_camera_point(local)\n",
    "    else:\n",
    "      # @todo : TODO\n",
    "      \n",
    "      pass\n",
    "    pass\n",
    "\n",
    "class Observation:\n",
    "  \n",
    "  Seq = AtomicCounter()  \n",
    "\n",
    "  def __init__(self, label, roi_kps, roi_features):\n",
    "    # label\n",
    "    self.label = label\n",
    "    # score\n",
    "    self.score = roi_features['score']\n",
    "    # roi keypoints\n",
    "    self.roi_kps = roi_kps\n",
    "    # roi feature\n",
    "    self.roi_features = roi_features\n",
    "    # associated frame\n",
    "    self.frame = None\n",
    "\n",
    "    ## Updated while tracking ...\n",
    "\n",
    "    #\n",
    "    self.projected_pos = roi_features['box']\n",
    "    #\n",
    "    self.projected_mask = roi_features['mask']\n",
    "\n",
    "    # key points used to reconstruct structure in the 3d world\n",
    "    self.pixels = {}\n",
    "    #\n",
    "    self.bbox = []    \n",
    "\n",
    "    ## Covisibility Graph Topology\n",
    "    # recording upgraded 3dpoints for each ROI\n",
    "    self.points = {}\n",
    "\n",
    "    ## Identity information \n",
    "\n",
    "    self.seq = self.Seq()\n",
    "    self.id = None\n",
    "    self.uuid = uuid.uuid4()\n",
    "    self.key = None\n",
    "\n",
    "    # parameters used for tracking\n",
    "    self.kf = None\n",
    "    # self.opticalPredictor = None\n",
    "    # self.predicted_states = None\n",
    "\n",
    "    # union set: here we use an uncompressed union-find structure to link observations of the object\n",
    "    self.parent = None \n",
    "    self.records = []\n",
    "\n",
    "  def Init(self):\n",
    "    self.kf = BBoxKalmanFilter()\n",
    "    # self.opticalPredictor = OpticalFlowBBoxPredictor()\n",
    "    self.records.append((\n",
    "        None, # Observation\n",
    "        self.frame.seq if self.frame is not None else -1, # frameId\n",
    "    ))\n",
    "    return self\n",
    "\n",
    "  def set_FromFrame(self, frame):\n",
    "    self.frame = frame\n",
    "    return self\n",
    "\n",
    "  # @todo : TODO already matched \n",
    "  def is_active(self):\n",
    "    return True\n",
    "\n",
    "  # @todo : TODO observed by camera, i.e., camera is able to capture 3d points of the landmark\n",
    "  # we use projection test upon estimated 3d structures to see whether this is viewable by \n",
    "  # the camera\n",
    "  def viewable(self):\n",
    "    return True\n",
    "\n",
    "  def last_frame(self):\n",
    "    last_observation = self.records[-1][0]\n",
    "    if last_observation is not None:\n",
    "      return last_observation.frame\n",
    "    else:\n",
    "      return self.frame\n",
    "\n",
    "  def predict(self, cur_frame=None):\n",
    "    y1, x1, y2, x2 = self.projected_pos\n",
    "    w1, h1 = x2 - x1, y2 - y1\n",
    "    box = np.array([x1, y1, w1, h1])\n",
    "    box0 = self.kf.predict(box)\n",
    "    self.predicted_states = np.array([box0[0], box0[1], box0[0] + box0[2], box0[1] + box0[3]])\n",
    "    \n",
    "    if cur_frame is not None:\n",
    "      # use the velocity estimated for the last frame (containing the object)\n",
    "      last_frame = self.last_frame()\n",
    "      box1 = last_frame.predictors['OpticalFlow'].predict(box, cur_frame.img_grey())\n",
    "\n",
    "    if DEBUG:\n",
    "      logging.info(\"<Landmark %d> , bbox <%d, %d, %d, %d>; predicted bbox by KalmanFilter: <%d, %d, %d, %d>\" %\n",
    "            (self.seq, x1, y1, x2, y2, \n",
    "             self.predicted_states[0], self.predicted_states[1],\n",
    "             self.predicted_states[2], self.predicted_states[3]))\n",
    "\n",
    "      logging.info(\"<Landmark %d> , bbox velocity (delta/frame) <%d, %d, %d, %d>, predicted by KalmanFilter\" % \n",
    "            (self.seq,\n",
    "             self.predicted_states[0] - x1,\n",
    "             self.predicted_states[1] - y1,\n",
    "             box0[2] - w1,\n",
    "             box0[3] - h1))\n",
    "      \n",
    "      if cur_frame is not None:\n",
    "        \n",
    "        logging.info(\"<landmark %d> , bbox velocity (delta/frame) <%d, %d, %d, %d>, predicted by OpticalFlowBBoxPredictor\" %\n",
    "            (self.seq,\n",
    "             box1[0] - x1,\n",
    "             box1[1] - y1,\n",
    "             box1[2] - w1,\n",
    "             box1[3] - h1))\n",
    "\n",
    "    return self.predicted_states\n",
    "\n",
    "  def update(self, detection):\n",
    "    logging.info(\"<Landmark %d> update projected pose from %s => %s\" % \n",
    "                 (self.seq, str(self.projected_pos), str(detection.projected_pos)))\n",
    "    self.projected_pos = detection.projected_pos\n",
    "    self.projected_mask = detection.projected_mask\n",
    "    #\n",
    "    y1, x1, y2, x2 = self.projected_pos\n",
    "    box = np.array([x1, y1, x2 - x1, y2 - y1])\n",
    "    self.kf.update(box)\n",
    "   \n",
    "## Spanning Tree\n",
    "\n",
    "  def find_parent(self):\n",
    "    if self.parent is None:\n",
    "      return self\n",
    "    \n",
    "    # uncompressed union-find: recurse to the parent (no path compression)\n",
    "    return self.parent.find_parent()\n",
    "\n",
    "## Just for logging\n",
    "\n",
    "  def __str__(self):\n",
    "    return \"<F#%d.Ob#%d(%s:%.2f)>\" % (self.frame.seq, self.seq, self.label, self.score)\n",
    "\n",
    "  def __repr__(self):\n",
    "    return \"<F#%d.Ob#%d(%s)>\" % (self.frame.seq, self.seq, self.label)\n",
    "  \n",
    "# Pub/Sub network model, or producer/consumer based ITC utilities for the tracker.\n",
    "#\n",
    "# The camera, in the main process or thread, produces images for digestion. Once an image is retrieved, we\n",
    "# notify the mapper worker to perform feature extraction and register the image into the images pool. This is\n",
    "# a simple producer/consumer local communication model. The feature extractor is the most important part of our\n",
    "# program, where we introduce feature comparison and a realtime neural-network feature extractor.\n",
    "\n",
    "# Note: since this is a demo fed with image sequences produced offline, a slower network for inference is good\n",
    "# enough for our job.\n",
    "# \n",
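    "\n",
    "# A minimal sketch of the producer/consumer model described above, using the\n",
    "# standard library's thread-safe queue.Queue. Illustrative only: the names\n",
    "# _frame_queue, _camera_producer and _mapper_worker are hypothetical and are not\n",
    "# used by the tracker pipeline below.\n",
    "import queue\n",
    "\n",
    "_frame_queue = queue.Queue(maxsize=8)  # bounded queue back-pressures the producer\n",
    "\n",
    "def _camera_producer(frames):\n",
    "  # producer (main thread): push retrieved images; put() blocks when the queue is full\n",
    "  for img in frames:\n",
    "    _frame_queue.put(img)\n",
    "  _frame_queue.put(None)  # sentinel marks the end of the sequence\n",
    "\n",
    "def _mapper_worker(results):\n",
    "  # consumer (worker thread): pop images and perform feature extraction / registration\n",
    "  while True:\n",
    "    img = _frame_queue.get()\n",
    "    if img is None:\n",
    "      break\n",
    "    results.append(img)  # placeholder for the real feature-extraction work\n",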
    "\n",
    "# simple image renderer suite\n",
    "class WebImageRenderer:\n",
    "\n",
    "  # implements an image renderer using OpenCV as the backend\n",
    "  def __init__(self):\n",
    "    pass\n",
    "\n",
    "  def drawMatchedROI(self, img, reference_img, mtched, unmtched_landmarks, unmtched_detections):\n",
    "    \n",
    "    # First : unmatched landmarks\n",
    "    # Second : unmatched detections\n",
    "\n",
    "    n_mtches = len(mtched) + 2\n",
    "    colors = visualize.random_colors(n_mtches)\n",
    "\n",
    "    if not n_mtches:\n",
    "      logging.info(\"No instances to display!\")\n",
    "      return img\n",
    "\n",
    "    def _apply_mask(image, mask, color, alpha=0.5):\n",
    "      \"\"\"Apply the given mask to the image.\n",
    "      \"\"\"\n",
    "      for c in range(3):\n",
    "        image[:, :, c] = np.where(mask == 1,\n",
    "                                  image[:, :, c] * (1 - alpha) + alpha * color[c],\n",
    "                                  image[:, :, c])\n",
    "      return image\n",
    "\n",
    "    def _drawROI(image, box, mask, color, label, score, _id):\n",
    "        \n",
    "      masked_image = image.copy()\n",
    "      \n",
    "      # Bounding box\n",
    "      if not np.any(box):\n",
    "        # Skip this instance. Has no bbox. Likely lost in image cropping.\n",
    "        return masked_image\n",
    "      \n",
    "      y1, x1, y2, x2 = box\n",
    "      \n",
    "      caption = \"<Landmark #{} : {}({:.3f})>\".format(_id, label, score) if score \\\n",
    "           else \"<Landmark #{} : {}>\".format(_id, label)\n",
    "      \n",
    "      masked_image = visualize.apply_mask(masked_image, mask, color)\n",
    "      masked_image_with_boxes = cv2.rectangle(masked_image, (x1, y1), (x2, y2), np.array(color) * 255, 2)\n",
    "      \n",
    "      # Mask Polygon\n",
    "      padded_mask = np.zeros(\n",
    "        (mask.shape[0] + 2, mask.shape[1] + 2), dtype=np.uint8\n",
    "      )\n",
    "      padded_mask[1:-1, 1:-1] = mask\n",
    "      # contours = find_contours(padded_mask, 0.5)\n",
    "      if CV_MAJOR_VERSION > 3:\n",
    "        contours, _ = cv2.findContours(padded_mask,\n",
    "                                       cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)\n",
    "      else:\n",
    "        _, contours, _ = cv2.findContours(padded_mask,\n",
    "                                          cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)\n",
    "      \n",
    "      masked_image_with_contours_plus_boxes = cv2.drawContours(masked_image_with_boxes, contours, -1, (0, 255, 0), 1)\n",
    "      \n",
    "      out = cv2.putText(\n",
    "        masked_image_with_contours_plus_boxes, caption, (x1, y1-4), cv2.FONT_HERSHEY_PLAIN, 0.5, np.array(color)*255, 1\n",
    "      )\n",
    "      \n",
    "      masked_image = out\n",
    "      return out\n",
    "\n",
    "    MATCHED_COLORS = colors[1:-1]\n",
    "    # print(\"mtched colors\", MATCHED_COLORS)\n",
    "    UNMATCHED_LANDMARK_COLORS = colors[ 0]\n",
    "    UNMATCHED_DETECTION_COLORS = colors[-1]\n",
    "\n",
    "    masked_img = img.copy()\n",
    "    masked_reference_img = reference_img.copy()\n",
    "\n",
    "    # drawing extraction results\n",
    "    offset = 0\n",
    "    for mtch in mtched:\n",
    "      landmark, detection = mtch\n",
    "      masked_reference_img = _drawROI(masked_reference_img, \n",
    "                            landmark.roi_features['box'], \n",
    "                            landmark.roi_features['mask'], \n",
    "                            MATCHED_COLORS[offset], \n",
    "                            landmark.label,\n",
    "                            landmark.score,\n",
    "                            landmark.seq)\n",
    "      masked_img = _drawROI(masked_img, \n",
    "                            detection.roi_features['box'],\n",
    "                            detection.roi_features['mask'],\n",
    "                            MATCHED_COLORS[offset],\n",
    "                            detection.label,\n",
    "                            detection.score,\n",
    "                            landmark.seq)\n",
    "      offset += 1\n",
    "\n",
    "    for unmtched_landmark in unmtched_landmarks:\n",
    "      masked_reference_img = _drawROI(masked_reference_img,\n",
    "                            unmtched_landmark.roi_features['box'],\n",
    "                            unmtched_landmark.roi_features['mask'],\n",
    "                            (0., 1., 1.), # Yellow\n",
    "                            unmtched_landmark.label,\n",
    "                            unmtched_landmark.score,\n",
    "                            unmtched_landmark.seq)\n",
    "\n",
    "    for unmtched_detection in unmtched_detections:\n",
    "      masked_img = _drawROI(masked_img,\n",
    "                            unmtched_detection.roi_features['box'],\n",
    "                            unmtched_detection.roi_features['mask'],\n",
    "                            (0., 0., 1.), # Red\n",
    "                            unmtched_detection.label,\n",
    "                            unmtched_detection.score,\n",
    "                            unmtched_detection.seq)\n",
    "\n",
    "    # drawing bbox matching results\n",
    "    r1, c1 = masked_img.shape[0], masked_img.shape[1]\n",
    "    r2, c2 = masked_reference_img.shape[0], masked_reference_img.shape[1]\n",
    "\n",
    "    out = np.zeros((max([r1, r2]), c1+c2,3), dtype='uint8')\n",
    "\n",
    "    out[:r1, :c1] = np.dstack([masked_img])\n",
    "    out[:r2, c1:] = np.dstack([masked_reference_img])\n",
    "\n",
    "    # draw line between matched bbox\n",
    "    offset = 0\n",
    "    for mtch in mtched:\n",
    "      color = np.array(MATCHED_COLORS[offset]) * 255\n",
    "      y1_1, x1_1, y2_1, x2_1 = mtch[1].roi_features['box']\n",
    "      cy_1 = (y2_1 + y1_1) / 2.0\n",
    "      cx_1 = (x2_1 + x1_1) / 2.0 \n",
    "      y1_2, x1_2, y2_2, x2_2 = mtch[0].roi_features['box']\n",
    "      cy_2 = (y2_2 + y1_2) / 2.0\n",
    "      cx_2 = (x2_2 + x1_2) / 2.0 \n",
    "\n",
    "      # draw lines\n",
    "      cv2.line(out, (int(x1_1),int(y1_1)), (int(x1_2)+c1,int(y1_2)), color, 1)\n",
    "      cv2.line(out, (int(x2_1),int(y1_1)), (int(x2_2)+c1,int(y1_2)), color, 1)\n",
    "      cv2.line(out, (int(x2_1),int(y2_1)), (int(x2_2)+c1,int(y2_2)), color, 1)\n",
    "      cv2.line(out, (int(x1_1),int(y2_1)), (int(x1_2)+c1,int(y2_2)), color, 1)\n",
    "\n",
    "      # cv2.line(out, (int(cx_1),int(cy_1)), (int(cx_2)+c1,int(cy_2)), color, 1)\n",
    "\n",
    "      offset += 1\n",
    "\n",
    "    return out\n",
    "\n",
    "  def drawMatchesKnn(self, img1, kps1, img2, kps2, kps_mtched1to2, mask):\n",
    "    draw_params = dict(matchColor = (0,255,0), # G\n",
    "                       singlePointColor = (255,0,0), # R\n",
    "                       # this is important to filter out the dense connection\n",
    "                       matchesMask = mask,\n",
    "                       flags = 0)\n",
    "\n",
    "    masked_img = cv2.drawMatchesKnn(img1, kps1, img2, kps2, kps_mtched1to2, None, **draw_params)\n",
    "    return masked_img\n",
    "\n",
    "  # Credits to the original author: \n",
    "  #   https://www.hongweipeng.com/index.php/archives/709/\n",
    "  #   https://stackoverflow.com/questions/20259025/module-object-has-no-attribute-drawmatches-opencv-python\n",
    "  def drawMatches(self, img1, kp1, img2, kp2, matches):\n",
    "    \"\"\"\n",
    "    My own implementation of cv2.drawMatches as OpenCV 2.4.9\n",
    "    does not have this function available but it's supported in\n",
    "    OpenCV 3.0.0\n",
    "\n",
    "    This function takes in two images with their associated\n",
    "    keypoints, as well as a list of DMatch data structure (matches)\n",
    "    that contains which keypoints matched in which images.\n",
    "\n",
    "    An image will be produced where a montage is shown with\n",
    "    the first image followed by the second image beside it.\n",
    "\n",
    "    Keypoints are delineated with circles, while lines are connected\n",
    "    between matching keypoints.\n",
    "\n",
    "    img1,img2 - Grayscale images\n",
    "    kp1,kp2 - Detected list of keypoints through any of the OpenCV keypoint\n",
    "              detection algorithms\n",
    "    matches - A list of matches of corresponding keypoints through any\n",
    "              OpenCV keypoint matching algorithm\n",
    "    \"\"\"\n",
    "\n",
    "    # Create a new output image that concatenates the two images together\n",
    "    # (a.k.a) a montage\n",
    "    rows1 = img1.shape[0]\n",
    "    cols1 = img1.shape[1]\n",
    "    rows2 = img2.shape[0]\n",
    "    cols2 = img2.shape[1]\n",
    "\n",
    "    out = np.zeros((max([rows1,rows2]),cols1+cols2,3), dtype='uint8')\n",
    "\n",
    "    # Place the first image to the left\n",
    "    out[:rows1, :cols1] = np.dstack([img1])\n",
    "\n",
    "    # Place the next image to the right of it\n",
    "    out[:rows2, cols1:] = np.dstack([img2])\n",
    "\n",
    "    # For each pair of points we have between both images\n",
    "    # draw circles, then connect a line between them\n",
    "    for mat in matches:\n",
    "\n",
    "        # Get the matching keypoints for each of the images\n",
    "        img1_idx = mat.queryIdx\n",
    "        img2_idx = mat.trainIdx\n",
    "\n",
    "        # x - columns\n",
    "        # y - rows\n",
    "        (x1,y1) = kp1[img1_idx].pt\n",
    "        (x2,y2) = kp2[img2_idx].pt\n",
    "\n",
    "        # Draw a small circle at both co-ordinates\n",
    "        # radius 4\n",
    "        # colour blue\n",
    "        # thickness = 1\n",
    "        cv2.circle(out, (int(x1),int(y1)), 4, (255, 0, 0), 1)\n",
    "        cv2.circle(out, (int(x2)+cols1,int(y2)), 4, (255, 0, 0), 1)\n",
    "\n",
    "        # Draw a line in between the two points\n",
    "        # thickness = 1\n",
    "        # colour blue\n",
    "        cv2.line(out, (int(x1),int(y1)), (int(x2)+cols1,int(y2)), (255, 0, 0), 1)\n",
    "\n",
    "    return out\n",
    "\n",
    "  def drawOpticalFlow(self, frame, step=16):\n",
    "    flow = frame.predictors[\"OpticalFlowKPnt\"].get_flow()\n",
    "    H, W = frame.img.shape[:2]\n",
    "    y, x = np.mgrid[step/2:H:step,step/2:W:step].reshape(2,-1).astype(int)\n",
    "    fx, fy = flow[y, x].T\n",
    "    lines = np.vstack([x, y, x+fx, y+fy]).T.reshape(-1,2,2)\n",
    "    lines = np.int32(lines+0.5)\n",
    "    mask = np.zeros_like(frame.img)\n",
    "    cv2.polylines(mask, lines, 0, (0, 255, 0))\n",
    "    for (x1, y1), (_x2, _y2) in lines:\n",
    "      cv2.circle(mask, (x1, y1), 1, (0, 255, 0), -1)\n",
    "    return mask\n",
    "\n",
    "  def render(self, im, mode='webcam'):\n",
    "    if mode == 'webcam':\n",
    "      try:\n",
    "        from google.colab.patches import cv2_imshow\n",
    "      except Exception as e:\n",
    "        logging.warning(e)\n",
    "        \n",
    "        def wrapped_cv_img_render(img):\n",
    "          cv2.imshow(mode, img)\n",
    "        \n",
    "        cv2_imshow = wrapped_cv_img_render\n",
    "        \n",
    "      cv2_imshow(im)\n",
    "      # wait for highgui processing drawing requests from cv::show\n",
    "      k = cv2.waitKey(30) & 0xff\n",
    "      if k == 27:  # ESC key code\n",
    "        raise StopIteration()\n",
    "    else:\n",
    "      logging.warning(\"Use matplotlib as image rendering backend\")\n",
    "      figsize = (16, 16)\n",
    "      _, ax = plt.subplots(1, figsize=figsize)\n",
    "      height, width = im.shape[:2]\n",
    "      size=(width, height)\n",
    "      ax.set_ylim(height + 10, -10)\n",
    "      ax.set_xlim(-10, width + 10)\n",
    "      ax.axis('off')\n",
    "      ax.imshow(im.astype(np.uint8))\n",
    "\n",
    "\n",
    "# Linear Assignment Problems Solver Wrapper\n",
    "class ROIMatcher:\n",
    "\n",
    "  from enum import Enum\n",
    "  class Algorithm(Enum):\n",
    "    COMPLETE_MATCHING = 0\n",
    "    MAXIMUM_MATCHING = 1\n",
    "\n",
    "  def __init__(self):\n",
    "    self.algorithm = ROIMatcher.Algorithm.COMPLETE_MATCHING\n",
    "    pass\n",
    "\n",
    "  def mtch(self, trackList, detected_objects):\n",
    "    N = len(trackList)\n",
    "    M = len(detected_objects)\n",
    "    \n",
    "    weights = np.zeros((N, M))\n",
    "    \n",
    "    distance = np.zeros((N, M))\n",
    "    corr = np.zeros((N, M))\n",
    "\n",
    "    def make_standard_tf_box(box):\n",
    "      y1, x1, y2, x2 = box\n",
    "      return np.array([x1, y1, x2, y2])\n",
    "\n",
    "    def compose_feat_vec(roi_feats, encodedId, score):\n",
    "      new_feats = np.concatenate([roi_feats, encodedId, np.array([score])], axis=0)\n",
    "      return new_feats\n",
    "\n",
    "    INF = float(\"inf\")\n",
    "    EPSILON = 1e-9\n",
    "\n",
    "    column_names = list(map(lambda detection: str(detection), detected_objects))\n",
    "    row_names = list(map(lambda landmark: str(landmark), trackList))\n",
    "\n",
    "    for i in range(N):\n",
    "      for j in range(M):\n",
    "        obj1 = trackList[i]\n",
    "        obj2 = detected_objects[j]\n",
    "\n",
    "        # must hold the same semantic meaning if we believe our detectron\n",
    "        if obj1.roi_features['label'] != obj2.roi_features['label']:\n",
    "          # weights[i,j] = INF\n",
    "          weights[i,j] = 1000\n",
    "          continue\n",
    "\n",
    "        box1 = obj1.predicted_states\n",
    "        box2 = make_standard_tf_box(obj2.projected_pos)\n",
    "\n",
    "        left_area  = (box1[2] - box1[0]) * (box1[3] - box1[1])\n",
    "        right_area = (box2[2] - box2[0]) * (box2[3] - box2[1])\n",
    "\n",
    "        # 0 ~ 1\n",
    "        # iou = IoU_numeric(box1, box2, left_area, right_area)\n",
    "        # distance[i,j] = 1. - iou\n",
    "\n",
    "        # -1 < -a ~ 0 ~ 1\n",
    "        uiou = UIoU_numeric(box1, box2, left_area, right_area)  \n",
    "        distance[i,j] = (1 + uiou) / 2.0\n",
    "\n",
    "        # assign IoU distance\n",
    "        # weights[i,j] = 1. - iou\n",
    "\n",
    "        # assign UIoU distance\n",
    "        weights[i,j] = (1 + uiou) / 2.0\n",
    "\n",
    "        # deep feature score\n",
    "        ext_feat1 = compose_feat_vec(obj1.roi_features['roi_feature'],\n",
    "                                     obj1.roi_features['class_id'],\n",
    "                                     obj1.roi_features['score'])\n",
    "        ext_feat2 = compose_feat_vec(obj2.roi_features['roi_feature'],\n",
    "                                     obj2.roi_features['class_id'],\n",
    "                                     obj2.roi_features['score'])\n",
    "        \n",
    "        # compute cosine distance\n",
    "        score = cosine_dist(ext_feat1, ext_feat2)\n",
    "        corr[i,j] = score\n",
    "        weights[i,j] *= score\n",
    "\n",
    "    mtched, unmtched_landmarks, unmtched_detections = ([], [], [])\n",
    "    row_indice, col_indice = [], []\n",
    "    if self.algorithm is ROIMatcher.Algorithm.COMPLETE_MATCHING:\n",
    "      if DEBUG:\n",
    "        # print weight matrix\n",
    "        print(\"%d landmarks, %d detections, forms %d x %d cost matrix :\" % (N, M, N, M))\n",
    "        print(weights)\n",
    "        pass\n",
    "\n",
    "      # remove rows if there are no reasonable matches from cols so that we could \n",
    "      # apply maximum match here. I have to say that this is very important!\n",
    "      \n",
    "      # @todo : TODO\n",
    "\n",
    "      # see http://csclab.murraystate.edu/~bob.pilgrim/445/munkres.html, \n",
    "      # also see https://www.kaggle.com/c/santa-workshop-tour-2019/discussion/120020\n",
    "      try:\n",
    "        row_indice, col_indice = Optimizer.linear_sum_assignment(weights)\n",
    "      except Exception as e:\n",
    "        print(e)\n",
    "        import pandas as pd\n",
    "        # iou scores\n",
    "        df = pd.DataFrame(distance, index=row_names, columns=column_names)\n",
    "        print(\"IoUs:\")\n",
    "        print(df)\n",
    "\n",
    "        # entropy scores\n",
    "        df = pd.DataFrame(corr, index=row_names, columns=column_names)\n",
    "        print(\"Corr:\")\n",
    "        print(df)\n",
    "        raise\n",
    "        \n",
    "    \n",
    "    else:\n",
    "      raise NotImplementedError(\"Not implemented yet!\")\n",
    "\n",
    "    # use maximum matching strategy\n",
    "    assignment = np.zeros((N, M))\n",
    "    for i, col in enumerate(col_indice):\n",
    "      row = row_indice[i]\n",
    "      if weights[row, col] > 0.5:\n",
    "        continue\n",
    "      mtched.append((row,col,weights[row,col],distance[row,col]))\n",
    "      assignment[row, col] = 1\n",
    "\n",
    "    for i in range(N):\n",
    "      if i not in row_indice:\n",
    "        unmtched_landmarks.append(i)\n",
    "\n",
    "    for j in range(M):\n",
    "      if j not in col_indice:\n",
    "        unmtched_detections.append(j)\n",
    "\n",
    "    import pandas as pd\n",
    "    # iou scores\n",
    "    df = pd.DataFrame(distance, index=row_names, columns=column_names)\n",
    "    print(\"IoUs:\")\n",
    "    print(df)\n",
    "\n",
    "    # entropy scores\n",
    "    df = pd.DataFrame(corr, index=row_names, columns=column_names)\n",
    "    print(\"Corr:\")\n",
    "    print(df)\n",
    "\n",
    "    # draw matches\n",
    "\n",
    "    df = pd.DataFrame(np.array(assignment), index=row_names, columns=column_names)\n",
    "    print(\"assignment:\")\n",
    "    print(df)\n",
    "\n",
    "    return mtched, unmtched_landmarks, unmtched_detections\n",
    "\n",
    "\n",
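    "# Hedged sketch (illustration only, not part of the matcher): the\n",
    "# COMPLETE_MATCHING branch above assumes Optimizer.linear_sum_assignment\n",
    "# behaves like scipy's Hungarian solver, returning row/column indices that\n",
    "# minimize the total cost of a weight matrix.\n",
    "def _toy_assignment():\n",
    "  import numpy as np\n",
    "  from scipy.optimize import linear_sum_assignment\n",
    "  weights = np.array([[0.1, 0.9],\n",
    "                      [0.8, 0.2]])\n",
    "  rows, cols = linear_sum_assignment(weights)\n",
    "  # pairs (0,0) and (1,1) give the minimal total cost 0.3\n",
    "  return list(zip(rows.tolist(), cols.tolist()))\n",
    "\n",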
    "# using cv2.FlannBasedMatcher as backend to match key points\n",
    "class FlannBasedKeyPointsMatcher:\n",
    "\n",
    "  def __init__(self):\n",
    "    self.features = None\n",
    "    self._impl = None\n",
    "    self.LOWE_RATIO_TEST_ON = True\n",
    "    self.sample_size = 7\n",
    "\n",
    "  def Init(self):\n",
    "    # FLANN parameters, simply borrow from opencv website: https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_feature2d/py_matcher/py_matcher.html\n",
    "    FLANN_INDEX_KDTREE = 0\n",
    "    index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees = 5)\n",
    "    search_params = dict(checks=50)   # or pass empty dictionary\n",
    "    # use cv2 implementation as backend\n",
    "    self._impl = cv2.FlannBasedMatcher(index_params, search_params)\n",
    "    self.K = 2\n",
    "    return self\n",
    "\n",
    "  def set_FromFrame(self, frame):\n",
    "    self._frame = frame \n",
    "    return self\n",
    "\n",
    "  def set_FromFeatures(self, features):\n",
    "    self.features = features\n",
    "    return self\n",
    "\n",
    "  # @todo : TODO impl\n",
    "  def mtch(self, other_features):\n",
    "    \"\"\"\n",
    "    @return mtched : Tuple(List<Tuple(DMatch, DMatch)>, list (2d))\n",
    "    \"\"\"\n",
    "\n",
    "    # For more information on the OpenCV FLANN matcher, also see\n",
    "    # https://answers.opencv.org/question/192712/why-does-knnmatch-return-a-list-of-tuples-instead-of-a-list-of-dmatch/\n",
    "    mtches = self._impl.knnMatch(np.asarray( self.features, np.float32), \n",
    "                                 np.asarray(other_features, np.float32), \n",
    "                                 self.K)\n",
    "\n",
    "    # filtered matches\n",
    "    # @todo : TODO impl\n",
    "    l = len(mtches)\n",
    "    mask = np.ones((l, self.K))\n",
    "\n",
    "    def ratio_test(mask, mtches):\n",
    "      # ratio test as per Lowe's paper\n",
    "      for i,row in enumerate(mtches):\n",
    "        if row[0].distance >= 0.7*row[1].distance:\n",
    "          # reject the best match when it is not clearly better than the second best\n",
    "          mask[i,0]= 0\n",
    "\n",
    "    # Lowe ratio test\n",
    "    if self.LOWE_RATIO_TEST_ON:\n",
    "      ratio_test(mask, mtches)\n",
    "\n",
    "    if self.sample_size != -1:\n",
    "      print(\"Filtering out samples ...\")\n",
    "      chosen_indices = np.where(mask[:,0] == 1)[0]\n",
    "      permuted_indices = np.random.permutation(chosen_indices)\n",
    "      \n",
    "      # mask[permuted_indices[self.sample_size:], 0] = 0\n",
    "      indice = permuted_indices[:self.sample_size]\n",
    "      mtches = list(map(lambda idx: mtches[idx], indice))\n",
    "      mask = mask[indice]\n",
    "    else:\n",
    "      print(\"Keep all key point matches.\")\n",
    "    \n",
    "    return mtches, mask.tolist()\n",
    "\n",
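    "# Hedged sketch: Lowe's ratio test as applied in mtch() above, shown on\n",
    "# plain (best, second_best) distance pairs instead of cv2.DMatch objects.\n",
    "def _toy_ratio_test(pairs, ratio=0.7):\n",
    "  # keep a match only when the best distance is clearly smaller than the\n",
    "  # second-best distance\n",
    "  return [best < ratio * second for best, second in pairs]\n",
    "\n",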
    "class OpticalFlowBasedKeyPointsMatcher:\n",
    "\n",
    "  def __init__(self):\n",
    "    self.kps = None\n",
    "    self.features = None\n",
    "    self.R = 3 # px\n",
    "    self.sample_size = 7\n",
    "\n",
    "  def Init(self):\n",
    "    return self\n",
    "\n",
    "  def set_FromFrame(self, frame):\n",
    "    self._frame = frame \n",
    "    return self\n",
    "\n",
    "  def set_FromKP(self, kps):\n",
    "    self.kps = kps\n",
    "    return self\n",
    "\n",
    "  def set_FromFeatures(self, features):\n",
    "    self.features = features\n",
    "    return self\n",
    "\n",
    "  # @todo : TODO impl\n",
    "  def _get_neighbors(self, row, col, feat_map, img_shp):\n",
    "    H, W = img_shp[0:2]\n",
    "    x1, y1 = (col - self.R, row - self.R)\n",
    "    x2, y2 = (col + self.R, row + self.R)\n",
    "\n",
    "    if x1 < 0:\n",
    "      x1 = 0\n",
    "    if y1 < 0:\n",
    "      y1 = 0\n",
    "\n",
    "    if x2 >= W:\n",
    "      x2 = W - 1\n",
    "    if y2 >= H:\n",
    "      y2 = H - 1\n",
    "\n",
    "    indice = feat_map[y1:y2, x1:x2] != -1\n",
    "    return feat_map[y1:y2, x1:x2][indice]\n",
    "\n",
    "  # @todo : TODO impl\n",
    "  def mtch(self, kps, other_features, last_frame):\n",
    "    mtches = []\n",
    "    img_shp = self._frame.img.shape[0:2]\n",
    "\n",
    "    # predict coord\n",
    "    kps_coor_r = list(map(lambda kp: kp.pt, kps))\n",
    "    predicted_pos = last_frame.predictors[\"OpticalFlowKPnt\"].predict(kps_coor_r, self._frame.img_grey())\n",
    "\n",
    "    # init feat_map\n",
    "    feat_map = np.full(img_shp, -1)\n",
    "    for i, kp in enumerate(predicted_pos):\n",
    "      x, y = kp\n",
    "      feat_map[int(y),int(x)] = i  \n",
    "\n",
    "    def _hamming_distance(x, y):\n",
    "      from scipy.spatial import distance\n",
    "      return distance.hamming(x, y)\n",
    "\n",
    "    # \n",
    "    for i,kp in enumerate(self.kps):\n",
    "      x, y = kp.pt\n",
    "      indice = self._get_neighbors(int(y), int(x), feat_map, img_shp)\n",
    "      if len(indice) == 0:\n",
    "        continue\n",
    "\n",
    "      # KNN search\n",
    "      feat_l = self.features[i]\n",
    "\n",
    "      dist = None\n",
    "      min_dist, min_ind = np.inf, None\n",
    "\n",
    "      for ind in indice:\n",
    "        feat_r = other_features[ind]\n",
    "        dist = _hamming_distance(feat_l, feat_r)\n",
    "        if min_dist > dist:\n",
    "            min_dist = dist\n",
    "            min_ind = ind\n",
    "      try:\n",
    "        mtches.append(cv2.DMatch(i,min_ind, min_dist))\n",
    "        if DEBUG:\n",
    "          kpl = kp\n",
    "          kpr = kps[min_ind]\n",
    "          if np.sqrt(np.power(kpl.pt[0] - kpr.pt[0], 2) + \\\n",
    "                     np.power(kpl.pt[1] - kpr.pt[1], 2)) > self.R:\n",
    "            pass\n",
    "            # raise Exception(\"Wrong Match!\")\n",
    "\n",
    "      except Exception as e:\n",
    "        print(e)\n",
    "        print(\"i\",i)\n",
    "        print(\"kpl(cur)\", kp.pt)\n",
    "        print(\"min_ind\", min_ind)\n",
    "        print(\"kpr(last_frame)\", kps[min_ind].pt)\n",
    "        print(\"predicted kpr(last_frame)\", predicted_pos[min_ind])\n",
    "        print(\"min_dist\", min_dist)\n",
    "        print(\"dist\", dist)\n",
    "        raise\n",
    "\n",
    "    l = len(self.kps)\n",
    "    mask = np.ones((l,1))\n",
    "    return mtches, mask.tolist()\n",
    "\n",
    "\n",
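    "# Hedged sketch: the nearest-neighbour search in\n",
    "# OpticalFlowBasedKeyPointsMatcher.mtch picks, for each query descriptor,\n",
    "# the candidate with the smallest Hamming distance.\n",
    "def _toy_nearest_by_hamming(query, candidates):\n",
    "  from scipy.spatial import distance\n",
    "  dists = [distance.hamming(query, c) for c in candidates]\n",
    "  return int(min(range(len(dists)), key=lambda k: dists[k]))\n",
    "\n",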
    "# This should be running in the main thread\n",
    "class Tracker:\n",
    "\n",
    "  def __init__(self):\n",
    "    # Python offers no atomic operations for this, so wrap state changes with an explicit lock\n",
    "    self._initialized = False\n",
    "    self._state_modifier_lock = threading.Lock()\n",
    "    self._renderer = WebImageRenderer()\n",
    "    self._map = None\n",
    "    self.matcher = None\n",
    "    # previous key frames and cur frame used for triangulation\n",
    "    self.key_frames = []\n",
    "    self.reference_frame = None \n",
    "    self.cur = None\n",
    "\n",
    "    #\n",
    "    self.cur_anchor_point = np.array([0., 0., 0.])\n",
    "\n",
    "    # trajectory\n",
    "    self.trajectory = [self.cur_anchor_point]\n",
    "\n",
    "  def set_FromMap(self, map):\n",
    "    if not isinstance(map, RuntimeBlock):\n",
    "      raise ValueError(\"expect map of type <%s> but found %s\" % (str(RuntimeBlock), str(type(map))))\n",
    "    self._map = map\n",
    "    return self\n",
    "\n",
    "  def _init(self):\n",
    "    pass\n",
    "    \n",
    "  # @todo : TODO\n",
    "  # Cold start may take a few seconds for a visual-only system. With an IMU we could obtain a\n",
    "  # more robust initial estimation of camera motion and pose.\n",
    "  def _coldStartTrack(self):\n",
    "\n",
    "    #### Step 1 : extract features\n",
    "\n",
    "    # extract features used for matching\n",
    "    cur_kp, cur_kp_features, cur_roi_kp, cur_roi_features = self.cur.extract()\n",
    "    detected_objects = []\n",
    "    for i, roi_feature in enumerate(cur_roi_features):\n",
    "      detected_object = Observation(roi_feature['label'], cur_roi_kp[i], roi_feature).set_FromFrame(self.cur)\n",
    "      detected_object.Init()\n",
    "      detected_objects.append(detected_object)\n",
    "      \n",
    "      kp_per_box = cur_roi_kp[i]\n",
    "      if len(kp_per_box) == 0:\n",
    "        logging.warning(\"There are no detected key points (camera space) associated with %s\" % detected_object)\n",
    "      else:\n",
    "        for px in kp_per_box:\n",
    "          x, y = px\n",
    "          for obj in detected_objects:\n",
    "            px.set_FromROI(obj)\n",
    "          \n",
    "    # select a reference frame from previous frames using a sliding-window strategy\n",
    "    reference_frame = self.get_reference_frame()\n",
    "\n",
    "    #### Step 2 : track detected objects\n",
    "\n",
    "    # track detected objects against landmarks with updated positions (bbox)\n",
    "    rets = self._map.track(self.cur, detected_objects)\n",
    "    if DEBUG and len(rets) > 0:\n",
    "      mtched, unmtched_landmarks, unmtched_detections = rets\n",
    "      \n",
    "      draw_group = {}\n",
    "      for mtch in mtched:\n",
    "        fid = mtch[0].frame.seq\n",
    "        frame = draw_group.get(fid, None)\n",
    "        if frame is None:\n",
    "          draw_group[fid] = {\n",
    "              'frame': mtch[0].frame,\n",
    "              'matches': [],\n",
    "              'unmatches': []\n",
    "          }\n",
    "        draw_group[fid]['matches'].append(mtch)\n",
    "\n",
    "      for unmtch in unmtched_landmarks:\n",
    "        fid = unmtch.frame.seq\n",
    "        frame = draw_group.get(fid, None)\n",
    "        if frame is None:\n",
    "          draw_group[fid] = {\n",
    "              'frame': unmtch.frame,\n",
    "              'matches': [], # mtched detection and landmarks\n",
    "              'unmatches': [] # unmtched landmarks\n",
    "          }\n",
    "        draw_group[fid]['unmatches'].append(unmtch)\n",
    "\n",
    "      for fid, kw in draw_group.items():\n",
    "        masked_img = self._renderer.drawMatchedROI(self.cur.img, \n",
    "                                                   kw['frame'].img, \n",
    "                                                   kw['matches'], \n",
    "                                                   kw['unmatches'], \n",
    "                                                   unmtched_detections)\n",
    "        print(\"Plot matches between cur frame %s and frame %s\" % (self.cur, kw['frame']))\n",
    "        self._renderer.render(masked_img)\n",
    "\n",
    "    if reference_frame is None:\n",
    "      # see key frames selection principles discussion: https://github.com/raulmur/ORB_SLAM2/issues/872\n",
    "      self._map.add_new_key_frame(self.cur)\n",
    "      return\n",
    "\n",
    "    #### Step 3 : triangulation\n",
    "\n",
    "    # compute matched key points\n",
    "    mtches = []\n",
    "    matcher = FlannBasedKeyPointsMatcher().set_FromFrame(self.cur).set_FromFeatures(cur_kp_features)\n",
    "    matcher.Init()\n",
    "    self.cur.matchers['FlannBasedMatcher'] = matcher \n",
    "\n",
    "    last_frame = self.get_last_frame()\n",
    "    kps_mtched, mask = matcher.mtch(last_frame.kps_feats)\n",
    "\n",
    "    ## Freshly updated matcher\n",
    "    another_matcher = OpticalFlowBasedKeyPointsMatcher() \\\n",
    "                      .set_FromFrame(self.cur) \\\n",
    "                      .set_FromKP(cur_kp) \\\n",
    "                      .set_FromFeatures(cur_kp_features)\n",
    "\n",
    "    self.cur.matchers['OpticalFlowBasedKeyPointsMatcher'] = another_matcher\n",
    "\n",
    "    kps_2_mtched, mask_2 = another_matcher.mtch(last_frame.kps, last_frame.kps_feats, last_frame)\n",
    "\n",
    "    ## add mtched keypoints to map\n",
    "    \n",
    "    logging.info(\"key points matched (kps_mtched) : %d\" % len(kps_mtched))\n",
    "    # do visualization of orb key points matching\n",
    "    # draw matches from cur to frame\n",
    "    if DEBUG:\n",
    "      masked_img = self._renderer.drawMatchesKnn(self.cur.img, cur_kp, \n",
    "                                               last_frame.img, last_frame.kps, \n",
    "                                               kps_mtched, \n",
    "                                               mask)\n",
    "      # draw key point matches\n",
    "      # self._renderer.render(masked_img)\n",
    "\n",
    "      flow_mask = self._renderer.drawOpticalFlow(last_frame)\n",
    "      masked_2_img = cv2.add(last_frame.img, flow_mask)\n",
    "      masked_2_img = self._renderer.drawMatches(\n",
    "          self.cur.img, cur_kp,\n",
    "          masked_2_img, last_frame.kps,\n",
    "          kps_2_mtched\n",
    "      )\n",
    "\n",
    "      self._renderer.render(masked_2_img)\n",
    "\n",
    "\n",
    "    # estimate camera motion and poses with depths of keypoints for monocular camera using\n",
    "    # RandSacWrapper + linear implementation of epipolar equation solver\n",
    "\n",
    "    # estimate R, t\n",
    "    try:\n",
    "      R, t = self.resolve_pose(cur_kp, last_frame.kps, kps_2_mtched)\n",
    "    except Exception as e:\n",
    "      print(e)\n",
    "      print(\"skip the frame ...\")\n",
    "      return\n",
    "     \n",
    "    # @todo : TODO check whether enough key points remain for triangulation\n",
    "    if True:\n",
    "      self._map.add_new_key_frame(self.cur)\n",
    "    \n",
    "    # done\n",
    "    print(\"R:\", R)\n",
    "    print(\"t:\", t)\n",
    "\n",
    "    anchor_point = self.cur_anchor_point + t\n",
    "    camera = Camera(self._map.device, R, t, anchor_point)\n",
    "\n",
    "    self.cur.set_camera(camera)\n",
    "    # check solver relative epipolar constraint precision\n",
    "    if DEBUG:\n",
    "      t_mat = camera.t_SE3ToR3()\n",
    "      K = camera.K\n",
    "      E = t_mat.dot(R)\n",
    "      CumulativeErr = 0.\n",
    "      AvgErr = 0.\n",
    "      num_test_cases = len(kps_2_mtched)\n",
    "      logging.info(\"check solver relative epipolar constraint precision, test cases %d\" % num_test_cases)\n",
    "      for i, mtch in enumerate(kps_2_mtched):\n",
    "        x1 = camera.reproj(last_frame.kps[mtch.trainIdx]).data().reshape(3,1)\n",
    "        x2 = camera.reproj(cur_kp[mtch.queryIdx]).data().reshape(3,1)\n",
    "        epipolar_constraint_eq = x2.T.dot(E.dot(x1))\n",
    "        print(\"Epipolar constraint equation <%d(cur), %d(last_frame)>: %f\" % (\n",
    "          mtch.queryIdx, mtch.trainIdx, epipolar_constraint_eq))\n",
    "        CumulativeErr += epipolar_constraint_eq\n",
    "        \n",
    "      AvgErr = CumulativeErr / num_test_cases\n",
    "      print(\"Average epipolar constraint error : %f\" % AvgErr)\n",
    "      \n",
    "    \n",
    "    # check computed pose with ground truth (Optional)\n",
    "    \n",
    "    \n",
    "    # info fusion with IMU for a high accuracy R, t\n",
    "    # @todo TODO\n",
    "\n",
    "    # update frame matrix stack\n",
    "    self.cur.R0 = last_frame.R0.dot(R)  # compose rotations with a matrix product\n",
    "    self.cur.t0 = last_frame.t0 + t\n",
    "    \n",
    "    self.cur.R1 = R\n",
    "    self.cur.t1 = t\n",
    "\n",
    "    # estimate keypoints depth and compute projection errors\n",
    "    points, cur_cam_pts, last_cam_pts = self.triangulate(self.cur, last_frame, R, t, cur_kp, last_frame.kps, kps_2_mtched)\n",
    "    \n",
    "    l = len(points)\n",
    "    for i in range(l):\n",
    "      # depth value\n",
    "      z = points[i].z\n",
    "      p = points[i]\n",
    "      \n",
    "      \n",
    "      if DEBUG:\n",
    "        p_last_camera = Point3D(points[i].x / z, points[i].y / z, 1)\n",
    "        dp1 = Point3D(\n",
    "          p_last_camera.x - last_cam_pts[i].x,\n",
    "          p_last_camera.y - last_cam_pts[i].y\n",
    "        )\n",
    "        logging.info(\"check reprojection error ... \")\n",
    "        print(\"last frame reproj err: (%f, %f)\" % (dp1.x, dp1.y) )\n",
    "        \n",
    "        v1 = np.array([points[i].x, points[i].y, points[i].z]).reshape(3,1)\n",
    "        p_cur_camera = R.dot(v1) + t\n",
    "        print(\"(R*v1).shape:\", (R.dot(v1)).shape)\n",
    "        print(\"t.shape\", t.shape)\n",
    "        print(\"p_cur_camera.shape:\", p_cur_camera.shape)\n",
    "        z1 = p_cur_camera[2]\n",
    "        p_cur_camera = Point3D(p_cur_camera[0] / z1, p_cur_camera[1] / z1, 1)\n",
    "        print(\"cur_cam_pts[%d].x:\" % i, cur_cam_pts[i].x)\n",
    "        print(\"cur_cam_pts[%d].y:\" % i, cur_cam_pts[i].y)\n",
    "        dp2 = Point3D(\n",
    "          p_cur_camera.x - cur_cam_pts[i].x,\n",
    "          p_cur_camera.y - cur_cam_pts[i].y\n",
    "        )\n",
    "        print(\"dp2.x\", dp2.x)\n",
    "        print(\"dp2.y\", dp2.y)\n",
    "        print(\"cur frame reproj err: (%f, %f)\" % (dp2.x, dp2.y))\n",
    "\n",
    "    # reconstruct camera instance attached to the frame\n",
    "    \n",
    "    # return\n",
    "    # register KeyPoints\n",
    "    self._map.registerKeyPoints(self.cur, cur_kp, last_frame, last_frame.kps, cur_cam_pts, last_cam_pts, points, kps_2_mtched, mask_2)\n",
    "    \n",
    "    \n",
    "    pass\n",
    "\n",
    "  # @todo : TODO\n",
    "  def get_reference_frame(self):\n",
    "    cur_seq = self.cur.seq\n",
    "    if cur_seq - self._map.slidingWindow < 0 or not self.collect_enough_kp_mtches():\n",
    "      if cur_seq == 1:\n",
    "        return None\n",
    "      else: \n",
    "        if len(self._map.get_active_frames()) > 0:\n",
    "          return self._map.get_active_frames()[0]\n",
    "        else:\n",
    "          # no frames now\n",
    "          return None\n",
    "    else:\n",
    "      pass\n",
    "\n",
    "    return None\n",
    "\n",
    "  # @todo : TODO\n",
    "  def get_last_frame(self):\n",
    "    frames = self._map.get_frames()\n",
    "    if len(frames) == 1:\n",
    "      return frames[-1]\n",
    "    else:\n",
    "      return frames[-2]\n",
    "\n",
    "  # @todo : TODO\n",
    "  def collect_enough_kp_mtches(self):\n",
    "    return False\n",
    "\n",
    "  # @todo : TODO\n",
    "  # see the ORB-SLAM implementation for reference; I simply borrowed the name from it\n",
    "  def track(self, img):\n",
    "    frame = Frame().set_FromImg(img)\n",
    "    self.cur = frame\n",
    "    self._map.add(frame)\n",
    "\n",
    "    if not self.isInitalized():\n",
    "      # to collect enough frames\n",
    "      if len(self._map.get_frames()) > self._map.slidingWindow:\n",
    "        self._init()\n",
    "      # perform primitive tracker based on previous frames\n",
    "      self._coldStartTrack()\n",
    "      return\n",
    "    # \n",
    "    else:      \n",
    "      pass\n",
    "\n",
    "  def resolve_pose(self, cur_kps, last_kps, matches1to2):\n",
    "    # intrinsic parameters\n",
    "    K = self._map.device.K\n",
    "\n",
    "    p1, p2 = [], []\n",
    "\n",
    "    \n",
    "    print(\"num of cur frame keypoints: %d\" % len(cur_kps))\n",
    "    print(\"num of last frame keypoints: %d\" % len(last_kps))\n",
    "    print(\"num of mtches: %d\" % len(matches1to2))\n",
    "    \n",
    "    # last frame\n",
    "    for i, mtch in enumerate(matches1to2):\n",
    "      p1.append(last_kps[mtch.trainIdx].pt)\n",
    "      p2.append(cur_kps[mtch.queryIdx].pt)\n",
    "\n",
    "    # find fundamental matrix\n",
    "    F, inliers = cv2.findFundamentalMat(np.float32(p1), np.float32(p2),\n",
    "                                        method=cv2.FM_RANSAC)\n",
    "\n",
    "    # If many key points are sampled from a planar object,\n",
    "    # find a homography instead (linear solver wrapped in RANSAC).\n",
    "\n",
    "    # Otherwise estimate the essential matrix and decompose it\n",
    "    # to recover the camera pose.\n",
    "\n",
    "    print(\"K\", K)\n",
    "    if F is None:\n",
    "      print(\"No solution found using RANSAC. Falling back to the FM_LMEDS method ...\")\n",
    "      F, inliers = cv2.findFundamentalMat(np.float32(p1), np.float32(p2),\n",
    "                                         method=cv2.FM_LMEDS)\n",
    "    print(\"F\", F)\n",
    "    \n",
    "    F0 = F\n",
    "    if F is None:\n",
    "      raise Exception(\"No solutions!\")\n",
    "    \n",
    "    num_solutions = F.shape[0] // 3\n",
    "    if num_solutions > 1:\n",
    "      print(\"choose the first solution\")\n",
    "      F = F0[0:3,0:3]\n",
    "    estimated_E = np.dot(K.T, np.dot(F, K))\n",
    "\n",
    "    # decompose the essential matrix\n",
    "    mask = inliers.astype(bool).flatten()\n",
    "    _, R, t, _ = cv2.recoverPose(estimated_E, np.array(p1)[mask], np.array(p2)[mask], K)\n",
    "    return R, t\n",
    "\n",
    "  \n",
    "  def triangulate(self, cur, last_frame, R, t, cur_kps, last_kps, matches):\n",
    "    pose_matl = np.c_[last_frame.R0, last_frame.t0]\n",
    "    pose_matr = np.c_[R, t]\n",
    "   \n",
    "    cam = cur.camera\n",
    "    \n",
    "    p1, p2 = [], []\n",
    "    \n",
    "    points = []\n",
    "    cur_cam_pts = []\n",
    "    last_cam_pts = []\n",
    "    \n",
    "    # last frame\n",
    "    for i, mtch in enumerate(matches):\n",
    "      p1.append(cam.reproj(last_kps[mtch.trainIdx].pt).data())\n",
    "      p2.append(cam.reproj(cur_kps[mtch.queryIdx].pt).data())\n",
    "    \n",
    "    p1 = np.float32(p1)\n",
    "    p2 = np.float32(p2)\n",
    "    print(\"p1.shape\", p1.T.shape)\n",
    "    ret = cv2.triangulatePoints(pose_matl, pose_matr, p1.T[0:2], p2.T[0:2])\n",
    "    # normalize homogeneous coordinates (divide by the 4th component)\n",
    "    ret /= ret[3]\n",
    "    \n",
    "    l = ret.shape[1]  # ret is 4 x N; iterate over the N triangulated points\n",
    "    for i in range(l):\n",
    "      vec = ret[:,i].T\n",
    "      points.append(Point3D(vec[0],vec[1],vec[2]))\n",
    "      last_cam_pts.append(Point3D(p1[i][0], p1[i][1], 1))\n",
    "      cur_cam_pts.append(Point3D(p2[i][0], p2[i][1], 1))\n",
    "    \n",
    "    return points, cur_cam_pts, last_cam_pts\n",
    "  \n",
    "  # @todo : TODO\n",
    "  def isInitalized(self):\n",
    "    return False\n",
    "\n",
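    "# Hedged sketch: the DEBUG block in _coldStartTrack checks the epipolar\n",
    "# constraint x2^T * E * x1 ~= 0 with E = [t]_x * R. A minimal numeric\n",
    "# instance with an identity rotation and a unit translation along x:\n",
    "def _toy_epipolar_residual():\n",
    "  import numpy as np\n",
    "  R = np.eye(3)\n",
    "  t = np.array([1., 0., 0.])\n",
    "  t_x = np.array([[0., -t[2], t[1]],\n",
    "                  [t[2], 0., -t[0]],\n",
    "                  [-t[1], t[0], 0.]])   # skew-symmetric cross-product matrix\n",
    "  E = t_x.dot(R)\n",
    "  P = np.array([0., 0., 5.])       # point in the first camera frame\n",
    "  x1 = P / P[2]                    # normalized image coordinate, camera 1\n",
    "  P2 = R.dot(P) + t                # same point in the second camera frame\n",
    "  x2 = P2 / P2[2]                  # normalized image coordinate, camera 2\n",
    "  return float(x2.dot(E.dot(x1)))  # ideally 0 for a perfect match\n",
    "\n",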
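    "# Hedged sketch: cv2.triangulatePoints returns a 4 x N matrix of homogeneous\n",
    "# columns; triangulate() above divides by the 4th row to recover Euclidean\n",
    "# points, as demonstrated here.\n",
    "def _toy_dehomogenize():\n",
    "  import numpy as np\n",
    "  ret = np.array([[2., 8.],\n",
    "                  [4., 4.],\n",
    "                  [6., 2.],\n",
    "                  [2., 2.]])   # 4 x N homogeneous points (N = 2)\n",
    "  ret = ret / ret[3]\n",
    "  return ret[:3].T.tolist()    # N x 3 Euclidean coordinates\n",
    "\n",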
    "# Semantic-vision-supported tracker for multiple obstacles\n",
    "class SVSOTracker(Tracker):\n",
    "  def __init__(self):\n",
    "    super().__init__()\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "sOFNOn-S23Oo"
   },
   "source": [
    "##### Play With SVSOTracker"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 151,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 67
    },
    "colab_type": "code",
    "id": "QT-BE0fp0DkL",
    "outputId": "6c5b9db3-fe10-4400-f2dc-862c1d57be8e"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "\n",
      "\n",
      "2020-04-01 01:14:06,743 [INFO]:<ipython-input-151-8477809ffe69>.root, in line 117 >> exec tracker to track motions\n",
      "<Frame 1>, Extracting ROI features ...\n",
      "Processing 1 images\n",
      "image                    shape: (480, 640, 3)         min:    0.00000  max:  255.00000  uint8\n",
      "molded_images            shape: (1, 1024, 1024, 3)    min: -123.70000  max:  151.10000  float64\n",
      "image_metas              shape: (1, 93)               min:    0.00000  max: 1024.00000  float64\n",
      "anchors                  shape: (1, 261888, 4)        min:   -0.35390  max:    1.29134  float32\n",
      "2020-04-01 01:14:08,454 [INFO]:<ipython-input-150-637944d7d6ca>.tracker, in line 180 >> SemanticFeatureExtractor Constructing deep feature extration model ...\n",
      "2020-04-01 01:14:08,462 [INFO]:<ipython-input-150-637944d7d6ca>.tracker, in line 206 >> SemanticFeatureExtractor Construction of deep feature extraction model complete.\n",
      "roi_features shape:  (12, 7, 7, 256)\n",
      "=== Detection Results ===\n",
      "#1 type(cup), score:0.992234, bbox: (369, 234, 416, 301)\n",
      "2020-04-01 01:14:11,333 [INFO]:<ipython-input-150-637944d7d6ca>.tracker, in line 255 >> SemanticFeatureExtractor extracting orb key points and features for detection\n",
      "categorical_features shp: (51, 2)\n",
      "         LABEL  int\n",
      "0           BG    0\n",
      "1       person   31\n",
      "2          cat   11\n",
      "3          dog   18\n",
      "4     backpack    2\n",
      "5     umbrella   48\n",
      "6      handbag   22\n",
      "7          tie   43\n",
      "8     suitcase   41\n",
      "9  sports ball   40\n",
      "#2 type(keyboard), score:0.989099, bbox: (410, 298, 636, 448)\n",
      "2020-04-01 01:14:11,355 [INFO]:<ipython-input-150-637944d7d6ca>.tracker, in line 255 >> SemanticFeatureExtractor extracting orb key points and features for detection\n",
      "#3 type(book), score:0.975806, bbox: (84, 101, 206, 183)\n",
      "2020-04-01 01:14:11,372 [INFO]:<ipython-input-150-637944d7d6ca>.tracker, in line 255 >> SemanticFeatureExtractor extracting orb key points and features for detection\n",
      "#4 type(cell phone), score:0.967612, bbox: (143, 303, 192, 340)\n",
      "2020-04-01 01:14:11,374 [INFO]:<ipython-input-150-637944d7d6ca>.tracker, in line 255 >> SemanticFeatureExtractor extracting orb key points and features for detection\n",
      "#5 type(tv), score:0.966128, bbox: (369, 11, 636, 231)\n",
      "2020-04-01 01:14:11,390 [INFO]:<ipython-input-150-637944d7d6ca>.tracker, in line 255 >> SemanticFeatureExtractor extracting orb key points and features for detection\n",
      "#6 type(book), score:0.874373, bbox: (65, 189, 185, 283)\n",
      "2020-04-01 01:14:11,407 [INFO]:<ipython-input-150-637944d7d6ca>.tracker, in line 255 >> SemanticFeatureExtractor extracting orb key points and features for detection\n",
      "#7 type(book), score:0.868076, bbox: (19, 191, 180, 448)\n",
      "2020-04-01 01:14:11,427 [INFO]:<ipython-input-150-637944d7d6ca>.tracker, in line 255 >> SemanticFeatureExtractor extracting orb key points and features for detection\n",
      "#8 type(chair), score:0.854792, bbox: (55, 2, 232, 87)\n",
      "2020-04-01 01:14:11,443 [INFO]:<ipython-input-150-637944d7d6ca>.tracker, in line 255 >> SemanticFeatureExtractor extracting orb key points and features for detection\n",
      "#9 type(laptop), score:0.812607, bbox: (357, 165, 640, 451)\n",
      "2020-04-01 01:14:11,463 [INFO]:<ipython-input-150-637944d7d6ca>.tracker, in line 255 >> SemanticFeatureExtractor extracting orb key points and features for detection\n",
      "#10 type(book), score:0.806445, bbox: (214, 204, 283, 276)\n",
      "2020-04-01 01:14:11,479 [INFO]:<ipython-input-150-637944d7d6ca>.tracker, in line 255 >> SemanticFeatureExtractor extracting orb key points and features for detection\n",
      "#11 type(dining table), score:0.759895, bbox: (15, 49, 640, 480)\n",
      "2020-04-01 01:14:11,501 [INFO]:<ipython-input-150-637944d7d6ca>.tracker, in line 255 >> SemanticFeatureExtractor extracting orb key points and features for detection\n",
      "#12 type(book), score:0.718977, bbox: (287, 328, 456, 476)\n",
      "2020-04-01 01:14:11,517 [INFO]:<ipython-input-150-637944d7d6ca>.tracker, in line 255 >> SemanticFeatureExtractor extracting orb key points and features for detection\n",
      "bbox:  [234 369 301 416]\n",
      "bbox:  [303 143 340 192]\n",
      "2020-04-01 01:14:11,518 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 115 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:11,518 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 138 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:11,519 [WARNING]:<ipython-input-150-637944d7d6ca>.root, in line 1642 >> There is no detected key points(camera space) associated to <F#1.Ob#1(cup:0.99)>\n",
      "2020-04-01 01:14:11,519 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 115 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:11,519 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 138 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:11,520 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 115 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:11,520 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 138 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:11,520 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 115 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:11,521 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 138 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:11,521 [WARNING]:<ipython-input-150-637944d7d6ca>.root, in line 1642 >> There is no detected key points(camera space) associated to <F#1.Ob#4(cell phone:0.97)>\n",
      "2020-04-01 01:14:11,521 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 115 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:11,522 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 138 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:11,522 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 115 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:11,522 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 138 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:11,523 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 115 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:11,523 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 138 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:11,523 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 115 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:11,524 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 138 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:11,524 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 115 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:11,524 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 138 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:11,525 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 115 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:11,525 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 138 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2020-04-01 01:14:11,525 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 115 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:11,526 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 138 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:11,526 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 115 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:11,526 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 138 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:11,527 [INFO]:<ipython-input-150-637944d7d6ca>.tracker, in line 778 >> RuntimeBlock Add 12 detected objects to initialize landmarks\n",
      "2020-04-01 01:14:11,529 [INFO]:<ipython-input-151-8477809ffe69>.root, in line 117 >> exec tracker to track motions\n",
      "<Frame 2>, Extracting ROI features ...\n",
      "Processing 1 images\n",
      "image                    shape: (480, 640, 3)         min:    0.00000  max:  255.00000  uint8\n",
      "molded_images            shape: (1, 1024, 1024, 3)    min: -123.70000  max:  151.10000  float64\n",
      "image_metas              shape: (1, 93)               min:    0.00000  max: 1024.00000  float64\n",
      "anchors                  shape: (1, 261888, 4)        min:   -0.35390  max:    1.29134  float32\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages/ipykernel_launcher.py:1642: DeprecationWarning: The 'warn' function is deprecated, use 'warning' instead\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2020-04-01 01:14:13,311 [INFO]:<ipython-input-150-637944d7d6ca>.tracker, in line 180 >> SemanticFeatureExtractor Constructing deep feature extration model ...\n",
      "2020-04-01 01:14:13,319 [INFO]:<ipython-input-150-637944d7d6ca>.tracker, in line 206 >> SemanticFeatureExtractor Construction of deep feature extraction model complete.\n",
      "roi_features shape:  (9, 7, 7, 256)\n",
      "=== Detection Results ===\n",
      "#1 type(cup), score:0.995555, bbox: (365, 239, 415, 307)\n",
      "2020-04-01 01:14:16,200 [INFO]:<ipython-input-150-637944d7d6ca>.tracker, in line 255 >> SemanticFeatureExtractor extracting orb key points and features for detection\n",
      "#2 type(keyboard), score:0.986600, bbox: (409, 310, 632, 456)\n",
      "2020-04-01 01:14:16,218 [INFO]:<ipython-input-150-637944d7d6ca>.tracker, in line 255 >> SemanticFeatureExtractor extracting orb key points and features for detection\n",
      "#3 type(book), score:0.951678, bbox: (58, 100, 207, 195)\n",
      "2020-04-01 01:14:16,237 [INFO]:<ipython-input-150-637944d7d6ca>.tracker, in line 255 >> SemanticFeatureExtractor extracting orb key points and features for detection\n",
      "#4 type(book), score:0.938902, bbox: (73, 193, 184, 287)\n",
      "2020-04-01 01:14:16,254 [INFO]:<ipython-input-150-637944d7d6ca>.tracker, in line 255 >> SemanticFeatureExtractor extracting orb key points and features for detection\n",
      "#5 type(tv), score:0.900105, bbox: (370, 9, 635, 239)\n",
      "2020-04-01 01:14:16,270 [INFO]:<ipython-input-150-637944d7d6ca>.tracker, in line 255 >> SemanticFeatureExtractor extracting orb key points and features for detection\n",
      "#6 type(cell phone), score:0.875890, bbox: (145, 309, 192, 348)\n",
      "2020-04-01 01:14:16,286 [INFO]:<ipython-input-150-637944d7d6ca>.tracker, in line 255 >> SemanticFeatureExtractor extracting orb key points and features for detection\n",
      "#7 type(book), score:0.814892, bbox: (214, 213, 283, 285)\n",
      "2020-04-01 01:14:16,303 [INFO]:<ipython-input-150-637944d7d6ca>.tracker, in line 255 >> SemanticFeatureExtractor extracting orb key points and features for detection\n",
      "#8 type(dining table), score:0.801651, bbox: (11, 72, 640, 477)\n",
      "2020-04-01 01:14:16,325 [INFO]:<ipython-input-150-637944d7d6ca>.tracker, in line 255 >> SemanticFeatureExtractor extracting orb key points and features for detection\n",
      "#9 type(laptop), score:0.775325, bbox: (349, 10, 638, 247)\n",
      "2020-04-01 01:14:16,342 [INFO]:<ipython-input-150-637944d7d6ca>.tracker, in line 255 >> SemanticFeatureExtractor extracting orb key points and features for detection\n",
      "2020-04-01 01:14:16,342 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 115 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:16,343 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 138 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:16,343 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 115 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:16,343 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 138 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:16,344 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 115 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:16,344 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 138 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:16,345 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 115 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:16,345 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 138 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:16,345 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 115 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:16,346 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 138 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:16,347 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 115 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:16,347 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 138 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:16,347 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 115 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:16,347 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 138 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:16,348 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 115 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:16,348 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 138 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:16,348 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 115 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:16,349 [WARNING]:<ipython-input-11-1f2bdfc0b3d4>.root, in line 138 >> The measures indice is none. please set it using 'Kalmanfilter.populate_measures_constrain(self, indice)' later.\n",
      "2020-04-01 01:14:16,349 [INFO]:<ipython-input-150-637944d7d6ca>.tracker, in line 846 >> RuntimeBlock Retrieve 12 active and viewable landmarks\n",
      "{1: <F#1.Ob#1(cup)>, 2: <F#1.Ob#2(keyboard)>, 3: <F#1.Ob#3(book)>, 4: <F#1.Ob#4(cell phone)>, 5: <F#1.Ob#5(tv)>, 6: <F#1.Ob#6(book)>, 7: <F#1.Ob#7(book)>, 8: <F#1.Ob#8(chair)>, 9: <F#1.Ob#9(laptop)>, 10: <F#1.Ob#10(book)>, 11: <F#1.Ob#11(dining table)>, 12: <F#1.Ob#12(book)>}\n",
      "2020-04-01 01:14:16,498 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 945 >> <Landmark 1> , bbox <369, 234, 416, 301>; predicted bbox by KalmanFilter: <369, 234, 416, 301>\n",
      "2020-04-01 01:14:16,498 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 952 >> <Landmark 1> , bbox velocity (delta/frame) <0, 0, 0, 0>, predicted by KalmanFilter\n",
      "2020-04-01 01:14:16,499 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 961 >> <landmark 1> , bbox velocity (delta/frame) <-1, 6, 0, -5>, predicted by OpticalFlowBBoxPredictor\n",
      "2020-04-01 01:14:16,499 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 945 >> <Landmark 2> , bbox <410, 298, 636, 448>; predicted bbox by KalmanFilter: <410, 298, 636, 448>\n",
      "2020-04-01 01:14:16,500 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 952 >> <Landmark 2> , bbox velocity (delta/frame) <0, 0, 0, 0>, predicted by KalmanFilter\n",
      "2020-04-01 01:14:16,500 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 961 >> <landmark 2> , bbox velocity (delta/frame) <-1, 4, 1, -4>, predicted by OpticalFlowBBoxPredictor\n",
      "2020-04-01 01:14:16,500 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 945 >> <Landmark 3> , bbox <84, 101, 206, 183>; predicted bbox by KalmanFilter: <84, 101, 206, 183>\n",
      "2020-04-01 01:14:16,501 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 952 >> <Landmark 3> , bbox velocity (delta/frame) <0, 0, 0, 0>, predicted by KalmanFilter\n",
      "2020-04-01 01:14:16,501 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 961 >> <landmark 3> , bbox velocity (delta/frame) <3, 6, -1, 0>, predicted by OpticalFlowBBoxPredictor\n",
      "2020-04-01 01:14:16,502 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 945 >> <Landmark 4> , bbox <143, 303, 192, 340>; predicted bbox by KalmanFilter: <143, 303, 192, 340>\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2020-04-01 01:14:16,502 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 952 >> <Landmark 4> , bbox velocity (delta/frame) <0, 0, 0, 0>, predicted by KalmanFilter\n",
      "2020-04-01 01:14:16,502 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 961 >> <landmark 4> , bbox velocity (delta/frame) <1, 5, 0, 0>, predicted by OpticalFlowBBoxPredictor\n",
      "2020-04-01 01:14:16,503 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 945 >> <Landmark 5> , bbox <369, 11, 636, 231>; predicted bbox by KalmanFilter: <369, 11, 636, 231>\n",
      "2020-04-01 01:14:16,503 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 952 >> <Landmark 5> , bbox velocity (delta/frame) <0, 0, 0, 0>, predicted by KalmanFilter\n",
      "2020-04-01 01:14:16,503 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 961 >> <landmark 5> , bbox velocity (delta/frame) <0, 7, 1, 2>, predicted by OpticalFlowBBoxPredictor\n",
      "2020-04-01 01:14:16,504 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 945 >> <Landmark 6> , bbox <65, 189, 185, 283>; predicted bbox by KalmanFilter: <65, 189, 185, 283>\n",
      "2020-04-01 01:14:16,504 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 952 >> <Landmark 6> , bbox velocity (delta/frame) <0, 0, 0, 0>, predicted by KalmanFilter\n",
      "2020-04-01 01:14:16,504 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 961 >> <landmark 6> , bbox velocity (delta/frame) <1, 5, 0, 0>, predicted by OpticalFlowBBoxPredictor\n",
      "2020-04-01 01:14:16,505 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 945 >> <Landmark 7> , bbox <19, 191, 180, 448>; predicted bbox by KalmanFilter: <19, 191, 180, 448>\n",
      "2020-04-01 01:14:16,505 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 952 >> <Landmark 7> , bbox velocity (delta/frame) <0, 0, 0, 0>, predicted by KalmanFilter\n",
      "2020-04-01 01:14:16,505 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 961 >> <landmark 7> , bbox velocity (delta/frame) <0, 0, 0, 4>, predicted by OpticalFlowBBoxPredictor\n",
      "2020-04-01 01:14:16,505 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 945 >> <Landmark 8> , bbox <55, 2, 232, 87>; predicted bbox by KalmanFilter: <55, 2, 232, 87>\n",
      "2020-04-01 01:14:16,506 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 952 >> <Landmark 8> , bbox velocity (delta/frame) <0, 0, 0, 0>, predicted by KalmanFilter\n",
      "2020-04-01 01:14:16,506 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 961 >> <landmark 8> , bbox velocity (delta/frame) <2, 4, -2, -4>, predicted by OpticalFlowBBoxPredictor\n",
      "2020-04-01 01:14:16,506 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 945 >> <Landmark 9> , bbox <357, 165, 640, 451>; predicted bbox by KalmanFilter: <357, 165, 640, 451>\n",
      "2020-04-01 01:14:16,506 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 952 >> <Landmark 9> , bbox velocity (delta/frame) <0, 0, 0, 0>, predicted by KalmanFilter\n",
      "2020-04-01 01:14:16,507 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 961 >> <landmark 9> , bbox velocity (delta/frame) <-1, 5, 1, -5>, predicted by OpticalFlowBBoxPredictor\n",
      "2020-04-01 01:14:16,507 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 945 >> <Landmark 10> , bbox <214, 204, 283, 276>; predicted bbox by KalmanFilter: <214, 204, 283, 276>\n",
      "2020-04-01 01:14:16,508 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 952 >> <Landmark 10> , bbox velocity (delta/frame) <0, 0, 0, 0>, predicted by KalmanFilter\n",
      "2020-04-01 01:14:16,508 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 961 >> <landmark 10> , bbox velocity (delta/frame) <1, 4, -1, 0>, predicted by OpticalFlowBBoxPredictor\n",
      "2020-04-01 01:14:16,509 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 945 >> <Landmark 11> , bbox <15, 49, 640, 480>; predicted bbox by KalmanFilter: <15, 49, 640, 480>\n",
      "2020-04-01 01:14:16,509 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 952 >> <Landmark 11> , bbox velocity (delta/frame) <0, 0, 0, 0>, predicted by KalmanFilter\n",
      "2020-04-01 01:14:16,509 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 961 >> <landmark 11> , bbox velocity (delta/frame) <0, 0, 0, 0>, predicted by OpticalFlowBBoxPredictor\n",
      "2020-04-01 01:14:16,510 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 945 >> <Landmark 12> , bbox <287, 328, 456, 476>; predicted bbox by KalmanFilter: <287, 328, 456, 476>\n",
      "2020-04-01 01:14:16,510 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 952 >> <Landmark 12> , bbox velocity (delta/frame) <0, 0, 0, 0>, predicted by KalmanFilter\n",
      "2020-04-01 01:14:16,510 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 961 >> <landmark 12> , bbox velocity (delta/frame) <0, 5, 0, -5>, predicted by OpticalFlowBBoxPredictor\n",
      "12 landmarks, 9 detections, forms 12 x 9 cost matrix :\n",
      "[[4.40602974e-02 1.00000000e+03 1.00000000e+03 1.00000000e+03\n",
      "  1.00000000e+03 1.00000000e+03 1.00000000e+03 1.00000000e+03\n",
      "  1.00000000e+03]\n",
      " [1.00000000e+03 3.05090722e-02 1.00000000e+03 1.00000000e+03\n",
      "  1.00000000e+03 1.00000000e+03 1.00000000e+03 1.00000000e+03\n",
      "  1.00000000e+03]\n",
      " [1.00000000e+03 1.00000000e+03 1.15603776e-01 1.71454142e-01\n",
      "  1.00000000e+03 1.00000000e+03 1.81244407e-01 1.00000000e+03\n",
      "  1.00000000e+03]\n",
      " [1.00000000e+03 1.00000000e+03 1.00000000e+03 1.00000000e+03\n",
      "  1.00000000e+03            inf 1.00000000e+03 1.00000000e+03\n",
      "  1.00000000e+03]\n",
      " [1.00000000e+03 1.00000000e+03 1.00000000e+03 1.00000000e+03\n",
      "  8.53985110e-03 1.00000000e+03 1.00000000e+03 1.00000000e+03\n",
      "  1.00000000e+03]\n",
      " [1.00000000e+03 1.00000000e+03 2.34698928e-01            inf\n",
      "  1.00000000e+03 1.00000000e+03 2.05308988e-01 1.00000000e+03\n",
      "  1.00000000e+03]\n",
      " [1.00000000e+03 1.00000000e+03 3.33380949e-01            inf\n",
      "  1.00000000e+03 1.00000000e+03 1.16193444e-01 1.00000000e+03\n",
      "  1.00000000e+03]\n",
      " [1.00000000e+03 1.00000000e+03 1.00000000e+03 1.00000000e+03\n",
      "  1.00000000e+03 1.00000000e+03 1.00000000e+03 1.00000000e+03\n",
      "  1.00000000e+03]\n",
      " [1.00000000e+03 1.00000000e+03 1.00000000e+03 1.00000000e+03\n",
      "  1.00000000e+03 1.00000000e+03 1.00000000e+03 1.00000000e+03\n",
      "  1.01899446e-01]\n",
      " [1.00000000e+03 1.00000000e+03 1.57312219e-01 2.06383128e-01\n",
      "  1.00000000e+03 1.00000000e+03 1.05338887e-01 1.00000000e+03\n",
      "  1.00000000e+03]\n",
      " [1.00000000e+03 1.00000000e+03 1.00000000e+03 1.00000000e+03\n",
      "  1.00000000e+03 1.00000000e+03 1.00000000e+03 2.21794474e-02\n",
      "  1.00000000e+03]\n",
      " [1.00000000e+03 1.00000000e+03 1.39475774e-01 6.24578251e-02\n",
      "  1.00000000e+03 1.00000000e+03 1.21776904e-01 1.00000000e+03\n",
      "  1.00000000e+03]]\n",
      "IoUs:\n",
      "                                <F#2.Ob#13(cup:1.00)>  \\\n",
      "<F#1.Ob#1(cup:0.99)>                         0.045518   \n",
      "<F#1.Ob#2(keyboard:0.99)>                    0.000000   \n",
      "<F#1.Ob#3(book:0.98)>                        0.000000   \n",
      "<F#1.Ob#4(cell phone:0.97)>                  0.000000   \n",
      "<F#1.Ob#5(tv:0.97)>                          0.000000   \n",
      "<F#1.Ob#6(book:0.87)>                        0.000000   \n",
      "<F#1.Ob#7(book:0.87)>                        0.000000   \n",
      "<F#1.Ob#8(chair:0.85)>                       0.000000   \n",
      "<F#1.Ob#9(laptop:0.81)>                      0.000000   \n",
      "<F#1.Ob#10(book:0.81)>                       0.000000   \n",
      "<F#1.Ob#11(dining table:0.76)>               0.000000   \n",
      "<F#1.Ob#12(book:0.72)>                       0.000000   \n",
      "\n",
      "                                <F#2.Ob#14(keyboard:0.99)>  \\\n",
      "<F#1.Ob#1(cup:0.99)>                              0.000000   \n",
      "<F#1.Ob#2(keyboard:0.99)>                         0.033796   \n",
      "<F#1.Ob#3(book:0.98)>                             0.000000   \n",
      "<F#1.Ob#4(cell phone:0.97)>                       0.000000   \n",
      "<F#1.Ob#5(tv:0.97)>                               0.000000   \n",
      "<F#1.Ob#6(book:0.87)>                             0.000000   \n",
      "<F#1.Ob#7(book:0.87)>                             0.000000   \n",
      "<F#1.Ob#8(chair:0.85)>                            0.000000   \n",
      "<F#1.Ob#9(laptop:0.81)>                           0.000000   \n",
      "<F#1.Ob#10(book:0.81)>                            0.000000   \n",
      "<F#1.Ob#11(dining table:0.76)>                    0.000000   \n",
      "<F#1.Ob#12(book:0.72)>                            0.000000   \n",
      "\n",
      "                                <F#2.Ob#15(book:0.95)>  \\\n",
      "<F#1.Ob#1(cup:0.99)>                          0.000000   \n",
      "<F#1.Ob#2(keyboard:0.99)>                     0.000000   \n",
      "<F#1.Ob#3(book:0.98)>                         0.130183   \n",
      "<F#1.Ob#4(cell phone:0.97)>                   0.000000   \n",
      "<F#1.Ob#5(tv:0.97)>                           0.000000   \n",
      "<F#1.Ob#6(book:0.87)>                         0.752525   \n",
      "<F#1.Ob#7(book:0.87)>                         1.356035   \n",
      "<F#1.Ob#8(chair:0.85)>                        0.000000   \n",
      "<F#1.Ob#9(laptop:0.81)>                       0.000000   \n",
      "<F#1.Ob#10(book:0.81)>                        0.500000   \n",
      "<F#1.Ob#11(dining table:0.76)>                0.000000   \n",
      "<F#1.Ob#12(book:0.72)>                        0.500000   \n",
      "\n",
      "                                <F#2.Ob#16(book:0.94)>  <F#2.Ob#17(tv:0.90)>  \\\n",
      "<F#1.Ob#1(cup:0.99)>                               0.0              0.000000   \n",
      "<F#1.Ob#2(keyboard:0.99)>                          0.0              0.000000   \n",
      "<F#1.Ob#3(book:0.98)>                              0.5              0.000000   \n",
      "<F#1.Ob#4(cell phone:0.97)>                        0.0              0.000000   \n",
      "<F#1.Ob#5(tv:0.97)>                                0.0              0.009229   \n",
      "<F#1.Ob#6(book:0.87)>                              inf              0.000000   \n",
      "<F#1.Ob#7(book:0.87)>                              inf              0.000000   \n",
      "<F#1.Ob#8(chair:0.85)>                             0.0              0.000000   \n",
      "<F#1.Ob#9(laptop:0.81)>                            0.0              0.000000   \n",
      "<F#1.Ob#10(book:0.81)>                             0.5              0.000000   \n",
      "<F#1.Ob#11(dining table:0.76)>                     0.0              0.000000   \n",
      "<F#1.Ob#12(book:0.72)>                             0.5              0.000000   \n",
      "\n",
      "                                <F#2.Ob#18(cell phone:0.88)>  \\\n",
      "<F#1.Ob#1(cup:0.99)>                                     0.0   \n",
      "<F#1.Ob#2(keyboard:0.99)>                                0.0   \n",
      "<F#1.Ob#3(book:0.98)>                                    0.0   \n",
      "<F#1.Ob#4(cell phone:0.97)>                              inf   \n",
      "<F#1.Ob#5(tv:0.97)>                                      0.0   \n",
      "<F#1.Ob#6(book:0.87)>                                    0.0   \n",
      "<F#1.Ob#7(book:0.87)>                                    0.0   \n",
      "<F#1.Ob#8(chair:0.85)>                                   0.0   \n",
      "<F#1.Ob#9(laptop:0.81)>                                  0.0   \n",
      "<F#1.Ob#10(book:0.81)>                                   0.0   \n",
      "<F#1.Ob#11(dining table:0.76)>                           0.0   \n",
      "<F#1.Ob#12(book:0.72)>                                   0.0   \n",
      "\n",
      "                                <F#2.Ob#19(book:0.81)>  \\\n",
      "<F#1.Ob#1(cup:0.99)>                          0.000000   \n",
      "<F#1.Ob#2(keyboard:0.99)>                     0.000000   \n",
      "<F#1.Ob#3(book:0.98)>                         0.500000   \n",
      "<F#1.Ob#4(cell phone:0.97)>                   0.000000   \n",
      "<F#1.Ob#5(tv:0.97)>                           0.000000   \n",
      "<F#1.Ob#6(book:0.87)>                         0.500000   \n",
      "<F#1.Ob#7(book:0.87)>                         0.500000   \n",
      "<F#1.Ob#8(chair:0.85)>                        0.000000   \n",
      "<F#1.Ob#9(laptop:0.81)>                       0.000000   \n",
      "<F#1.Ob#10(book:0.81)>                        0.113924   \n",
      "<F#1.Ob#11(dining table:0.76)>                0.000000   \n",
      "<F#1.Ob#12(book:0.72)>                        0.500000   \n",
      "\n",
      "                                <F#2.Ob#20(dining table:0.80)>  \\\n",
      "<F#1.Ob#1(cup:0.99)>                                  0.000000   \n",
      "<F#1.Ob#2(keyboard:0.99)>                             0.000000   \n",
      "<F#1.Ob#3(book:0.98)>                                 0.000000   \n",
      "<F#1.Ob#4(cell phone:0.97)>                           0.000000   \n",
      "<F#1.Ob#5(tv:0.97)>                                   0.000000   \n",
      "<F#1.Ob#6(book:0.87)>                                 0.000000   \n",
      "<F#1.Ob#7(book:0.87)>                                 0.000000   \n",
      "<F#1.Ob#8(chair:0.85)>                                0.000000   \n",
      "<F#1.Ob#9(laptop:0.81)>                               0.000000   \n",
      "<F#1.Ob#10(book:0.81)>                                0.000000   \n",
      "<F#1.Ob#11(dining table:0.76)>                        0.024036   \n",
      "<F#1.Ob#12(book:0.72)>                                0.000000   \n",
      "\n",
      "                                <F#2.Ob#21(laptop:0.78)>  \n",
      "<F#1.Ob#1(cup:0.99)>                            0.000000  \n",
      "<F#1.Ob#2(keyboard:0.99)>                       0.000000  \n",
      "<F#1.Ob#3(book:0.98)>                           0.000000  \n",
      "<F#1.Ob#4(cell phone:0.97)>                     0.000000  \n",
      "<F#1.Ob#5(tv:0.97)>                             0.000000  \n",
      "<F#1.Ob#6(book:0.87)>                           0.000000  \n",
      "<F#1.Ob#7(book:0.87)>                           0.000000  \n",
      "<F#1.Ob#8(chair:0.85)>                          0.000000  \n",
      "<F#1.Ob#9(laptop:0.81)>                         0.281861  \n",
      "<F#1.Ob#10(book:0.81)>                          0.000000  \n",
      "<F#1.Ob#11(dining table:0.76)>                  0.000000  \n",
      "<F#1.Ob#12(book:0.72)>                          0.000000  \n",
      "Corr:\n",
      "                                <F#2.Ob#13(cup:1.00)>  \\\n",
      "<F#1.Ob#1(cup:0.99)>                         0.967971   \n",
      "<F#1.Ob#2(keyboard:0.99)>                    0.000000   \n",
      "<F#1.Ob#3(book:0.98)>                        0.000000   \n",
      "<F#1.Ob#4(cell phone:0.97)>                  0.000000   \n",
      "<F#1.Ob#5(tv:0.97)>                          0.000000   \n",
      "<F#1.Ob#6(book:0.87)>                        0.000000   \n",
      "<F#1.Ob#7(book:0.87)>                        0.000000   \n",
      "<F#1.Ob#8(chair:0.85)>                       0.000000   \n",
      "<F#1.Ob#9(laptop:0.81)>                      0.000000   \n",
      "<F#1.Ob#10(book:0.81)>                       0.000000   \n",
      "<F#1.Ob#11(dining table:0.76)>               0.000000   \n",
      "<F#1.Ob#12(book:0.72)>                       0.000000   \n",
      "\n",
      "                                <F#2.Ob#14(keyboard:0.99)>  \\\n",
      "<F#1.Ob#1(cup:0.99)>                              0.000000   \n",
      "<F#1.Ob#2(keyboard:0.99)>                         0.902753   \n",
      "<F#1.Ob#3(book:0.98)>                             0.000000   \n",
      "<F#1.Ob#4(cell phone:0.97)>                       0.000000   \n",
      "<F#1.Ob#5(tv:0.97)>                               0.000000   \n",
      "<F#1.Ob#6(book:0.87)>                             0.000000   \n",
      "<F#1.Ob#7(book:0.87)>                             0.000000   \n",
      "<F#1.Ob#8(chair:0.85)>                            0.000000   \n",
      "<F#1.Ob#9(laptop:0.81)>                           0.000000   \n",
      "<F#1.Ob#10(book:0.81)>                            0.000000   \n",
      "<F#1.Ob#11(dining table:0.76)>                    0.000000   \n",
      "<F#1.Ob#12(book:0.72)>                            0.000000   \n",
      "\n",
      "                                <F#2.Ob#15(book:0.95)>  \\\n",
      "<F#1.Ob#1(cup:0.99)>                          0.000000   \n",
      "<F#1.Ob#2(keyboard:0.99)>                     0.000000   \n",
      "<F#1.Ob#3(book:0.98)>                         0.888013   \n",
      "<F#1.Ob#4(cell phone:0.97)>                   0.000000   \n",
      "<F#1.Ob#5(tv:0.97)>                           0.000000   \n",
      "<F#1.Ob#6(book:0.87)>                         0.311882   \n",
      "<F#1.Ob#7(book:0.87)>                         0.245850   \n",
      "<F#1.Ob#8(chair:0.85)>                        0.000000   \n",
      "<F#1.Ob#9(laptop:0.81)>                       0.000000   \n",
      "<F#1.Ob#10(book:0.81)>                        0.314624   \n",
      "<F#1.Ob#11(dining table:0.76)>                0.000000   \n",
      "<F#1.Ob#12(book:0.72)>                        0.278952   \n",
      "\n",
      "                                <F#2.Ob#16(book:0.94)>  <F#2.Ob#17(tv:0.90)>  \\\n",
      "<F#1.Ob#1(cup:0.99)>                          0.000000              0.000000   \n",
      "<F#1.Ob#2(keyboard:0.99)>                     0.000000              0.000000   \n",
      "<F#1.Ob#3(book:0.98)>                         0.342908              0.000000   \n",
      "<F#1.Ob#4(cell phone:0.97)>                   0.000000              0.000000   \n",
      "<F#1.Ob#5(tv:0.97)>                           0.000000              0.925338   \n",
      "<F#1.Ob#6(book:0.87)>                         0.919975              0.000000   \n",
      "<F#1.Ob#7(book:0.87)>                         0.276497              0.000000   \n",
      "<F#1.Ob#8(chair:0.85)>                        0.000000              0.000000   \n",
      "<F#1.Ob#9(laptop:0.81)>                       0.000000              0.000000   \n",
      "<F#1.Ob#10(book:0.81)>                        0.412766              0.000000   \n",
      "<F#1.Ob#11(dining table:0.76)>                0.000000              0.000000   \n",
      "<F#1.Ob#12(book:0.72)>                        0.124916              0.000000   \n",
      "\n",
      "                                <F#2.Ob#18(cell phone:0.88)>  \\\n",
      "<F#1.Ob#1(cup:0.99)>                                 0.00000   \n",
      "<F#1.Ob#2(keyboard:0.99)>                            0.00000   \n",
      "<F#1.Ob#3(book:0.98)>                                0.00000   \n",
      "<F#1.Ob#4(cell phone:0.97)>                          0.83476   \n",
      "<F#1.Ob#5(tv:0.97)>                                  0.00000   \n",
      "<F#1.Ob#6(book:0.87)>                                0.00000   \n",
      "<F#1.Ob#7(book:0.87)>                                0.00000   \n",
      "<F#1.Ob#8(chair:0.85)>                               0.00000   \n",
      "<F#1.Ob#9(laptop:0.81)>                              0.00000   \n",
      "<F#1.Ob#10(book:0.81)>                               0.00000   \n",
      "<F#1.Ob#11(dining table:0.76)>                       0.00000   \n",
      "<F#1.Ob#12(book:0.72)>                               0.00000   \n",
      "\n",
      "                                <F#2.Ob#19(book:0.81)>  \\\n",
      "<F#1.Ob#1(cup:0.99)>                          0.000000   \n",
      "<F#1.Ob#2(keyboard:0.99)>                     0.000000   \n",
      "<F#1.Ob#3(book:0.98)>                         0.362489   \n",
      "<F#1.Ob#4(cell phone:0.97)>                   0.000000   \n",
      "<F#1.Ob#5(tv:0.97)>                           0.000000   \n",
      "<F#1.Ob#6(book:0.87)>                         0.410618   \n",
      "<F#1.Ob#7(book:0.87)>                         0.232387   \n",
      "<F#1.Ob#8(chair:0.85)>                        0.000000   \n",
      "<F#1.Ob#9(laptop:0.81)>                       0.000000   \n",
      "<F#1.Ob#10(book:0.81)>                        0.924641   \n",
      "<F#1.Ob#11(dining table:0.76)>                0.000000   \n",
      "<F#1.Ob#12(book:0.72)>                        0.243554   \n",
      "\n",
      "                                <F#2.Ob#20(dining table:0.80)>  \\\n",
      "<F#1.Ob#1(cup:0.99)>                                  0.000000   \n",
      "<F#1.Ob#2(keyboard:0.99)>                             0.000000   \n",
      "<F#1.Ob#3(book:0.98)>                                 0.000000   \n",
      "<F#1.Ob#4(cell phone:0.97)>                           0.000000   \n",
      "<F#1.Ob#5(tv:0.97)>                                   0.000000   \n",
      "<F#1.Ob#6(book:0.87)>                                 0.000000   \n",
      "<F#1.Ob#7(book:0.87)>                                 0.000000   \n",
      "<F#1.Ob#8(chair:0.85)>                                0.000000   \n",
      "<F#1.Ob#9(laptop:0.81)>                               0.000000   \n",
      "<F#1.Ob#10(book:0.81)>                                0.000000   \n",
      "<F#1.Ob#11(dining table:0.76)>                        0.922772   \n",
      "<F#1.Ob#12(book:0.72)>                                0.000000   \n",
      "\n",
      "                                <F#2.Ob#21(laptop:0.78)>  \n",
      "<F#1.Ob#1(cup:0.99)>                            0.000000  \n",
      "<F#1.Ob#2(keyboard:0.99)>                       0.000000  \n",
      "<F#1.Ob#3(book:0.98)>                           0.000000  \n",
      "<F#1.Ob#4(cell phone:0.97)>                     0.000000  \n",
      "<F#1.Ob#5(tv:0.97)>                             0.000000  \n",
      "<F#1.Ob#6(book:0.87)>                           0.000000  \n",
      "<F#1.Ob#7(book:0.87)>                           0.000000  \n",
      "<F#1.Ob#8(chair:0.85)>                          0.000000  \n",
      "<F#1.Ob#9(laptop:0.81)>                         0.361524  \n",
      "<F#1.Ob#10(book:0.81)>                          0.000000  \n",
      "<F#1.Ob#11(dining table:0.76)>                  0.000000  \n",
      "<F#1.Ob#12(book:0.72)>                          0.000000  \n",
      "assignment:\n",
      "                                <F#2.Ob#13(cup:1.00)>  \\\n",
      "<F#1.Ob#1(cup:0.99)>                              1.0   \n",
      "<F#1.Ob#2(keyboard:0.99)>                         0.0   \n",
      "<F#1.Ob#3(book:0.98)>                             0.0   \n",
      "<F#1.Ob#4(cell phone:0.97)>                       0.0   \n",
      "<F#1.Ob#5(tv:0.97)>                               0.0   \n",
      "<F#1.Ob#6(book:0.87)>                             0.0   \n",
      "<F#1.Ob#7(book:0.87)>                             0.0   \n",
      "<F#1.Ob#8(chair:0.85)>                            0.0   \n",
      "<F#1.Ob#9(laptop:0.81)>                           0.0   \n",
      "<F#1.Ob#10(book:0.81)>                            0.0   \n",
      "<F#1.Ob#11(dining table:0.76)>                    0.0   \n",
      "<F#1.Ob#12(book:0.72)>                            0.0   \n",
      "\n",
      "                                <F#2.Ob#14(keyboard:0.99)>  \\\n",
      "<F#1.Ob#1(cup:0.99)>                                   0.0   \n",
      "<F#1.Ob#2(keyboard:0.99)>                              1.0   \n",
      "<F#1.Ob#3(book:0.98)>                                  0.0   \n",
      "<F#1.Ob#4(cell phone:0.97)>                            0.0   \n",
      "<F#1.Ob#5(tv:0.97)>                                    0.0   \n",
      "<F#1.Ob#6(book:0.87)>                                  0.0   \n",
      "<F#1.Ob#7(book:0.87)>                                  0.0   \n",
      "<F#1.Ob#8(chair:0.85)>                                 0.0   \n",
      "<F#1.Ob#9(laptop:0.81)>                                0.0   \n",
      "<F#1.Ob#10(book:0.81)>                                 0.0   \n",
      "<F#1.Ob#11(dining table:0.76)>                         0.0   \n",
      "<F#1.Ob#12(book:0.72)>                                 0.0   \n",
      "\n",
      "                                <F#2.Ob#15(book:0.95)>  \\\n",
      "<F#1.Ob#1(cup:0.99)>                               0.0   \n",
      "<F#1.Ob#2(keyboard:0.99)>                          0.0   \n",
      "<F#1.Ob#3(book:0.98)>                              1.0   \n",
      "<F#1.Ob#4(cell phone:0.97)>                        0.0   \n",
      "<F#1.Ob#5(tv:0.97)>                                0.0   \n",
      "<F#1.Ob#6(book:0.87)>                              0.0   \n",
      "<F#1.Ob#7(book:0.87)>                              0.0   \n",
      "<F#1.Ob#8(chair:0.85)>                             0.0   \n",
      "<F#1.Ob#9(laptop:0.81)>                            0.0   \n",
      "<F#1.Ob#10(book:0.81)>                             0.0   \n",
      "<F#1.Ob#11(dining table:0.76)>                     0.0   \n",
      "<F#1.Ob#12(book:0.72)>                             0.0   \n",
      "\n",
      "                                <F#2.Ob#16(book:0.94)>  <F#2.Ob#17(tv:0.90)>  \\\n",
      "<F#1.Ob#1(cup:0.99)>                               0.0                   0.0   \n",
      "<F#1.Ob#2(keyboard:0.99)>                          0.0                   0.0   \n",
      "<F#1.Ob#3(book:0.98)>                              0.0                   0.0   \n",
      "<F#1.Ob#4(cell phone:0.97)>                        0.0                   0.0   \n",
      "<F#1.Ob#5(tv:0.97)>                                0.0                   1.0   \n",
      "<F#1.Ob#6(book:0.87)>                              0.0                   0.0   \n",
      "<F#1.Ob#7(book:0.87)>                              0.0                   0.0   \n",
      "<F#1.Ob#8(chair:0.85)>                             0.0                   0.0   \n",
      "<F#1.Ob#9(laptop:0.81)>                            0.0                   0.0   \n",
      "<F#1.Ob#10(book:0.81)>                             0.0                   0.0   \n",
      "<F#1.Ob#11(dining table:0.76)>                     0.0                   0.0   \n",
      "<F#1.Ob#12(book:0.72)>                             1.0                   0.0   \n",
      "\n",
      "                                <F#2.Ob#18(cell phone:0.88)>  \\\n",
      "<F#1.Ob#1(cup:0.99)>                                     0.0   \n",
      "<F#1.Ob#2(keyboard:0.99)>                                0.0   \n",
      "<F#1.Ob#3(book:0.98)>                                    0.0   \n",
      "<F#1.Ob#4(cell phone:0.97)>                              0.0   \n",
      "<F#1.Ob#5(tv:0.97)>                                      0.0   \n",
      "<F#1.Ob#6(book:0.87)>                                    0.0   \n",
      "<F#1.Ob#7(book:0.87)>                                    0.0   \n",
      "<F#1.Ob#8(chair:0.85)>                                   0.0   \n",
      "<F#1.Ob#9(laptop:0.81)>                                  0.0   \n",
      "<F#1.Ob#10(book:0.81)>                                   0.0   \n",
      "<F#1.Ob#11(dining table:0.76)>                           0.0   \n",
      "<F#1.Ob#12(book:0.72)>                                   0.0   \n",
      "\n",
      "                                <F#2.Ob#19(book:0.81)>  \\\n",
      "<F#1.Ob#1(cup:0.99)>                               0.0   \n",
      "<F#1.Ob#2(keyboard:0.99)>                          0.0   \n",
      "<F#1.Ob#3(book:0.98)>                              0.0   \n",
      "<F#1.Ob#4(cell phone:0.97)>                        0.0   \n",
      "<F#1.Ob#5(tv:0.97)>                                0.0   \n",
      "<F#1.Ob#6(book:0.87)>                              0.0   \n",
      "<F#1.Ob#7(book:0.87)>                              0.0   \n",
      "<F#1.Ob#8(chair:0.85)>                             0.0   \n",
      "<F#1.Ob#9(laptop:0.81)>                            0.0   \n",
      "<F#1.Ob#10(book:0.81)>                             1.0   \n",
      "<F#1.Ob#11(dining table:0.76)>                     0.0   \n",
      "<F#1.Ob#12(book:0.72)>                             0.0   \n",
      "\n",
      "                                <F#2.Ob#20(dining table:0.80)>  \\\n",
      "<F#1.Ob#1(cup:0.99)>                                       0.0   \n",
      "<F#1.Ob#2(keyboard:0.99)>                                  0.0   \n",
      "<F#1.Ob#3(book:0.98)>                                      0.0   \n",
      "<F#1.Ob#4(cell phone:0.97)>                                0.0   \n",
      "<F#1.Ob#5(tv:0.97)>                                        0.0   \n",
      "<F#1.Ob#6(book:0.87)>                                      0.0   \n",
      "<F#1.Ob#7(book:0.87)>                                      0.0   \n",
      "<F#1.Ob#8(chair:0.85)>                                     0.0   \n",
      "<F#1.Ob#9(laptop:0.81)>                                    0.0   \n",
      "<F#1.Ob#10(book:0.81)>                                     0.0   \n",
      "<F#1.Ob#11(dining table:0.76)>                             1.0   \n",
      "<F#1.Ob#12(book:0.72)>                                     0.0   \n",
      "\n",
      "                                <F#2.Ob#21(laptop:0.78)>  \n",
      "<F#1.Ob#1(cup:0.99)>                                 0.0  \n",
      "<F#1.Ob#2(keyboard:0.99)>                            0.0  \n",
      "<F#1.Ob#3(book:0.98)>                                0.0  \n",
      "<F#1.Ob#4(cell phone:0.97)>                          0.0  \n",
      "<F#1.Ob#5(tv:0.97)>                                  0.0  \n",
      "<F#1.Ob#6(book:0.87)>                                0.0  \n",
      "<F#1.Ob#7(book:0.87)>                                0.0  \n",
      "<F#1.Ob#8(chair:0.85)>                               0.0  \n",
      "<F#1.Ob#9(laptop:0.81)>                              1.0  \n",
      "<F#1.Ob#10(book:0.81)>                               0.0  \n",
      "<F#1.Ob#11(dining table:0.76)>                       0.0  \n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<F#1.Ob#12(book:0.72)>                               0.0  \n",
      "8 mtches, 3 unmtched landmarks, 0 unmtched detections\n",
      "2020-04-01 01:14:16,528 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 967 >> <Landmark 1> update projected pose from [234 369 301 416] => [239 365 307 415]\n",
      "2020-04-01 01:14:16,529 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 967 >> <Landmark 2> update projected pose from [298 410 448 636] => [310 409 456 632]\n",
      "2020-04-01 01:14:16,530 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 967 >> <Landmark 3> update projected pose from [101  84 183 206] => [100  58 195 207]\n",
      "2020-04-01 01:14:16,530 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 967 >> <Landmark 5> update projected pose from [ 11 369 231 636] => [  9 370 239 635]\n",
      "2020-04-01 01:14:16,531 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 967 >> <Landmark 9> update projected pose from [165 357 451 640] => [ 10 349 247 638]\n",
      "2020-04-01 01:14:16,531 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 967 >> <Landmark 10> update projected pose from [204 214 276 283] => [213 214 285 283]\n",
      "2020-04-01 01:14:16,532 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 967 >> <Landmark 11> update projected pose from [ 49  15 480 640] => [ 72  11 477 640]\n",
      "2020-04-01 01:14:16,532 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 967 >> <Landmark 12> update projected pose from [328 287 476 456] => [193  73 287 184]\n",
      "2020-04-01 01:14:16,533 [INFO]:<ipython-input-150-637944d7d6ca>.tracker, in line 832 >> RuntimeBlock No new landmarks found.\n",
      "Plot matches between cur frame <Frame 2> and frame <Frame 1>\n",
      "2020-04-01 01:14:16,611 [WARNING]:<ipython-input-150-637944d7d6ca>.root, in line 1244 >> No module named 'google.colab'\n",
      "Filtering out samples ...\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages/ipykernel_launcher.py:97: RuntimeWarning: divide by zero encountered in true_divide\n",
      "/home/yiakwy/anaconda3/envs/py36/lib/python3.6/site-packages/ipykernel_launcher.py:1244: DeprecationWarning: The 'warn' function is deprecated, use 'warning' instead\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "2020-04-01 01:14:16,804 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 1719 >> key points matched (kps_mtched) : 0\n",
      "2020-04-01 01:14:16,809 [WARNING]:<ipython-input-150-637944d7d6ca>.root, in line 1244 >> No module named 'google.colab'\n",
      "num of cur frame keypoints: 48\n",
      "num of last frame keypoints: 70\n",
      "num of mtches: 9\n",
      "K [[517.306408   0.       318.64304 ]\n",
      " [  0.       516.469215   0.      ]\n",
      " [  0.         0.         1.      ]]\n",
      "F [[-1.67713269e-05 -9.20019943e-04 -1.18760135e-01]\n",
      " [ 9.75357944e-04 -7.02169883e-05  6.54228829e-02]\n",
      " [ 1.08151914e-01 -5.67653489e-02  1.00000000e+00]]\n",
      "R: [[ 0.99946452 -0.02628584  0.01948649]\n",
      " [ 0.02540096  0.9986933   0.04434505]\n",
      " [-0.02062667 -0.04382633  0.99882621]]\n",
      "t: [[-0.57482   ]\n",
      " [-0.17312099]\n",
      " [ 0.79975689]]\n",
      "2020-04-01 01:14:16,855 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 1772 >> check solver relative epipolar constraint precision, test cases 9\n",
      "Epipolar contrain eqution <2(cur), 2(last_frame)>: -0.042088\n",
      "Epipolar contrain eqution <3(cur), 3(last_frame)>: -0.045618\n",
      "Epipolar contrain eqution <5(cur), 3(last_frame)>: -0.046228\n",
      "Epipolar contrain eqution <6(cur), 0(last_frame)>: -0.042708\n",
      "Epipolar contrain eqution <25(cur), 19(last_frame)>: -0.040019\n",
      "Epipolar contrain eqution <27(cur), 19(last_frame)>: -0.046298\n",
      "Epipolar contrain eqution <31(cur), 49(last_frame)>: -0.015212\n",
      "Epipolar contrain eqution <33(cur), 53(last_frame)>: -0.014883\n",
      "Epipolar contrain eqution <37(cur), 33(last_frame)>: -0.008422\n",
      "Avervage error of epipolar constrain error : -0.033497\n",
      "p1.shape (3, 9)\n",
      "2020-04-01 01:14:16,861 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 1814 >> check reprojection error ... \n",
      "last frame reproj err: (0.012699, -0.013590)\n",
      "(R*v1).shape: (3, 1)\n",
      "t.shape (3, 1)\n",
      "p_cur_camera.shape: (3, 1)\n",
      "cur_cam_pts[0].x: 0.28826612\n",
      "cur_cam_pts[0].y: 0.76967305\n",
      "dp2.x [-0.01297423]\n",
      "dp2.y [0.01321856]\n",
      "cur frame reproj err: (-0.012974, 0.013219)\n",
      "2020-04-01 01:14:16,864 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 1814 >> check reprojection error ... \n",
      "last frame reproj err: (0.012373, -0.015145)\n",
      "(R*v1).shape: (3, 1)\n",
      "t.shape (3, 1)\n",
      "p_cur_camera.shape: (3, 1)\n",
      "cur_cam_pts[1].x: 0.38882834\n",
      "cur_cam_pts[1].y: 0.7247933\n",
      "dp2.x [-0.01261577]\n",
      "dp2.y [0.0148097]\n",
      "cur frame reproj err: (-0.012616, 0.014810)\n",
      "2020-04-01 01:14:16,866 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 1814 >> check reprojection error ... \n",
      "last frame reproj err: (0.012536, -0.015347)\n",
      "(R*v1).shape: (3, 1)\n",
      "t.shape (3, 1)\n",
      "p_cur_camera.shape: (3, 1)\n",
      "cur_cam_pts[2].x: 0.38920084\n",
      "cur_cam_pts[2].y: 0.72442704\n",
      "dp2.x [-0.01277987]\n",
      "dp2.y [0.0150128]\n",
      "cur frame reproj err: (-0.012780, 0.015013)\n",
      "2020-04-01 01:14:16,868 [INFO]:<ipython-input-150-637944d7d6ca>.root, in line 1814 >> check reprojection error ... \n",
      "last frame reproj err: (0.012459, -0.014127)\n",
      "(R*v1).shape: (3, 1)\n",
      "t.shape (3, 1)\n",
      "p_cur_camera.shape: (3, 1)\n",
      "cur_cam_pts[3].x: 0.31947768\n",
      "cur_cam_pts[3].y: 0.7410893\n",
      "dp2.x [-0.01280522]\n",
      "dp2.y [0.01385614]\n",
      "cur frame reproj err: (-0.012805, 0.013856)\n"
     ]
    }
   ],
   "source": [
    "import cv2\n",
    "import numpy as np\n",
    "from skimage.measure import find_contours\n",
    "import logging\n",
    "from io import StringIO\n",
    "import sys\n",
    "# logging.basicConfig(level=logging.INFO,\n",
    "#                     format=u\"%(asctime)s [%(levelname)s]:%(filename)s.%(name)s, in line %(lineno)s >> %(message)s\".encode('utf-8'))\n",
    "# logging.basicConfig(level=logging.INFO)\n",
    "\n",
    "# create stream handler and add it to root logger, otherwise your logging in the colab won't work\n",
    "console = logging.StreamHandler(stream=sys.stdout)\n",
    "\n",
    "root = logging.getLogger()\n",
    "for handler in root.handlers:\n",
    "  root.removeHandler(handler)\n",
    "\n",
    "fmt = logging.Formatter(\"%(asctime)s [%(levelname)s]:%(filename)s.%(name)s, in line %(lineno)s >> %(message)s\")\n",
    "console.setFormatter(fmt)\n",
    "console.setLevel(logging.INFO)\n",
    "root.addHandler(console)\n",
    "\n",
    "# This code is deprecated in favor of the new WebImageRenderer implementation\n",
    "def save_instances(image, boxes, masks, class_ids, class_names, scores):\n",
    "  n_instances = boxes.shape[0]\n",
    "  colors = visualize.random_colors(n_instances)\n",
    "  \n",
    "  if not n_instances:\n",
    "    print(\"No instances to display!\")\n",
    "  else:\n",
    "    assert boxes.shape[0] == masks.shape[-1] == class_ids.shape[0]\n",
    "    \n",
    "  masked_image = image.copy()\n",
    "  out = masked_image  # fallback so we never return an undefined value when nothing is drawn\n",
    "  for i in range(n_instances):\n",
    "    color = colors[i]\n",
    "    \n",
    "    # Bounding box\n",
    "    if not np.any(boxes[i]):\n",
    "      # Skip this instance. Has no bbox. Likely lost in image cropping.\n",
    "      continue\n",
    "    \n",
    "    y1, x1, y2, x2 = boxes[i]\n",
    "    mask = masks[:, :, i]\n",
    "    \n",
    "    class_id = class_ids[i]\n",
    "    score = scores[i]\n",
    "    label = class_names[class_id]\n",
    "    \n",
    "    if label != 'person':\n",
    "      continue\n",
    "    \n",
    "    caption = \"{} {:.3f}\".format(label, score) if score else label\n",
    "    \n",
    "    masked_image = visualize.apply_mask(masked_image, mask, color)\n",
    "    masked_image_with_boxes = cv2.rectangle(masked_image, (x1, y1), (x2, y2), color, 2)\n",
    "    \n",
    "    # Mask Polygon\n",
    "    padded_mask = np.zeros(\n",
    "      (mask.shape[0] + 2, mask.shape[1] + 2), dtype=np.uint8\n",
    "    )\n",
    "    padded_mask[1:-1, 1:-1] = mask\n",
    "    # contours = find_contours(padded_mask, 0.5)\n",
    "    # cv2.findContours returns (image, contours, hierarchy) in OpenCV 3.x but\n",
    "    # (contours, hierarchy) in 4.x; take the second-to-last element to support both\n",
    "    contours = cv2.findContours(padded_mask,\n",
    "                                cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]\n",
    "    \n",
    "    masked_image_with_contours_plus_boxes = cv2.drawContours(masked_image_with_boxes, contours, -1, (0, 255, 0), 1)\n",
    "    \n",
    "    out = cv2.putText(\n",
    "      masked_image_with_contours_plus_boxes, caption, (x1, y1-2), cv2.FONT_HERSHEY_COMPLEX, 1.2, color, 2\n",
    "    )\n",
    "    \n",
    "    masked_image = out\n",
    "  return out\n",
    "  \n",
    "\n",
    "import os\n",
    "\n",
    "VIDEO_DIR = \"{project_base}/log/video\".format(project_base=Project_base)\n",
    "OUTPUT = VIDEO_DIR\n",
    "SAVER = os.path.join(OUTPUT, \"saver\")\n",
    "\n",
    "if not os.path.isdir(SAVER):\n",
    "  os.makedirs(SAVER)\n",
    "\n",
    "capture = cv2.VideoCapture(os.path.join(VIDEO_DIR, \"freiburg1_xyz.mp4\"))\n",
    "\n",
    "batch_size = 1 # A 10 GB GPU fits one image per batch, i.e. the batch size should be set to 1. In the future I will develop a dynamic program to decide this value intelligently using the CUDA compute capability API.\n",
    "# frames = []\n",
    "\n",
    "# construct a map block, where typically raw map data should be read from this point\n",
    "block = RuntimeBlock()\n",
    "block.load_device(\"{project_base}/data/tum/camera1.yaml\".format(project_base=Project_base))\n",
    "\n",
    "# initialize a tracker\n",
    "tracker = SVSOTracker().set_FromMap(block)\n",
    "\n",
    "# clear onehot encoder cache\n",
    "SemanticFeatureExtractor.oneHotEncoder = None\n",
    "\n",
    "print(\"\\n\\n\")\n",
    "\n",
    "cnt = 0\n",
    "cnt0 = 0\n",
    "while True:\n",
    "  ret, frame = capture.read()\n",
    "  if not ret:\n",
    "    break\n",
    "  \n",
    "  cnt += 1\n",
    "  if cnt % 5 != 1:\n",
    "    pass #continue\n",
    "\n",
    "  if cnt < 55:\n",
    "    continue\n",
    "\n",
    "  # @todo : TODO fix encoding error\n",
    "  logging.info(\"exec tracker to track motions\")\n",
    "  # Hybrid of OpticalFlow and Kalman Filter predictor & deep features based Hungarian algorithm implementation, Updated on Feb 26 2020 by Author Lei Wang\n",
    "  tracker.track(frame)\n",
    "  cnt0 += 1\n",
    "  \n",
    "  if cnt >= 70:\n",
    "    break\n",
    "\n",
    "  if cnt0 >= 2:\n",
    "    break\n",
    "    \n",
    "  ### Deprecated code, in favor of the new WebImageRenderer implementation ###\n",
    "  # offline task \n",
    "  # rets = model.detect([frame], verbose=1)\n",
    "  rets = []\n",
    "\n",
    "  for i, ret in enumerate(rets):\n",
    "    r = ret\n",
    "    # visualize.display_instances(frame, r['rois'], r['masks'], r['class_ids'], class_names, r['scores'])\n",
    "    out_frame = save_instances(frame, r['rois'], r['masks'], r['class_ids'], class_names, r['scores'])\n",
    "  \n",
    "  # name = os.path.join(SAVER, \"{}.jpg\".format(cnt))\n",
    "  # cv2.imwrite(name, out_frame)\n",
    "  # frames.append(out_frame)\n",
    "  \n",
    "capture.release()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 34
    },
    "colab_type": "code",
    "id": "GpVgH5Ub2cWn",
    "outputId": "3c036d64-8f77-4c54-fe3a-12dac9b40d2c"
   },
   "outputs": [],
   "source": [
    "SAVER=os.path.join(Project_base, \"video/saver\")\n",
    "print(SAVER)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 918
    },
    "colab_type": "code",
    "id": "V1jj_xOS2ZeX",
    "outputId": "1a492ad8-f4d7-4eaf-e486-e75a7e1100b2"
   },
   "outputs": [],
   "source": [
    "!pwd\n",
    "!ls ./video/saver"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 644
    },
    "colab_type": "code",
    "id": "m6DaCLds83A-",
    "outputId": "a0b01e3d-da2f-490a-cccf-ed928b2e171c"
   },
   "outputs": [],
   "source": [
    "!rm -f ./video/saver/out.mp4\n",
    "!rm -f ./video/saver/out.avi\n",
    "\n",
    "import cv2\n",
    "import glob\n",
    "\n",
    "labeled_images = list(glob.iglob(os.path.join(SAVER, \"*.jpg\")))\n",
    "labeled_images = sorted(labeled_images, key=lambda x: int(os.path.split(x)[1].split('.')[0]))\n",
    "figsize = (16, 16)\n",
    "_, ax = plt.subplots(1, figsize=figsize)\n",
    "im = cv2.imread(labeled_images[200])\n",
    "height, width = im.shape[:2]\n",
    "size=(width, height)\n",
    "ax.set_ylim(height + 10, -10)\n",
    "ax.set_xlim(-10, width + 10)\n",
    "ax.axis('off')\n",
    "ax.imshow(im.astype(np.uint8))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "Om1oYJVMZB6S"
   },
   "source": [
    "##### Check the FPS to be used\n",
    "\n",
    "The parameter will be used inside `cv2.VideoWriter` to compose an output video from generated images."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 34
    },
    "colab_type": "code",
    "id": "lLwobgs635Jj",
    "outputId": "1d28c59d-4809-45ec-c8a0-535b7d0ccd7a"
   },
   "outputs": [],
   "source": [
    "video = cv2.VideoCapture(os.path.join(VIDEO_DIR, 'homework.mp4'))\n",
    "\n",
    "# Find OpenCV version\n",
    "(major_ver, minor_ver, subminor_ver) = (cv2.__version__).split('.')\n",
    "\n",
    "if int(major_ver)  < 3 :\n",
    "    fps = video.get(cv2.cv.CV_CAP_PROP_FPS)\n",
    "    print(\"Frames per second using video.get(cv2.cv.CV_CAP_PROP_FPS): {0}\".format(fps))\n",
    "else :\n",
    "    fps = video.get(cv2.CAP_PROP_FPS)\n",
    "    print(\"Frames per second using video.get(cv2.CAP_PROP_FPS) : {0}\".format(fps))\n",
    "\n",
    "video.release()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "0W53vdU-sPeZ"
   },
   "source": [
    "##### Generate Video with OpenCV2\n",
    "\n",
    "I extracted some useful [comments](https://github.com/ContinuumIO/anaconda-issues/issues/223) by GitHub user \\<jveitchmichaelis\\>:\n",
    "\n",
    "- Only use '.avi', it's just a container, the codec is the important thing.\n",
    "- Be careful with specifying frame sizes. In the constructor you need to pass the frame size as (column, row) e.g. 640x480. However the array you pass in, is indexed as (row, column). See in the above example how it's switched?\n",
    "- If your input image has a different size to the VideoWriter, it will fail (often silently)\n",
    "- Only pass in 8 bit images, manually cast your arrays if you have to (.astype('uint8'))\n",
    "- In fact, never mind, just always cast. Even if you load in images using cv2.imread, you need to cast to uint8...\n",
    "- MJPG will fail if you don't pass in a 3 channel, 8-bit image. I get an assertion failure for this at least.\n",
    "- XVID also requires a 3 channel image but fails silently if you don't do this.\n",
    "- H264 seems to be fine with a single channel image\n",
    "- If you need raw output, say from a machine vision camera, you can use 'DIB '. 'RAW ' or an empty codec sometimes works. Oddly if I use DIB, I get an ffmpeg error, but the video is saved fine. If I use RAW, there isn't an error, but Windows Video player won't open it. All are fine in VLC.\n",
    "\n",
    "My personal experience is that you need to make sure the size is in the order `(img.shape[1], img.shape[0])`, and that frames are 3-channel color images; a single-channel binary image is not allowed."
   ]
  },
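  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of the size-ordering pitfall above (assuming only NumPy is imported): `cv2.VideoWriter` expects the frame size as `(width, height)`, while a NumPy image array is indexed `(row, column)`, i.e. `(height, width)`, so the shape tuple must be reversed before passing it in.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "frame = np.zeros((480, 640, 3), dtype=np.uint8)  # 480 rows (height), 640 columns (width)\n",
    "height, width = frame.shape[:2]\n",
    "size = (width, height)  # what cv2.VideoWriter expects: (640, 480)\n",
    "```"
   ]
  },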
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "7QRGZsa-hTcU"
   },
   "outputs": [],
   "source": [
    "from cv2 import VideoWriter, VideoWriter_fourcc, imread, resize\n",
    "\n",
    "fourcc = VideoWriter_fourcc(*'MJPG')\n",
    "outputfn = os.path.join(SAVER, \"out.avi\")\n",
    "fps = 24 # reset manually\n",
    "\n",
    "vw = VideoWriter(outputfn, fourcc, float(fps), size)\n",
    "\n",
    "print(\"the size of images is : {}\".format(size))\n",
    "\n",
    "for im in labeled_images:\n",
    "  im = cv2.imread(im)\n",
    "  height, width, channel = im.shape\n",
    "  assert channel == 3\n",
    "  if size[0] != im.shape[1] or size[1] != im.shape[0]:\n",
    "    im = resize(im, size)\n",
    "  vw.write(im.astype(np.uint8))\n",
    "\n",
    "cv2.destroyAllWindows()\n",
    "vw.release()\n",
    "print(vw)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "Nmd0BHJR5DX-"
   },
   "source": [
    "##### Using FFmpeg\n",
    "\n",
    "FFmpeg is fast software for processing images and video. We can use its binaries and libraries for tasks such as converting a sequence of images into a video."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 564
    },
    "colab_type": "code",
    "id": "lawaKZgUmm2V",
    "outputId": "b4f66046-50d9-4711-d371-ec3b7dac0607"
   },
   "outputs": [],
   "source": [
    "# you can also generate the video using FFmpeg \n",
    "def save():\n",
    "    os.system(\"ffmpeg -r 24 -i '{input_dir}/%d.jpg' -vcodec mpeg4 -y {output_dir}\".format(input_dir=SAVER, output_dir=os.path.join(SAVER, \"out.mp4\")))\n",
    "\n",
    "!echo $SAVER/out.mp4\n",
    "# save()\n",
    "!ffmpeg -r 24 -i '$SAVER/%d.jpg' -vcodec mpeg4 -y $SAVER/out.mp4\n",
    "!ls $SAVER/*.mp4 -h"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "AZPRfhqOB050"
   },
   "source": [
    "##### Output to the local file system"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "27Jayfk12vqi"
   },
   "outputs": [],
   "source": [
    "from google.colab import files\n",
    "files.download(os.path.join(SAVER, \"out.mp4\"))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 171
    },
    "colab_type": "code",
    "id": "JWso5YaDXY0j",
    "outputId": "89fd1368-2ca4-4f1b-8b34-38a8f11aa0e4"
   },
   "outputs": [],
   "source": [
    "import io\n",
    "import base64\n",
    "from IPython.display import HTML\n",
    "\n",
    "video = io.open(os.path.join(SAVER, \"out.mp4\"), 'rb').read()\n",
    "encoded = base64.b64encode(video)\n",
    "\n",
    "HTML(data='''<video alt=\"test\" controls>\n",
    "                <source src=\"data:video/mp4;base64,{0}\" type=\"video/mp4\">\n",
    "             </video>'''.format(encoded.decode('ascii')))"
   ]
  }
 ],
 "metadata": {
  "accelerator": "GPU",
  "celltoolbar": "Raw Cell Format",
  "colab": {
   "collapsed_sections": [],
   "name": "semantic_tracker.ipynb",
   "provenance": [],
   "toc_visible": true
  },
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
