{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "papermill": {
     "duration": 0.005475,
     "end_time": "2023-09-28T06:31:36.113528",
     "exception": false,
     "start_time": "2023-09-28T06:31:36.108053",
     "status": "completed"
    },
    "tags": []
   },
   "source": [
    "# Category-level Object Pose Estimation using TAO CenterPose\n",
    "\n",
    "Transfer learning is the process of transferring learned features from one application to another. It is a commonly used training technique where you use a model trained on one task and re-train to use it on a different task. \n",
    "\n",
    "Train Adapt Optimize (TAO) Toolkit  is a simple and easy-to-use Python based AI toolkit for taking purpose-built AI models and customizing them with users' own data.\n",
    "\n",
    "<img align=\"center\" src=\"https://d29g4g2dyqv443.cloudfront.net/sites/default/files/akamai/TAO/tlt-tao-toolkit-bring-your-own-model-diagram.png\" width=\"1080\">\n",
    "\n",
    "## What is CenterPose?\n",
    "\n",
    "[CenterPose](https://arxiv.org/abs/2109.06161) a single-stage, keypoint-based approach for category-level object pose estimation, which operates on unknown object instances within a known category using a single RGB image input. The proposed network performs 2D object detection, detects 2D keypoints, estimates 6-DoF pose, and regresses relative 3D bounding cuboid dimensions.\n",
    "\n",
    "In TAO, two different types of backbone networks are supported: [DLA34](https://arxiv.org/pdf/1707.06484.pdf) and [FAN](https://arxiv.org/abs/2204.12451). We not only provide the standard Convolutional Neural Network (CNN) backbone, but also provide the most advanced network called FAN, which is also a transformer-based classification network. For more details about training FAN backbones, please refer to the classification pytorch notebook.\n",
    "\n",
    "### Sample prediction of CenterPose model\n",
    "| **Shoes** | **Bottle** |\n",
    "| :------:  | :------: |\n",
    "|<img align=\"center\" title=\"Shoes\" src=\"https://github.com/vpraveen-nv/model_card_images/blob/main/cv/purpose_built_models/centerpose/image%202.png?raw=true\" width=\"300\" height=\"400\"> |<img align=\"center\" title=\"Bottle\" src=\"https://github.com/vpraveen-nv/model_card_images/blob/main/cv/purpose_built_models/centerpose/image.png?raw=true\" width=\"300\" height=\"400\">|"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "papermill": {
     "duration": 0.003275,
     "end_time": "2023-09-28T06:31:36.121470",
     "exception": false,
     "start_time": "2023-09-28T06:31:36.118195",
     "status": "completed"
    },
    "tags": []
   },
   "source": [
    "## Learning Objectives\n",
    "\n",
    "In this notebook, you will learn how to leverage the simplicity and convenience of TAO to:\n",
    "\n",
    "* Take a pretrained model and train a CenterPose model on the Google Objectron dataset\n",
    "* Evaluate the trained model\n",
    "* Run inference with the trained model and visualize the result\n",
    "* Export the trained model to a .onnx file for deployment to DeepStream\n",
    "* Generate TensorRT engine using tao-deploy and verify the engine through evaluation\n",
    "\n",
    "At the end of this notebook, you will have generated a trained `centerpose` model\n",
    "which you may deploy via [DeepStream](https://developer.nvidia.com/deepstream-sdk).\n",
    "\n",
    "## Table of Contents\n",
    "\n",
    "This notebook shows an example usecase of CenterPose using Train Adapt Optimize (TAO) Toolkit.\n",
    "\n",
    "0. [Set up env variables and map drives](#head-0)\n",
    "1. [Installing the TAO launcher](#head-1)\n",
    "2. [Prepare dataset and pre-trained model](#head-2)\n",
    "3. [Provide training specification](#head-3)\n",
    "4. [Run TAO training](#head-4)\n",
    "5. [Evaluate a trained model](#head-5)\n",
    "6. [Visualize inferences](#head-6)\n",
    "7. [Deploy](#head-7)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "papermill": {
     "duration": 0.002853,
     "end_time": "2023-09-28T06:31:36.127828",
     "exception": false,
     "start_time": "2023-09-28T06:31:36.124975",
     "status": "completed"
    },
    "tags": []
   },
   "source": [
    "## 0. Set up env variables and map drives <a class=\"anchor\" id=\"head-0\"></a>\n",
    "\n",
    "The following notebook requires the user to set an env variable called the `$LOCAL_PROJECT_DIR` as the path to the users workspace. Please note that the dataset to run this notebook is expected to reside in the `$LOCAL_PROJECT_DIR/data`, while the TAO experiment generated collaterals will be output to `$LOCAL_PROJECT_DIR/centerpose/results`. More information on how to set up the dataset and the supported steps in the TAO workflow are provided in the subsequent cells.\n",
    "\n",
    "The TAO launcher uses docker containers under the hood, and **for our data and results directory to be visible to the docker, they need to be mapped**. The launcher can be configured using the config file `~/.tao_mounts.json`. Apart from the mounts, you can also configure additional options like the Environment Variables and amount of Shared Memory available to the TAO launcher. <br>\n",
    "\n",
    "`IMPORTANT NOTE:` The code below creates a sample `~/.tao_mounts.json`  file. Here, we can map directories in which we save the data, specs, results and cache. You should configure it for your specific case so these directories are correctly visible to the docker container.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-09-28T06:31:36.134970Z",
     "iopub.status.busy": "2023-09-28T06:31:36.134429Z",
     "iopub.status.idle": "2023-09-28T06:31:36.151027Z",
     "shell.execute_reply": "2023-09-28T06:31:36.149850Z"
    },
    "papermill": {
     "duration": 0.022865,
     "end_time": "2023-09-28T06:31:36.153016",
     "exception": false,
     "start_time": "2023-09-28T06:31:36.130151",
     "status": "completed"
    },
    "tags": []
   },
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "# Please define this local project directory that needs to be mapped to the TAO docker session.\n",
    "%env LOCAL_PROJECT_DIR=/path/to/local/tao-experiments\n",
    "\n",
    "os.environ[\"HOST_DATA_DIR\"] = os.path.join(os.getenv(\"LOCAL_PROJECT_DIR\", os.getcwd()), \"data\", \"centerpose\")\n",
    "os.environ[\"HOST_RESULTS_DIR\"] = os.path.join(os.getenv(\"LOCAL_PROJECT_DIR\", os.getcwd()), \"centerpose\", \"results\")\n",
    "\n",
    "# Set this path if you don't run the notebook from the samples directory.\n",
    "# %env NOTEBOOK_ROOT=~/tao-samples/centerpose\n",
    "\n",
    "# The sample spec files are present in the same path as the downloaded samples.\n",
    "os.environ[\"HOST_SPECS_DIR\"] = os.path.join(\n",
    "    os.getenv(\"NOTEBOOK_ROOT\", os.getcwd()),\n",
    "    \"specs\"\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-09-28T06:31:36.158563Z",
     "iopub.status.busy": "2023-09-28T06:31:36.157979Z",
     "iopub.status.idle": "2023-09-28T06:31:37.143646Z",
     "shell.execute_reply": "2023-09-28T06:31:37.142270Z"
    },
    "papermill": {
     "duration": 0.992229,
     "end_time": "2023-09-28T06:31:37.147071",
     "exception": false,
     "start_time": "2023-09-28T06:31:36.154842",
     "status": "completed"
    },
    "tags": []
   },
   "outputs": [],
   "source": [
    "! mkdir -p $HOST_DATA_DIR\n",
    "! mkdir -p $HOST_SPECS_DIR\n",
    "! mkdir -p $HOST_RESULTS_DIR"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-09-28T06:31:37.159684Z",
     "iopub.status.busy": "2023-09-28T06:31:37.159076Z",
     "iopub.status.idle": "2023-09-28T06:31:37.170468Z",
     "shell.execute_reply": "2023-09-28T06:31:37.169350Z"
    },
    "papermill": {
     "duration": 0.020526,
     "end_time": "2023-09-28T06:31:37.172894",
     "exception": false,
     "start_time": "2023-09-28T06:31:37.152368",
     "status": "completed"
    },
    "tags": []
   },
   "outputs": [],
   "source": [
    "# Mapping up the local directories to the TAO docker.\n",
    "import json\n",
    "import os\n",
    "mounts_file = os.path.expanduser(\"~/.tao_mounts.json\")\n",
    "tao_configs = {\n",
    "   \"Mounts\":[\n",
    "         # Mapping the Local project directory\n",
    "        {\n",
    "            \"source\": os.environ[\"LOCAL_PROJECT_DIR\"],\n",
    "            \"destination\": \"/workspace/tao-experiments\"\n",
    "        },\n",
    "       {\n",
    "           \"source\": os.environ[\"HOST_DATA_DIR\"],\n",
    "           \"destination\": \"/data\"\n",
    "       },\n",
    "       {\n",
    "           \"source\": os.environ[\"HOST_SPECS_DIR\"],\n",
    "           \"destination\": \"/specs\"\n",
    "       },\n",
    "       {\n",
    "           \"source\": os.environ[\"HOST_RESULTS_DIR\"],\n",
    "           \"destination\": \"/results\"\n",
    "       }\n",
    "   ],\n",
    "   \"DockerOptions\": {\n",
    "        \"shm_size\": \"16G\",\n",
    "        \"ulimits\": {\n",
    "            \"memlock\": -1,\n",
    "            \"stack\": 67108864\n",
    "         },\n",
    "        \"user\": \"{}:{}\".format(os.getuid(), os.getgid()),\n",
    "        \"network\": \"host\"\n",
    "   }\n",
    "}\n",
    "# Writing the mounts file.\n",
    "with open(mounts_file, \"w\") as mfile:\n",
    "    json.dump(tao_configs, mfile, indent=4)"
   ]
  },
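  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Everything the launcher touches must live under one of the `Mounts` sources above, so a quick sanity check before running any `tao` command can save a confusing in-container path error. The helper below is a hypothetical sketch (it is not part of TAO) that reports any mount `source` directory missing on the host:\n",
    "\n",
    "```python\n",
    "import os\n",
    "\n",
    "def missing_mount_sources(config):\n",
    "    \"\"\"Return the mount source paths that do not exist on the host.\"\"\"\n",
    "    return [m[\"source\"] for m in config.get(\"Mounts\", []) if not os.path.isdir(m[\"source\"])]\n",
    "\n",
    "sample = {\"Mounts\": [{\"source\": \"/tmp\", \"destination\": \"/data\"}]}\n",
    "missing_mount_sources(sample)  # an empty list means every source exists\n",
    "```"
   ]
  },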
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-09-28T06:31:37.181775Z",
     "iopub.status.busy": "2023-09-28T06:31:37.181330Z",
     "iopub.status.idle": "2023-09-28T06:31:37.516592Z",
     "shell.execute_reply": "2023-09-28T06:31:37.515009Z"
    },
    "papermill": {
     "duration": 0.343835,
     "end_time": "2023-09-28T06:31:37.520078",
     "exception": false,
     "start_time": "2023-09-28T06:31:37.176243",
     "status": "completed"
    },
    "tags": []
   },
   "outputs": [],
   "source": [
    "!cat ~/.tao_mounts.json"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "papermill": {
     "duration": 0.00481,
     "end_time": "2023-09-28T06:31:37.530385",
     "exception": false,
     "start_time": "2023-09-28T06:31:37.525575",
     "status": "completed"
    },
    "tags": []
   },
   "source": [
    "## 1. Installing the TAO launcher <a class=\"anchor\" id=\"head-1\"></a>\n",
    "The TAO launcher is a python package distributed as a python wheel listed in the `nvidia-pyindex` python index. You may install the launcher by executing the following cell.\n",
    "\n",
    "Please note that TAO Toolkit recommends users to run the TAO launcher in a virtual env with python 3.6.9. You may follow the instruction in this [page](https://virtualenvwrapper.readthedocs.io/en/latest/install.html) to set up a python virtual env using the `virtualenv` and `virtualenvwrapper` packages. Once you have setup virtualenvwrapper, please set the version of python to be used in the virtual env by using the `VIRTUALENVWRAPPER_PYTHON` variable. You may do so by running\n",
    "\n",
    "```sh\n",
    "export VIRTUALENVWRAPPER_PYTHON=/path/to/bin/python3.x\n",
    "```\n",
    "where x >= 6 and <= 8\n",
    "\n",
    "We recommend performing this step first and then launching the notebook from the virtual environment. In addition to installing TAO python package, please make sure of the following software requirements:\n",
    "* python >=3.7, <=3.10.x\n",
    "* docker-ce > 19.03.5\n",
    "* docker-API 1.40\n",
    "* nvidia-container-toolkit > 1.3.0-1\n",
    "* nvidia-container-runtime > 3.4.0-1\n",
    "* nvidia-docker2 > 2.5.0-1\n",
    "* nvidia-driver > 455+\n",
    "\n",
    "Once you have installed the pre-requisites, please log in to the docker registry nvcr.io by following the command below\n",
    "\n",
    "```sh\n",
    "docker login nvcr.io\n",
    "```\n",
    "\n",
    "You will be triggered to enter a username and password. The username is `$oauthtoken` and the password is the API key generated from `ngc.nvidia.com`. Please follow the instructions in the [NGC setup guide](https://docs.nvidia.com/ngc/ngc-overview/index.html#generating-api-key) to generate your own API key.\n",
    "\n",
    "Please note that TAO Toolkit recommends users to run the TAO launcher in a virtual env with python >=3.6.9. You may follow the instruction in this [page](https://virtualenvwrapper.readthedocs.io/en/latest/install.html) to set up a python virtual env using the virtualenv and virtualenvwrapper packages."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-09-28T06:31:37.539705Z",
     "iopub.status.busy": "2023-09-28T06:31:37.539066Z",
     "iopub.status.idle": "2023-09-28T06:31:40.612100Z",
     "shell.execute_reply": "2023-09-28T06:31:40.610503Z"
    },
    "papermill": {
     "duration": 3.081828,
     "end_time": "2023-09-28T06:31:40.615712",
     "exception": false,
     "start_time": "2023-09-28T06:31:37.533884",
     "status": "completed"
    },
    "tags": []
   },
   "outputs": [],
   "source": [
    "# SKIP this step IF you have already installed the TAO launcher.\n",
    "!pip3 install nvidia-pyindex\n",
    "!pip3 install nvidia-tao"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-09-28T06:31:40.632020Z",
     "iopub.status.busy": "2023-09-28T06:31:40.631670Z",
     "iopub.status.idle": "2023-09-28T06:31:41.324687Z",
     "shell.execute_reply": "2023-09-28T06:31:41.323092Z"
    },
    "papermill": {
     "duration": 0.701693,
     "end_time": "2023-09-28T06:31:41.328397",
     "exception": false,
     "start_time": "2023-09-28T06:31:40.626704",
     "status": "completed"
    },
    "tags": []
   },
   "outputs": [],
   "source": [
    "# View the versions of the TAO launcher\n",
    "!tao info --verbose"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "papermill": {
     "duration": 0.004888,
     "end_time": "2023-09-28T06:31:41.345632",
     "exception": false,
     "start_time": "2023-09-28T06:31:41.340744",
     "status": "completed"
    },
    "tags": []
   },
   "source": [
    "## 2. Prepare dataset and pre-trained model <a class=\"anchor\" id=\"head-2\"></a>\n",
    "### 2.1 Download and preprocess the training, validation and testing dataset\n",
    " We will be using the Google Objectron dataset for the tutorial. The following script will download Google Objectron dataset automatically. \n",
    "\n",
    "Here's a description of the structure:\n",
    "\n",
    "    |--category_dataset_root:\n",
    "        |--train\n",
    "            |--train_video1\n",
    "                |--image1.jpg\n",
    "                |--image1.json\n",
    "                |--image2.jpg\n",
    "                |--image2.json\n",
    "            |--train_video2\n",
    "                |--image1.jpg\n",
    "                |--image1.json\n",
    "                |--image2.jpg\n",
    "                |--image2.json\n",
    "        |--test/validation\n",
    "            |--test_video1\n",
    "                |--image1.jpg\n",
    "                |--image1.json\n",
    "                |--image2.jpg\n",
    "                |--image2.json\n",
    "            |--test_video2\n",
    "                |--image1.jpg\n",
    "                |--image1.json\n",
    "                |--image2.jpg\n",
    "                |--image2.json\n",
    "\n",
    "* The ``category_dataset_root`` directory of the specific category, which contains the following:\n",
    "    * ``train``: Contains training images and its related ground truth. The images are extrated from the videos. \n",
    "    * ``test/validation``: Contains testing/validation images and its related ground truth.\n",
    "* If Python version < 3.10, please install `scipy==1.5.2` and `tensorflow==2.11.0`."
   ]
  },
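  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In this layout, every image in a video folder has a JSON label file with the same base name next to it. A minimal stdlib sketch (assuming that layout; `collect_pairs` is a hypothetical helper, not part of TAO) that gathers the image/label pairs for one split:\n",
    "\n",
    "```python\n",
    "import glob\n",
    "import os\n",
    "\n",
    "def collect_pairs(split_dir):\n",
    "    \"\"\"Return (image, label) path pairs for every video folder in a split.\"\"\"\n",
    "    pairs = []\n",
    "    for pattern in (\"*.jpg\", \"*.png\"):\n",
    "        for img in sorted(glob.glob(os.path.join(split_dir, \"*\", pattern))):\n",
    "            lbl = os.path.splitext(img)[0] + \".json\"\n",
    "            if os.path.isfile(lbl):\n",
    "                pairs.append((img, lbl))\n",
    "    return pairs\n",
    "```\n",
    "\n",
    "Images without a matching label file are skipped, which makes this a convenient place to catch incomplete downloads."
   ]
  },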
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Install the dataset related dependencies.\n",
    "!pip3 install scipy==1.9.2\n",
    "!pip3 install tensorflow==2.14.0\n",
    "!pip3 install opencv-python==4.8.0.74\n",
    "!pip3 install tqdm==4.65.0"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Define the decoding functions.\n",
    "import numpy as np\n",
    "import cv2\n",
    "\n",
    "def get_image(feature, shape=None):\n",
    "    \"\"\"Decode the tensorflow image example.\"\"\"\n",
    "    image = cv2.imdecode(\n",
    "        np.asarray(bytearray(feature.bytes_list.value[0]), dtype=np.uint8),\n",
    "        cv2.IMREAD_ANYCOLOR | cv2.IMREAD_ANYDEPTH)\n",
    "    if len(image.shape) > 2 and image.shape[2] > 1:\n",
    "        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)\n",
    "    if shape is not None:\n",
    "        image = cv2.resize(image, shape)\n",
    "    return image\n",
    "\n",
    "def parse_plane(example):\n",
    "    \"\"\"Parses plane from a tensorflow example.\"\"\"\n",
    "    fm = example.features.feature\n",
    "    if \"plane/center\" in fm and \"plane/normal\" in fm:\n",
    "        center = fm[\"plane/center\"].float_list.value\n",
    "        center = np.asarray(center)\n",
    "        normal = fm[\"plane/normal\"].float_list.value\n",
    "        normal = np.asarray(normal)\n",
    "        return center, normal\n",
    "    else:\n",
    "        return None\n",
    "    \n",
    "def parse_example(example):\n",
    "    \"\"\"Parse the image example data\"\"\"\n",
    "    fm = example.features.feature\n",
    "\n",
    "    # Extract images, setting the input shape for Objectron Dataset\n",
    "    image = get_image(fm[\"image/encoded\"], shape=(600, 800))\n",
    "    filename = fm[\"image/filename\"].bytes_list.value[0].decode(\"utf-8\")\n",
    "    filename = filename.replace('/', '_')\n",
    "    image_id = np.asarray(fm[\"image/id\"].int64_list.value)[0]\n",
    "\n",
    "    label = {}\n",
    "    visibilities = fm[\"object/visibility\"].float_list.value\n",
    "    visibilities = np.asarray(visibilities)\n",
    "    index = visibilities > 0.1\n",
    "\n",
    "    if \"point_2d\" in fm:\n",
    "        points_2d = fm[\"point_2d\"].float_list.value\n",
    "        points_2d = np.asarray(points_2d).reshape((-1, 9, 3))[..., :2]\n",
    "\n",
    "    if \"point_3d\" in fm:\n",
    "        points_3d = fm[\"point_3d\"].float_list.value\n",
    "        points_3d = np.asarray(points_3d).reshape((-1, 9, 3))\n",
    "\n",
    "    if \"object/scale\" in fm:\n",
    "        obj_scale = fm[\"object/scale\"].float_list.value\n",
    "        obj_scale = np.asarray(obj_scale).reshape((-1, 3))\n",
    "\n",
    "    if \"object/translation\" in fm:\n",
    "        obj_trans = fm[\"object/translation\"].float_list.value\n",
    "        obj_trans = np.asarray(obj_trans).reshape((-1, 3))\n",
    "\n",
    "    if  \"object/orientation\" in fm:\n",
    "        obj_ori = fm[\"object/orientation\"].float_list.value\n",
    "        obj_ori = np.asarray(obj_ori).reshape((-1, 3, 3))\n",
    "\n",
    "    label[\"2d_instance\"] = points_2d[index]\n",
    "    label[\"3d_instance\"] = points_3d[index]\n",
    "    label[\"scale_instance\"] = obj_scale[index]\n",
    "    label[\"translation\"] = obj_trans[index]\n",
    "    label[\"orientation\"] = obj_ori[index]\n",
    "    label[\"image_id\"] = image_id\n",
    "    label[\"visibility\"] = visibilities[index]\n",
    "    label['ORI_INDEX'] = np.argwhere(index).flatten()\n",
    "    label['ORI_NUM_INSTANCE'] = len(index)\n",
    "    return image, label, filename\n",
    "\n",
    "def parse_camera(example):\n",
    "    \"\"\"Parse the camera calibration data\"\"\"\n",
    "    fm = example.features.feature\n",
    "    if \"camera/projection\" in fm:\n",
    "        proj = fm[\"camera/projection\"].float_list.value\n",
    "        proj = np.asarray(proj).reshape((4, 4))\n",
    "    else:\n",
    "        proj = None\n",
    "        \n",
    "    if \"camera/view\" in fm:\n",
    "        view = fm[\"camera/view\"].float_list.value\n",
    "        view = np.asarray(view).reshape((4, 4))\n",
    "    else:\n",
    "        view = None\n",
    "    \n",
    "    if \"camera/intrinsics\" in fm:\n",
    "        intrinsic = fm[\"camera/intrinsics\"].float_list.value\n",
    "        intrinsic = np.asarray(intrinsic).reshape((3, 3))\n",
    "    else:\n",
    "        intrinsic = None\n",
    "    return proj, view, intrinsic\n",
    "\n",
    "def partition(lst, n):\n",
    "    \"\"\"Equally split the video lists.\"\"\"\n",
    "    division = len(lst) / float(n) if n else len(lst)\n",
    "    return [lst[int(np.round(division * i)): int(np.round(division * (i + 1)))] for i in range(n)]"
   ]
  },
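  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`parse_example` keeps only sufficiently visible objects by building a NumPy boolean mask over the per-object visibility scores and indexing every per-object array with it. A minimal illustration of that pattern with made-up values:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "visibilities = np.asarray([0.95, 0.05, 0.60])\n",
    "index = visibilities > 0.1                  # boolean mask, one entry per object\n",
    "scales = np.asarray([[1.0, 2.0, 3.0], [9.0, 9.0, 9.0], [4.0, 5.0, 6.0]])\n",
    "kept = scales[index]                        # drops the row for the occluded object\n",
    "orig_index = np.argwhere(index).flatten()   # original indices of the kept objects\n",
    "```\n",
    "\n",
    "Here `kept` holds the rows for objects 0 and 2, and `orig_index` is `[0, 2]`, which mirrors the `ORI_INDEX` field in the label dictionary."
   ]
  },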
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "* Note\n",
    "    * Please select the **specific categories** you want to use for training the CenterPose model.\n",
    "    * The cell will take several minutes to run because it involves dataset downloading and preprocessing.\n",
    "    * Each category contains approximately 10,000 to 30,000 training images. Downloading all categories would require a large amount of drive space. The total size for downloading all 8 categories is 4.4TB.\n",
    "    * The default setting is downloading the training set and validation set. The validation set is a subset of the testing set, downsampled to 30 frames per second.\n",
    "    * If you are using your own dataset, please ensure that the camera calibration information is correct.\n",
    "    * **Note that the sample spec is not meant to produce SOTA (state-of-the-art) accuracy on Objectron dataset. To reproduce SOTA, you should set `TRAIN_FR` as 15, `epoch` as 140 and `DATA_DOWNLOAD` as -1 to match the original parameters.**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import glob\n",
    "import tqdm\n",
    "import json\n",
    "import requests\n",
    "import shutil\n",
    "import tensorflow as tf\n",
    "import warnings\n",
    "from scipy.spatial.transform import Rotation as R\n",
    "\n",
    "OBJECTRON_BUCKET = \"gs://objectron/v1/records_shuffled\"\n",
    "PUBLIC_URL = \"https://storage.googleapis.com/objectron\"\n",
    "SAVE_DIR = os.getenv(\"HOST_DATA_DIR\", os.getcwd())\n",
    "\n",
    "# Please add the \"test\" into the array if you want to evaluate the whole testing set. It requires at least 30GB to download the bike category. \n",
    "# DATA_DISTRIBUTION = ['train', 'val', 'test']\n",
    "DATA_DISTRIBUTION = ['train', 'val']\n",
    "\n",
    "# Note that the sample spec is not meant to produce SOTA accuracy on Objectron dataset. \n",
    "# To reproduce SOTA, you should set `TRAIN_FR` as 15 and `DATA_DOWNLOAD` as -1 to match the original parameters.\n",
    "TRAIN_FR = 30\n",
    "VAL_FR = 60\n",
    "TEST_FR = 1\n",
    "DATA_DOWNLOAD = 10000\n",
    "\n",
    "# Please select the specific categories that you want to train the CenterPose model. \n",
    "# CATEGORIES = ['bike', 'book', 'bottle', 'camera', 'cereal_box', 'chair', 'laptop', 'shoe']\n",
    "CATEGORIES = ['bike']\n",
    "\n",
    "memory_free = shutil.disk_usage(SAVE_DIR).free\n",
    "if len(CATEGORIES) >= 8 and memory_free < 4.4E12:\n",
    "    warnings.warn(\"No enough space for downloading all 8 categories.\")\n",
    "\n",
    "for c in CATEGORIES:\n",
    "    for dist in DATA_DISTRIBUTION:\n",
    "        # Download the tfrecord files\n",
    "        if dist in ['test', 'val']:\n",
    "            eval_data = f'/{c}/{c}_test*'\n",
    "            blob_path = PUBLIC_URL + f\"/v1/index/{c}_annotations_test\"\n",
    "        elif dist in ['train']:\n",
    "            eval_data = f'/{c}/{c}_train*'\n",
    "            blob_path = PUBLIC_URL + f\"/v1/index/{c}_annotations_train\"\n",
    "        else:\n",
    "            raise ValueError(\"No specific data distribution settings.\")\n",
    "\n",
    "        eval_shards = tf.io.gfile.glob(OBJECTRON_BUCKET + eval_data)\n",
    "        ds = tf.data.TFRecordDataset(eval_shards).take(DATA_DOWNLOAD)\n",
    "\n",
    "        with tf.io.TFRecordWriter(f'{SAVE_DIR}/{c}_{dist}.tfrecord') as file_writer:\n",
    "            for serialized in tqdm.tqdm(ds): \n",
    "                example = tf.train.Example.FromString(serialized.numpy())\n",
    "                record_bytes = example.SerializeToString()\n",
    "                file_writer.write(record_bytes)\n",
    "\n",
    "        # Get the video ids\n",
    "        video_ids = requests.get(blob_path).text\n",
    "        video_ids = [i.replace('/', '_') for i in video_ids.split('\\n')]\n",
    "        \n",
    "        # Work on a subset of the videos for each round, where the subset is equally split\n",
    "        video_ids_split = partition(video_ids, int(np.floor(len(video_ids) / int(len(video_ids) / 2))))\n",
    "\n",
    "        # Decode the tfrecord files\n",
    "        tfdata = f'{SAVE_DIR}/{c}_{dist}*'\n",
    "        eval_shards = tf.io.gfile.glob(tfdata)\n",
    "\n",
    "        new_ds = tf.data.TFRecordDataset(eval_shards).take(-1)\n",
    "\n",
    "        for subset in video_ids_split:\n",
    "            videos = {}\n",
    "            for serialized in tqdm.tqdm(new_ds):\n",
    "\n",
    "                example = tf.train.Example.FromString(serialized.numpy())\n",
    "\n",
    "                # Group according to video_id & image_id\n",
    "                fm = example.features.feature\n",
    "                filename = fm[\"image/filename\"].bytes_list.value[0].decode(\"utf-8\")\n",
    "                video_id = filename.replace('/', '_')\n",
    "                image_id = np.asarray(fm[\"image/id\"].int64_list.value)[0]\n",
    "                \n",
    "                # Sometimes, data is too big to save, so we only focus on a small subset instead.\n",
    "                if video_id not in subset:\n",
    "                    continue\n",
    "                \n",
    "                if video_id in videos:\n",
    "                    videos[video_id].append((image_id, example))\n",
    "                else:\n",
    "                    videos[video_id] = []\n",
    "                    videos[video_id].append((image_id, example))\n",
    "            \n",
    "            # Saved the decoded tfrecord files. \n",
    "            save_tfrecords = f'{SAVE_DIR}/{c}/tfrecords/{dist}'\n",
    "            if not os.path.exists(save_tfrecords):\n",
    "                os.makedirs(save_tfrecords)\n",
    "            for video_id in tqdm.tqdm(videos):\n",
    "                with tf.io.TFRecordWriter(f'{save_tfrecords}/{video_id}.tfrecord') as file_writer:\n",
    "                    for image_data in videos[video_id]:\n",
    "                        record_bytes = image_data[1].SerializeToString()\n",
    "                        file_writer.write(record_bytes)\n",
    "\n",
    "        # Extract the images and ground truth.\n",
    "        videos = [os.path.splitext(os.path.basename(i))[0] for i in glob.glob(f'{save_tfrecords}/*.tfrecord')]\n",
    "        if dist in ['train']:\n",
    "            frame_rate = TRAIN_FR\n",
    "        elif dist in ['val']:\n",
    "            frame_rate = VAL_FR\n",
    "        elif dist in ['test']:\n",
    "            frame_rate = TEST_FR\n",
    "        else:\n",
    "            raise ValueError(\"No specific data distribution settings.\")\n",
    "        \n",
    "        for idx, key in enumerate(videos):\n",
    "            print(f'Video {idx}, {key}:')\n",
    "            ds = tf.data.TFRecordDataset(f'{save_tfrecords}/{key}.tfrecord').take(-1)\n",
    "\n",
    "            for serialized in tqdm.tqdm(ds):\n",
    "                example = tf.train.Example.FromString(serialized.numpy())\n",
    "\n",
    "                image, label, prefix = parse_example(example)\n",
    "                frame_id = label['image_id']\n",
    "\n",
    "                if int(frame_id) % frame_rate == 0:\n",
    "                    \n",
    "                    proj, view, cam_intrinsic = parse_camera(example)\n",
    "                    plane = parse_plane(example)\n",
    "\n",
    "                    cam_intrinsic[:2, :3] = cam_intrinsic[:2, :3] / 2.4\n",
    "                    center, normal = plane\n",
    "                    height, width, _ = image.shape\n",
    "\n",
    "                    im_bgr = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)\n",
    "                    \n",
    "                    dict_out = {\n",
    "                        \"camera_data\" : {\n",
    "                            \"width\" : width,\n",
    "                            'height' : height,\n",
    "                            'camera_view_matrix':view.tolist(),\n",
    "                            'camera_projection_matrix':proj.tolist(),\n",
    "                            'intrinsics':{\n",
    "                                'fx':cam_intrinsic[1][1],\n",
    "                                'fy':cam_intrinsic[0][0],\n",
    "                                'cx':cam_intrinsic[1][2],\n",
    "                                'cy':cam_intrinsic[0][2]\n",
    "                            }\n",
    "                        }, \n",
    "                        \"objects\" : [],\n",
    "                        \"AR_data\":{\n",
    "                            'plane_center':[center[0],\n",
    "                                            center[1],\n",
    "                                            center[2]],\n",
    "                            'plane_normal':[normal[0],\n",
    "                                            normal[1],\n",
    "                                            normal[2]]\n",
    "                        }\n",
    "                    }\n",
    "                    \n",
    "                    for object_id in range(len(label['2d_instance'])):\n",
    "                        object_categories = c\n",
    "                        quaternion = R.from_matrix(label['orientation'][object_id]).as_quat()\n",
    "                        trans = label['translation'][object_id]\n",
    "\n",
    "                        projected_keypoints = label['2d_instance'][object_id]\n",
    "                        projected_keypoints[:, 0] *= width\n",
    "                        projected_keypoints[:, 1] *= height\n",
    "\n",
    "                        object_scale = label['scale_instance'][object_id]\n",
    "                        keypoints_3d = label['3d_instance'][object_id]\n",
    "                        visibility = label['visibility'][object_id]\n",
    "\n",
    "                        dict_obj={\n",
    "                            'class': object_categories,\n",
    "                            'name': object_categories+'_'+str(object_id),\n",
    "                            'provenance': 'objectron',\n",
    "                            'location': trans.tolist(),\n",
    "                            'quaternion_xyzw': quaternion.tolist(),\n",
    "                            'projected_cuboid': projected_keypoints.tolist(),\n",
    "                            'scale': object_scale.tolist(),\n",
    "                            'keypoints_3d': keypoints_3d.tolist(),\n",
    "                            'visibility': visibility.tolist()\n",
    "                        }\n",
    "                        # Final export\n",
    "                        dict_out['objects'].append(dict_obj)\n",
    "\n",
    "                    save_path = f\"{SAVE_DIR}/{c}/{dist}/{prefix}/\"\n",
    "                    if not os.path.exists(save_path):\n",
    "                        os.makedirs(save_path)\n",
    "\n",
    "                    filename = f\"{save_path}/{str(frame_id).zfill(5)}.json\"\n",
    "                    with open(filename, 'w+') as fp:\n",
    "                        json.dump(dict_out, fp, indent=4, sort_keys=True)\n",
    "                \n",
    "                    cv2.imwrite(f\"{save_path}/{str(frame_id).zfill(5)}.png\", im_bgr)"
   ]
  },
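  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before moving on, it can help to sanity-check a few of the exported labels. The cell below is an illustrative check only (not part of the official pipeline); the field names follow the `dict_obj` written above, and the sample object is a dummy."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sanity check for an exported label dict (not part of the official pipeline).\n",
    "import math\n",
    "\n",
    "def check_object(obj):\n",
    "    # The quaternion should be approximately unit-norm.\n",
    "    q = obj['quaternion_xyzw']\n",
    "    norm = math.sqrt(sum(v * v for v in q))\n",
    "    assert abs(norm - 1.0) < 1e-3\n",
    "    # Each projected keypoint should be an (x, y) pair.\n",
    "    assert all(len(kp) == 2 for kp in obj['projected_cuboid'])\n",
    "    return True\n",
    "\n",
    "# Dummy identity-rotation object with 9 projected cuboid points (center + 8 corners).\n",
    "sample = {\n",
    "    'quaternion_xyzw': [0.0, 0.0, 0.0, 1.0],\n",
    "    'projected_cuboid': [[120.0, 80.0]] * 9,\n",
    "}\n",
    "print(check_object(sample))"
   ]
  },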
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.2 Download the pre-trained model\n",
    "We will use the NGC CLI to get the pre-trained models. For more details, go to [ngc.nvidia.com](https://ngc.nvidia.com) and click SETUP in the navigation bar."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Installing NGC CLI on the local machine.\n",
    "## Download and install\n",
    "%env CLI=ngccli_cat_linux.zip\n",
    "!mkdir -p $LOCAL_PROJECT_DIR/ngccli\n",
    "\n",
    "# Remove any previously existing CLI installations\n",
    "!rm -rf $LOCAL_PROJECT_DIR/ngccli/*\n",
    "!wget \"https://ngc.nvidia.com/downloads/$CLI\" -P $LOCAL_PROJECT_DIR/ngccli\n",
    "!unzip -u \"$LOCAL_PROJECT_DIR/ngccli/$CLI\" -d $LOCAL_PROJECT_DIR/ngccli/\n",
    "!rm $LOCAL_PROJECT_DIR/ngccli/*.zip \n",
    "os.environ[\"PATH\"]=\"{}/ngccli/ngc-cli:{}\".format(os.getenv(\"LOCAL_PROJECT_DIR\", \"\"), os.getenv(\"PATH\", \"\"))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Pull pretrained model from NGC\n",
    "!mkdir -p $HOST_RESULTS_DIR/pretrained_models\n",
    "!ngc registry model download-version \"nvidia/tao/pretrained_fan_classification_nvimagenet:fan_small_hybrid_nvimagenet\" --dest $HOST_RESULTS_DIR/pretrained_models\n",
    "\n",
    "print(\"Check if model is downloaded into dir.\")\n",
    "!ls -l $HOST_RESULTS_DIR/pretrained_models/pretrained_fan_classification_nvimagenet_vfan_small_hybrid_nvimagenet/"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3. Provide training specification <a class=\"anchor\" id=\"head-3\"></a>\n",
    "\n",
    "We provide specification files to configure the training parameters including:\n",
    "\n",
    "* dataset: configure the dataset and augmentation methods\n",
    "    * train_data: images and annotation files for the training data. Must include correct camera calibration data\n",
    "    * val_data: images and annotation files for the validation data. Must include correct camera calibration data\n",
    "    * num_classes: number of categories, default is 1 because CenterPose is a category-level method\n",
    "    * batch_size: batch size for dataloader\n",
    "    * workers: number of workers to do data loading\n",
    "    * category: category name of the training object\n",
    "    * num_symmetry: number of symmetric rotations for the specific categories, e.g. bottle\n",
    "    * max_objs: maximum number of training objects in one image\n",
    "* model: configure the model setting\n",
    "    * down_ratio: downsampling ratio for the input image, default is 4\n",
    "    * use_pretrained: flag to enable using the pretrained weights\n",
    "    * model_type: backbone type of CenterPose, including the FAN variants and the DLA34 backbone\n",
    "    * pretrained_backbone_path: path to the pretrained backbone model. FAN variants are supported; the DLA34 backbone loads its pretrained weights automatically. \n",
    "* train: configure the training hyperparameters\n",
    "    * num_gpus: number of gpus \n",
    "    * validation_interval: validation interval\n",
    "    * checkpoint_interval: interval of saving the checkpoint\n",
    "    * num_epochs: number of epochs\n",
    "    * clip_grad_val: gradient clipping value, default is 100.0\n",
    "    * seed: random seed for reproducibility\n",
    "    * resume_training_checkpoint_path: resume the training from the checkpoint path\n",
    "    * precision: If set to fp16, the training is run on Automatic Mixed Precision (AMP)\n",
    "    * optim:\n",
    "        * lr: learning rate for training the model\n",
    "        * lr_steps: learning rate decay step milestone (MultiStep)\n",
    "\n",
    "Please refer to the TAO documentation about CenterPose to get all the parameters that are configurable.\n"
   ]
  },
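  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the `optim.lr` and `optim.lr_steps` interaction concrete, the cell below sketches how a MultiStep schedule decays the learning rate. The decay factor `gamma=0.1` and the example values are illustrative assumptions, not values read from `train.yaml`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative MultiStep learning-rate schedule (gamma and the values are assumptions).\n",
    "def lr_at_epoch(base_lr, lr_steps, epoch, gamma=0.1):\n",
    "    # The learning rate is multiplied by gamma at every milestone already passed.\n",
    "    decays = sum(1 for step in lr_steps if epoch >= step)\n",
    "    return base_lr * (gamma ** decays)\n",
    "\n",
    "base_lr, lr_steps = 1e-4, [90, 120]  # hypothetical spec values\n",
    "for epoch in [0, 90, 120]:\n",
    "    print(epoch, lr_at_epoch(base_lr, lr_steps, epoch))"
   ]
  },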
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0775e53f",
   "metadata": {},
   "outputs": [],
   "source": [
    "!cat $HOST_SPECS_DIR/train.yaml"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 4. Run TAO training <a class=\"anchor\" id=\"head-4\"></a>\n",
    "* Provide the sample spec file and the output directory location for models\n",
    "* Evaluation mainly uses the 3D IoU and 2D MPE (mean pixel error) metrics. For more info, please refer to: https://github.com/google-research-datasets/Objectron\n",
    "* For this demonstration, we set the number of training epochs to 1 so that training completes faster.\n",
    "* Unlike the [original CenterPose paper](https://arxiv.org/abs/2109.06161), we also provide a more advanced backbone called [FAN](https://arxiv.org/abs/2204.12451), which achieves higher downstream accuracy compared to DLA34.\n",
    "* If you wish to speed up training, you may try to set `train.precision=fp16` for mixed precision training."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# NOTE: The following paths are set from the perspective of the TAO Docker.\n",
    "\n",
    "# The data is saved here\n",
    "%env DATA_DIR = /data\n",
    "%env MODEL_DIR = /model\n",
    "%env SPECS_DIR = /specs\n",
    "%env RESULTS_DIR = /results"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!echo $HOST_DATA_DIR"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(\"For multi-GPU, change train.num_gpus in train.yaml based on your machine.\")\n",
    "# If you face out of memory issue, you may reduce the batch size in the spec file by passing dataset.batch_size=2\n",
    "!tao model centerpose train \\\n",
    "          -e $SPECS_DIR/train.yaml \\\n",
    "          results_dir=$RESULTS_DIR/"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print('Trained checkpoints:')\n",
    "print('---------------------')\n",
    "!ls -ltrh $HOST_RESULTS_DIR/train"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "477d5db0",
   "metadata": {},
   "outputs": [],
   "source": [
    "# You can set NUM_EPOCH to the epoch corresponding to any saved checkpoint\n",
    "# %env NUM_EPOCH=029\n",
    "\n",
    "# Get the name of the checkpoint corresponding to your set epoch\n",
    "# tmp=!ls $HOST_RESULTS_DIR/train/*.pth | grep epoch_$NUM_EPOCH\n",
    "# %env CHECKPOINT={tmp[0]}\n",
    "\n",
    "# Or get the latest checkpoint\n",
    "os.environ[\"CHECKPOINT\"] = os.path.join(os.getenv(\"HOST_RESULTS_DIR\"), \"train/centerpose_model_latest.pth\")\n",
    "\n",
    "print('Rename a trained model: ')\n",
    "print('---------------------')\n",
    "!cp $CHECKPOINT $HOST_RESULTS_DIR/train/centerpose_model.pth\n",
    "!ls -ltrh $HOST_RESULTS_DIR/train/centerpose_model.pth"
   ]
  },
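  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an illustrative pure-Python alternative to the shell lookup above, the helper below picks the checkpoint with the highest epoch number, assuming filenames contain an `epoch_NNN` tag (e.g. `centerpose_model_epoch_029.pth`)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative pure-Python alternative for picking the newest checkpoint.\n",
    "# Assumes filenames contain an 'epoch_NNN' tag, e.g. centerpose_model_epoch_029.pth.\n",
    "import re\n",
    "\n",
    "def latest_checkpoint(filenames):\n",
    "    tagged = [(int(m.group(1)), f) for f in filenames\n",
    "              if (m := re.search(r'epoch_(\\d+)', f))]\n",
    "    return max(tagged)[1] if tagged else None\n",
    "\n",
    "print(latest_checkpoint([\n",
    "    'centerpose_model_epoch_009.pth',\n",
    "    'centerpose_model_epoch_029.pth',\n",
    "    'centerpose_model_epoch_019.pth',\n",
    "]))"
   ]
  },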
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 5. Evaluate a trained model <a class=\"anchor\" id=\"head-5\"></a>\n",
    "\n",
    "In this section, we run the `evaluate` tool to evaluate the trained model and produce the 3D IoU and 2D MPE metrics.\n",
    "\n",
    "We provide the evaluate.yaml specification file to configure the evaluation parameters, including:\n",
    "\n",
    "* model: configure the model setting\n",
    "    * this config should remain the same as your trained model's configuration\n",
    "* dataset: configure the dataset and augmentation methods\n",
    "    * test_data: images and annotation files for the test data. Must include correct camera calibration data\n",
    "    * num_classes: number of categories used for training; default is 1 because CenterPose is a category-level method\n",
    "    * batch_size: batch size for dataloader\n",
    "    * workers: number of workers to do data loading\n",
    "* evaluate:\n",
    "    * num_gpus: number of gpus\n",
    "    * checkpoint: load the saved trained CenterPose model\n",
    "    * opencv: if True, returns the OpenCV format 3D keypoints (use for inference); if False, returns the OpenGL format 3D keypoints (use for evaluation)\n",
    "    * eval_num_symmetry: evaluate the best accuracy by calculating different symmetric rotations (use for the symmetric objects)\n",
    "    * results_dir: the directory where the detailed accuracy report is exported\n",
    "\n",
    "* **NOTE: You need to change the evaluate.yaml file based on your setting.**"
   ]
  },
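  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For a rough intuition of the 3D IoU metric, the cell below computes IoU for axis-aligned 3D boxes. This is a simplified illustration only; the actual Objectron evaluation handles oriented boxes."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Simplified axis-aligned 3D IoU (illustration only; the real metric uses oriented boxes).\n",
    "def iou_3d(box_a, box_b):\n",
    "    # Each box is (xmin, ymin, zmin, xmax, ymax, zmax).\n",
    "    inter = 1.0\n",
    "    for i in range(3):\n",
    "        lo = max(box_a[i], box_b[i])\n",
    "        hi = min(box_a[i + 3], box_b[i + 3])\n",
    "        if hi <= lo:\n",
    "            return 0.0\n",
    "        inter *= hi - lo\n",
    "    def vol(b):\n",
    "        return (b[3] - b[0]) * (b[4] - b[1]) * (b[5] - b[2])\n",
    "    return inter / (vol(box_a) + vol(box_b) - inter)\n",
    "\n",
    "print(iou_3d((0, 0, 0, 2, 2, 2), (1, 1, 1, 3, 3, 3)))"
   ]
  },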
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Evaluate on TAO model\n",
    "!tao model centerpose evaluate \\\n",
    "            -e $SPECS_DIR/evaluate.yaml \\\n",
    "            evaluate.checkpoint=$RESULTS_DIR/train/centerpose_model.pth \\\n",
    "            results_dir=$RESULTS_DIR/"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 6. Visualize Inferences <a class=\"anchor\" id=\"head-6\"></a>\n",
    "In this section, we run the `inference` tool to generate inferences on the trained models and visualize the results. The `inference` tool produces annotated image outputs and json files that contain prediction information.\n",
    "\n",
    "We provide the infer.yaml specification file to configure the inference parameters, including:\n",
    "\n",
    "* model: configure the model setting\n",
    "    * this config should remain the same as your trained model's configuration\n",
    "* dataset: configure the dataset and augmentation methods\n",
    "    * inference_data: inference images. A JSON annotation file is not required, but a correct camera intrinsic matrix is\n",
    "    * num_classes: number of categories used for training; default is 1 because CenterPose is a category-level method\n",
    "    * batch_size: batch size for dataloader\n",
    "    * workers: number of workers to do data loading\n",
    "* inference:\n",
    "    * checkpoint: load the saved trained CenterPose model\n",
    "    * visualization_threshold: the confidence score threshold\n",
    "    * principle_point_x: principal point x-coordinate (camera intrinsic matrix)\n",
    "    * principle_point_y: principal point y-coordinate (camera intrinsic matrix)\n",
    "    * focal_length_x: focal length (camera intrinsic matrix)\n",
    "    * focal_length_y: focal length (camera intrinsic matrix)\n",
    "    * skew: skew value (camera intrinsic matrix)\n",
    "    * use_pnp: flag to enable using the PnP algorithm\n",
    "    * save_json: flag to enable saving the result information to a json file\n",
    "    * save_visualization: flag to enable saving the visualization results to local\n",
    "    * opencv: if True, returns the OpenCV format 3D keypoints (use for inference); if False, returns the OpenGL format 3D keypoints (use for evaluation)\n",
    "\n",
    "* **NOTE: You need to change the infer.yaml file based on your setting.**"
   ]
  },
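  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The five intrinsic parameters above define the 3x3 camera matrix K used for projection. The cell below assembles K from illustrative placeholder values (not values from `infer.yaml`)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Assemble the 3x3 camera intrinsic matrix from infer.yaml-style parameters.\n",
    "# The numeric values below are illustrative placeholders, not values from infer.yaml.\n",
    "def intrinsic_matrix(fx, fy, cx, cy, skew=0.0):\n",
    "    return [[fx, skew, cx],\n",
    "            [0.0, fy, cy],\n",
    "            [0.0, 0.0, 1.0]]\n",
    "\n",
    "K = intrinsic_matrix(fx=600.0, fy=600.0, cx=320.0, cy=240.0)\n",
    "for row in K:\n",
    "    print(row)"
   ]
  },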
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!tao model centerpose inference \\\n",
    "        -e $SPECS_DIR/infer.yaml \\\n",
    "        inference.checkpoint=$RESULTS_DIR/train/centerpose_model.pth \\\n",
    "        results_dir=$RESULTS_DIR/"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Simple grid visualizer\n",
    "!pip3 install 'matplotlib>=3.3.3, <4.0'\n",
    "import matplotlib.pyplot as plt\n",
    "import os\n",
    "from math import ceil\n",
    "valid_image_ext = ['.png']\n",
    "\n",
    "def visualize_images(output_path, num_cols=4, num_images=10):\n",
    "    num_rows = int(ceil(float(num_images) / float(num_cols)))\n",
    "    # squeeze=False keeps axarr 2D even when there is a single row of subplots\n",
    "    f, axarr = plt.subplots(num_rows, num_cols, figsize=[40,30], squeeze=False)\n",
    "    f.tight_layout()\n",
    "    a = [os.path.join(output_path, image) for image in os.listdir(output_path) \n",
    "         if os.path.splitext(image)[1].lower() in valid_image_ext]\n",
    "    for idx, img_path in enumerate(a[:num_images]):\n",
    "        col_id = idx % num_cols\n",
    "        row_id = idx // num_cols\n",
    "        img = plt.imread(img_path)\n",
    "        axarr[row_id, col_id].imshow(img) "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Visualizing the sample images.\n",
    "# Note that the sample spec is not meant to produce SOTA (state-of-the-art) accuracy on Objectron dataset.\n",
    "IMAGE_DIR = os.path.join(os.environ['HOST_RESULTS_DIR'], \"inference\")\n",
    "COLS = 2 # number of columns in the visualizer grid.\n",
    "IMAGES = 4 # number of images to visualize.\n",
    "\n",
    "visualize_images(IMAGE_DIR, num_cols=COLS, num_images=IMAGES)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 7. Deploy <a class=\"anchor\" id=\"head-7\"></a>\n",
    "This section covers ONNX model export and TensorRT deployment.\n",
    "\n",
    "### 7.1 Export the trained model to ONNX model\n",
    "The `export` tool exports the trained CenterPose model to an ONNX model.\n",
    "\n",
    "We provide the export.yaml specification file to configure the export parameters, including:\n",
    "\n",
    "* model: configure the model setting\n",
    "    * this config should remain the same as your trained model's configuration\n",
    "* export: configure the exportation settings\n",
    "    * checkpoint: load the saved trained CenterPose model\n",
    "    * onnx_file: the output path for the exported ONNX model\n",
    "    * input_channel: the number of channels of the ONNX model\n",
    "    * input_width: the input width of the ONNX model\n",
    "    * input_height: the input height of the ONNX model\n",
    "    * opset_version: the ONNX opset version used for export\n",
    "    * do_constant_folding: flag that enables constant folding (set to True if the TensorRT version is < 8.6)\n",
    "\n",
    "* **NOTE: You need to change the export.yaml file based on your setting.**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!mkdir -p $HOST_RESULTS_DIR/export"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Export the RGB model to ONNX model\n",
    "!tao model centerpose export \\\n",
    "        -e $SPECS_DIR/export.yaml \\\n",
    "            export.checkpoint=$RESULTS_DIR/train/centerpose_model.pth \\\n",
    "            export.onnx_file=$RESULTS_DIR/export/centerpose_model.onnx"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 7.2 Generate the TensorRT engine from the ONNX model\n",
    "We provide the gen_trt_engine.yaml specification file to configure the TensorRT engine generation parameters, including:\n",
    "\n",
    "* gen_trt_engine: configure the engine generation settings\n",
    "    * onnx_file: the ONNX model loading path\n",
    "    * trt_engine: the TensorRT engine exportation path\n",
    "    * input_channel: the number of channels of the TensorRT engine\n",
    "    * input_width: the input width of the TensorRT engine\n",
    "    * input_height: the input height of the TensorRT engine\n",
    "    \n",
    "    * tensorrt: configure the TensorRT engine settings\n",
    "        * data_type: the precision of the TensorRT engine, including \"fp32\", \"fp16\", \"int8\"\n",
    "        * min_batch_size: minimum batch size of the TensorRT engine\n",
    "        * opt_batch_size: optimal batch size of the TensorRT engine\n",
    "        * max_batch_size: maximum batch size of the TensorRT engine\n",
    "        * calibration: TensorRT calibration settings (only in \"int8\" mode)\n",
    "            * cal_image_dir: image directory used to generate the calibration file\n",
    "            * cal_cache_file: calibration cache file for the above image directory\n",
    "            * cal_batch_size: batch size of the calibration calculation\n",
    "\n",
    "* **NOTE: You need to change the gen_trt_engine.yaml file based on your setting.**"
   ]
  },
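  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check before engine generation: TensorRT expects `min_batch_size <= opt_batch_size <= max_batch_size`. The helper below is illustrative only."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative check that a TensorRT batch-size profile is well-ordered.\n",
    "def valid_batch_profile(min_bs, opt_bs, max_bs):\n",
    "    return 1 <= min_bs <= opt_bs <= max_bs\n",
    "\n",
    "print(valid_batch_profile(1, 4, 8))\n",
    "print(valid_batch_profile(4, 2, 8))"
   ]
  },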
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Generate TensorRT engine using tao deploy\n",
    "!tao deploy centerpose gen_trt_engine -e $SPECS_DIR/gen_trt_engine.yaml \\\n",
    "                               gen_trt_engine.onnx_file=$RESULTS_DIR/export/centerpose_model.onnx \\\n",
    "                               gen_trt_engine.trt_engine=$RESULTS_DIR/gen_trt_engine/centerpose_model.engine \\\n",
    "                               results_dir=$RESULTS_DIR"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 7.3 Evaluate with the generated TensorRT engine\n",
    "TAO Deploy provides a tool to evaluate the model with the generated TensorRT engine.\n",
    "\n",
    "* **NOTE: You need to change the evaluate.yaml file based on your setting.**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Evaluate with generated TensorRT engine\n",
    "!tao deploy centerpose evaluate -e $SPECS_DIR/evaluate.yaml \\\n",
    "                              evaluate.trt_engine=$RESULTS_DIR/gen_trt_engine/centerpose_model.engine \\\n",
    "                              results_dir=$RESULTS_DIR/"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 7.4 Run inference on images with the generated TensorRT engine\n",
    "TAO Deploy provides a tool to run inference with the generated TensorRT engine, outputting the visualization results and the related JSON file.\n",
    "\n",
    "* **NOTE: You need to change the infer.yaml file based on your setting.**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Inference with generated TensorRT engine\n",
    "!tao deploy centerpose inference -e $SPECS_DIR/infer.yaml \\\n",
    "                              inference.trt_engine=$RESULTS_DIR/gen_trt_engine/centerpose_model.engine \\\n",
    "                              results_dir=$RESULTS_DIR/"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This concludes the notebook."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.18"
  },
  "papermill": {
   "default_parameters": {},
   "duration": 6.348297,
   "end_time": "2023-09-28T06:31:41.570413",
   "environment_variables": {},
   "exception": null,
   "input_path": "../../cv/resource/notebooks/tao_launcher_starter_kit/centerpose/centerpose.ipynb",
   "output_path": "../../cv/resource/notebooks/tao_launcher_starter_kit/centerpose/centerpose.ipynb",
   "parameters": {},
   "start_time": "2023-09-28T06:31:35.222116",
   "version": "2.4.0"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
