{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Bodypose Estimation using TAO BodyposeNet\n",
    "\n",
     "Transfer learning is the process of transferring learned features from one application to another. It is a commonly used training technique where you take a model trained on one task and re-train it on a different task. \n",
    "\n",
    "Train Adapt Optimize (TAO) Toolkit is a simple and easy-to-use Python based AI toolkit for taking purpose-built AI models and customizing them with users' own data.\n",
    "\n",
    "<img align=\"center\" src=\"https://d29g4g2dyqv443.cloudfront.net/sites/default/files/akamai/TAO/tlt-tao-toolkit-bring-your-own-model-diagram.png\" width=\"1080\">\n",
    "\n",
    "## Sample output predictions from a trained BodyPoseNet model\n",
    "\n",
    "<img align=\"center\" src=\"https://docscontent.nvidia.com/dims4/default/5f14ac8/2147483647/strip/true/crop/1672x1080+0+0/resize/2880x1860!/format/webp/quality/90/?url=https%3A%2F%2Fk3-prod-nvidia-docs.s3.amazonaws.com%2Fbrightspot%2Fsphinx%2F00000187-579f-dfeb-adf7-77df80550000%2Ftao%2Ftao-toolkit%2F_images%2Fbodypose2d.png\" width=\"1080\">\n",
    "\n",
    "## Learning Objectives\n",
    "In this notebook, you will learn how to leverage the simplicity and convenience of TAO to:\n",
    "\n",
    "* Train a Bodypose Estimation model on the Common Objects in Context (COCO) dataset\n",
    "* Evaluate the model's performance\n",
    "* Run Inference on the trained model\n",
    "* Prune and re-train the pruned model\n",
    "* Export the model to a .onnx file for deployment to DeepStream SDK\n",
     "* Optimize the standard fp32 model into an int8 TensorRT engine for optimized deployment on the system GPU\n",
    "\n",
    "At the end of this notebook, you will have a trained and optimized `bodypose estimation` model that you\n",
    "may deploy via [DeepStream](https://developer.nvidia.com/deepstream-sdk).\n",
    "\n",
    "### Table of Contents\n",
    "\n",
    "1. [Set up env variables, map drives, and install dependencies](#head-1) <br>\n",
     "2. [Install the TAO Launcher](#head-2) <br>\n",
    "3. [Prepare dataset and pre-trained model](#head-3) <br>\n",
     "    3.1 [Generate segmentation masks and tfrecords from annotations](#head-3-1) <br>\n",
     "    3.2 [Use your own dataset by converting to COCO format](#head-3-2) <br>\n",
    "    3.3 [Download pre-trained model](#head-3-3) <br>\n",
    "4. [Provide training specification](#head-4) <br>\n",
    "5. [Run TAO training](#head-5) <br>\n",
    "6. [Evaluate trained models](#head-6) <br>\n",
    "7. [Run inference for a set of images](#head-7) <br>\n",
    "    7.1 [Visualize annotations](#head-7-1) <br>\n",
    "    7.2 [Visualize annotations manually from detections](#head-7-2) <br>\n",
    "8. [Pruning workflow](#head-8) <br>\n",
    "    8.1 [Prune trained models](#head-8-1) <br>\n",
    "    8.2 [Retrain pruned models](#head-8-2) <br>\n",
    "    8.3 [Evaluate retrained model](#head-8-3) <br>\n",
    "    8.4 [Inference using retrained model](#head-8-4) <br>\n",
    "    8.5 [Visualize retrained model inferences](#head-8-5) <br>\n",
    "9. [Model Export and INT8 Quantization](#head-9) <br>\n",
    "    9.1 [Choose network input resolution for deployment](#head-9-1) <br>\n",
    "    9.2 [Export `.onnx` model](#head-9-2) <br>\n",
    "    9.3 [Int8 Optimization](#head-9-3) <br>\n",
    "    9.4 [Generate TensorRT Engine](#head-9-4) <br>\n",
    "10. [Verify TensorRT models and Deploy](#head-10) <br>\n",
    "    10.1 [Inference using TensorRT Engine](#head-10-1) <br>\n",
    "    10.2 [Visualize TensorRT Inferences](#head-10-2) <br>\n",
    "    10.3 [Evaluate the TensorRT engine](#head-10-3) <br>\n",
    "    10.4 [Export Deployable Model](#head-10-4) <br>"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. Set up env variables, map drives, and install dependencies <a class=\"anchor\" id=\"head-1\"></a>\n",
    "\n",
     "The following notebook requires the user to set an env variable called `$LOCAL_PROJECT_DIR` to the path of the user's workspace. Please note that the dataset to run this notebook is expected to reside in `$LOCAL_PROJECT_DIR/bpnet/data`, while the collaterals generated by the TAO experiments will be output to `$LOCAL_PROJECT_DIR/bpnet`. More information on how to set up the dataset and the supported steps in the TAO workflow are provided in the subsequent cells.\n",
    "\n",
     "*Note: By default, this notebook is set up to run training using 1 GPU. To use more GPUs, please update the env variable `$NUM_GPUS` accordingly.*"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Setting up env variables for cleaner command line commands.\n",
    "import os\n",
    "\n",
    "%env KEY=nvidia_tlt\n",
    "%env NUM_GPUS=1\n",
    "\n",
    "# Set this path if you don't run the notebook from the samples directory.\n",
    "# %env NOTEBOOK_ROOT=~/tao-samples/bpnet\n",
    "\n",
    "# Please define this local project directory that needs to be mapped to the TAO docker session.\n",
    "# The dataset is expected to be present in $LOCAL_PROJECT_DIR/bpnet/data, while the results for the steps\n",
    "# in this notebook will be stored at $LOCAL_PROJECT_DIR/bpnet\n",
    "# !PLEASE MAKE SURE TO UPDATE THIS PATH!.\n",
    "%env LOCAL_PROJECT_DIR=FIXME\n",
    "\n",
    "# $SAMPLES_DIR is the path to the sample notebook folder and the dependency folder\n",
    "# $SAMPLES_DIR/deps should exist for dependency installation\n",
    "%env SAMPLES_DIR=FIXME\n",
    "\n",
    "os.environ[\"LOCAL_DATA_DIR\"] = os.path.join(\n",
    "    os.getenv(\"LOCAL_PROJECT_DIR\", os.getcwd()),\n",
    "    \"bpnet/data\"\n",
    ")\n",
    "os.environ[\"LOCAL_EXPERIMENT_DIR\"] = os.path.join(\n",
    "    os.getenv(\"LOCAL_PROJECT_DIR\", os.getcwd()),\n",
    "    \"bpnet\"\n",
    ")\n",
    "\n",
    "# The sample spec files are present in the same path as the downloaded samples.\n",
    "os.environ[\"LOCAL_SPECS_DIR\"] = os.path.join(\n",
    "    os.getenv(\"NOTEBOOK_ROOT\", os.getcwd()),\n",
    "    \"specs\"\n",
    ")\n",
    "\n",
    "os.environ[\"LOCAL_DATA_POSE_SPECS_DIR\"] = os.path.join(\n",
    "    os.getenv(\"NOTEBOOK_ROOT\", os.getcwd()),\n",
    "    \"data_pose_config\"\n",
    ")\n",
    "\n",
    "os.environ[\"LOCAL_MODEL_POSE_SPECS_DIR\"] = os.path.join(\n",
    "    os.getenv(\"NOTEBOOK_ROOT\", os.getcwd()),\n",
    "    \"model_pose_config\"\n",
    ")\n",
    "\n",
    "%env USER_EXPERIMENT_DIR=/workspace/tao-experiments/bpnet\n",
    "%env DATA_DIR=/workspace/tao-experiments/bpnet/data\n",
    "%env SPECS_DIR=/workspace/examples/bpnet/specs\n",
    "%env DATA_POSE_SPECS_DIR=/workspace/examples/bpnet/data_pose_config\n",
    "%env MODEL_POSE_SPECS_DIR=/workspace/examples/bpnet/model_pose_config\n",
    "\n",
    "# Showing list of specification files.\n",
    "!ls -rlt $LOCAL_SPECS_DIR\n",
    "!ls -rlt $LOCAL_DATA_POSE_SPECS_DIR\n",
    "!ls -rlt $LOCAL_MODEL_POSE_SPECS_DIR"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "The cell below maps the project directory on your local host to a workspace directory in the TAO docker instance, so that the data and results are mapped in and out of the docker. For more information, please refer to the [launcher instance](https://docs.nvidia.com/tao/tao-toolkit/text/tao_launcher.html) section in the user guide.\n",
    "\n",
    "When running this cell on AWS, update the drive_map entry with the dictionary defined below, so that you don't have permission issues when writing data into folders created by the TAO docker.\n",
    "\n",
     "```python\n",
    "drive_map = {\n",
    "    \"Mounts\": [\n",
    "            # Mapping the data directory\n",
    "            {\n",
    "                \"source\": os.environ[\"LOCAL_PROJECT_DIR\"],\n",
    "                \"destination\": \"/workspace/tao-experiments\"\n",
    "            },\n",
    "            # Mapping the specs directory.\n",
    "            {\n",
    "                \"source\": os.environ[\"LOCAL_SPECS_DIR\"],\n",
    "                \"destination\": os.environ[\"SPECS_DIR\"]\n",
    "            },\n",
    "            {\n",
    "                \"source\": os.environ[\"LOCAL_DATA_POSE_SPECS_DIR\"],\n",
    "                \"destination\": os.environ[\"DATA_POSE_SPECS_DIR\"]\n",
    "            },\n",
    "            {\n",
    "                \"source\": os.environ[\"LOCAL_MODEL_POSE_SPECS_DIR\"],\n",
    "                \"destination\": os.environ[\"MODEL_POSE_SPECS_DIR\"]\n",
    "            },\n",
    "        ],\n",
    "    \"DockerOptions\": {\n",
    "        \"user\": \"{}:{}\".format(os.getuid(), os.getgid())\n",
    "    }\n",
    "}\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Mapping up the local directories to the TAO docker.\n",
    "import json\n",
    "mounts_file = os.path.expanduser(\"~/.tao_mounts.json\")\n",
    "\n",
    "# Define the dictionary with the mapped drives\n",
    "drive_map = {\n",
    "    \"Mounts\": [\n",
    "        # Mapping the data directory\n",
    "        {\n",
    "            \"source\": os.environ[\"LOCAL_PROJECT_DIR\"],\n",
    "            \"destination\": \"/workspace/tao-experiments\"\n",
    "        },\n",
    "        # Mapping the specs directory.\n",
    "        {\n",
    "            \"source\": os.environ[\"LOCAL_SPECS_DIR\"],\n",
    "            \"destination\": os.environ[\"SPECS_DIR\"]\n",
    "        },\n",
    "        {\n",
    "            \"source\": os.environ[\"LOCAL_DATA_POSE_SPECS_DIR\"],\n",
    "            \"destination\": os.environ[\"DATA_POSE_SPECS_DIR\"]\n",
    "        },\n",
    "        {\n",
    "            \"source\": os.environ[\"LOCAL_MODEL_POSE_SPECS_DIR\"],\n",
    "            \"destination\": os.environ[\"MODEL_POSE_SPECS_DIR\"]\n",
    "        },\n",
    "    ],\n",
    "    \"DockerOptions\": {\n",
    "        \"user\": \"{}:{}\".format(os.getuid(), os.getgid())\n",
    "    }\n",
    "}\n",
    "\n",
    "# Writing the mounts file.\n",
    "with open(mounts_file, \"w\") as mfile:\n",
    "    json.dump(drive_map, mfile, indent=4)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!cat ~/.tao_mounts.json "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Install requirement\n",
    "!pip3 install Cython==0.29.36\n",
    "!pip3 install -r $SAMPLES_DIR/deps/requirements-pip.txt"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. Install the TAO launcher <a class=\"anchor\" id=\"head-2\"></a>\n",
     "The TAO launcher is a python package distributed as a python wheel on PyPI. You may install the launcher by executing the following cell.\n",
    "\n",
     "Please note that TAO Toolkit recommends running the TAO launcher in a virtual env with a supported python version (see the requirements below). You may follow the instructions on this [page](https://virtualenvwrapper.readthedocs.io/en/latest/install.html) to set up a python virtual env using the `virtualenv` and `virtualenvwrapper` packages. Once you have set up virtualenvwrapper, please set the version of python to be used in the virtual env using the `VIRTUALENVWRAPPER_PYTHON` variable. You may do so by running\n",
    "\n",
    "```sh\n",
    "export VIRTUALENVWRAPPER_PYTHON=/path/to/bin/python3.x\n",
    "```\n",
     "where 3.x is a python version supported by the requirements below.\n",
    "\n",
     "We recommend performing this step first and then launching the notebook from the virtual environment. In addition to installing the TAO python package, please make sure the following software requirements are met:\n",
    "* python >=3.7, <=3.10.x\n",
    "* docker-ce > 19.03.5\n",
    "* docker-API 1.40\n",
    "* nvidia-container-toolkit > 1.3.0-1\n",
    "* nvidia-container-runtime > 3.4.0-1\n",
    "* nvidia-docker2 > 2.5.0-1\n",
    "* nvidia-driver > 455+\n",
    "\n",
     "Once you have installed the pre-requisites, please log in to the docker registry nvcr.io using the command below:\n",
    "\n",
    "```sh\n",
    "docker login nvcr.io\n",
    "```\n",
    "\n",
     "You will be prompted to enter a username and password. The username is `$oauthtoken` and the password is the API key generated from `ngc.nvidia.com`. Please follow the instructions in the [NGC setup guide](https://docs.nvidia.com/ngc/ngc-overview/index.html#generating-api-key) to generate your own API key."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Install the TAO launcher wheel from PyPI.\n",
    "# SKIP this step IF you have already installed the TAO launcher wheel.\n",
    "!pip3 install nvidia-tao"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Initialize the TAO launcher\n",
    "!tao info"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3. Prepare dataset and pre-trained model <a class=\"anchor\" id=\"head-3\"></a>"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "We will be using the COCO (Common Objects in Context) 2017 dataset for this example. To find more details, please visit https://cocodataset.org/#keypoints-2017 and https://cocodataset.org/#keypoints-eval.\n",
     "Please download and extract the dataset as per the instructions below.\n",
    "\n",
    "Links to download the data: [train_data](http://images.cocodataset.org/zips/train2017.zip), [val_data](http://images.cocodataset.org/zips/val2017.zip) and [annotations](http://images.cocodataset.org/annotations/annotations_trainval2017.zip). Please unzip the images into the `$LOCAL_DATA_DIR` directory and the annotations into the `$LOCAL_DATA_DIR/annotations`. You may use this notebook with your own dataset as well. To use this example with your own dataset, please refer to `Use your own dataset` section below.\n",
    "\n",
     "*Note: There are no labels for the testing images, therefore we use the COCO validation set to evaluate the trained model.*"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Modify dataset_config for data preparation\n",
    "# verify all paths\n",
    "!cat $LOCAL_DATA_POSE_SPECS_DIR/coco_spec.json"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Check the dataset is present\n",
    "!if [ ! -d $LOCAL_DATA_DIR ]; then echo 'Data folder not found, please download.'; else echo 'Found Data folder.';fi\n",
    "!if [ ! -d $LOCAL_DATA_DIR/annotations ]; then echo 'Annotations folder not found, please download.'; else echo 'Found Annotations folder.';fi\n",
    "!if [ ! -d $LOCAL_DATA_DIR/train2017 ]; then echo 'Train Images folder not found, please download.'; else echo 'Found Train Images folder.';fi\n",
    "!if [ ! -d $LOCAL_DATA_DIR/val2017 ]; then echo 'Val Images folder not found, please download.'; else echo 'Found Val Images folder.';fi"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Check the labels are present\n",
    "!if [ ! -f $LOCAL_DATA_DIR/annotations/person_keypoints_train2017.json ]; then echo 'Train labels not found, please regenerate.'; else echo 'Found Train Labels.';fi\n",
    "!if [ ! -f $LOCAL_DATA_DIR/annotations/person_keypoints_val2017.json ]; then echo 'Val labels not found, please regenerate.'; else echo 'Found Val Labels.';fi"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sample json label.\n",
    "!sed -n 1,201p $LOCAL_DATA_DIR/annotations/person_keypoints_val2017.json"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sample image.\n",
    "import os\n",
    "from IPython.display import Image\n",
    "Image(filename=os.path.join(\n",
    "    os.getenv(\"LOCAL_DATA_DIR\"), \"train2017/000000304473.jpg\"))"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.1. Generate segmentation masks and tfrecords from annotations <a class=\"anchor\" id=\"head-3-1\"></a>\n",
    "* Create the tfrecords using the `bpnet dataset_convert` tool\n",
     "* Generate and save masks of regions with unlabeled people - used to mask out the loss for those regions during training.\n",
    "* Mask folder is created based on the `coco_spec.json` file path. `mask_root_dir_path` directory is relative to `root_directory_path`. Similarly for `images_root_dir_path` and `annotation_root_dir_path`\n",
    "* Use `-m 'train'` to process data specified under `train_data` in `coco_spec.json`. Similarly, `-m 'test'` for `test_data`.\n",
    "\n",
    "*Note: TfRecords and masks only need to be generated once.*"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Generate TFRecords for training dataset\n",
    "!tao model bpnet dataset_convert \\\n",
    "        -m 'train' \\\n",
    "        -o $DATA_DIR/train \\\n",
    "        -r $USER_EXPERIMENT_DIR/ \\\n",
    "        --generate_masks \\\n",
    "        --dataset_spec $DATA_POSE_SPECS_DIR/coco_spec.json"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Generate TFRecords for validation dataset\n",
    "!tao model bpnet dataset_convert \\\n",
    "        -m 'test' \\\n",
    "        -o $DATA_DIR/val \\\n",
    "        -r $USER_EXPERIMENT_DIR/ \\\n",
    "        --generate_masks \\\n",
    "        --dataset_spec $DATA_POSE_SPECS_DIR/coco_spec.json"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# check the tfrecords are generated\n",
    "!if [ ! -f $LOCAL_DATA_DIR/train-fold-000-of-001 ]; then echo 'Train Tfrecords not found, please generate.'; else echo 'Found train Tfrecords.';fi\n",
    "!if [ ! -f $LOCAL_DATA_DIR/val-fold-000-of-001 ]; then echo 'Val Tfrecords not found, please generate.'; else echo 'Found val Tfrecords.';fi"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.2. Use your own dataset by converting to COCO dataset format <a class=\"anchor\" id=\"head-3-2\"></a>\n",
    "\n",
    "You may use this notebook with your own dataset as well. This section briefly talks about how you can use your own dataset with BodyposeNet. \n",
    "\n",
    "*Note: If you are already using the coco dataset for this notebook example, you may skip this section.*\n",
    "\n",
    "To use this example with your own dataset:\n",
    "* Prepare the data and annotations in a format similar to COCO dataset.\n",
    "* Create a dataset spec under `data_pose_config` similar to `coco_spec.json` which includes the dataset paths, pose configuration, occlusion labeling convention etc.\n",
    "* Convert your annotations to COCO annotations format.\n",
     "* Follow the same instructions from section 3 onwards.\n",
    "\n",
     "This section outlines the COCO annotation format that your data must follow for BodyposeNet. Although COCO annotations have many fields (please see the snippet view of the annotations above), only the attributes needed by BodyposeNet are specified here. The dataset should use the following overall structure (in a `.json` format):\n",
    "```\n",
    "{\n",
    "    \"images\": [...],\n",
    "    \"annotations\": [...],\n",
    "    \"categories\": [...]\n",
    "}\n",
    "```\n",
    "\n",
     "The `images` section contains the complete list of images in the dataset with some metadata. *Note: image ids must be unique across the dataset.*\n",
    "\n",
    "```\n",
    "\"images\": [\n",
    "    {\n",
    "        \"file_name\": \"000000001000.jpg\",\n",
    "        \"height\": 480,\n",
    "        \"width\": 640,\n",
    "        \"id\": 1000\n",
    "    },\n",
    "    {\n",
    "        \"file_name\": \"000000580197.jpg\",\n",
    "        \"height\": 480,\n",
    "        \"width\": 640,\n",
    "        \"id\": 580197\n",
    "    },\n",
    "    ...\n",
    "]\n",
    "```\n",
    "\n",
     "The `annotations` section follows this format:\n",
    "```\n",
    "\"annotations\": [\n",
    "    {\n",
    "        \"segmentation\": [[162.46,152.13,150.73,...173.92,156.23]],\n",
    "        \"num_keypoints\": 17,\n",
    "        \"area\": 8720.28915,\n",
    "        \"iscrowd\": 0,\n",
    "        \"keypoints\": [162,174,2,...,149,352,2],\n",
    "        \"image_id\": 1000,\n",
    "        \"bbox\": [115.16,152.13,83.23,228.41],\n",
    "        \"category_id\": 1,\n",
    "        \"id\": 1234574\n",
    "    }\n",
    "]\n",
    "```\n",
    "\n",
    "Where:\n",
     "* `segmentation` is a list of polygons, each given as a list of vertices, for a given person / group.\n",
    "* `num_keypoints` is the number of keypoints that are labeled\n",
    "* `iscrowd` if `1` indicates that the annotation mask is for multiple people\n",
    "* `category_id` is always `1` which is for a `person`\n",
    "* `id` is the id of the annotation and `image_id` is the id of the associated image\n",
    "* `keypoints` is a list of keypoints with format as follows `[x1, y1, v1, x2, y2, v2 ...]` where `x` and `y` are pixel locations and `v` is visibility/occlusion flag. \n",
    "\n",
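     "The flat `keypoints` list can be unpacked into per-joint `(x, y, v)` triples, which also recovers `num_keypoints`; a minimal sketch:\n",
     "\n",
     "```python\n",
     "def parse_keypoints(flat_keypoints):\n",
     "    # Split [x1, y1, v1, x2, y2, v2, ...] into (x, y, visibility) triples.\n",
     "    triples = [tuple(flat_keypoints[i:i + 3]) for i in range(0, len(flat_keypoints), 3)]\n",
     "    # A keypoint counts as labeled when its visibility flag is non-zero.\n",
     "    num_labeled = sum(1 for _, _, v in triples if v != 0)\n",
     "    return triples, num_labeled\n",
     "```\n",
     "\n",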
     "The `categories` section below defines the keypoint convention used by the dataset. The keypoint convention in your dataset needs to be converted to this format.\n",
    "```\n",
    "\"categories\": [\n",
    "    {\n",
    "        \"supercategory\": \"person\",\n",
    "        \"id\": 1,\n",
    "        \"name\": \"person\",\n",
    "        \"keypoints\": [\n",
    "            \"nose\",\"left_eye\",\"right_eye\",\"left_ear\",\"right_ear\",\n",
    "            \"left_shoulder\",\"right_shoulder\",\"left_elbow\",\"right_elbow\",\n",
    "            \"left_wrist\",\"right_wrist\",\"left_hip\",\"right_hip\",\n",
    "            \"left_knee\",\"right_knee\",\"left_ankle\",\"right_ankle\"\n",
    "        ],\n",
    "        \"skeleton\": [\n",
    "            [16,14],[14,12],[17,15],[15,13],[12,13],[6,12],[7,13],[6,7],\n",
    "            [6,8],[7,9],[8,10],[9,11],[2,3],[1,2],[1,3],[2,4],[3,5],[4,6],[5,7]\n",
    "        ]\n",
    "    }\n",
    "]\n",
    "```\n",
    "\n",
    "COCO dataset follows the given visibility flag convention:\n",
    "```\n",
    "\"visibility_flags\": {\n",
    "    \"value\": {\n",
    "        \"visible\": 2,\n",
    "        \"occluded\": 1,\n",
    "        \"not_labeled\": 0\n",
    "    },\n",
    "    \"mapping\": {\n",
    "        \"visible\": \"visible\",\n",
    "        \"occluded\": \"occluded\",\n",
    "        \"not_labeled\": \"not_labeled\"\n",
    "    }\n",
    "}\n",
    "```\n",
     "You can either convert your dataset to this format, or provide the mapping as above. `value` maps each visibility flag name to its numeric value, and `mapping` maps your naming convention to the convention used in BodyposeNet. You need to map all your states to these three categories: `visible`, `occluded`, and `not_labeled`.\n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### 3.3. Download pre-trained model <a class=\"anchor\" id=\"head-3-3\"></a>\n",
    "\n",
     "Download the correct pretrained model from the NGC model registry for your experiment. For optimum results, please download the model from `nvidia/tao/bodyposenet`. The models are organized by version strings; for example, the pretrained model suitable for bpnet is the NGC object `nvidia/tao/bodyposenet:trainable_v1.0`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Installing NGC CLI on the local machine.\n",
    "## Download and install\n",
    "%env CLI=ngccli_cat_linux.zip\n",
    "!mkdir -p $LOCAL_PROJECT_DIR/ngccli\n",
    "\n",
    "# Remove any previously existing CLI installations\n",
    "!rm -rf $LOCAL_PROJECT_DIR/ngccli/*\n",
    "!wget \"https://ngc.nvidia.com/downloads/$CLI\" -P $LOCAL_PROJECT_DIR/ngccli\n",
    "!unzip -u \"$LOCAL_PROJECT_DIR/ngccli/$CLI\" -d $LOCAL_PROJECT_DIR/ngccli/\n",
    "!rm $LOCAL_PROJECT_DIR/ngccli/*.zip \n",
    "os.environ[\"PATH\"]=\"{}/ngccli/ngc-cli:{}\".format(os.getenv(\"LOCAL_PROJECT_DIR\", \"\"), os.getenv(\"PATH\", \"\"))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# List models available in the model registry.\n",
    "!ngc registry model list nvidia/tao/bodyposenet:*"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create the target destination to download the model.\n",
    "!mkdir -p $LOCAL_EXPERIMENT_DIR/pretrained_model/"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Download the pretrained model from NGC\n",
    "!ngc registry model download-version nvidia/tao/bodyposenet:trainable_v1.0 \\\n",
    "    --dest $LOCAL_EXPERIMENT_DIR/pretrained_model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Check if the pretrained model is present \n",
    "!ls -rlt $LOCAL_EXPERIMENT_DIR/pretrained_model/bodyposenet_vtrainable_v1.0"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 4. Provide training specification <a class=\"anchor\" id=\"head-4\"></a>\n",
    "\n",
    "Update the training spec at `$SPECS_DIR/bpnet_train_m1_coco.yaml` as needed. Some guidelines:\n",
    "* Tfrecords for the train datasets\n",
     "    * In order to use the newly generated tfrecords for training, update the `tfrecords_directory_path` and `train_records_path` parameters of the `dataset_config` section in the spec file at `$SPECS_DIR/bpnet_train_m1_coco.yaml`\n",
    "* Update `pose_config_path` with spec file at `$MODEL_POSE_SPECS_DIR/bpnet_18joints.json`.\n",
    "* Update `dataset_specs` with `{'coco': $DATA_POSE_SPECS_DIR/coco_spec.json}`. If using other datasets, `{'<dataset>': $DATA_POSE_SPECS_DIR/<dataset>_spec.json}` \n",
    "* Augmentation parameters for on the fly data augmentation\n",
    "* Other training (hyper-)parameters such as batch size, number of epochs, learning rate etc."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!cat $LOCAL_SPECS_DIR/bpnet_train_m1_coco.yaml"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 5. Run TAO training <a class=\"anchor\" id=\"head-5\"></a>\n",
    "\n",
    "* Provide the sample spec file and the output directory location for models\n",
    "\n",
     "*Note: The training may take days to complete. The rest of the notebook assumes that the training was done in single-GPU mode. In multi-GPU mode, training time will decrease roughly by a factor of `$NUM_GPUS`.*\n",
    "\n",
     "When running the training in multi-GPU mode (`$NUM_GPUS` > 1), you may need to modify the `learning_rate` and/or `batch_size` to get accuracy similar to a 1-GPU training run. In most cases, scaling down the batch size by a factor of `$NUM_GPUS` or scaling up the learning rate by a factor of `$NUM_GPUS` is a good place to start.\n",
    "\n",
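     "As a rough illustration of this scaling heuristic (the base values below are placeholders, not values from the spec file):\n",
     "\n",
     "```python\n",
     "def scale_for_multi_gpu(base_lr, base_batch_size, num_gpus):\n",
     "    # Two common starting points when moving from 1 GPU to num_gpus:\n",
     "    # keep the global batch size by shrinking the per-GPU batch size,\n",
     "    # or keep the per-GPU batch size and scale the learning rate linearly.\n",
     "    per_gpu_batch_size = max(1, base_batch_size // num_gpus)\n",
     "    scaled_lr = base_lr * num_gpus\n",
     "    return per_gpu_batch_size, scaled_lr\n",
     "```\n",
     "\n",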
    "BodyposeNet supports restart from checkpoint. In case the training job is killed prematurely, you may resume training from the closest checkpoint by simply re-running the same command line. Please do make sure to use the same number of GPUs when restarting the training.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!tao model bpnet train -e $SPECS_DIR/bpnet_train_m1_coco.yaml \\\n",
    "                 -r $USER_EXPERIMENT_DIR/models/exp_m1_unpruned \\\n",
    "                 -k $KEY \\\n",
    "                 --gpus $NUM_GPUS"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# check the training folder for generated files\n",
    "!ls -lh $LOCAL_EXPERIMENT_DIR/models/exp_m1_unpruned"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Set env for model to use for the remaining steps: infer, evaluate, export.\n",
     "# NOTE: The last-epoch model will additionally be tagged/saved as `bpnet_model.hdf5`.\n",
    "# If you want to evaluate model from any other step, please change the below env\n",
    "# variable accordingly with the filename of the checkpoint.\n",
    "# Example:\n",
    "# %set_env MODEL_CHECKPOINT=model.step-1152500.hdf5\n",
    "%set_env MODEL_CHECKPOINT=bpnet_model.hdf5"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 6. Evaluate the trained model <a class=\"anchor\" id=\"head-6\"></a>\n",
    "\n",
    "Evaluate the trained model using the latest checkpoint (or any other checkpoint). \n",
    "\n",
    "To keep the evaluation consistent with bottom-up human pose estimation research, we have two modes to evaluate the model.\n",
    "* `infer_spec.yaml`: This configuration does a single-scale inference on the input image. Aspect ratio of the input image is retained by fixing one of the sides of the network input (height or width), and adjusting the other side to match the aspect ratio of the input image. \n",
    "* `infer_spec_refine.yaml`: This configuration does a multi-scale inference on the input image. The scales are configurable. By default, the following scales are used: (0.5, 1.0, 1.5, 2.0)\n",
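     "\n",
     "As an illustration of the aspect-ratio handling in `infer_spec.yaml`, fixing the network input height and adjusting the width could be sketched as follows (a simplified sketch of the idea, not the exact implementation):\n",
     "\n",
     "```python\n",
     "def infer_input_shape(image_h, image_w, fixed_h):\n",
     "    # Keep the network input height fixed and pick the width that\n",
     "    # matches the aspect ratio of the input image.\n",
     "    scaled_w = int(round(fixed_h * image_w / image_h))\n",
     "    return fixed_h, scaled_w\n",
     "```\n",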
    "\n",
     "We also have another mode, used primarily to verify against the final exported TRT models. *We will be using this in the later sections.*  \n",
    "* `infer_spec_strict.yaml`: This configuration does a single-scale inference on the input image. Aspect ratio of the input image is retained by padding the image on the sides as needed to fit the network input size since the TRT model input dims are fixed.  \n",
    "\n",
    "*Note: The `--model_filename` arg will override the `model_path` in the `infer_spec.yaml`*\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Single-scale inference\n",
    "!tao model bpnet evaluate  --inference_spec $SPECS_DIR/infer_spec.yaml \\\n",
    "                     --model_filename $USER_EXPERIMENT_DIR/models/exp_m1_unpruned/$MODEL_CHECKPOINT \\\n",
    "                     --dataset_spec $DATA_POSE_SPECS_DIR/coco_spec.json \\\n",
    "                     --results_dir $USER_EXPERIMENT_DIR/results/exp_m1_unpruned/eval_default\n",
    "\n",
    "# Uncomment this section for Multi-scale inference\n",
    "# !tao model bpnet evaluate  --inference_spec $SPECS_DIR/infer_spec_refine.yaml \\\n",
    "#                      --model_filename $USER_EXPERIMENT_DIR/models/exp_m1_unpruned/$MODEL_CHECKPOINT \\\n",
    "#                      --dataset_spec $DATA_POSE_SPECS_DIR/coco_spec.json \\\n",
    "#                      --results_dir $USER_EXPERIMENT_DIR/results/exp_m1_unpruned/eval_refine"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Check if the bodypose evaluation results file is generated.\n",
    "!if [ ! -f $LOCAL_EXPERIMENT_DIR/results/exp_m1_unpruned/eval_default/results.csv ]; then echo 'Bodypose Evaluation results file not found!'; else cat $LOCAL_EXPERIMENT_DIR/results/exp_m1_unpruned/eval_default/results.csv;fi"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 7. Run inference on validation set <a class=\"anchor\" id=\"head-7\"></a>\n",
    "\n",
    "In this section, we run the inference tool to generate inferences on the trained models.\n",
    "\n",
    "Set-up:\n",
    "* The model to be used for inference can be either specified in the `model_path` of the `infer_spec.yaml` or as command line argument.\n",
    "* The `train_spec` in the `inference_spec` file should be the bodypose training spec file used for training. \n",
    "\n",
    "The inference tool produces two outputs\n",
    "* Overlaid images in `$USER_EXPERIMENT_DIR/results/exp_m1_unpruned/infer_default/images_annotated` (this is when `--dump_visualizations` is enabled)\n",
    "* Frame by frame keypoint labels in `$USER_EXPERIMENT_DIR/results/exp_m1_unpruned/infer_default/detection.json`. *Visualize annotations manually from detections* shows how to parse this result json.\n",
    "\n",
    "*Note: This supports multiple input types: (`image`, `dir` and `json`). To run inferences for any of these, set the `input_type` and add the path to `input`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "filenames = ['000000214720.jpg', '000000283520.jpg', '000000239537.jpg', '000000001000.jpg',\n",
    "             '000000006954.jpg', '000000032081.jpg', '000000033759.jpg', '000000076468.jpg',\n",
    "             '000000121673.jpg', '000000130599.jpg', '000000160864.jpg', '000000140270.jpg']\n",
    "data = [os.path.join(os.getenv(\"DATA_DIR\"), \"val2017\", filename) for filename in filenames]\n",
    "with open(os.path.join(os.getenv(\"LOCAL_DATA_DIR\"), \"viz_example_data.json\"), 'w') as f:\n",
    "    json.dump(data, f)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Single-scale inference\n",
    "!tao model bpnet inference  --inference_spec $SPECS_DIR/infer_spec.yaml \\\n",
    "                      --model_filename $USER_EXPERIMENT_DIR/models/exp_m1_unpruned/$MODEL_CHECKPOINT \\\n",
    "                      --input_type json \\\n",
    "                      --input $USER_EXPERIMENT_DIR/data/viz_example_data.json \\\n",
    "                      --results_dir $USER_EXPERIMENT_DIR/results/exp_m1_unpruned/infer_default \\\n",
    "                      --dump_visualizations\n",
    "\n",
    "# Uncomment this section for Multi-scale inference\n",
    "# !tao model bpnet inference  --inference_spec $SPECS_DIR/infer_spec_refine.yaml \\\n",
    "#                       --model_filename $USER_EXPERIMENT_DIR/models/exp_m1_unpruned/$MODEL_CHECKPOINT \\\n",
    "#                       --input_type json \\\n",
    "#                       --input $USER_EXPERIMENT_DIR/data/viz_example_data.json \\\n",
    "#                       --results_dir $USER_EXPERIMENT_DIR/results/exp_m1_unpruned/infer_refine \\\n",
    "#                       --dump_visualizations"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# check the results file is generated\n",
    "!if [ ! -f $LOCAL_EXPERIMENT_DIR/results/exp_m1_unpruned/infer_default/detections.json ]; then echo 'Results file not found!'; else cat $LOCAL_EXPERIMENT_DIR/results/exp_m1_unpruned/infer_default/detections.json;fi"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 7.1. Visualize annotations <a class=\"anchor\" id=\"head-7-1\"></a>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!ls $LOCAL_EXPERIMENT_DIR/results/exp_m1_unpruned/infer_default/images_annotated"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Simple grid visualizer\n",
    "%matplotlib inline\n",
    "import matplotlib.pyplot as plt\n",
    "import os\n",
    "from math import ceil\n",
    "valid_image_ext = ['.jpg', '.png', '.jpeg', '.ppm']\n",
    "\n",
    "def visualize_images(output_path, num_cols=2, num_images=4):\n",
    "    num_rows = int(ceil(float(num_images) / float(num_cols)))\n",
    "    f, axarr = plt.subplots(num_rows, num_cols, figsize=[80,30])\n",
    "    f.tight_layout()\n",
    "    a = [os.path.join(output_path, image) for image in os.listdir(output_path) \n",
    "         if os.path.splitext(image)[1].lower() in valid_image_ext]\n",
    "    for idx, img_path in enumerate(a[:num_images]):\n",
    "        col_id = idx % num_cols\n",
    "        row_id = idx // num_cols\n",
    "        img = plt.imread(img_path)\n",
    "        axarr[row_id, col_id].imshow(img) "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    " # Visualizing sampled images.\n",
    "OUTPUT_PATH = os.path.join(os.getenv(\"LOCAL_EXPERIMENT_DIR\"), 'results/exp_m1_unpruned/infer_default/images_annotated/')\n",
    "\n",
    "# Uncomment to visualize multi-scale results\n",
    "# OUTPUT_PATH = os.path.join(os.getenv(\"LOCAL_EXPERIMENT_DIR\"), 'results/exp_m1_unpruned/infer_refine/images_annotated/')\n",
    "\n",
    "COLS = 3 # number of columns in the visualizer grid.\n",
    "IMAGES = 9 # number of images to visualize.\n",
    "\n",
    "visualize_images(OUTPUT_PATH, num_cols=COLS, num_images=IMAGES)\n",
    "# Note that the accuracy is not gauranteed for these visualization examples."
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 7.2. Visualize annotations manually from detections <a class=\"anchor\" id=\"head-7-2\"></a>\n",
    "\n",
    "This section illustrates the following:\n",
    "* How the result `detections.json` can be parsed\n",
    "* How the skeleton is built from the `categories` inside the result file.\n",
    "* How to visualize the skeleton on an image\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Helper Functions\n",
    "def gen_topology(skeleton):\n",
    "    \"\"\"Generate skeleton topology.\"\"\"\n",
    "    K = len(skeleton)\n",
    "    topology = np.zeros((K, 4), dtype=np.int)\n",
    "    for k in range(K):\n",
    "        topology[k][0] = 2 * k\n",
    "        topology[k][1] = 2 * k + 1\n",
    "        topology[k][2] = skeleton[k][0] - 1\n",
    "        topology[k][3] = skeleton[k][1] - 1\n",
    "    return topology\n",
    "\n",
    "def draw_on_image(img, topology, keypoints):\n",
    "    peak_color = (0, 150, 255)\n",
    "    edge_color = (190, 0, 254)\n",
    "    stick_width = 2\n",
    "\n",
    "    # loop through keypoints and draw on image\n",
    "    for i in range(topology.shape[0]):\n",
    "        start_idx = topology[i][2]\n",
    "        end_idx = topology[i][3]\n",
    "        for n in range(len(keypoints)):\n",
    "            start_joint = keypoints[n][start_idx]\n",
    "            end_joint = keypoints[n][end_idx]\n",
    "            if 0 in start_joint or 0 in end_joint:\n",
    "                continue\n",
    "            cv2.circle(\n",
    "                image, (int(\n",
    "                    start_joint[0]), int(\n",
    "                    start_joint[1])), 4, peak_color, thickness=-1)\n",
    "            cv2.circle(\n",
    "                image, (int(\n",
    "                    end_joint[0]), int(\n",
    "                    end_joint[1])), 4, peak_color, thickness=-1)\n",
    "            cv2.line(\n",
    "                image, (int(\n",
    "                    start_joint[0]), int(\n",
    "                    start_joint[1])), (int(\n",
    "                        end_joint[0]), int(\n",
    "                        end_joint[1])), edge_color, thickness=stick_width)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import cv2\n",
    "import IPython.display\n",
    "import PIL.Image\n",
    "import json\n",
    "import numpy as np\n",
    "# read results\n",
    "results_file = os.path.join(os.getenv(\"LOCAL_EXPERIMENT_DIR\"), \"results/exp_m1_unpruned/infer_default/detections.json\")\n",
    "with open(results_file) as f:\n",
    "    results = json.load(f)\n",
    "\n",
    "# Generate the topology\n",
    "skeleton = results['categories'][0]['skeleton']\n",
    "topology = gen_topology(skeleton)\n",
    "\n",
    "# get predictions\n",
    "image_data = results['images'][10]\n",
    "keypoints = image_data['keypoints']\n",
    "image_path = image_data['full_image_path'] \\\n",
    "                .replace(os.getenv(\"USER_EXPERIMENT_DIR\"), os.getenv(\"LOCAL_EXPERIMENT_DIR\"))\n",
    "\n",
    "# read image\n",
    "img = cv2.imread(image_path)\n",
    "image = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n",
    "draw_on_image(image, topology, keypoints)\n",
    "\n",
    "# display image\n",
    "IPython.display.display(PIL.Image.fromarray(image))\n",
    "# Note that the accuracy is not gauranteed for this visualization example."
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 8. Pruning Workflow <a class=\"anchor\" id=\"head-8\"></a>\n",
    "\n",
    "### 8.1. Prune the trained model <a class=\"anchor\" id=\"head-8-1\"></a>\n",
    "\n",
    "* Specify pre-trained model\n",
    "* Equalization criterion (Applicable for resnets and mobilenets)\n",
    "* Threshold for pruning.\n",
    "* Output directory to store the model\n",
    "\n",
    "*Usually, you just need to adjust -pth (threshold) for accuracy and model size trade off. Higher pth gives you smaller model (and thus higher inference speed) but worse accuracy. The threshold to use depends on the dataset. A pth value 5.2e-6 is just a start point. If the retrain accuracy is good, you can increase this value to get smaller models. Otherwise, lower this value to get better accuracy.*\n",
    "\n",
    "For some internal studies, we have noticed that a pth value of 0.05 is a good starting point for bodyposenet models."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    " # Create an output directory if it doesn't exist.\n",
    "!mkdir -p $LOCAL_EXPERIMENT_DIR/models/exp_m1_pruned"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!tao model bpnet prune -m $USER_EXPERIMENT_DIR/models/exp_m1_unpruned/$MODEL_CHECKPOINT \\\n",
    "                 -o $USER_EXPERIMENT_DIR/models/exp_m1_pruned/bpnet_model.pruned-0.2.hdf5 \\\n",
    "                 -r $USER_EXPERIMENT_DIR/ \\\n",
    "                 -eq union \\\n",
    "                 -pth 0.2"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Check if the file exists\n",
    "!ls -rlt $LOCAL_EXPERIMENT_DIR/models/exp_m1_pruned/"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 8.2. Retrain the pruned model <a class=\"anchor\" id=\"head-8-2\"></a>\n",
    "\n",
    "* Model needs to be re-trained to bring back accuracy after pruning\n",
    "* Specify re-training specification with pretrained weights as pruned model.\n",
    "* Follow the same instructions as in *Run TAO Training* section for multi-gpu support\n",
    "\n",
    "*Note: For retraining, please set the load_graph option to true in the model_config to load the pruned model graph. Also, if after retraining, the model shows some decrease in mAP, it could be that the originally trained model, was pruned a little too much. Please try reducing the pruning threshold, thereby reducing the pruning ratio, and use the new model to retrain.*"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Printing the retrain experiment file. \n",
    "# Note: We have updated the experiment file to include the \n",
    "# newly pruned model as a pretrained weights and, the\n",
    "# load_graph option is set to true \n",
    "!cat $LOCAL_SPECS_DIR/bpnet_retrain_m1_coco.yaml"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Retraining using the pruned model as model graph \n",
    "!tao model bpnet train -e $SPECS_DIR/bpnet_retrain_m1_coco.yaml \\\n",
    "                 -r $USER_EXPERIMENT_DIR/models/exp_m1_retrain \\\n",
    "                 --gpus $NUM_GPUS"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    " # Listing the newly retrained model.\n",
    "!ls -rlt $LOCAL_EXPERIMENT_DIR/models/exp_m1_retrain"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Set env for model to use for the remaining steps: infer, evaluate, export.\n",
    "# NOTE: The last epoch model will tagged/saved as `bpnet_model.hdf5` additionally.\n",
    "# If you want to evaluate model from any other step, please change the below env\n",
    "# variable accordingly with the filename of the checkpoint.\n",
    "# Example:\n",
    "# %set_env MODEL_CHECKPOINT=model.step-1152500.hdf5\n",
    "%set_env RETRAIN_MODEL_CHECKPOINT=bpnet_model.hdf5"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 8.3. Evaluate the retrained model <a class=\"anchor\" id=\"head-8-3\"></a>\n",
    "\n",
    "This section evaluates the pruned and retrained model, using bpnet evaluate. If you see large drop in accuracy, please adjust pruning threshold or retraining params accordingly. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Single-scale inference\n",
    "!tao model bpnet evaluate --inference_spec $SPECS_DIR/infer_spec_retrained.yaml \\\n",
    "                    --model_filename $USER_EXPERIMENT_DIR/models/exp_m1_retrain/$RETRAIN_MODEL_CHECKPOINT \\\n",
    "                    --dataset_spec $DATA_POSE_SPECS_DIR/coco_spec.json \\\n",
    "                    --results_dir $USER_EXPERIMENT_DIR/results/exp_m1_retrain/eval_default\n",
    "\n",
    "# Uncomment this section for Multi-scale inference\n",
    "# !tao model bpnet evaluate  --inference_spec $SPECS_DIR/infer_spec_retrained_refine.yaml \\\n",
    "#                      --model_filename $USER_EXPERIMENT_DIR/models/exp_m1_retrain/$RETRAIN_MODEL_CHECKPOINT \\\n",
    "#                      --dataset_spec $DATA_POSE_SPECS_DIR/coco_spec.json \\\n",
    "#                      --results_dir $USER_EXPERIMENT_DIR/results/exp_m1_retrain/eval_refine"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Check if the bodypose evaluation results file is generated.\n",
    "!if [ ! -f $LOCAL_EXPERIMENT_DIR/results/exp_m1_retrain/eval_default/results.csv ]; then echo 'Bodypose Evaluation results file not found!'; else cat $LOCAL_EXPERIMENT_DIR/results/exp_m1_retrain/eval_default/results.csv;fi"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 8.4. Inference using retrained model <a class=\"anchor\" id=\"head-8-4\"></a>\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Single-scale inference\n",
    "!tao model bpnet inference  --inference_spec $SPECS_DIR/infer_spec_retrained.yaml \\\n",
    "                      --model_filename $USER_EXPERIMENT_DIR/models/exp_m1_retrain/$RETRAIN_MODEL_CHECKPOINT \\\n",
    "                      --input_type json \\\n",
    "                      --input $USER_EXPERIMENT_DIR/data/viz_example_data.json \\\n",
    "                      --results_dir $USER_EXPERIMENT_DIR/results/exp_m1_retrain/infer_default \\\n",
    "                      --dump_visualizations\n",
    "\n",
    "# Uncomment this section for Multi-scale inference\n",
    "# !tao model bpnet inference  --inference_spec $SPECS_DIR/infer_spec_retrained_refine.yaml \\\n",
    "#                       --model_filename $USER_EXPERIMENT_DIR/models/exp_m1_retrain/$RETRAIN_MODEL_CHECKPOINT \\\n",
    "#                       --input_type json \\\n",
    "#                       --input $USER_EXPERIMENT_DIR/data/viz_example_data.json \\\n",
    "#                       --results_dir $USER_EXPERIMENT_DIR/results/exp_m1_retrain/infer_refine \\\n",
    "#                       --dump_visualizations"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 8.5. Visualize retrained model inferences <a class=\"anchor\" id=\"head-8-5\"></a>\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Visualize retrained model inferences\n",
    "OUTPUT_PATH = os.path.join(os.getenv(\"LOCAL_EXPERIMENT_DIR\"), 'results/exp_m1_retrain/infer_default/images_annotated/')\n",
    "\n",
    "# Uncomment to visualize multi-scale results\n",
    "# OUTPUT_PATH = os.path.join(os.getenv(\"LOCAL_EXPERIMENT_DIR\"), 'results/exp_m1_retrain/infer_refine/images_annotated/')\n",
    "\n",
    "COLS = 3 # number of columns in the visualizer grid.\n",
    "IMAGES = 9 # number of images to visualize.\n",
    "\n",
    "visualize_images(OUTPUT_PATH, num_cols=COLS, num_images=IMAGES)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 9. Model Export and INT8 Quantization <a class=\"anchor\" id=\"head-9\"></a>\n",
    "\n",
    "### 9.1. Choose network input resolution for deployment  <a class=\"anchor\" id=\"head-9-1\"></a>\n",
    "\n",
    "Network input resolution of the model is one of the major factors that determine the accuracy of bottom-up approaches. Bottom-up methods have to feed the whole image at once, resulting in smaller resolution per person. Hence, higher input resolution would yield better accuracy, especially on small and medium scale persons (w.r.t the image scale). But also note that with higher input resolution, the runtime of the CNN also would be higher. So the accuracy/runtime tradeoff should be decided based on the accuracy and runtime requirements for the target use case.\n",
    "\n",
    "**Height of the desired network**\n",
    "Depending on the target use case and the compute or latency constraints, you would need to choose a resolution that works best for you. If your application involves pose estimation for one or more persons close to camera such that the scale of the person are relatively large, then you could go with a smaller network input resolution. Whereas if you are targeting to use for persons with smaller relative scales like crowded scenes, you might want to go with a higher network input resolution. For instance, if your application has person with height of about 25% of the image, the final resized height would be -> (56px for network height of 224, 72px for network height of 288, and 80px for network height of 320). The network with 320 height has maximum resolution for the person and hence, would be more accurate.\n",
    "\n",
    "**Width of the desired network**\n",
    "Once you freeze the height of the network, the width can be decided based on the aspect ratio for your input data used during deployment time. Or you can also follow a standard multiple of 32/64 closest to the aspect ratio.\n",
    "\n",
    "*NOTE: The height and width should be a multiple of 8. Preferably, a multiple of 16/32/64*\n",
    "\n",
    "**Illustration of accuracy/runtime variation for different resolutions**\n",
    "\n",
    "*Note: These are approximate runtimes/accuracies for the default architecture and spec used in the notebook. Any changes to the architecture or params will yield different results. This is primarily to get a better sense of which resolution would suit your needs. The runtimes provided are for the CNN*\n",
    "\n",
    "| Input Resolution | Precision | Runtime (GeForce RTX 2080) | Runtime (Jetson AGX) |\n",
    "| :-----------: | :-----------: | :-----------: | :-----------: |\n",
    "| 320x448     | FP16        | 3.13ms    | 18.8ms    |\n",
    "| 288x384     | FP16        | 2.58ms    | 12.8ms    |\n",
    "| 224x320     | FP16        | 2.27ms    | 10.1ms    |\n",
    "| 320x448     | INT8        | 1.80ms    | 8.90ms    |\n",
    "| 288x384     | INT8        | 1.56ms    | 6.38ms    |\n",
    "| 224x320     | INT8        | 1.33ms    | 5.07ms    |\n",
    "\n",
    "You can expect to see a 7-10% mAP increase in `area=medium` category when going from 224x320 to 288x384 and an additional 7-10% mAP when you go to 320x448. The accuracy for `area=large` remains almost same across these resolutions, so you can stick to lower resolution if this is what you need. As per [COCO keypoint evaluation](https://cocodataset.org/#keypoints-eval), `medium` area is defined as persons occupying less than area between 36^2 to 96^2. Anything above it is categorized as `large`.\n",
    "\n",
    "\n",
    "**Default size used in the notebook**\n",
    "We use a default size of `288x384` for tradeoff between good accuracy and runtime. For the remainder of the notebook, we assume this configuration. If you would like to use a different resolution, you would need the following changes:\n",
    "1. Update the environment variables in the cell below with the desired shape.\n",
    "2. Update the `input_shape` in `infer_spec_strict.yaml` and `infer_spec_retrained_strict.yaml` which will allow you do a sanity evaluation of the exported TRT model. By default, it is set to `[288, 384]`\n"
   ]
  },
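  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check of the person-scale arithmetic above:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Resized person height at different network input heights, assuming a\n",
    "# person occupying ~25% of the image height (as in the example above).\n",
    "person_fraction = 0.25\n",
    "for net_height in (224, 288, 320):\n",
    "    print(net_height, '->', round(person_fraction * net_height), 'px')\n",
    "# 224 -> 56 px, 288 -> 72 px, 320 -> 80 px"
   ]
  },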
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Set dimensions of desired output model for inference/deployment\n",
    "%set_env IN_HEIGHT=288\n",
    "%set_env IN_WIDTH=384\n",
    "%set_env IN_CHANNELS=3\n",
    "%set_env INPUT_SHAPE=288x384x3\n",
    "\n",
    "# Set input name\n",
    "%set_env INPUT_NAME=input_1:0"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 9.2. Export `.onnx` model.<a class=\"anchor\" id=\"head-9-2\"></a>\n",
    "\n",
    "Use the export functionality to export an encrypted model in `fp32` format without any optimizations."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!mkdir -p $LOCAL_EXPERIMENT_DIR/models/exp_m1_final\n",
    "# Removing a pre-existing copy of the onnx if there has been any.\n",
    "output_file = os.path.join(os.getenv(\"LOCAL_EXPERIMENT_DIR\"), \"models/exp_m1_final/bpnet_model.onnx\")\n",
    "\n",
    "if os.path.exists(output_file):\n",
    "    os.system(\"rm {}\".format(output_file))\n",
    "\n",
    "# Export the pruned model as is with fp32 with no optimizations.\n",
    "!tao model bpnet export -m $USER_EXPERIMENT_DIR/models/exp_m1_retrain/$RETRAIN_MODEL_CHECKPOINT \\\n",
    "                  -e $SPECS_DIR/bpnet_retrain_m1_coco.yaml \\\n",
    "                  -r $USER_EXPERIMENT_DIR/ \\\n",
    "                  -o $USER_EXPERIMENT_DIR/models/exp_m1_final/bpnet_model.onnx \\\n",
    "                  -t tfonnx\n",
    "\n",
    "# Use command below, if you'd like to export the unpruned version.\n",
    "# !tao model bpnet export -m $USER_EXPERIMENT_DIR/models/exp_m1_unpruned/$MODEL_CHECKPOINT \\\n",
    "#                   -e $SPECS_DIR/bpnet_train_m1_coco.yaml \\\n",
    "#                   -r $USER_EXPERIMENT_DIR/ \\\n",
    "#                   -o $USER_EXPERIMENT_DIR/models/exp_m1_final/bpnet_model.onnx \\\n",
    "#                   -t tfonnx"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# check the deployment file is presented\n",
    "!if [ ! -f $LOCAL_EXPERIMENT_DIR/models/exp_m1_final/bpnet_model.onnx ]; then echo 'Deployment file not found, please generate.'; else echo 'Found deployment file.';fi"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 9.3. Int8 Optimization<a class=\"anchor\" id=\"head-9-3\"></a>\n",
    "\n",
    "BodyposeNet model supports int8 inference mode in TensorRT. In order to do this, the model is first calibrated to run 8-bit inferences. This is the process:\n",
    "* Provide a directory with set of images to be used for calibration. \n",
    "* A calibration tensorfile is generated and saved in `--cal_data_file`\n",
    "* This tensorfile is use to calibrate the model and the calibration table is stored in `--cal_cache_file`\n",
    "* The calibration table in addition to the model is used to generate the int8 tensorrt engine to the path `--engine_file`\n",
    "\n",
    "Since the COCO dataset contains a lot of non-person images as well which might not be useful for the calibration process, we use a sampling script which parses the annotations and samples required number of images at random based on certain criteria. The following command ensures that there is at least one person in the image being picked. (`pth` corresponds to threshold for minimum number of persons per image). You can choose to remove the `--randomize` flag to always pick the same subset of qualified images. \n",
    "\n",
    "*Note: For this example, we generate a calibration tensorfile containing 2000 batches of training data. Ideally, it is best to use at least 10-20% of the training data to do so. The more data provided during calibration, the closer int8 inferences are to fp32 inferences.*"
   ]
  },
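  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, the cell below is a minimal sketch of what such a person-based sampling script might do. *This is illustrative only and assumes COCO-style annotation fields; the bundled `sample_calibration_images.py` script is what is actually used in this notebook.*"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Minimal sketch of person-based calibration sampling (illustrative only;\n",
    "# the notebook uses the bundled sample_calibration_images.py script).\n",
    "import json\n",
    "import random\n",
    "\n",
    "def sample_calib_images(ann_file, num_samples, min_persons=1, randomize=True):\n",
    "    with open(ann_file) as f:\n",
    "        coco = json.load(f)\n",
    "    # Count person annotations per image.\n",
    "    counts = {}\n",
    "    for ann in coco['annotations']:\n",
    "        counts[ann['image_id']] = counts.get(ann['image_id'], 0) + 1\n",
    "    # Keep only images with at least `min_persons` annotated persons.\n",
    "    qualified = [img['file_name'] for img in coco['images']\n",
    "                 if counts.get(img['id'], 0) >= min_persons]\n",
    "    if randomize:\n",
    "        random.shuffle(qualified)\n",
    "    return qualified[:num_samples]"
   ]
  },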
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Number of calibration samples to use\n",
    "%set_env NUM_CALIB_SAMPLES=2000"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!python3 sample_calibration_images.py \\\n",
    "    -a $LOCAL_EXPERIMENT_DIR/data/annotations/person_keypoints_train2017.json \\\n",
    "    -i $LOCAL_EXPERIMENT_DIR/data/train2017/ \\\n",
    "    -o $LOCAL_EXPERIMENT_DIR/data/calibration_samples/ \\\n",
    "    -n $NUM_CALIB_SAMPLES \\\n",
    "    -pth 1 \\\n",
    "    --randomize"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "output_file = os.path.join(os.getenv(\"LOCAL_EXPERIMENT_DIR\"), \"models/exp_m1_final/bpnet_model.onnx\")\n",
    "# NOTE: If you are trying to re-run calibration, please remove the calibration table (cal_cache_file).\n",
    "# If you are trying to re-generate calibration data, please remove cal_data_file as well.\n",
    "\n",
    "if os.path.exists(output_file):\n",
    "    os.system(\"rm {}\".format(output_file))\n",
    "\n",
    "!tao model bpnet export \\\n",
    "    -m $USER_EXPERIMENT_DIR/models/exp_m1_retrain/$RETRAIN_MODEL_CHECKPOINT \\\n",
    "    -o $USER_EXPERIMENT_DIR/models/exp_m1_final/bpnet_model.onnx \\\n",
    "    -r $USER_EXPERIMENT_DIR/ \\\n",
    "    -d $IN_HEIGHT,$IN_WIDTH,$IN_CHANNELS \\\n",
    "    -e $SPECS_DIR/bpnet_retrain_m1_coco.yaml \\\n",
    "    -t tfonnx \\\n",
    "    --data_type int8 \\\n",
    "    --engine_file $USER_EXPERIMENT_DIR/models/exp_m1_final/bpnet_model.$IN_HEIGHT.$IN_WIDTH.int8.engine \\\n",
    "    --cal_image_dir $USER_EXPERIMENT_DIR/data/calibration_samples/ \\\n",
    "    --cal_cache_file $USER_EXPERIMENT_DIR/models/exp_m1_final/calibration.$IN_HEIGHT.$IN_WIDTH.bin  \\\n",
    "    --cal_data_file $USER_EXPERIMENT_DIR/models/exp_m1_final/coco.$IN_HEIGHT.$IN_WIDTH.tensorfile \\\n",
    "    --batch_size 1 \\\n",
    "    --batches $NUM_CALIB_SAMPLES \\\n",
    "    --max_batch_size 1 \\\n",
    "    --data_format channels_last"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 9.4. Generate TensorRT engine<a class=\"anchor\" id=\"head-9-4\"></a>\n",
    "Here, we use another method of generating the TensorRT engine model. If you have another Nvidia GPU device where you'd like to optimize the `.onnx` model for, you can use the `trtexec` command on that device alongside your `.onnx` model and calibration cache to generate the optimized TensorRT engine.\n",
    "\n",
    "Verify engine generation using the `trtexec` utility included with the docker.\n",
    "\n",
    "The `trtexec` produces optimized tensorrt engines for the platform that it resides on. Therefore, to get maximum performance, please instantiate this docker and execute the `trtexec` command, with the exported `.onnx` file and calibration cache (for int8 mode) on your target device. \n",
    "\n",
    "The `trtexec` utility included in this docker only works for x86 devices, with discrete NVIDIA GPU's. For the jetson devices, please download the `tao-converter` for jetson from the dev zone link [here](https://developer.nvidia.com/tao-converter). \n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Set opt profile shapes\n",
    "%set_env MAX_BATCH_SIZE=1\n",
    "%set_env OPT_BATCH_SIZE=1"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Convert to TensorRT engine(INT8).\n",
    "!tao model bpnet run trtexec --onnx=$USER_EXPERIMENT_DIR/models/exp_m1_final/bpnet_model.onnx \\\n",
    "                             --minShapes=${INPUT_NAME}:1x${INPUT_SHAPE} \\\n",
    "                             --maxShapes=${INPUT_NAME}:${MAX_BATCH_SIZE}x${INPUT_SHAPE} \\\n",
    "                             --optShapes=${INPUT_NAME}:${OPT_BATCH_SIZE}x${INPUT_SHAPE} \\\n",
    "                             --int8 \\\n",
    "                             --calib=$USER_EXPERIMENT_DIR/models/exp_m1_final/calibration.$IN_HEIGHT.$IN_WIDTH.bin \\\n",
    "                             --saveEngine=$USER_EXPERIMENT_DIR/models/exp_m1_final/bpnet_model.$IN_HEIGHT.$IN_WIDTH.int8.engine"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 10. Verify TensorRT model and Deploy <a class=\"anchor\" id=\"head-10\"></a>\n",
    "\n",
    "Verify the exported model by visualizing inferences on TensorRT.\n",
    "In addition to running inference on a `.hdf5` model, the inference tool is also capable of consuming the converted TensorRT engine.\n",
    "\n",
    "\n",
    "### 10.1. Inference Using TensorRt Engine <a class=\"anchor\" id=\"head-10-1\"></a>\n",
    "\n",
    "Please make sure to update the inference_spec file if you are using a different resolution other than default."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Set helper envs\n",
    "%set_env INFER_DIR_NAME=infer_strict_${IN_HEIGHT}_${IN_WIDTH}\n",
    "%set_env EVAL_DIR_NAME=eval_strict_${IN_HEIGHT}_${IN_WIDTH}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# INT8 inference\n",
    "!tao model bpnet inference --inference_spec $SPECS_DIR/infer_spec_retrained_strict.yaml \\\n",
    "                     --model_filename $USER_EXPERIMENT_DIR/models/exp_m1_final/bpnet_model.$IN_HEIGHT.$IN_WIDTH.int8.engine \\\n",
    "                     --input_type json \\\n",
    "                     --input $USER_EXPERIMENT_DIR/data/viz_example_data.json \\\n",
    "                     --results_dir $USER_EXPERIMENT_DIR/results/exp_m1_final/${INFER_DIR_NAME}_int8 \\\n",
    "                     --dump_visualizations\n",
    "\n",
    "# FP16 inference\n",
    "# !tao model bpnet inference --inference_spec $SPECS_DIR/infer_spec_retrained_strict.yaml \\\n",
    "#                      --model_filename $USER_EXPERIMENT_DIR/models/exp_m1_final/bpnet_model.$IN_HEIGHT.$IN_WIDTH.fp16.engine \\\n",
    "#                      --input_type json \\\n",
    "#                      --input $USER_EXPERIMENT_DIR/data/viz_example_data.json \\\n",
    "#                      --results_dir $USER_EXPERIMENT_DIR/results/exp_m1_final/${INFER_DIR_NAME}_fp16 \\\n",
    "#                      --dump_visualizations\n",
    "\n",
    "# FP32 inference\n",
    "# !tao model bpnet inference --inference_spec $SPECS_DIR/infer_spec_retrained_strict.yaml \\\n",
    "#                      --model_filename $USER_EXPERIMENT_DIR/models/exp_m1_final/bpnet_model.$IN_HEIGHT.$IN_WIDTH.fp32.engine \\\n",
    "#                      --input_type json \\\n",
    "#                      --input $USER_EXPERIMENT_DIR/data/viz_example_data.json \\\n",
    "#                      --results_dir $USER_EXPERIMENT_DIR/results/exp_m1_final/${INFER_DIR_NAME}_fp32 \\\n",
    "#                      --dump_visualizations"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 10.2. Visualize TensorRT Inferences<a class=\"anchor\" id=\"head-10-2\"></a>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Visualize trt inferences\n",
    "OUTPUT_PATH = os.path.join(\n",
    "    os.getenv(\"LOCAL_EXPERIMENT_DIR\"), 'results/exp_m1_final/infer_strict_{}_{}_int8/images_annotated/'.format(\n",
    "        os.getenv(\"IN_HEIGHT\"), os.getenv(\"IN_WIDTH\")))\n",
    "\n",
    "# Uncomment to visualize FP16/FP32 results\n",
    "# OUTPUT_PATH = os.path.join(\n",
    "#     os.getenv(\"LOCAL_EXPERIMENT_DIR\"), 'results/exp_m1_final/infer_strict_{}_{}_fp16/images_annotated/'.format(\n",
    "#         os.getenv(\"IN_HEIGHT\"), os.getenv(\"IN_WIDTH\")))\n",
    "# OUTPUT_PATH = os.path.join(\n",
    "#     os.getenv(\"LOCAL_EXPERIMENT_DIR\"), 'results/exp_m1_final/infer_strict_{}_{}_fp32/images_annotated/'.format(\n",
    "#         os.getenv(\"IN_HEIGHT\"), os.getenv(\"IN_WIDTH\")))\n",
    "\n",
    "COLS = 3 # number of columns in the visualizer grid.\n",
    "IMAGES = 9 # number of images to visualize.\n",
    "\n",
    "visualize_images(OUTPUT_PATH, num_cols=COLS, num_images=IMAGES)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 10.3. Evaluate the TensorRT Engine<a class=\"anchor\" id=\"head-10-3\"></a>\n",
    "\n",
    "*Note: This evaluation is mainly used as a sanity check for the exported TRT (INT8/FP16) models. This doesn't reflect the true accuracy of the model as the input aspect ratio here can vary a lot from the aspect ratio of the images in the validation set (which has a collection of images with various resolutions). Here, we retain a strict input resolution and pad the image to retrain the aspect ratio. So the accuracy here might vary based on the aspect ratio and the network resolution you choose.*\n",
    "\n",
    "We run the evaluation of the `.hdf5` model in strict mode as well to compare with the accuracies of the INT8/FP16/FP32 models for any drop in accuracy. \n",
    "\n",
    "The FP16/FP32 models should have no or minimal drop in accuracy when compared to the `.hdf5` model in this step. The INT8 models would have similar accuracies (or comparable within 2-3% mAP range) to the `.hdf5` model. \n",
    "\n",
    "Note: If after INT8 calibration the accuracy of the INT8 inferences seem to degrade, it could be because of a couple of reasons:\n",
    "- There wasn't enough data in the calibration tensorfile used to calibrate the model\n",
    "- The training data is not entirely representative of your test images, and the calibration may be incorrect. Therefore, you may either regenerate the calibration tensorfile with more batches of the training data and recalibrate the model, or add a few images from the test set. \n",
    "- When using calibration data sampling, it is possible that the randomly sampled subset of data is not a good representative of the test dataset. So this could lead to a poor calibration as well. You can either re-try the sampling script, or increase the number of samples / modify the criterion like min person and min keypoint thresholds.  \n",
    "\n",
    "*For more information, please follow the instructions in the USER GUIDE. Alternatively, you can opt for corresponding `fp16` model instead of `int8`.*"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# .hdf5 model evaluation in strict mode\n",
    "# Single-scale inference\n",
    "!tao model bpnet evaluate --inference_spec $SPECS_DIR/infer_spec_retrained_strict.yaml \\\n",
    "                    --model_filename $USER_EXPERIMENT_DIR/models/exp_m1_retrain/$RETRAIN_MODEL_CHECKPOINT \\\n",
    "                    --dataset_spec $DATA_POSE_SPECS_DIR/coco_spec.json \\\n",
    "                    --results_dir $USER_EXPERIMENT_DIR/results/exp_m1_retrain/$EVAL_DIR_NAME"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Check if the tao bodypose evaluation results file is generated.\n",
    "!if [ ! -f $LOCAL_EXPERIMENT_DIR/results/exp_m1_retrain/eval_strict_${IN_HEIGHT}_${IN_WIDTH}/results.csv ]; then echo '.hdf5 model evaluation results file not found!'; else cat $LOCAL_EXPERIMENT_DIR/results/exp_m1_retrain/eval_strict_${IN_HEIGHT}_${IN_WIDTH}/results.csv;fi"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# INT8 evaluation\n",
    "!tao model bpnet evaluate --inference_spec $SPECS_DIR/infer_spec_retrained_strict.yaml \\\n",
    "                    --model_filename $USER_EXPERIMENT_DIR/models/exp_m1_final/bpnet_model.$IN_HEIGHT.$IN_WIDTH.int8.engine \\\n",
    "                    --dataset_spec $DATA_POSE_SPECS_DIR/coco_spec.json \\\n",
    "                    --results_dir $USER_EXPERIMENT_DIR/results/exp_m1_final/${EVAL_DIR_NAME}_int8"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Check if the INT8/FP16 model evaluation results file are generated.\n",
    "!if [ ! -f $LOCAL_EXPERIMENT_DIR/results/exp_m1_final/eval_strict_${IN_HEIGHT}_${IN_WIDTH}_int8/results.csv ]; then echo 'INT8 model evaluation results file not found!'; else cat $LOCAL_EXPERIMENT_DIR/results/exp_m1_final/eval_strict_${IN_HEIGHT}_${IN_WIDTH}_int8/results.csv;fi\n",
    "!if [ ! -f $LOCAL_EXPERIMENT_DIR/results/exp_m1_final/eval_strict_${IN_HEIGHT}_${IN_WIDTH}_fp16/results.csv ]; then echo 'FP16 model evaluation results file not found!'; else cat $LOCAL_EXPERIMENT_DIR/results/exp_m1_final/eval_strict_${IN_HEIGHT}_${IN_WIDTH}_fp16/results.csv;fi"
   ]
  },
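  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the comparison above concrete, the sketch below (not part of the original workflow) reads the `results.csv` files from the strict-mode evaluations and reports each engine's accuracy delta against the `.hdf5` baseline. It assumes each `results.csv` contains `name,value` rows with an `AP` entry; adjust the parsing to match the layout your TAO version actually emits."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: summarize accuracy drop of the TensorRT engines vs. the .hdf5 baseline.\n",
    "# ASSUMPTION: results.csv holds \"name,value\" rows with an \"AP\" entry.\n",
    "import csv\n",
    "import os\n",
    "\n",
    "def read_metric(path, name=\"AP\"):\n",
    "    \"\"\"Return the named metric from a results.csv, or None if unavailable.\"\"\"\n",
    "    if not os.path.exists(path):\n",
    "        return None\n",
    "    with open(path) as f:\n",
    "        for row in csv.reader(f):\n",
    "            if row and row[0].strip() == name:\n",
    "                return float(row[1])\n",
    "    return None\n",
    "\n",
    "results_root = os.path.join(os.getenv(\"LOCAL_EXPERIMENT_DIR\", \".\"), \"results\")\n",
    "eval_dir = \"eval_strict_{}_{}\".format(os.getenv(\"IN_HEIGHT\"), os.getenv(\"IN_WIDTH\"))\n",
    "runs = {\n",
    "    \".hdf5 baseline\": os.path.join(results_root, \"exp_m1_retrain\", eval_dir, \"results.csv\"),\n",
    "    \"INT8 engine\": os.path.join(results_root, \"exp_m1_final\", eval_dir + \"_int8\", \"results.csv\"),\n",
    "    \"FP16 engine\": os.path.join(results_root, \"exp_m1_final\", eval_dir + \"_fp16\", \"results.csv\"),\n",
    "}\n",
    "baseline_ap = read_metric(runs[\".hdf5 baseline\"])\n",
    "for label, path in runs.items():\n",
    "    ap = read_metric(path)\n",
    "    if ap is None:\n",
    "        print(\"{:>15}: results.csv not found\".format(label))\n",
    "    elif baseline_ap is not None:\n",
    "        print(\"{:>15}: AP={:.3f} ({:+.3f} vs baseline)\".format(label, ap, ap - baseline_ap))\n",
    "    else:\n",
    "        print(\"{:>15}: AP={:.3f}\".format(label, ap))"
   ]
  },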
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 10.4. Export Deployable Model <a class=\"anchor\" id=\"head-10-4\"></a>\n",
    "\n",
    "Once the model is verified, now we need to re-export the model so it can be used to run on our inference platforms like TAO CV Inference or Deepstream. It's the same guidelines as `Export .onnx` and `INT8 Optimization` sections, but we need to add `--sdk_compatible_model` flag to the export command. This adds a few non-trainable post-process layers to the model.\n",
    "\n",
    "Please make sure to re-use the already generated calibration tensorfile (`--cal_data_file`) in the previous step to keep it consistent, but you will need to regenerate the `cal_cache_file` and the `.onnx` model. \n",
    "\n",
    "*NOTE: This model will not work with the bpnet inference / evaluate commands. This is for deployment only*"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "output_file = os.path.join(os.getenv(\"LOCAL_EXPERIMENT_DIR\"), \"models/exp_m1_final/bpnet_model.deploy.onnx\")\n",
    "if os.path.exists(output_file):\n",
    "    os.system(\"rm {}\".format(output_file))\n",
    "\n",
    "!tao model bpnet export \\\n",
    "    -m $USER_EXPERIMENT_DIR/models/exp_m1_retrain/$RETRAIN_MODEL_CHECKPOINT \\\n",
    "    -o $USER_EXPERIMENT_DIR/models/exp_m1_final/bpnet_model.deploy.onnx \\\n",
    "    -r $USER_EXPERIMENT_DIR/ \\\n",
    "    -d $IN_HEIGHT,$IN_WIDTH,$IN_CHANNELS \\\n",
    "    -e $SPECS_DIR/bpnet_retrain_m1_coco.yaml \\\n",
    "    -t tfonnx \\\n",
    "    --data_type int8 \\\n",
    "    --engine_file $USER_EXPERIMENT_DIR/models/exp_m1_final/bpnet_model.$IN_HEIGHT.$IN_WIDTH.int8.deploy.engine \\\n",
    "    --cal_image_dir $USER_EXPERIMENT_DIR/data/calibration_samples/ \\\n",
    "    --cal_cache_file $USER_EXPERIMENT_DIR/models/exp_m1_final/calibration.$IN_HEIGHT.$IN_WIDTH.deploy.bin  \\\n",
    "    --cal_data_file $USER_EXPERIMENT_DIR/models/exp_m1_final/coco.$IN_HEIGHT.$IN_WIDTH.tensorfile \\\n",
    "    --batch_size 1 \\\n",
    "    --batches $NUM_CALIB_SAMPLES \\\n",
    "    --max_batch_size 1 \\\n",
    "    --data_format channels_last \\\n",
    "    --sdk_compatible_model"
   ]
  },
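  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a final sanity check (not part of the original workflow), the cell below lists the deployable artifacts the export step above is expected to have written on the local mount, using the same paths as the export command."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Check for the deployable artifacts produced by the export step above.\n",
    "import os\n",
    "\n",
    "model_dir = os.path.join(os.getenv(\"LOCAL_EXPERIMENT_DIR\", \".\"), \"models/exp_m1_final\")\n",
    "h, w = os.getenv(\"IN_HEIGHT\"), os.getenv(\"IN_WIDTH\")\n",
    "expected = [\n",
    "    \"bpnet_model.deploy.onnx\",\n",
    "    \"calibration.{}.{}.deploy.bin\".format(h, w),\n",
    "    \"bpnet_model.{}.{}.int8.deploy.engine\".format(h, w),\n",
    "]\n",
    "for name in expected:\n",
    "    path = os.path.join(model_dir, name)\n",
    "    print(\"{:8s} {}\".format(\"OK\" if os.path.exists(path) else \"MISSING\", path))"
   ]
  },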
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can now generate the corresponding TRT engines for the target platforms using `trtexec` as shown in the  previous section (for INT8 / FP16 / FP32) using the generated onnx model (`bpnet_model.deploy.onnx`) and calibration table (`calibration.*.deploy.bin`)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3.8.10 64-bit",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.10"
  },
  "vscode": {
   "interpreter": {
    "hash": "767d51c1340bd893661ea55ea3124f6de3c7a262a8b4abca0554b478b1e2ff90"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
