{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Using TensorFlow through ONNX\n",
    "\n",
    "The ONNX path to getting a TensorRT engine is a high-performance approach to TensorRT conversion that works with a variety of frameworks, including TensorFlow 1 and TensorFlow 2.\n",
    "\n",
    "TensorRT's ONNX parser is an all-or-nothing parser: it either converts the entire ONNX model into a single, optimized TensorRT engine or fails, which makes it well suited for exporting to the TensorRT API runtimes. ONNX models can easily be generated from TensorFlow models using the ONNX ecosystem's keras2onnx and tf2onnx tools.\n",
    "\n",
    "In this notebook, we will look at how an ONNX model can be generated from a Keras/TF2 ResNet50 model, how that ONNX model can be converted to a TensorRT engine using trtexec, and finally how the native Python TensorRT runtime can be used to feed a batch of data into the TRT engine at inference time.\n",
    "\n",
    "Essentially, we will follow this path to convert and deploy our model:\n",
    "\n",
    "![Tensorflow+ONNX](./images/tf_onnx.png)\n",
    "\n",
    "__Use this when:__\n",
    "- You want the most efficient runtime performance possible out of an automatic parser\n",
    "- You have a network consisting of mostly supported operations, including operations and layers that the ONNX parser uniquely supports (such as RNNs/LSTMs/GRUs)\n",
    "- You are willing to write custom C++ plugins for any unsupported operations (if your network has any)\n",
    "- You do not want to use the manual layer builder API"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "__Checking your GPU status:__\n",
    "\n",
    "Let's see what GPU hardware we are working with. Hardware matters a lot because different cards have different performance profiles and precisions they tend to operate best in. For example, a V100 is relatively strong at FP16 processing, whereas a T4 tends to operate best in INT8 mode."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 377
    },
    "id": "IJBfZsGo8yaV",
    "outputId": "f4c4e20d-fcfd-43a2-b10d-c6978c25c91f"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Sat Jan 30 01:21:04 2021       \n",
      "+-----------------------------------------------------------------------------+\n",
      "| NVIDIA-SMI 450.80.02    Driver Version: 450.80.02    CUDA Version: 11.1     |\n",
      "|-------------------------------+----------------------+----------------------+\n",
      "| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |\n",
      "| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |\n",
      "|                               |                      |               MIG M. |\n",
      "|===============================+======================+======================|\n",
      "|   0  Tesla V100-DGXS...  On   | 00000000:07:00.0 Off |                    0 |\n",
      "| N/A   38C    P0    36W / 300W |    126MiB / 16155MiB |      0%      Default |\n",
      "|                               |                      |                  N/A |\n",
      "+-------------------------------+----------------------+----------------------+\n",
      "|   1  Tesla V100-DGXS...  On   | 00000000:08:00.0 Off |                    0 |\n",
      "| N/A   39C    P0    37W / 300W |      6MiB / 16158MiB |      0%      Default |\n",
      "|                               |                      |                  N/A |\n",
      "+-------------------------------+----------------------+----------------------+\n",
      "|   2  Tesla V100-DGXS...  On   | 00000000:0E:00.0 Off |                    0 |\n",
      "| N/A   38C    P0    37W / 300W |      6MiB / 16158MiB |      0%      Default |\n",
      "|                               |                      |                  N/A |\n",
      "+-------------------------------+----------------------+----------------------+\n",
      "|   3  Tesla V100-DGXS...  On   | 00000000:0F:00.0 Off |                    0 |\n",
      "| N/A   39C    P0    36W / 300W |      6MiB / 16158MiB |      0%      Default |\n",
      "|                               |                      |                  N/A |\n",
      "+-------------------------------+----------------------+----------------------+\n",
      "                                                                               \n",
      "+-----------------------------------------------------------------------------+\n",
      "| Processes:                                                                  |\n",
      "|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |\n",
      "|        ID   ID                                                   Usage      |\n",
      "|=============================================================================|\n",
      "+-----------------------------------------------------------------------------+\n"
     ]
    }
   ],
   "source": [
    "!nvidia-smi"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Remember: to successfully deploy a TensorRT model, you have to make __five key decisions__:\n",
    "\n",
    "1. __What format should I save my model in?__\n",
    "2. __What batch size(s) am I running inference at?__\n",
    "3. __What precision am I running inference at?__\n",
    "4. __What TensorRT path am I using to convert my model?__\n",
    "5. __What runtime am I targeting?__"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. What format should I save my model in?"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Our first step is to load a pretrained ResNet50 model. This is easy with keras.applications, a collection of pretrained image classifiers that can also be used as backbones for detection and other deep learning problems.\n",
    "\n",
    "We can set our target batch size to 32 and load a pretrained classifier as follows:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "id": "iVRVItvR8quS"
   },
   "outputs": [],
   "source": [
    "from tensorflow.keras.applications import ResNet50\n",
    "\n",
    "BATCH_SIZE = 32"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "id": "cKT07xPV8qua"
   },
   "outputs": [],
   "source": [
    "model = ResNet50(weights='imagenet')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For the purposes of checking our non-optimized model, we can use a dummy batch of data to verify our performance and the consistency of our results across precisions. 224x224 RGB images are a common format, so let's generate a batch of them.\n",
    "\n",
    "Once we have generated a batch, we will feed it through the model using .predict() to \"warm up\" the model. The first batch you feed through a deep learning model often takes much longer, as just-in-time compilation and other runtime optimizations are performed. Once that first batch is through, performance tends to be more consistent."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 238
    },
    "id": "8ETjzebW8qu2",
    "outputId": "03a8e7c1-215c-404a-8cac-8030132657dd"
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([[1.6964252e-04, 3.3007501e-04, 6.1350627e-05, ..., 1.4622418e-05,\n",
       "        1.4449919e-04, 6.6087063e-04],\n",
       "       [1.6964252e-04, 3.3007501e-04, 6.1350627e-05, ..., 1.4622418e-05,\n",
       "        1.4449919e-04, 6.6087063e-04],\n",
       "       [1.6964252e-04, 3.3007501e-04, 6.1350627e-05, ..., 1.4622418e-05,\n",
       "        1.4449919e-04, 6.6087063e-04],\n",
       "       ...,\n",
       "       [1.6964252e-04, 3.3007501e-04, 6.1350627e-05, ..., 1.4622418e-05,\n",
       "        1.4449919e-04, 6.6087063e-04],\n",
       "       [1.6964252e-04, 3.3007501e-04, 6.1350627e-05, ..., 1.4622418e-05,\n",
       "        1.4449919e-04, 6.6087063e-04],\n",
       "       [1.6964252e-04, 3.3007501e-04, 6.1350627e-05, ..., 1.4622418e-05,\n",
       "        1.4449919e-04, 6.6087063e-04]], dtype=float32)"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import numpy as np\n",
    "\n",
    "dummy_input_batch = np.zeros((BATCH_SIZE, 224, 224, 3))\n",
    "\n",
    "model.predict(dummy_input_batch) # warm up"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "__Baseline Timing:__\n",
    "\n",
    "Once we have warmed up our non-optimized model, we can get a rough timing estimate of our model using %%timeit, which runs the cell several times and reports timing information.\n",
    "\n",
    "Let's take a look at how long our model takes to run at baseline, before any TensorRT optimization:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 85
    },
    "id": "eMu3dZlM96bh",
    "outputId": "537a88e2-ad7d-413a-f815-abd91f010e21"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "51.2 ms ± 4.42 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n"
     ]
    }
   ],
   "source": [
    "%%timeit\n",
    "\n",
    "result = model.predict_on_batch(dummy_input_batch) # Check default performance"
   ]
  },
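  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a rough sanity check, we can turn that per-batch latency into throughput. This is a back-of-the-envelope sketch using the mean latency printed above - your measured number will differ:\n",
    "\n",
    "```python\n",
    "BATCH_SIZE = 32\n",
    "mean_latency_s = 51.2e-3  # mean per-batch latency reported by %%timeit above\n",
    "\n",
    "throughput = BATCH_SIZE / mean_latency_s  # images per second\n",
    "print(round(throughput))  # 625\n",
    "```"
   ]
  },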
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can now take a look at the resulting predictions:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([[1.6964252e-04, 3.3007501e-04, 6.1350627e-05, ..., 1.4622418e-05,\n",
       "        1.4449919e-04, 6.6087063e-04],\n",
       "       [1.6964252e-04, 3.3007501e-04, 6.1350627e-05, ..., 1.4622418e-05,\n",
       "        1.4449919e-04, 6.6087063e-04],\n",
       "       [1.6964252e-04, 3.3007501e-04, 6.1350627e-05, ..., 1.4622418e-05,\n",
       "        1.4449919e-04, 6.6087063e-04],\n",
       "       ...,\n",
       "       [1.6964252e-04, 3.3007501e-04, 6.1350627e-05, ..., 1.4622418e-05,\n",
       "        1.4449919e-04, 6.6087063e-04],\n",
       "       [1.6964252e-04, 3.3007501e-04, 6.1350627e-05, ..., 1.4622418e-05,\n",
       "        1.4449919e-04, 6.6087063e-04],\n",
       "       [1.6964252e-04, 3.3007501e-04, 6.1350627e-05, ..., 1.4622418e-05,\n",
       "        1.4449919e-04, 6.6087063e-04]], dtype=float32)"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "result = model.predict_on_batch(dummy_input_batch)\n",
    "result[:10] # Predicted ImageNet class probabilities for the first ten samples in the batch"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Okay, now that we have a baseline model, let's convert it to the format TensorRT understands best: ONNX."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "__Convert Keras model to ONNX intermediate model and save:__"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The ONNX format is a framework-agnostic way of describing and saving the structure and state of deep learning models. We can convert TensorFlow 2 Keras models to ONNX using the keras2onnx tool provided by the ONNX project. (You can find the ONNX project at https://onnx.ai or on GitHub at https://github.com/onnx/onnx)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "id": "aG3tXUEx8quf"
   },
   "outputs": [],
   "source": [
    "import onnx, keras2onnx"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Converting a model with default parameters to an ONNX model is fairly straightforward:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 68
    },
    "id": "QxLAvWp68quk",
    "outputId": "d750962a-d098-4a63-c195-c3442211cdc1"
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "tf executing eager_mode: True\n",
      "tf.keras model eager_mode: False\n",
      "The ONNX operator number change on the optimization: 458 -> 127\n"
     ]
    }
   ],
   "source": [
    "onnx_model = keras2onnx.convert_keras(model, model.name)"
   ]
  },
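  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If keras2onnx does not fit your setup, the tf2onnx tool mentioned in the introduction offers an alternative route via a TensorFlow SavedModel. A minimal sketch - the SavedModel directory name here is hypothetical, so adjust it to wherever you export your model:\n",
    "\n",
    "```shell\n",
    "# Export the Keras model as a SavedModel first (model.save('resnet50_saved_model') in Python),\n",
    "# then convert the SavedModel to ONNX with tf2onnx:\n",
    "python -m tf2onnx.convert --saved-model resnet50_saved_model --output resnet50_onnx_model.onnx --opset 12\n",
    "```"
   ]
  },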
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "That said, we do need to make one change for our model to work with TensorRT. Keras by default uses a dynamic input shape in its networks, so that it can handle arbitrary batch sizes. While TensorRT can handle dynamic shapes, doing so requires extra configuration. \n",
    "\n",
    "Instead, we will simply fix the input shape to our batch size. This works with TensorRT out of the box!"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "__Configure ONNX File Batch Size:__"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "__Note:__ We need to do two things to set our batch size with ONNX. The first is to modify our ONNX file to change its default batch size to our target batch size. The second is setting our converter to use the __explicit batch__ mode, which will use this default batch size as our final batch size."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "inputs = onnx_model.graph.input\n",
    "for model_input in inputs:  # avoid shadowing the built-in input()\n",
    "    dim1 = model_input.type.tensor_type.shape.dim[0]\n",
    "    dim1.dim_value = BATCH_SIZE"
   ]
  },
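  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see what this is doing, the same dim_value trick can be demonstrated end-to-end on a small self-contained model built with onnx.helper (the toy Identity graph below is purely illustrative):\n",
    "\n",
    "```python\n",
    "import onnx\n",
    "from onnx import TensorProto, helper\n",
    "\n",
    "# A tiny Identity graph with a dynamic batch dimension ('N'), like Keras exports use\n",
    "x = helper.make_tensor_value_info('x', TensorProto.FLOAT, ['N', 4])\n",
    "y = helper.make_tensor_value_info('y', TensorProto.FLOAT, ['N', 4])\n",
    "graph = helper.make_graph([helper.make_node('Identity', ['x'], ['y'])], 'toy', [x], [y])\n",
    "model = helper.make_model(graph)\n",
    "\n",
    "# Overwrite the dynamic batch dimension with a concrete size, as we did for ResNet50 above\n",
    "for model_input in model.graph.input:\n",
    "    model_input.type.tensor_type.shape.dim[0].dim_value = 32\n",
    "\n",
    "onnx.checker.check_model(model)  # still a valid ONNX model\n",
    "print(model.graph.input[0].type.tensor_type.shape.dim[0].dim_value)  # 32\n",
    "```"
   ]
  },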
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "__Save Model:__"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "id": "jFT6-13f8qup"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Done saving!\n"
     ]
    }
   ],
   "source": [
    "model_name = \"resnet50_onnx_model.onnx\"\n",
    "onnx.save_model(onnx_model, model_name)\n",
    "print(\"Done saving!\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Once we have our model in ONNX format, we can convert it efficiently using TensorRT. For this, TensorRT needs exclusive access to your GPU - if you so much as import TensorFlow, it will generally consume all of your GPU memory. To get around this, shut down this notebook and restart it before moving on. (You can do this from the menu: Kernel -> Restart Kernel.)\n",
    "\n",
    "Make sure not to import TensorFlow at any point after restarting the runtime!\n",
    "\n",
    "(The following cell is a quick shortcut to restart your notebook kernel:)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "uZUnHVHE8quu"
   },
   "outputs": [],
   "source": [
    "import os, time\n",
    "print(\"Restarting kernel in three seconds...\")\n",
    "time.sleep(3)\n",
    "print(\"Restarting kernel now\")\n",
    "os._exit(0) # Shut down all kernels so TRT doesn't fight with Tensorflow for GPU memory - TF monopolizes all GPU memory by default"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. What batch size(s) am I running inference at?"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We actually already set our inference batch size when we fixed the ONNX input shape - see the note above in section 1!\n",
    "\n",
    "Since we restarted the kernel, we redefine our fixed target batch size of 32 here:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "BATCH_SIZE = 32"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We need to do two things to set a fixed batch size with ONNX:\n",
    "\n",
    "1. Modify our ONNX file to change its default batch size to our target batch size, which we did above.\n",
    "2. Pass trtexec the --explicitBatch flag when converting, which we will do in section 4."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3. What precision am I running inference at?\n",
    "\n",
    "Before we convert, we need to decide what precision to run our engine at. Once converted, we will load the engine into the native Python TensorRT runtime, which strikes a balance between the ease of use of the high-level Python runtimes and the low-level C++ runtime.\n",
    "\n",
    "First, as before, let's create a dummy batch. Importantly, TensorRT will treat the precision of the input you give it as the default precision for the rest of the network. \n",
    "\n",
    "Remember that precisions lower than FP32 tend to run faster. There are two common reduced-precision modes, FP16 and INT8. Graphics cards designed for inference often have an affinity for one of these two types. This guide was developed on an NVIDIA V100, which favors FP16, so we will use that by default. INT8 is a more involved process that requires an extra calibration step."
   ]
  },
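  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick numerical illustration of what reduced precision means - pure NumPy, independent of TensorRT - FP16 keeps only about 3-4 significant decimal digits, which is usually plenty for inference:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "third32 = np.float32(1) / 3   # float32: ~7 significant decimal digits\n",
    "third16 = np.float16(third32) # float16: ~3-4 significant decimal digits\n",
    "\n",
    "print(third32)  # 0.33333334\n",
    "print(third16)  # 0.3333\n",
    "```"
   ]
  },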
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "USE_FP16 = True\n",
    "\n",
    "target_dtype = np.float16 if USE_FP16 else np.float32\n",
    "dummy_input_batch = np.zeros((BATCH_SIZE, 224, 224, 3), dtype=target_dtype)  # match the precision we want the network to use"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 4. What TensorRT path am I using to convert my model?"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "TensorRT can take an ONNX model and convert it entirely into a single, efficient TensorRT engine.\n",
    "\n",
    "We can use trtexec, a command-line tool for working with TensorRT, to convert an ONNX model into an engine file.\n",
    "\n",
    "To convert the model we saved in the previous steps, we need to point trtexec at the ONNX file, give it a name to save the engine under, and lastly specify that we want a fixed batch size instead of a dynamic one.\n",
    "\n",
    "__Remember to shut down all Jupyter notebooks and restart your Jupyter kernel after \"1. What format should I save my model in?\" - otherwise this cell will crash as TensorRT competes with TensorFlow for GPU memory:__"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 34
    },
    "id": "h60Gmotx8quz",
    "outputId": "065384aa-c848-4194-c72c-cad0d80449ca"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "&&&& RUNNING TensorRT.trtexec # trtexec --onnx=resnet50_onnx_model.onnx --saveEngine=resnet_engine.trt --explicitBatch --fp16\n",
      "[01/30/2021-01:47:03] [I] === Model Options ===\n",
      "[01/30/2021-01:47:03] [I] Format: ONNX\n",
      "[01/30/2021-01:47:03] [I] Model: resnet50_onnx_model.onnx\n",
      "[01/30/2021-01:47:03] [I] Output:\n",
      "[01/30/2021-01:47:03] [I] === Build Options ===\n",
      "[01/30/2021-01:47:03] [I] Max batch: explicit\n",
      "[01/30/2021-01:47:03] [I] Workspace: 16 MiB\n",
      "[01/30/2021-01:47:03] [I] minTiming: 1\n",
      "[01/30/2021-01:47:03] [I] avgTiming: 8\n",
      "[01/30/2021-01:47:03] [I] Precision: FP32+FP16\n",
      "[01/30/2021-01:47:03] [I] Calibration: \n",
      "[01/30/2021-01:47:03] [I] Refit: Disabled\n",
      "[01/30/2021-01:47:03] [I] Safe mode: Disabled\n",
      "[01/30/2021-01:47:03] [I] Save engine: resnet_engine.trt\n",
      "[01/30/2021-01:47:03] [I] Load engine: \n",
      "[01/30/2021-01:47:03] [I] Builder Cache: Enabled\n",
      "[01/30/2021-01:47:03] [I] NVTX verbosity: 0\n",
      "[01/30/2021-01:47:03] [I] Tactic sources: Using default tactic sources\n",
      "[01/30/2021-01:47:03] [I] Input(s)s format: fp32:CHW\n",
      "[01/30/2021-01:47:03] [I] Output(s)s format: fp32:CHW\n",
      "[01/30/2021-01:47:03] [I] Input build shapes: model\n",
      "[01/30/2021-01:47:03] [I] Input calibration shapes: model\n",
      "[01/30/2021-01:47:03] [I] === System Options ===\n",
      "[01/30/2021-01:47:03] [I] Device: 0\n",
      "[01/30/2021-01:47:03] [I] DLACore: \n",
      "[01/30/2021-01:47:03] [I] Plugins:\n",
      "[01/30/2021-01:47:03] [I] === Inference Options ===\n",
      "[01/30/2021-01:47:03] [I] Batch: Explicit\n",
      "[01/30/2021-01:47:03] [I] Input inference shapes: model\n",
      "[01/30/2021-01:47:03] [I] Iterations: 10\n",
      "[01/30/2021-01:47:03] [I] Duration: 3s (+ 200ms warm up)\n",
      "[01/30/2021-01:47:03] [I] Sleep time: 0ms\n",
      "[01/30/2021-01:47:03] [I] Streams: 1\n",
      "[01/30/2021-01:47:03] [I] ExposeDMA: Disabled\n",
      "[01/30/2021-01:47:03] [I] Data transfers: Enabled\n",
      "[01/30/2021-01:47:03] [I] Spin-wait: Disabled\n",
      "[01/30/2021-01:47:03] [I] Multithreading: Disabled\n",
      "[01/30/2021-01:47:03] [I] CUDA Graph: Disabled\n",
      "[01/30/2021-01:47:03] [I] Separate profiling: Disabled\n",
      "[01/30/2021-01:47:03] [I] Skip inference: Disabled\n",
      "[01/30/2021-01:47:03] [I] Inputs:\n",
      "[01/30/2021-01:47:03] [I] === Reporting Options ===\n",
      "[01/30/2021-01:47:03] [I] Verbose: Disabled\n",
      "[01/30/2021-01:47:03] [I] Averages: 10 inferences\n",
      "[01/30/2021-01:47:03] [I] Percentile: 99\n",
      "[01/30/2021-01:47:03] [I] Dump refittable layers:Disabled\n",
      "[01/30/2021-01:47:03] [I] Dump output: Disabled\n",
      "[01/30/2021-01:47:03] [I] Profile: Disabled\n",
      "[01/30/2021-01:47:03] [I] Export timing to JSON file: \n",
      "[01/30/2021-01:47:03] [I] Export output to JSON file: \n",
      "[01/30/2021-01:47:03] [I] Export profile to JSON file: \n",
      "[01/30/2021-01:47:03] [I] \n",
      "[01/30/2021-01:47:03] [I] === Device Information ===\n",
      "[01/30/2021-01:47:03] [I] Selected Device: Tesla V100-DGXS-16GB\n",
      "[01/30/2021-01:47:03] [I] Compute Capability: 7.0\n",
      "[01/30/2021-01:47:03] [I] SMs: 80\n",
      "[01/30/2021-01:47:03] [I] Compute Clock Rate: 1.53 GHz\n",
      "[01/30/2021-01:47:03] [I] Device Global Memory: 16155 MiB\n",
      "[01/30/2021-01:47:03] [I] Shared Memory per SM: 96 KiB\n",
      "[01/30/2021-01:47:03] [I] Memory Bus Width: 4096 bits (ECC enabled)\n",
      "[01/30/2021-01:47:03] [I] Memory Clock Rate: 0.877 GHz\n",
      "[01/30/2021-01:47:03] [I] \n",
      "----------------------------------------------------------------\n",
      "Input filename:   resnet50_onnx_model.onnx\n",
      "ONNX IR version:  0.0.7\n",
      "Opset version:    12\n",
      "Producer name:    keras2onnx\n",
      "Producer version: 1.7.0\n",
      "Domain:           onnxmltools\n",
      "Model version:    0\n",
      "Doc string:       \n",
      "----------------------------------------------------------------\n",
      "[01/30/2021-01:47:18] [W] [TRT] /workspace/TensorRT/parsers/onnx/onnx2trt_utils.cpp:218: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.\n",
      "[01/30/2021-01:47:21] [I] [TRT] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.\n",
      "[01/30/2021-01:47:57] [I] [TRT] Detected 1 inputs and 1 output network tensors.\n",
      "[01/30/2021-01:47:58] [I] Engine built in 55.057 sec.\n",
      "[01/30/2021-01:47:58] [I] Starting inference\n",
      "[01/30/2021-01:48:01] [I] Warmup completed 0 queries over 200 ms\n",
      "[01/30/2021-01:48:01] [I] Timing trace has 0 queries over 3.0199 s\n",
      "[01/30/2021-01:48:01] [I] Trace averages of 10 runs:\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.65514 ms - Host latency: 7.23836 ms (end to end 11.075 ms, enqueue 0.531621 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.65903 ms - Host latency: 7.24824 ms (end to end 10.7612 ms, enqueue 0.532933 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.6107 ms - Host latency: 7.19966 ms (end to end 10.7783 ms, enqueue 0.530985 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.52562 ms - Host latency: 7.11072 ms (end to end 10.9803 ms, enqueue 0.535632 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.52315 ms - Host latency: 7.10591 ms (end to end 10.9716 ms, enqueue 0.531998 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.51321 ms - Host latency: 7.09656 ms (end to end 10.9541 ms, enqueue 0.533496 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.51168 ms - Host latency: 7.09995 ms (end to end 10.9527 ms, enqueue 0.530963 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.5041 ms - Host latency: 7.09012 ms (end to end 10.934 ms, enqueue 0.535321 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.51486 ms - Host latency: 7.10168 ms (end to end 10.9558 ms, enqueue 0.535394 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.52427 ms - Host latency: 7.11294 ms (end to end 10.6066 ms, enqueue 0.532062 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.5337 ms - Host latency: 7.11821 ms (end to end 10.9962 ms, enqueue 0.521625 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.51965 ms - Host latency: 7.104 ms (end to end 10.9639 ms, enqueue 0.511304 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.51658 ms - Host latency: 7.10247 ms (end to end 10.9651 ms, enqueue 0.51554 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.51507 ms - Host latency: 7.10148 ms (end to end 10.9593 ms, enqueue 0.526953 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.50646 ms - Host latency: 7.09803 ms (end to end 10.9424 ms, enqueue 0.536926 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.50637 ms - Host latency: 7.09098 ms (end to end 10.9392 ms, enqueue 0.533569 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.51578 ms - Host latency: 7.10076 ms (end to end 10.9586 ms, enqueue 0.533972 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.50808 ms - Host latency: 7.09285 ms (end to end 10.9431 ms, enqueue 0.533411 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.51179 ms - Host latency: 7.0976 ms (end to end 10.9521 ms, enqueue 0.532605 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.50572 ms - Host latency: 7.09546 ms (end to end 10.9377 ms, enqueue 0.534827 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.513 ms - Host latency: 7.10214 ms (end to end 10.9509 ms, enqueue 0.534814 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.51095 ms - Host latency: 7.09944 ms (end to end 10.9487 ms, enqueue 0.536426 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.54097 ms - Host latency: 7.13679 ms (end to end 11.0067 ms, enqueue 0.533936 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.5116 ms - Host latency: 7.10693 ms (end to end 10.9503 ms, enqueue 0.533582 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.50464 ms - Host latency: 7.09882 ms (end to end 10.9349 ms, enqueue 0.533911 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.51085 ms - Host latency: 7.10813 ms (end to end 10.9506 ms, enqueue 0.536011 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.50502 ms - Host latency: 7.10105 ms (end to end 10.9357 ms, enqueue 0.531641 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.51158 ms - Host latency: 7.10651 ms (end to end 10.949 ms, enqueue 0.530396 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.5083 ms - Host latency: 7.10482 ms (end to end 10.9452 ms, enqueue 0.530139 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.50267 ms - Host latency: 7.102 ms (end to end 10.9325 ms, enqueue 0.531799 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.51221 ms - Host latency: 7.11985 ms (end to end 10.9514 ms, enqueue 0.536255 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.50853 ms - Host latency: 7.1131 ms (end to end 10.9457 ms, enqueue 0.531018 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.51361 ms - Host latency: 7.11932 ms (end to end 10.9546 ms, enqueue 0.540064 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.51403 ms - Host latency: 7.12474 ms (end to end 10.9555 ms, enqueue 0.537732 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.5054 ms - Host latency: 7.10837 ms (end to end 10.9357 ms, enqueue 0.540283 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.50427 ms - Host latency: 7.1019 ms (end to end 10.9343 ms, enqueue 0.533105 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.50696 ms - Host latency: 7.10771 ms (end to end 10.9421 ms, enqueue 0.530884 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.50266 ms - Host latency: 7.10273 ms (end to end 10.9326 ms, enqueue 0.5323 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.50859 ms - Host latency: 7.12268 ms (end to end 10.9448 ms, enqueue 0.535083 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.50703 ms - Host latency: 7.17405 ms (end to end 10.9422 ms, enqueue 0.529346 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.51287 ms - Host latency: 7.18401 ms (end to end 10.9546 ms, enqueue 0.516528 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.51904 ms - Host latency: 7.19941 ms (end to end 10.9646 ms, enqueue 0.53772 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.51038 ms - Host latency: 7.19199 ms (end to end 10.9453 ms, enqueue 0.53501 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.51614 ms - Host latency: 7.20454 ms (end to end 10.9604 ms, enqueue 0.542969 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.50913 ms - Host latency: 7.18923 ms (end to end 10.945 ms, enqueue 0.532886 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.51047 ms - Host latency: 7.19026 ms (end to end 10.9471 ms, enqueue 0.534497 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.51228 ms - Host latency: 7.1925 ms (end to end 10.9518 ms, enqueue 0.532471 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.51226 ms - Host latency: 7.19956 ms (end to end 10.9507 ms, enqueue 0.539282 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.50723 ms - Host latency: 7.18906 ms (end to end 10.941 ms, enqueue 0.539551 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.51177 ms - Host latency: 7.1969 ms (end to end 10.9463 ms, enqueue 0.537988 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.5085 ms - Host latency: 7.19199 ms (end to end 10.9473 ms, enqueue 0.535913 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.50857 ms - Host latency: 7.18579 ms (end to end 10.9416 ms, enqueue 0.531152 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.51504 ms - Host latency: 7.197 ms (end to end 10.9586 ms, enqueue 0.538794 ms)\n",
      "[01/30/2021-01:48:01] [I] Average on 10 runs - GPU latency: 5.51108 ms - Host latency: 7.19116 ms (end to end 10.9508 ms, enqueue 0.534033 ms)\n",
      "[01/30/2021-01:48:01] [I] Host Latency\n",
      "[01/30/2021-01:48:01] [I] min: 7.0564 ms (end to end 7.14325 ms)\n",
      "[01/30/2021-01:48:01] [I] max: 7.42065 ms (end to end 11.2844 ms)\n",
      "[01/30/2021-01:48:01] [I] mean: 7.13691 ms (end to end 10.9403 ms)\n",
      "[01/30/2021-01:48:01] [I] median: 7.11218 ms (end to end 10.9491 ms)\n",
      "[01/30/2021-01:48:01] [I] percentile: 7.2562 ms at 99% (end to end 11.2548 ms at 99%)\n",
      "[01/30/2021-01:48:01] [I] throughput: 0 qps\n",
      "[01/30/2021-01:48:01] [I] walltime: 3.0199 s\n",
      "[01/30/2021-01:48:01] [I] Enqueue Time\n",
      "[01/30/2021-01:48:01] [I] min: 0.504517 ms\n",
      "[01/30/2021-01:48:01] [I] max: 0.573303 ms\n",
      "[01/30/2021-01:48:01] [I] median: 0.530762 ms\n",
      "[01/30/2021-01:48:01] [I] GPU Compute\n",
      "[01/30/2021-01:48:01] [I] min: 5.45691 ms\n",
      "[01/30/2021-01:48:01] [I] max: 5.82239 ms\n",
      "[01/30/2021-01:48:01] [I] mean: 5.51931 ms\n",
      "[01/30/2021-01:48:01] [I] median: 5.51099 ms\n",
      "[01/30/2021-01:48:01] [I] percentile: 5.66785 ms at 99%\n",
      "[01/30/2021-01:48:01] [I] total compute time: 3.0025 s\n",
      "&&&& PASSED TensorRT.trtexec # trtexec --onnx=resnet50_onnx_model.onnx --saveEngine=resnet_engine.trt --explicitBatch --fp16\n"
     ]
    }
   ],
   "source": [
    "# May need to shut down all kernels and restart before this - otherwise you might get cuDNN initialization errors:\n",
    "if USE_FP16:\n",
    "    !trtexec --onnx=resnet50_onnx_model.onnx --saveEngine=resnet_engine.trt  --explicitBatch --fp16\n",
    "else:\n",
    "    !trtexec --onnx=resnet50_onnx_model.onnx --saveEngine=resnet_engine.trt  --explicitBatch"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "-\n",
    "\n",
    "__The trtexec Logs:__\n",
    "\n",
    "Above, trtexec does a lot of things! Some important things to note:\n",
    "\n",
    "__First__, _\"PASSED\"_ is what you want to see in the last line of the log above. We can see our conversion was successful!\n",
    "\n",
    "__Second__, can see the resnet_engine.trt engine file has indeed been successfully created: "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "total 2547292\n",
      "drwxrwxr-x  8   1000  1000       4096 Jan 30 01:46  .\n",
      "drwxrwxr-x  5   1000  1000       4096 Jan 14 22:29  ..\n",
      "drwxr-xr-x  2   1000  1000       4096 Jan 29 23:39  .ipynb_checkpoints\n",
      "-rw-rw-r--  1   1000  1000       6570 Jan 30 01:10 '0. Running This Guide.ipynb'\n",
      "-rw-r--r--  1 root   root      502649 Jan 30 01:06 '1. Introduction.ipynb'\n",
      "-rw-rw-r--  1   1000  1000      23645 Jan 29 23:47 '2. Using the Tensorflow TensorRT Integration.ipynb'\n",
      "-rw-rw-r--  1   1000  1000      38440 Jan 30 01:46 '3. Using Tensorflow 2 through ONNX.ipynb'\n",
      "-rw-rw-r--  1   1000  1000      11961 Jan 30 01:46 '4. Using PyTorch through ONNX.ipynb'\n",
      "-rw-rw-r--  1   1000  1000       7052 Jan 29 23:41 '5. Understanding TensorRT Runtimes.ipynb'\n",
      "drwxrwxr-x  5   1000  1000       4096 Jan 29 23:41 'Additional Examples'\n",
      "drwxr-xr-x  2 root   root        4096 Jan 30 00:58  __pycache__\n",
      "-rw-rw-r--  1   1000  1000       1091 Jan 14 22:29  benchmark.py\n",
      "-rw-------  1 root   root  2147479552 Jan 27 08:24  core\n",
      "-rw-rw-r--  1   1000  1000       3471 Jan 14 22:29  helper.py\n",
      "drwxrwxr-x  3   1000  1000       4096 Jan 29 23:55  images\n",
      "-rw-rw-r--  1   1000  1000       2613 Jan 20 02:50  onnx_helper.py\n",
      "drwxr-xr-x 11 135397 users       4096 Jun 28  2018  resnet50\n",
      "-rw-r--r--  1 root   root   101706397 Jun 29  2018  resnet50.tar.gz\n",
      "-rw-r--r--  1 root   root   101706397 Jun 29  2018  resnet50.tar.gz.1\n",
      "-rw-r--r--  1 root   root   102145169 Jan 30 01:21  resnet50_onnx_model.onnx\n",
      "-rw-r--r--  1 root   root    51382322 Jan 30 01:47  resnet_engine.trt\n",
      "-rw-r--r--  1 root   root   103330106 Jan 30 00:58  resnet_engine_intro.trt\n",
      "drwxr-xr-x  6 root   root        4096 Jan 29 23:42  tmp_savedmodels\n"
     ]
    }
   ],
   "source": [
    "!ls -la"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "__Third__, you can see timing details above using trtexec - these are in the ideal case with no overhead. Depending on how you run your model, a considerable amount of overhead can be added to this. We can do timing in our Python runtime below - but keep in mind performing C++ inference would likely be faster."
   ]
  },
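  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a side note, if you want to pull the per-run timing numbers out of a captured trtexec log programmatically, a small regex sketch like this works (the `parse_gpu_latencies` helper below is just an illustration, not part of trtexec):\n",
    "\n",
    "```python\n",
    "import re\n",
    "\n",
    "def parse_gpu_latencies(log_text):\n",
    "    # Collect the per-run GPU latencies (in ms) from the 'Average on 10 runs' lines\n",
    "    return [float(m) for m in re.findall(r\"GPU latency: ([\\d.]+) ms\", log_text)]\n",
    "```\n",
    "\n",
    "Averaging the returned list gives a quick summary of the steady-state GPU latency."
   ]
  },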
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 5. What TensorRT runtime am I targeting?"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We want to run our TensorRT inference in Python - so the TensorRT Python API is a great way of testing our model out in Jupyter, and is still quite performant.\n",
    "\n",
    "To use it, we need to do a few steps:\n",
    "\n",
    "__Load our engine into a tensorrt.Runtime:__"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "id": "dX2jFwrA8qu6"
   },
   "outputs": [],
   "source": [
    "import tensorrt as trt\n",
    "import pycuda.driver as cuda\n",
    "import pycuda.autoinit\n",
    "\n",
    "f = open(\"resnet_engine.trt\", \"rb\")\n",
    "runtime = trt.Runtime(trt.Logger(trt.Logger.WARNING)) \n",
    "\n",
    "engine = runtime.deserialize_cuda_engine(f.read())\n",
    "context = engine.create_execution_context()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note: if this cell is having issues, restarting all Jupyter kernels and rerunning only the batch size and precision cells above before trying again often helps"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "__Allocate input and output memory, give TRT pointers (bindings) to it:__\n",
    "\n",
    "d_input and d_output refer to the memory regions on our 'device' (aka GPU) - as opposed to memory on our normal RAM, where Python holds its variables (such as 'output' below)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "id": "q3UJcdWy8qu8"
   },
   "outputs": [],
   "source": [
    "output = np.empty([BATCH_SIZE, 1000], dtype = target_dtype) # Need to set output dtype to FP16 to enable FP16\n",
    "\n",
    "# Allocate device memory\n",
    "d_input = cuda.mem_alloc(1 * dummy_input_batch.nbytes)\n",
    "d_output = cuda.mem_alloc(1 * output.nbytes)\n",
    "\n",
    "bindings = [int(d_input), int(d_output)]\n",
    "\n",
    "stream = cuda.Stream()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "__Set up prediction function:__\n",
    "\n",
    "This involves a copy from CPU RAM to GPU VRAM, executing the model, then copying the results back from GPU VRAM to CPU RAM:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "id": "6R-F8JtV8qu-"
   },
   "outputs": [],
   "source": [
    "def predict(batch): # result gets copied into output\n",
    "    # Transfer input data to device\n",
    "    cuda.memcpy_htod_async(d_input, batch, stream)\n",
    "    # Execute model\n",
    "    context.execute_async_v2(bindings, stream.handle, None)\n",
    "    # Transfer predictions back\n",
    "    cuda.memcpy_dtoh_async(output, d_output, stream)\n",
    "    # Syncronize threads\n",
    "    stream.synchronize()\n",
    "    \n",
    "    return output"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This is all we need to run predictions using our TensorRT engine in a Python runtime!"
   ]
  },
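  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you want more than the single argmax class, a short NumPy sketch can extract the top-k class indices from the raw 1000-way scores (the `top_k` helper below is just an illustration, assuming the `output` array filled by `predict` above):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def top_k(scores, k=5):\n",
    "    # Indices of the k largest scores per row, ordered highest first\n",
    "    idx = np.argpartition(scores, -k, axis=-1)[..., -k:]\n",
    "    order = np.argsort(np.take_along_axis(scores, idx, axis=-1), axis=-1)[..., ::-1]\n",
    "    return np.take_along_axis(idx, order, axis=-1)\n",
    "```\n",
    "\n",
    "For example, `top_k(output)[0]` would give the five most likely ImageNet class ids for the first image in the batch."
   ]
  },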
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Performance Comparison:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Last, we can see how quickly we can feed a singular batch to TensorRT, which we can compare to our original Tensorflow experiment from earlier."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "id": "AdKZzW7O8qvB"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Warming up...\n",
      "Done warming up!\n"
     ]
    }
   ],
   "source": [
    "print(\"Warming up...\")\n",
    "\n",
    "predict(dummy_input_batch)\n",
    "\n",
    "print(\"Done warming up!\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We use the %%timeit Jupyter magic again. Note that %%timeit is fairly rough, and for any actual benchmarking better controlled testing is required - preferably outside of Jupyter."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "id": "XAtWnCK38qvD"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "7.23 ms ± 2.98 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n"
     ]
    }
   ],
   "source": [
    "%%timeit\n",
    "\n",
    "pred = predict(dummy_input_batch) # Check TRT performance"
   ]
  },
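  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For a more controlled measurement (ideally run as a standalone script outside Jupyter), a minimal harness sketch looks like this - the `benchmark` helper is illustrative, reusing the `predict` function and `dummy_input_batch` from above:\n",
    "\n",
    "```python\n",
    "import time\n",
    "\n",
    "def benchmark(fn, batch, warmup=50, iters=200):\n",
    "    # Warm up so one-time costs (allocations, autotuning) don't skew the numbers\n",
    "    for _ in range(warmup):\n",
    "        fn(batch)\n",
    "    start = time.perf_counter()\n",
    "    for _ in range(iters):\n",
    "        fn(batch)\n",
    "    return (time.perf_counter() - start) / iters * 1000.0  # mean latency in ms\n",
    "\n",
    "# e.g. print(f\"{benchmark(predict, dummy_input_batch):.2f} ms/batch\")\n",
    "```"
   ]
  },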
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Prediction: 74\n"
     ]
    }
   ],
   "source": [
    "print (\"Prediction: \" + str(np.argmax(output)))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(32, 1000)"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "pred = predict(dummy_input_batch)\n",
    "\n",
    "pred.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Next Steps:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<h4> Profiling </h4>\n",
    "\n",
    "This is a great next step for further optimizing and debugging models you are working on productionizing\n",
    "\n",
    "You can find it here: https://docs.nvidia.com/deeplearning/tensorrt/best-practices/index.html\n",
    "\n",
    "<h4>  TRT Dev Docs </h4>\n",
    "\n",
    "Main documentation page for the ONNX, layer builder, C++, and legacy APIs\n",
    "\n",
    "You can find it here: https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html\n",
    "\n",
    "<h4>  TRT OSS GitHub </h4>\n",
    "\n",
    "Contains OSS TRT components, sample applications, and plugin examples\n",
    "\n",
    "You can find it here: https://github.com/NVIDIA/TensorRT\n",
    "\n",
    "\n",
    "#### TRT Supported Layers:\n",
    "\n",
    "https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/samplePlugin\n",
    "\n",
    "#### TRT ONNX Plugin Example:\n",
    "\n",
    "https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/index.html#layers-precision-matrix\n"
   ]
  }
 ],
 "metadata": {
  "accelerator": "GPU",
  "colab": {
   "name": "ONNXExample.ipynb",
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
