{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Getting Started with TensorRT"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "TensorRT is an SDK for optimizing trained deep learning models to enable high-performance inference. It contains a deep learning inference __optimizer__ and an optimized __runtime__ for execution. After you have trained your deep learning model in a framework of your choice, TensorRT enables you to run it with higher throughput and lower latency.\n",
    "\n",
    "The TensorRT ecosystem breaks down broadly into two parts:\n",
    "<br><br><br>\n",
    "![TensorRT Landscape](./images/tensorrt_landscape.png)\n",
    "<br><br><br>\n",
    "Essentially:\n",
    "\n",
    "1. The various paths users can follow to convert their models to optimized TensorRT engines\n",
    "2. The various runtimes users can target with TensorRT when deploying their optimized TensorRT engines\n",
    "\n",
    "If you have a model in PyTorch and want to run inference as efficiently as possible - with low latency, high throughput, and lower memory consumption - this guide will help you achieve just that."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## How Do I Use TensorRT?"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "TensorRT is a large and flexible project. It can handle a variety of workflows, and which workflow is best for you will depend on your specific use-case and problem setting. Abstractly, the process for deploying a model from a deep learning framework to TensorRT looks like this:\n",
    "\n",
    "![TensorRT Workflow](./images/tensorrt_workflow.png)\n",
    "\n",
    "To get there, this guide will help you answer five key questions:\n",
    "\n",
    "1. __What format should I save my model in?__\n",
    "2. __What batch size(s) am I running inference at?__\n",
    "3. __What precision am I running inference at?__\n",
    "4. __What TensorRT path am I using to convert my model?__\n",
    "5. __What runtime am I targeting?__\n",
    "\n",
    "This guide will walk you broadly through all of these decision points while giving you an overview of your options at each step.\n",
    "\n",
    "We could talk about these points in isolation, but they are best understood in the context of an actual end-to-end workflow. Let's get started on a simple one here, using a TensorRT API wrapper written for this guide. Once you understand the basic workflow, you can dive into the more in-depth notebooks on the Torch-TRT and ONNX converters!"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## A Simple TensorRT Demonstration through ONNX"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There are several ways of approaching TensorRT conversion and deployment. Here, we will take a pretrained ResNet50 model, convert it to an optimized TensorRT engine, and run it in the TensorRT runtime.\n",
    "\n",
    "For this simple demonstration, we will focus on the ONNX path - one of the two main automatic approaches for TensorRT conversion. We will then run the model in the TensorRT Python API using a simplified wrapper written for this guide. Essentially, we will follow this path to convert and deploy our model:\n",
    "\n",
    "![ONNX Conversion](./images/onnx_onnx.png)\n",
    "\n",
    "We will follow the five questions above. For a more in-depth discussion, the section following this demonstration will cover the options available at each of these steps in more detail."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "__IMPORTANT NOTE:__ Please __shut down all other notebooks and PyTorch processes__ before running these steps. TensorRT and PyTorch cannot be loaded into the same Python process at the same time."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 1. What format should I save my model in?"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The two main automatic conversion paths for TensorRT require different model formats. Torch-TRT consumes saved PyTorch models, while the ONNX path requires models to be saved in the ONNX format. Here, we will use ONNX.\n",
    "\n",
    "We are going to use ResNet50 - a basic backbone vision model that can be used for a variety of purposes. For the sake of demonstration, here we will perform classification using a __pretrained ResNet50 ONNX__ model from the [ONNX model zoo](https://github.com/onnx/models).\n",
    "\n",
    "We can download a pretrained ResNet50 from the ONNX model zoo and untar it by doing the following:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "--2024-08-09 23:11:25--  https://download.onnxruntime.ai/onnx/models/resnet50.tar.gz\n",
      "Resolving download.onnxruntime.ai (download.onnxruntime.ai)... 13.107.246.69, 2620:1ec:bdf::69\n",
      "Connecting to download.onnxruntime.ai (download.onnxruntime.ai)|13.107.246.69|:443... connected.\n",
      "HTTP request sent, awaiting response... 200 OK\n",
      "Length: 101632129 (97M) [application/octet-stream]\n",
      "Saving to: ‘resnet50.tar.gz’\n",
      "\n",
      "resnet50.tar.gz     100%[===================>]  96.92M  6.53MB/s    in 16s     \n",
      "\n",
      "2024-08-09 23:11:41 (6.08 MB/s) - ‘resnet50.tar.gz’ saved [101632129/101632129]\n",
      "\n"
     ]
    }
   ],
   "source": [
    "!wget https://download.onnxruntime.ai/onnx/models/resnet50.tar.gz -O resnet50.tar.gz\n",
    "!tar xzf resnet50.tar.gz"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "See how to export ONNX models that will work with the same `trtexec` command used below in the [PyTorch through ONNX notebook](./2.%20Using%20PyTorch%20through%20ONNX.ipynb)."
   ]
  },
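  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you are starting from a PyTorch model instead of a downloaded ONNX file, the export is a single `torch.onnx.export` call. Here is a minimal, hedged sketch - the model (a randomly initialized torchvision ResNet50), the input shape, and the output filename are illustrative and not part of this guide's files:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import torchvision\n",
    "\n",
    "# Illustrative only: export a randomly initialized torchvision ResNet50 to ONNX.\n",
    "model = torchvision.models.resnet50(weights=None).eval()\n",
    "dummy_input = torch.zeros(1, 3, 224, 224)  # NCHW, matching the model's expected input\n",
    "torch.onnx.export(model, dummy_input, \"resnet50_exported.onnx\", opset_version=13)\n",
    "```\n",
    "\n",
    "The exported file can then be fed to the same conversion tooling used in the rest of this demonstration."
   ]
  },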
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 2. What precision am I running inference at?"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Inference typically requires less numeric precision than training. With some care, lower precision can give you faster computation and lower memory consumption without sacrificing any meaningful accuracy. TensorRT supports TF32, FP32, FP16, and INT8 precisions.\n",
    "\n",
    "FP32 is the default training precision of most frameworks, so we will start by using FP32 for inference here. Let's create a \"dummy\" batch to work with in order to test our model. TensorRT will use the precision of the input batch throughout the rest of the network by default."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "PRECISION = np.float32\n",
    "\n",
    "# The input tensor shape of the ONNX model.\n",
    "input_shape = (1, 3, 224, 224)\n",
    "\n",
    "dummy_input_batch = np.zeros(input_shape, dtype=PRECISION)"
   ]
  },
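  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To build intuition for why precision matters, note that simply halving the width of the datatype halves the memory a batch occupies. A quick check with NumPy, using the same input shape as above:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "fp32_batch = np.zeros((1, 3, 224, 224), dtype=np.float32)\n",
    "fp16_batch = fp32_batch.astype(np.float16)\n",
    "\n",
    "# 150528 elements at 4 bytes vs. 2 bytes each:\n",
    "print(fp32_batch.nbytes, fp16_batch.nbytes)  # 602112 301056\n",
    "```\n",
    "\n",
    "The same effect applies to an engine's weights and activations, which is a large part of why FP16 and INT8 inference reduce memory consumption."
   ]
  },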
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 3. What TensorRT path am I using to convert my model?"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The ONNX conversion path is one of the most universal and performant paths for automatic TensorRT conversion. It works for PyTorch and many other frameworks. There are several tools to help users convert models from ONNX to a TensorRT engine. \n",
    "\n",
    "One common approach is to use `trtexec` - a command-line tool included with TensorRT that can, among other things, convert ONNX models to TensorRT engines and profile them."
   ]
  },
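  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In its simplest form, the conversion command we run in the next cell has this general shape (placeholders shown; `trtexec` accepts many more flags for precision, input shapes, and profiling):\n",
    "\n",
    "```shell\n",
    "trtexec --onnx=<path to ONNX model> --saveEngine=<path to output TensorRT engine>\n",
    "```"
   ]
  },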
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "&&&& RUNNING TensorRT.trtexec [TensorRT v100300] # trtexec --onnx=resnet50/model.onnx --saveEngine=resnet_engine_intro.engine\n",
      "[08/09/2024-23:11:42] [I] === Model Options ===\n",
      "[08/09/2024-23:11:42] [I] Format: ONNX\n",
      "[08/09/2024-23:11:42] [I] Model: resnet50/model.onnx\n",
      "[08/09/2024-23:11:42] [I] Output:\n",
      "[08/09/2024-23:11:42] [I] === Build Options ===\n",
      "[08/09/2024-23:11:42] [I] Memory Pools: workspace: default, dlaSRAM: default, dlaLocalDRAM: default, dlaGlobalDRAM: default, tacticSharedMem: default\n",
      "[08/09/2024-23:11:42] [I] avgTiming: 8\n",
      "[08/09/2024-23:11:42] [I] Precision: FP32\n",
      "[08/09/2024-23:11:42] [I] LayerPrecisions: \n",
      "[08/09/2024-23:11:42] [I] Layer Device Types: \n",
      "[08/09/2024-23:11:42] [I] Calibration: \n",
      "[08/09/2024-23:11:42] [I] Refit: Disabled\n",
      "[08/09/2024-23:11:42] [I] Strip weights: Disabled\n",
      "[08/09/2024-23:11:42] [I] Version Compatible: Disabled\n",
      "[08/09/2024-23:11:42] [I] ONNX Plugin InstanceNorm: Disabled\n",
      "[08/09/2024-23:11:42] [I] TensorRT runtime: full\n",
      "[08/09/2024-23:11:42] [I] Lean DLL Path: \n",
      "[08/09/2024-23:11:42] [I] Tempfile Controls: { in_memory: allow, temporary: allow }\n",
      "[08/09/2024-23:11:42] [I] Exclude Lean Runtime: Disabled\n",
      "[08/09/2024-23:11:42] [I] Sparsity: Disabled\n",
      "[08/09/2024-23:11:42] [I] Safe mode: Disabled\n",
      "[08/09/2024-23:11:42] [I] Build DLA standalone loadable: Disabled\n",
      "[08/09/2024-23:11:42] [I] Allow GPU fallback for DLA: Disabled\n",
      "[08/09/2024-23:11:42] [I] DirectIO mode: Disabled\n",
      "[08/09/2024-23:11:42] [I] Restricted mode: Disabled\n",
      "[08/09/2024-23:11:42] [I] Skip inference: Disabled\n",
      "[08/09/2024-23:11:42] [I] Save engine: resnet_engine_intro.engine\n",
      "[08/09/2024-23:11:42] [I] Load engine: \n",
      "[08/09/2024-23:11:42] [I] Profiling verbosity: 0\n",
      "[08/09/2024-23:11:42] [I] Tactic sources: Using default tactic sources\n",
      "[08/09/2024-23:11:42] [I] timingCacheMode: local\n",
      "[08/09/2024-23:11:42] [I] timingCacheFile: \n",
      "[08/09/2024-23:11:42] [I] Enable Compilation Cache: Enabled\n",
      "[08/09/2024-23:11:42] [I] errorOnTimingCacheMiss: Disabled\n",
      "[08/09/2024-23:11:42] [I] Preview Features: Use default preview flags.\n",
      "[08/09/2024-23:11:42] [I] MaxAuxStreams: -1\n",
      "[08/09/2024-23:11:42] [I] BuilderOptimizationLevel: -1\n",
      "[08/09/2024-23:11:42] [I] Calibration Profile Index: 0\n",
      "[08/09/2024-23:11:42] [I] Weight Streaming: Disabled\n",
      "[08/09/2024-23:11:42] [I] Runtime Platform: Same As Build\n",
      "[08/09/2024-23:11:42] [I] Debug Tensors: \n",
      "[08/09/2024-23:11:42] [I] Input(s)s format: fp32:CHW\n",
      "[08/09/2024-23:11:42] [I] Output(s)s format: fp32:CHW\n",
      "[08/09/2024-23:11:42] [I] Input build shapes: model\n",
      "[08/09/2024-23:11:42] [I] Input calibration shapes: model\n",
      "[08/09/2024-23:11:42] [I] === System Options ===\n",
      "[08/09/2024-23:11:42] [I] Device: 0\n",
      "[08/09/2024-23:11:42] [I] DLACore: \n",
      "[08/09/2024-23:11:42] [I] Plugins:\n",
      "[08/09/2024-23:11:42] [I] setPluginsToSerialize:\n",
      "[08/09/2024-23:11:42] [I] dynamicPlugins:\n",
      "[08/09/2024-23:11:42] [I] ignoreParsedPluginLibs: 0\n",
      "[08/09/2024-23:11:42] [I] \n",
      "[08/09/2024-23:11:42] [I] === Inference Options ===\n",
      "[08/09/2024-23:11:42] [I] Batch: Explicit\n",
      "[08/09/2024-23:11:42] [I] Input inference shapes: model\n",
      "[08/09/2024-23:11:42] [I] Iterations: 10\n",
      "[08/09/2024-23:11:42] [I] Duration: 3s (+ 200ms warm up)\n",
      "[08/09/2024-23:11:42] [I] Sleep time: 0ms\n",
      "[08/09/2024-23:11:42] [I] Idle time: 0ms\n",
      "[08/09/2024-23:11:42] [I] Inference Streams: 1\n",
      "[08/09/2024-23:11:42] [I] ExposeDMA: Disabled\n",
      "[08/09/2024-23:11:42] [I] Data transfers: Enabled\n",
      "[08/09/2024-23:11:42] [I] Spin-wait: Disabled\n",
      "[08/09/2024-23:11:42] [I] Multithreading: Disabled\n",
      "[08/09/2024-23:11:42] [I] CUDA Graph: Disabled\n",
      "[08/09/2024-23:11:42] [I] Separate profiling: Disabled\n",
      "[08/09/2024-23:11:42] [I] Time Deserialize: Disabled\n",
      "[08/09/2024-23:11:42] [I] Time Refit: Disabled\n",
      "[08/09/2024-23:11:42] [I] NVTX verbosity: 0\n",
      "[08/09/2024-23:11:42] [I] Persistent Cache Ratio: 0\n",
      "[08/09/2024-23:11:42] [I] Optimization Profile Index: 0\n",
      "[08/09/2024-23:11:42] [I] Weight Streaming Budget: 100.000000%\n",
      "[08/09/2024-23:11:42] [I] Inputs:\n",
      "[08/09/2024-23:11:42] [I] Debug Tensor Save Destinations:\n",
      "[08/09/2024-23:11:42] [I] === Reporting Options ===\n",
      "[08/09/2024-23:11:42] [I] Verbose: Disabled\n",
      "[08/09/2024-23:11:42] [I] Averages: 10 inferences\n",
      "[08/09/2024-23:11:42] [I] Percentiles: 90,95,99\n",
      "[08/09/2024-23:11:42] [I] Dump refittable layers:Disabled\n",
      "[08/09/2024-23:11:42] [I] Dump output: Disabled\n",
      "[08/09/2024-23:11:42] [I] Profile: Disabled\n",
      "[08/09/2024-23:11:42] [I] Export timing to JSON file: \n",
      "[08/09/2024-23:11:42] [I] Export output to JSON file: \n",
      "[08/09/2024-23:11:42] [I] Export profile to JSON file: \n",
      "[08/09/2024-23:11:42] [I] \n",
      "[08/09/2024-23:11:42] [I] === Device Information ===\n",
      "[08/09/2024-23:11:42] [I] Available Devices: \n",
      "[08/09/2024-23:11:42] [I]   Device 0: \"NVIDIA RTX A5000\" UUID: GPU-bd38339f-9e6e-7f34-17ad-c1123627120b\n",
      "[08/09/2024-23:11:42] [I] Selected Device: NVIDIA RTX A5000\n",
      "[08/09/2024-23:11:42] [I] Selected Device ID: 0\n",
      "[08/09/2024-23:11:42] [I] Selected Device UUID: GPU-bd38339f-9e6e-7f34-17ad-c1123627120b\n",
      "[08/09/2024-23:11:42] [I] Compute Capability: 8.6\n",
      "[08/09/2024-23:11:42] [I] SMs: 64\n",
      "[08/09/2024-23:11:42] [I] Device Global Memory: 24238 MiB\n",
      "[08/09/2024-23:11:42] [I] Shared Memory per SM: 100 KiB\n",
      "[08/09/2024-23:11:42] [I] Memory Bus Width: 384 bits (ECC disabled)\n",
      "[08/09/2024-23:11:42] [I] Application Compute Clock Rate: 1.695 GHz\n",
      "[08/09/2024-23:11:42] [I] Application Memory Clock Rate: 8.001 GHz\n",
      "[08/09/2024-23:11:42] [I] \n",
      "[08/09/2024-23:11:42] [I] Note: The application clock rates do not reflect the actual clock rates that the GPU is currently running at.\n",
      "[08/09/2024-23:11:42] [I] \n",
      "[08/09/2024-23:11:42] [I] TensorRT version: 10.3.0\n",
      "[08/09/2024-23:11:42] [I] Loading standard plugins\n",
      "[08/09/2024-23:11:42] [I] [TRT] [MemUsageChange] Init CUDA: CPU +1, GPU +0, now: CPU 19, GPU 328 (MiB)\n",
      "[08/09/2024-23:11:44] [I] [TRT] [MemUsageChange] Init builder kernel library: CPU +2087, GPU +386, now: CPU 2262, GPU 714 (MiB)\n",
      "[08/09/2024-23:11:44] [I] Start parsing network model.\n",
      "[08/09/2024-23:11:44] [I] [TRT] ----------------------------------------------------------------\n",
      "[08/09/2024-23:11:44] [I] [TRT] Input filename:   resnet50/model.onnx\n",
      "[08/09/2024-23:11:44] [I] [TRT] ONNX IR version:  0.0.3\n",
      "[08/09/2024-23:11:44] [I] [TRT] Opset version:    9\n",
      "[08/09/2024-23:11:44] [I] [TRT] Producer name:    onnx-caffe2\n",
      "[08/09/2024-23:11:44] [I] [TRT] Producer version: \n",
      "[08/09/2024-23:11:44] [I] [TRT] Domain:           \n",
      "[08/09/2024-23:11:44] [I] [TRT] Model version:    0\n",
      "[08/09/2024-23:11:44] [I] [TRT] Doc string:       \n",
      "[08/09/2024-23:11:44] [I] [TRT] ----------------------------------------------------------------\n",
      "[08/09/2024-23:11:44] [I] Finished parsing network model. Parse time: 0.0980711\n",
      "[08/09/2024-23:11:44] [I] [TRT] Local timing cache in use. Profiling results in this builder pass will not be stored.\n",
      "[08/09/2024-23:12:00] [I] [TRT] Detected 1 inputs and 1 output network tensors.\n",
      "[08/09/2024-23:12:00] [I] [TRT] Total Host Persistent Memory: 358288\n",
      "[08/09/2024-23:12:00] [I] [TRT] Total Device Persistent Memory: 1536\n",
      "[08/09/2024-23:12:00] [I] [TRT] Total Scratch Memory: 524800\n",
      "[08/09/2024-23:12:00] [I] [TRT] [BlockAssignment] Started assigning block shifts. This will take 98 steps to complete.\n",
      "[08/09/2024-23:12:00] [I] [TRT] [BlockAssignment] Algorithm ShiftNTopDown took 1.47372ms to assign 5 blocks to 98 nodes requiring 8229376 bytes.\n",
      "[08/09/2024-23:12:00] [I] [TRT] Total Activation Memory: 8228864\n",
      "[08/09/2024-23:12:00] [I] [TRT] Total Weights Memory: 127384320\n",
      "[08/09/2024-23:12:00] [I] [TRT] Engine generation completed in 15.6092 seconds.\n",
      "[08/09/2024-23:12:00] [I] [TRT] [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 16 MiB, GPU 128 MiB\n",
      "[08/09/2024-23:12:00] [I] [TRT] [MemUsageStats] Peak memory usage during Engine building and serialization: CPU: 3495 MiB\n",
      "[08/09/2024-23:12:00] [I] Engine built in 15.678 sec.\n",
      "[08/09/2024-23:12:00] [I] Created engine with size: 123.593 MiB\n",
      "[08/09/2024-23:12:00] [I] [TRT] Loaded engine size: 123 MiB\n",
      "[08/09/2024-23:12:00] [I] Engine deserialized in 0.0817244 sec.\n",
      "[08/09/2024-23:12:00] [I] [TRT] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +8, now: CPU 0, GPU 129 (MiB)\n",
      "[08/09/2024-23:12:00] [I] Setting persistentCacheLimit to 0 bytes.\n",
      "[08/09/2024-23:12:00] [I] Created execution context with device memory size: 7.84766 MiB\n",
      "[08/09/2024-23:12:00] [I] Using random values for input gpu_0/data_0\n",
      "[08/09/2024-23:12:00] [I] Input binding for gpu_0/data_0 with dimensions 1x3x224x224 is created.\n",
      "[08/09/2024-23:12:00] [I] Output binding for gpu_0/softmax_1 with dimensions 1x1000 is created.\n",
      "[08/09/2024-23:12:00] [I] Starting inference\n",
      "[08/09/2024-23:12:04] [I] Warmup completed 177 queries over 200 ms\n",
      "[08/09/2024-23:12:04] [I] Timing trace has 2622 queries over 3.00285 s\n",
      "[08/09/2024-23:12:04] [I] \n",
      "[08/09/2024-23:12:04] [I] === Trace details ===\n",
      "[08/09/2024-23:12:04] [I] Trace averages of 10 runs:\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.1434 ms - Host latency: 1.2078 ms (enqueue 0.681778 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14197 ms - Host latency: 1.20857 ms (enqueue 0.623233 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14115 ms - Host latency: 1.20379 ms (enqueue 0.640746 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14207 ms - Host latency: 1.20751 ms (enqueue 0.669678 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14186 ms - Host latency: 1.20764 ms (enqueue 0.629774 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13991 ms - Host latency: 1.20297 ms (enqueue 0.741623 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13982 ms - Host latency: 1.20135 ms (enqueue 0.717255 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14012 ms - Host latency: 1.20335 ms (enqueue 0.716397 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14073 ms - Host latency: 1.20221 ms (enqueue 0.676779 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13971 ms - Host latency: 1.20068 ms (enqueue 0.704474 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14002 ms - Host latency: 1.20219 ms (enqueue 0.720291 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13941 ms - Host latency: 1.20047 ms (enqueue 0.697601 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13971 ms - Host latency: 1.20179 ms (enqueue 0.693964 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.1394 ms - Host latency: 1.20286 ms (enqueue 0.683246 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14095 ms - Host latency: 1.20357 ms (enqueue 0.683603 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14279 ms - Host latency: 1.20704 ms (enqueue 0.688162 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14186 ms - Host latency: 1.2038 ms (enqueue 0.645267 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14063 ms - Host latency: 1.20164 ms (enqueue 0.692361 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14043 ms - Host latency: 1.20164 ms (enqueue 0.702689 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14084 ms - Host latency: 1.20169 ms (enqueue 0.66651 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14084 ms - Host latency: 1.20146 ms (enqueue 0.658337 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14176 ms - Host latency: 1.20426 ms (enqueue 0.69798 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14124 ms - Host latency: 1.20262 ms (enqueue 0.694205 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14002 ms - Host latency: 1.20105 ms (enqueue 0.674658 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13572 ms - Host latency: 1.19737 ms (enqueue 0.708185 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13582 ms - Host latency: 1.19694 ms (enqueue 0.692053 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13439 ms - Host latency: 1.19557 ms (enqueue 0.685651 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13633 ms - Host latency: 1.1996 ms (enqueue 0.672461 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.1346 ms - Host latency: 1.19826 ms (enqueue 0.708868 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13469 ms - Host latency: 1.19805 ms (enqueue 0.660419 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13664 ms - Host latency: 1.19839 ms (enqueue 0.700195 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13582 ms - Host latency: 1.19742 ms (enqueue 0.668628 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13602 ms - Host latency: 1.19868 ms (enqueue 0.706824 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13747 ms - Host latency: 1.19943 ms (enqueue 0.704651 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.1346 ms - Host latency: 1.19565 ms (enqueue 0.690649 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13438 ms - Host latency: 1.19586 ms (enqueue 0.701184 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13838 ms - Host latency: 1.20226 ms (enqueue 0.702856 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.1528 ms - Host latency: 1.21788 ms (enqueue 0.568842 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13276 ms - Host latency: 1.19438 ms (enqueue 0.482397 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.1307 ms - Host latency: 1.1896 ms (enqueue 0.424017 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13172 ms - Host latency: 1.19357 ms (enqueue 0.572021 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13203 ms - Host latency: 1.19403 ms (enqueue 0.755896 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13338 ms - Host latency: 1.195 ms (enqueue 0.544983 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13417 ms - Host latency: 1.1938 ms (enqueue 0.441614 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13203 ms - Host latency: 1.19128 ms (enqueue 0.413116 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13345 ms - Host latency: 1.19925 ms (enqueue 0.572754 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13408 ms - Host latency: 1.19639 ms (enqueue 0.678613 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13379 ms - Host latency: 1.19441 ms (enqueue 0.673431 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13348 ms - Host latency: 1.19547 ms (enqueue 0.690594 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13243 ms - Host latency: 1.1952 ms (enqueue 0.700989 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13296 ms - Host latency: 1.19422 ms (enqueue 0.692523 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13407 ms - Host latency: 1.19562 ms (enqueue 0.692444 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13583 ms - Host latency: 1.19931 ms (enqueue 0.689624 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13604 ms - Host latency: 1.1983 ms (enqueue 0.69278 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13572 ms - Host latency: 1.19719 ms (enqueue 0.64353 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.135 ms - Host latency: 1.19745 ms (enqueue 0.655817 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13448 ms - Host latency: 1.19561 ms (enqueue 0.714655 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13461 ms - Host latency: 1.19612 ms (enqueue 0.701239 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13409 ms - Host latency: 1.19507 ms (enqueue 0.670715 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13604 ms - Host latency: 1.19795 ms (enqueue 0.651379 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13337 ms - Host latency: 1.19666 ms (enqueue 0.721234 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13387 ms - Host latency: 1.19581 ms (enqueue 0.695056 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13449 ms - Host latency: 1.19549 ms (enqueue 0.714343 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13367 ms - Host latency: 1.19608 ms (enqueue 0.704687 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.13521 ms - Host latency: 1.19631 ms (enqueue 0.694922 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.1351 ms - Host latency: 1.19637 ms (enqueue 0.693372 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14062 ms - Host latency: 1.20346 ms (enqueue 0.71059 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14207 ms - Host latency: 1.20299 ms (enqueue 0.692322 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14085 ms - Host latency: 1.20264 ms (enqueue 0.703583 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14175 ms - Host latency: 1.20376 ms (enqueue 0.714893 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14125 ms - Host latency: 1.20208 ms (enqueue 0.692554 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14236 ms - Host latency: 1.20372 ms (enqueue 0.696967 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14227 ms - Host latency: 1.20373 ms (enqueue 0.691272 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14261 ms - Host latency: 1.20391 ms (enqueue 0.70481 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.1427 ms - Host latency: 1.20387 ms (enqueue 0.676599 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14381 ms - Host latency: 1.20514 ms (enqueue 0.659851 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14298 ms - Host latency: 1.20524 ms (enqueue 0.655066 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14421 ms - Host latency: 1.20553 ms (enqueue 0.685242 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14382 ms - Host latency: 1.2059 ms (enqueue 0.706702 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14166 ms - Host latency: 1.20245 ms (enqueue 0.701538 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14204 ms - Host latency: 1.20311 ms (enqueue 0.674072 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14207 ms - Host latency: 1.20341 ms (enqueue 0.720386 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14279 ms - Host latency: 1.20404 ms (enqueue 0.689429 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14187 ms - Host latency: 1.20322 ms (enqueue 0.659839 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14177 ms - Host latency: 1.20433 ms (enqueue 0.697144 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14308 ms - Host latency: 1.20757 ms (enqueue 0.679883 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14144 ms - Host latency: 1.204 ms (enqueue 0.677527 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14432 ms - Host latency: 1.20797 ms (enqueue 0.647131 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14154 ms - Host latency: 1.20652 ms (enqueue 0.659021 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14327 ms - Host latency: 1.20457 ms (enqueue 0.721362 ms)\n",
      "[08/09/2024-23:12:04] [I] Average on 10 runs - GPU latency: 1.14219 ms - Host latency: 1.2035 ms (enqueue 0.695007 ms)\n",
      "...\n",
      "[08/09/2024-23:12:04] [I] \n",
      "[08/09/2024-23:12:04] [I] === Performance summary ===\n",
      "[08/09/2024-23:12:04] [I] Throughput: 873.17 qps\n",
      "[08/09/2024-23:12:04] [I] Latency: min = 1.18591 ms, max = 1.36273 ms, mean = 1.20476 ms, median = 1.2041 ms, percentile(90%) = 1.21265 ms, percentile(95%) = 1.21558 ms, percentile(99%) = 1.22229 ms\n",
      "[08/09/2024-23:12:04] [I] Enqueue Time: min = 0.250732 ms, max = 1.58789 ms, mean = 0.682948 ms, median = 0.69696 ms, percentile(90%) = 0.746094 ms, percentile(95%) = 0.783936 ms, percentile(99%) = 0.883545 ms\n",
      "[08/09/2024-23:12:04] [I] H2D Latency: min = 0.0541382 ms, max = 0.09375 ms, mean = 0.057872 ms, median = 0.0571289 ms, percentile(90%) = 0.0585938 ms, percentile(95%) = 0.0678711 ms, percentile(99%) = 0.0698242 ms\n",
      "[08/09/2024-23:12:04] [I] GPU Compute Time: min = 1.12744 ms, max = 1.30353 ms, mean = 1.14253 ms, median = 1.14185 ms, percentile(90%) = 1.1499 ms, percentile(95%) = 1.151 ms, percentile(99%) = 1.15405 ms\n",
      "[08/09/2024-23:12:04] [I] D2H Latency: min = 0.00341797 ms, max = 0.0123291 ms, mean = 0.00436357 ms, median = 0.00415039 ms, percentile(90%) = 0.00537109 ms, percentile(95%) = 0.00561523 ms, percentile(99%) = 0.00817871 ms\n",
      "[08/09/2024-23:12:04] [I] Total Host Walltime: 3.00285 s\n",
      "[08/09/2024-23:12:04] [I] Total GPU Compute Time: 2.99571 s\n",
      "[08/09/2024-23:12:04] [I] Explanations of the performance metrics are printed in the verbose logs.\n",
      "[08/09/2024-23:12:04] [I] \n",
      "&&&& PASSED TensorRT.trtexec [TensorRT v100300] # trtexec --onnx=resnet50/model.onnx --saveEngine=resnet_engine_intro.engine\n"
     ]
    }
   ],
   "source": [
    "!trtexec --onnx=resnet50/model.onnx --saveEngine=resnet_engine_intro.engine "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "__Notes on the flags above:__\n",
    "    \n",
    "Tell trtexec where to find our ONNX model:\n",
    "\n",
    "    --onnx=resnet50/model.onnx \n",
    "\n",
    "Tell trtexec where to save our optimized TensorRT engine:\n",
    "\n",
    "    --saveEngine=resnet_engine_intro.engine\n"
   ]
  },
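   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "trtexec accepts many more flags that control how the engine is built. For example, to allow FP16 kernels during optimization you could add `--fp16` (a hypothetical variant of the command above; run `trtexec --help` to see the full list for your version):\n",
     "\n",
     "    trtexec --onnx=resnet50/model.onnx --saveEngine=resnet_engine_fp16.engine --fp16\n"
    ]
   },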
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "####  4. What runtime will I use?\n",
    "\n",
     "Once our TensorRT engine has been created successfully, we need to decide how to run it with TensorRT.\n",
     "\n",
     "There are two types of TensorRT runtimes: a standalone runtime, which has C++ and Python bindings, and a native integration into PyTorch. In this section, we will use a simplified wrapper (ONNXClassifierWrapper) that calls the standalone runtime. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Defaulting to user installation because normal site-packages is not writeable\n",
      "Requirement already satisfied: numpy<2.0 in /home/trtuser/.local/lib/python3.10/site-packages (1.26.4)\n",
      "\u001b[33mWARNING: Error parsing dependencies of devscripts: Invalid version: '2.22.1ubuntu1'\u001b[0m\u001b[33m\n",
      "\u001b[0m"
     ]
    }
   ],
   "source": [
    "!python3 -m pip install \"numpy<2.0\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
     "# If you get an error in this cell, restart your notebook kernel (or, if that fails, the whole machine) and do not run anything that imports or uses PyTorch\n",
    "\n",
    "from onnx_helper import ONNXClassifierWrapper\n",
    "trt_model = ONNXClassifierWrapper(\"resnet_engine_intro.engine\", target_dtype = PRECISION)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "__Note__: If this conversion fails, please restart your Jupyter notebook kernel (in the menu bar: Kernel -> Restart Kernel) and run steps 3 to 5 again. If you get an error like 'TypeError: pybind11::init(): factory function returned nullptr', there is likely a dangling process on the GPU - restart your machine and try again.\n",
    "\n",
    "We will feed our batch of randomized dummy data into our ONNXClassifierWrapper to run inference on that batch:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([1.6979914e-04, 6.5189815e-04, 7.4759657e-05, 5.2728519e-05,\n",
       "       1.2118524e-04, 2.3333427e-04, 1.8690433e-05, 2.0056585e-04,\n",
       "       5.2347525e-05, 4.5492270e-04], dtype=float32)"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Warm up:\n",
    "trt_model.predict(dummy_input_batch)[:10] # softmax probability predictions for the first 10 classes of the first sample"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can get a rough sense of performance using %%timeit:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1.22 ms ± 3.39 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)\n"
     ]
    }
   ],
   "source": [
    "%%timeit\n",
    "trt_model.predict(dummy_input_batch)[:10]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "##  Applying TensorRT to Your Model:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This is a simple example applied to a single model, but how should you go about answering these questions for your workload?\n",
    "\n",
    "First and foremost, it is a good idea to get an understanding of what your options are, and where you can learn more about them! \n",
    "\n",
    "### __Compatible Models:__ MLP/CNN/RNN/Transformer/Embedding/Etc\n",
    "\n",
    "TensorRT is compatible with models consisting of [these layers](https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/index.html#layers-matrix). Using only supported layers ensures optimal performance without having to write any custom plugin code.\n",
    "\n",
     "In terms of frameworks, TensorRT is integrated directly with PyTorch, and most other major deep learning frameworks are supported by first converting to the ONNX format.\n",
    "\n",
    "### __Conversion Methods:__ ONNX/TensorRT API\n",
    "\n",
     "The __ONNX__ path is the most performant and framework-agnostic automatic way of converting models. Its main disadvantage is that it must convert the network completely: if the network contains an unsupported layer, the ONNX path cannot convert it unless you write a custom plugin.\n",
    "\n",
    "You can see an example of how to use TensorRT with ONNX:\n",
    "- [Here](./2.%20Using%20PyTorch%20through%20ONNX.ipynb) in this guide for PyTorch\n",
    "\n",
     "There is also the __TensorRT API__. The ONNX path converts models to TensorRT engines for you automatically. Sometimes, however, we want to convert something more complex, or we want maximum control over how our TensorRT engine is created. That control lets us do things like create custom plugins for layers that TensorRT doesn't support.\n",
    "\n",
     "When using this approach, we create the TensorRT engine manually, operation by operation, using the TensorRT APIs available in Python and C++. This process involves building a network identical in structure to your target network using the TensorRT API, and then loading in the weights directly in the proper format. You can find more details on this [in the TensorRT documentation](https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#c_topics).\n",
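     "\n",
     "As a rough sketch of what the manual path looks like in Python (this assumes the `tensorrt` package, TensorRT 10.x, and an NVIDIA GPU; the toy one-convolution network and its random weights are purely illustrative):\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "import tensorrt as trt\n",
     "\n",
     "logger = trt.Logger(trt.Logger.WARNING)\n",
     "builder = trt.Builder(logger)\n",
     "network = builder.create_network()\n",
     "\n",
     "# Declare the input, then mirror the target network operation by operation,\n",
     "# loading your trained weights as you go (random weights here).\n",
     "x = network.add_input(\"input\", trt.float32, (1, 3, 224, 224))\n",
     "w = trt.Weights(np.random.randn(16, 3, 3, 3).astype(np.float32))\n",
     "b = trt.Weights(np.zeros(16, dtype=np.float32))\n",
     "conv = network.add_convolution_nd(x, 16, (3, 3), w, b)\n",
     "network.mark_output(conv.get_output(0))\n",
     "\n",
     "# Build and serialize the engine, ready to save or deserialize for inference.\n",
     "config = builder.create_builder_config()\n",
     "engine_bytes = builder.build_serialized_network(network, config)\n",
     "```\n",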
    "\n",
    "### __Precision:__ TF32/FP32/FP16/INT8/FP8\n",
    "\n",
     "TensorRT feature support - such as precision - for NVIDIA GPUs is determined by their __compute capability__. You can check the compute capability of your card on the [NVIDIA website](https://developer.nvidia.com/cuda-gpus).\n",
    "\n",
     "TensorRT supports different precisions depending on that compute capability. You can check which features your compute capability supports in the [TensorRT documentation](https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/index.html#hardware-precision-matrix).\n",
    "\n",
     "__TF32__ is the default training precision on cards with compute capability 8.0 and higher (e.g. NVIDIA A100 and later). Use it when you want to replicate your original model's performance as closely as possible on such cards.\n",
    "\n",
    "TF32 is a precision designed to preserve the range of FP32 with the precision of FP16. In practice, this means that TF32 models train faster than FP32 models while still converging to the same accuracy. This feature is only available on newer GPUs.\n",
    "\n",
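     "To see the trade-off concretely, NumPy's `float16` type shows the two failure modes reduced-precision formats face - overflow when the range is too small, and rounding when the mantissa is too short (a quick illustration, not a TensorRT API call):\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "# Range: FP16's largest finite value is ~65504, so 1e5 overflows to inf.\n",
     "print(np.float16(np.float32(1e5)))\n",
     "\n",
     "# Precision: FP16 has a 10-bit mantissa, so 1 + 2**-11 rounds back to 1.0.\n",
     "print(np.float16(1.0 + 2**-11) == np.float16(1.0))\n",
     "```\n",
     "\n",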
     "__FP32__ is the default training precision on cards with compute capability below 8.0 (i.e. before the NVIDIA A100). Use it when you want to replicate your original model's performance as closely as possible on those cards.\n",
    "\n",
     "__FP16__ is an inference-focused reduced precision. It gives up some accuracy in exchange for faster models with lower latency and a lower memory footprint. In practice, the accuracy loss in FP16 is generally negligible, making FP16 a fairly safe bet for inference in most cases. Cards focused on deep learning training often have strong FP16 capabilities, making FP16 a great choice for GPUs expected to handle both training and inference.\n",
    "\n",
     "__INT8__ is an inference-focused reduced precision. It further reduces memory requirements and latency compared to FP16. INT8 can lose more accuracy than FP16, but TensorRT provides tools to help you quantize your network's weights to INT8 with as little accuracy loss as possible. INT8 requires an extra calibration step, which uses sample data to determine how TensorRT should quantize your weights to integers. With careful tuning and a good calibration dataset, the accuracy loss from INT8 is often minimal. This makes INT8 a great precision for lower-power environments such as those using T4 GPUs or AGX Jetson modules, both of which have strong INT8 capabilities.\n",
    "\n",
     "__FP8__ is an inference-focused 8-bit floating-point type with 1 sign bit, 4 exponent bits, and 3 mantissa bits. Like INT8, it further reduces memory requirements and latency compared to FP16, but it can suffer from precision loss, especially in deep models with large ranges of values, so extra calibration steps are needed to minimize accuracy loss. FP8 is supported on NVIDIA GPUs with compute capability 9.0 and higher (e.g. NVIDIA H100 and later).\n",
    "\n",
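     "The calibration idea behind INT8 can be sketched in a few lines of NumPy: pick a scale that maps the observed floating-point range onto [-127, 127], quantize, and check the round-trip error. The \"calibration\" data here is random, purely for illustration - TensorRT's calibrators do this with real sample batches and more sophisticated range selection:\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "# Stand-in calibration data; a real calibrator sees representative inputs.\n",
     "rng = np.random.default_rng(0)\n",
     "weights = rng.normal(0.0, 0.5, size=1000).astype(np.float32)\n",
     "\n",
     "# Symmetric linear quantization: map [-max|w|, +max|w|] onto [-127, 127].\n",
     "scale = np.abs(weights).max() / 127.0\n",
     "quantized = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)\n",
     "\n",
     "# Dequantize and measure the worst-case round-trip error (at most scale / 2).\n",
     "dequantized = quantized.astype(np.float32) * scale\n",
     "print(\"max round-trip error:\", np.abs(weights - dequantized).max())\n",
     "```\n",
     "\n",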
    "### __Runtime:__ Torch-TRT/Python API/C++ API/TRITON\n",
    "\n",
     "For a more in-depth discussion of these options and how they compare, see [this notebook on TensorRT Runtimes!](./3.%20Understanding%20TensorRT%20Runtimes.ipynb)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## What do I do if I run into issues with conversion?"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here are several steps you can try if your model is not converting to TensorRT properly:\n",
    "\n",
    "1. Check the logs - if you are using a tool such as trtexec to convert your model, it will tell you which layer is problematic\n",
    "2. Write a custom plugin - you can find more information on it [here](https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#extending).\n",
    "3. Use alternative implementations of the layers or operations in question in your network definition - for example, it can be easier to use the padding argument in your convolutional layers instead of adding an explicit padding layer to the network. \n",
    "4. ONNX to TensorRT conversion can be hard to debug, but tools like [ONNX GraphSurgeon](https://docs.nvidia.com/deeplearning/tensorrt/api/python_api/graphsurgeon/graphsurgeon.html) can help you patch specific nodes in your graph, as well as pull it apart for analysis\n",
    "5. Ask on the [NVIDIA developer forums](https://forums.developer.nvidia.com/c/ai-data-science/deep-learning/tensorrt) - we have many active TensorRT experts at NVIDIA who browse the forums and can help\n",
    "6. Post an issue on the [TensorRT OSS Github](https://github.com/NVIDIA/TensorRT)"
   ]
  },
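  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a starting point for step 1, trtexec can rerun the conversion with verbose logging, which typically names the operator it fails on (the model path below is a placeholder):\n",
    "\n",
    "```shell\n",
    "# Attempt the ONNX -> TensorRT conversion with verbose logs;\n",
    "# the log will report the layer or op that causes the failure.\n",
    "trtexec --onnx=model.onnx --verbose\n",
    "```\n",
    "\n",
    "Searching the log for the last node parsed before the error usually identifies which layer needs a plugin or an alternative implementation."
   ]
  },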
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Next Steps:\n",
    "\n",
    "You have now taken a model saved in ONNX format, converted it to an optimized TensorRT engine, and deployed it using the Python runtime. This is a great first step towards getting better performance out of your deep learning models at inference time!\n",
    "\n",
    "Now, you can check out the remaining notebooks in this guide. See:\n",
    "\n",
    "- [2. Using PyTorch through ONNX.ipynb](./2.%20Using%20PyTorch%20through%20ONNX.ipynb)\n",
    "- [3. Understanding TensorRT Runtimes.ipynb](./3.%20Understanding%20TensorRT%20Runtimes.ipynb)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Profiling\n",
    "\n",
    "This is a great next step for further optimizing and debugging models you are working on productionizing.\n",
    "\n",
    "You can find it here: https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#performance\n",
    "\n",
    "#### TRT Dev Docs\n",
    "\n",
    "Main documentation page for the ONNX, layer builder, C++, and legacy APIs\n",
    "\n",
    "You can find it here: https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html\n",
    "\n",
    "#### TRT OSS GitHub\n",
    "\n",
    "Contains OSS TRT components, sample applications, and plugin examples\n",
    "\n",
    "You can find it here: https://github.com/NVIDIA/TensorRT"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
