{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# Using External Libraries in Relay\n",
        "\n",
        "**Authors**: [Masahiro Masuda](https://github.com/masahi), [Truman Tian](https://github.com/SiNZeRo)\n",
        "\n",
        "This is a short tutorial on how to use external libraries, such as cuDNN or cuBLAS, with Relay.\n",
        "\n",
        "Relay uses TVM internally to generate target-specific code. For example, with the cuda backend TVM generates cuda kernels for every layer in the user-provided network. But sometimes it is also helpful to incorporate external libraries developed by different vendors into Relay. Luckily, TVM has a mechanism to transparently call into these libraries: for Relay users, all that is required is to set the target string appropriately.\n",
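        "\n",
        "For instance, offloading supported operators to cuDNN comes down to a change of the target string (a sketch, assuming TVM was built with cuDNN support):\n",
        "\n",
        "```python\n",
        "# Dispatch supported operators (e.g. conv2d) to cuDNN instead of TVM-generated kernels.\n",
        "target = \"cuda -libs=cudnn\"\n",
        "```\n",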
        "\n",
        "Before you can use external libraries from Relay, TVM needs to be built with the libraries you want to use. For example, to use cuDNN, enable the `USE_CUDNN` option in `cmake/config.cmake`, and specify the cuDNN include and library directories if necessary.\n",
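        "\n",
        "A minimal sketch of the relevant `cmake/config.cmake` setting (if cuDNN lives in a non-standard location, also point the include and library paths at your installation):\n",
        "\n",
        "```cmake\n",
        "# Enable the cuDNN contrib runtime when building TVM.\n",
        "set(USE_CUDNN ON)\n",
        "```\n",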
        "\n",
        "To begin with, we import Relay and TVM."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 1,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "import tvm\n",
        "from tvm import te\n",
        "import numpy as np\n",
        "from tvm.contrib import graph_executor as runtime\n",
        "from tvm import relay\n",
        "from tvm.relay import testing\n",
        "import tvm.testing"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Create a simple network\n",
        "\n",
        "Let's create a very simple network for demonstration. It consists of convolution, batch normalization, and ReLU activation."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "out_channels = 16\n",
        "batch_size = 1\n",
        "\n",
        "data = relay.var(\"data\", relay.TensorType((batch_size, 3, 224, 224), \"float32\"))\n",
        "weight = relay.var(\"weight\")\n",
        "bn_gamma = relay.var(\"bn_gamma\")\n",
        "bn_beta = relay.var(\"bn_beta\")\n",
        "bn_mmean = relay.var(\"bn_mean\")\n",
        "bn_mvar = relay.var(\"bn_var\")\n",
        "\n",
        "simple_net = relay.nn.conv2d(\n",
        "    data=data, weight=weight, kernel_size=(3, 3), channels=out_channels, padding=(1, 1)\n",
        ")\n",
        "simple_net = relay.nn.batch_norm(simple_net, bn_gamma, bn_beta, bn_mmean, bn_mvar)[0]\n",
        "simple_net = relay.nn.relu(simple_net)\n",
        "simple_net = relay.Function(relay.analysis.free_vars(simple_net), simple_net)\n",
        "\n",
        "data_shape = (batch_size, 3, 224, 224)\n",
        "net, params = testing.create_workload(simple_net)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Build and run with the cuda backend\n",
        "\n",
        "We build and run this network with the cuda backend, as usual. By setting the logging level to `DEBUG`, the result of Relay graph compilation is dumped as pseudo code."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 3,
      "metadata": {
        "collapsed": false
      },
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "DEBUG:autotvm:Finish loading 825 records\n",
            "INFO:te_compiler:Using injective.cpu for add based on highest priority (10)\n",
            "/media/workspace/anaconda3/envs/mxnetx/lib/python3.10/site-packages/tvm/driver/build_module.py:263: UserWarning: target_host parameter is going to be deprecated. Please pass in tvm.target.Target(target, host=target_host) instead.\n",
            "  warnings.warn(\n",
            "INFO:te_compiler:Using injective.cpu for sqrt based on highest priority (10)\n",
            "INFO:te_compiler:Using injective.cpu for divide based on highest priority (10)\n",
            "INFO:te_compiler:Using injective.cpu for multiply based on highest priority (10)\n",
            "INFO:te_compiler:Using injective.cpu for expand_dims based on highest priority (10)\n",
            "INFO:te_compiler:Using injective.cpu for negative based on highest priority (10)\n",
            "INFO:te_compiler:Using injective.cpu for multiply based on highest priority (10)\n",
            "INFO:te_compiler:Using injective.cpu for add based on highest priority (10)\n",
            "INFO:te_compiler:Using injective.cpu for expand_dims based on highest priority (10)\n",
            "WARNING:autotvm:One or more operators have not been tuned. Please tune your model for better performance. Use DEBUG logging level to see more details.\n",
            "DEBUG:autotvm:Cannot find tuning records for:\n",
            "    target=cuda -keys=cuda,gpu -arch=sm_75 -max_num_threads=1024 -thread_warp_size=32\n",
            "    key=('conv2d_nchw.cuda', ('TENSOR', (1, 3, 224, 224), 'float32'), ('TENSOR', (16, 3, 3, 3), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'float32')\n",
            "TVM will apply a default schedule which may negatively impact performance.\n",
            "INFO:te_compiler:Using conv2d_nchw.cuda for nn.conv2d based on highest priority (10)\n",
            "INFO:te_compiler:Using injective.cuda for multiply based on highest priority (10)\n",
            "INFO:te_compiler:Using injective.cuda for add based on highest priority (10)\n",
            "INFO:te_compiler:Using injective.cuda for nn.relu based on highest priority (10)\n"
          ]
        }
      ],
      "source": [
        "import logging\n",
        "\n",
        "logging.basicConfig(level=logging.DEBUG)  # to dump TVM IR after fusion\n",
        "\n",
        "target = \"cuda\"\n",
        "lib = relay.build_module.build(net, target, params=params)\n",
        "\n",
        "dev = tvm.device(target, 0)\n",
        "data = np.random.uniform(-1, 1, size=data_shape).astype(\"float32\")\n",
        "module = runtime.GraphModule(lib[\"default\"](dev))\n",
        "module.set_input(\"data\", data)\n",
        "module.run()\n",
        "out_shape = (batch_size, out_channels, 224, 224)\n",
        "out = module.get_output(0, tvm.nd.empty(out_shape))\n",
        "out_cuda = out.numpy()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The generated pseudo code should look like the output below.\n",
        "\n",
        "```{tip}\n",
        "Note how bias add, batch normalization, and ReLU activation are fused into the convolution kernel.\n",
        "```\n",
        "\n",
        "TVM generates a single, fused kernel from this representation."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 4,
      "metadata": {},
      "outputs": [
        {
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "fn (%data: Tensor[(1, 3, 224, 224), float32] /* ty=Tensor[(1, 3, 224, 224), float32] */, %weight: Tensor[(16, 3, 3, 3), float32] /* ty=Tensor[(16, 3, 3, 3), float32] */, %bn_gamma: Tensor[(16), float32] /* ty=Tensor[(16), float32] */, %bn_beta: Tensor[(16), float32] /* ty=Tensor[(16), float32] */, %bn_mean: Tensor[(16), float32] /* ty=Tensor[(16), float32] */, %bn_var: Tensor[(16), float32] /* ty=Tensor[(16), float32] */) -> Tensor[(1, 16, 224, 224), float32] {\n",
            "  %0 = nn.conv2d(%data, %weight, padding=[1, 1, 1, 1], channels=16, kernel_size=[3, 3]) /* ty=Tensor[(1, 16, 224, 224), float32] */;\n",
            "  %1 = nn.batch_norm(%0, %bn_gamma, %bn_beta, %bn_mean, %bn_var) /* ty=(Tensor[(1, 16, 224, 224), float32], Tensor[(16), float32], Tensor[(16), float32]) */;\n",
            "  %2 = %1.0 /* ty=Tensor[(1, 16, 224, 224), float32] */;\n",
            "  nn.relu(%2) /* ty=Tensor[(1, 16, 224, 224), float32] */\n",
            "} /* ty=fn (Tensor[(1, 3, 224, 224), float32], Tensor[(16, 3, 3, 3), float32], Tensor[(16), float32], Tensor[(16), float32], Tensor[(16), float32], Tensor[(16), float32]) -> Tensor[(1, 16, 224, 224), float32] */\n"
          ]
        }
      ],
      "source": [
        "print(lib.ir_mod[\"main\"])"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 5,
      "metadata": {},
      "outputs": [
        {
          "data": {
            "text/plain": [
              "{\"tvmgen_default_fused_nn_conv2d_multiply_add_nn_relu\": FunctionInfoNode(\n",
              "workspace_sizes={cuda -keys=cuda,gpu -arch=sm_75 -max_num_threads=1024 -thread_warp_size=32: 768},\n",
              "  io_sizes={cuda -keys=cuda,gpu -arch=sm_75 -max_num_threads=1024 -thread_warp_size=32: 3211264},\n",
              "  constant_sizes={cuda -keys=cuda,gpu -arch=sm_75 -max_num_threads=1024 -thread_warp_size=32: 0},\n",
              "  tir_primfuncs={cuda -keys=cuda,gpu -arch=sm_75 -max_num_threads=1024 -thread_warp_size=32: PrimFunc([placeholder, placeholder, placeholder, placeholder, T_relu]) attrs={\"from_legacy_te_schedule\": (bool)1, \"global_symbol\": \"tvmgen_default_fused_nn_conv2d_multiply_add_nn_relu\", \"tir.noalias\": (bool)1, \"hash\": \"97c4f8c60220fadf\"} {\n",
              "  // attr [iter_var(blockIdx.z, , blockIdx.z)] thread_extent = 1\n",
              "  allocate conv2d_nchw[float32 * 28], storage_scope = local\n",
              "  allocate pad_temp.shared[float32 * 114], storage_scope = shared\n",
              "  allocate placeholder.shared[float32 * 48], storage_scope = shared\n",
              "  // attr [iter_var(blockIdx.y, , blockIdx.y)] thread_extent = 224\n",
              "  // attr [iter_var(blockIdx.x, , blockIdx.x)] thread_extent = 2\n",
              "  // attr [iter_var(threadIdx.z, , threadIdx.z)] thread_extent = 4\n",
              "  // attr [iter_var(threadIdx.y, , threadIdx.y)] thread_extent = 1\n",
              "  // attr [iter_var(threadIdx.x, , threadIdx.x)] thread_extent = 16\n",
              "  conv2d_nchw[0] = 0f\n",
              "  conv2d_nchw[14] = 0f\n",
              "  conv2d_nchw[2] = 0f\n",
              "  conv2d_nchw[16] = 0f\n",
              "  conv2d_nchw[4] = 0f\n",
              "  conv2d_nchw[18] = 0f\n",
              "  conv2d_nchw[6] = 0f\n",
              "  conv2d_nchw[20] = 0f\n",
              "  conv2d_nchw[8] = 0f\n",
              "  conv2d_nchw[22] = 0f\n",
              "  conv2d_nchw[10] = 0f\n",
              "  conv2d_nchw[24] = 0f\n",
              "  conv2d_nchw[12] = 0f\n",
              "  conv2d_nchw[26] = 0f\n",
              "  conv2d_nchw[1] = 0f\n",
              "  conv2d_nchw[15] = 0f\n",
              "  conv2d_nchw[3] = 0f\n",
              "  conv2d_nchw[17] = 0f\n",
              "  conv2d_nchw[5] = 0f\n",
              "  conv2d_nchw[19] = 0f\n",
              "  conv2d_nchw[7] = 0f\n",
              "  conv2d_nchw[21] = 0f\n",
              "  conv2d_nchw[9] = 0f\n",
              "  conv2d_nchw[23] = 0f\n",
              "  conv2d_nchw[11] = 0f\n",
              "  conv2d_nchw[25] = 0f\n",
              "  conv2d_nchw[13] = 0f\n",
              "  conv2d_nchw[27] = 0f\n",
              "  for (rc.outer, 0, 3) {\n",
              "    // attr [iter_var(threadIdx.z, , threadIdx.z)] thread_extent = 4\n",
              "    // attr [iter_var(threadIdx.y, , threadIdx.y)] thread_extent = 1\n",
              "    // attr [iter_var(threadIdx.x, , threadIdx.x)] thread_extent = 16\n",
              "    if ((((threadIdx.z*29) + (threadIdx.x*2)) < 114)) {\n",
              "      if ((threadIdx.x < 15)) {\n",
              "        pad_temp.shared[((threadIdx.z*29) + (threadIdx.x*2))] = tir.if_then_else((((1 <= blockIdx.y) && (1 <= (((blockIdx.x*112) + (threadIdx.z*29)) + (threadIdx.x*2)))) && ((((blockIdx.x*112) + (threadIdx.z*29)) + (threadIdx.x*2)) < 225)), placeholder[((((((rc.outer*50176) + (blockIdx.y*224)) + (blockIdx.x*112)) + (threadIdx.z*29)) + (threadIdx.x*2)) - 225)], 0f)\n",
              "      }\n",
              "    }\n",
              "    if ((((threadIdx.z*29) + (threadIdx.x*2)) < 113)) {\n",
              "      if ((threadIdx.x < 14)) {\n",
              "        pad_temp.shared[(((threadIdx.z*29) + (threadIdx.x*2)) + 1)] = tir.if_then_else(((1 <= blockIdx.y) && ((((blockIdx.x*112) + (threadIdx.z*29)) + (threadIdx.x*2)) < 224)), placeholder[((((((rc.outer*50176) + (blockIdx.y*224)) + (blockIdx.x*112)) + (threadIdx.z*29)) + (threadIdx.x*2)) - 224)], 0f)\n",
              "      }\n",
              "    }\n",
              "    // attr [iter_var(threadIdx.z, , threadIdx.z)] thread_extent = 4\n",
              "    // attr [iter_var(threadIdx.y, , threadIdx.y)] thread_extent = 1\n",
              "    // attr [iter_var(threadIdx.x, , threadIdx.x)] thread_extent = 16\n",
              "    if (((floordiv(threadIdx.x, 12) + threadIdx.z) < 4)) {\n",
              "      if ((threadIdx.x < 12)) {\n",
              "        placeholder.shared[((threadIdx.z*12) + threadIdx.x)] = placeholder[((((threadIdx.z*108) + (floordiv(threadIdx.x, 3)*27)) + (rc.outer*9)) + floormod(threadIdx.x, 3))]\n",
              "      }\n",
              "    }\n",
              "    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp.shared[threadIdx.x]*placeholder.shared[(threadIdx.z*6)]))\n",
              "    conv2d_nchw[14] = (conv2d_nchw[14] + (pad_temp.shared[threadIdx.x]*placeholder.shared[((threadIdx.z*6) + 24)]))\n",
              "    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp.shared[(threadIdx.x + 16)]*placeholder.shared[(threadIdx.z*6)]))\n",
              "    conv2d_nchw[16] = (conv2d_nchw[16] + (pad_temp.shared[(threadIdx.x + 16)]*placeholder.shared[((threadIdx.z*6) + 24)]))\n",
              "    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp.shared[(threadIdx.x + 32)]*placeholder.shared[(threadIdx.z*6)]))\n",
              "    conv2d_nchw[18] = (conv2d_nchw[18] + (pad_temp.shared[(threadIdx.x + 32)]*placeholder.shared[((threadIdx.z*6) + 24)]))\n",
              "    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp.shared[(threadIdx.x + 48)]*placeholder.shared[(threadIdx.z*6)]))\n",
              "    conv2d_nchw[20] = (conv2d_nchw[20] + (pad_temp.shared[(threadIdx.x + 48)]*placeholder.shared[((threadIdx.z*6) + 24)]))\n",
              "    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp.shared[(threadIdx.x + 64)]*placeholder.shared[(threadIdx.z*6)]))\n",
              "    conv2d_nchw[22] = (conv2d_nchw[22] + (pad_temp.shared[(threadIdx.x + 64)]*placeholder.shared[((threadIdx.z*6) + 24)]))\n",
              "    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp.shared[(threadIdx.x + 80)]*placeholder.shared[(threadIdx.z*6)]))\n",
              "    conv2d_nchw[24] = (conv2d_nchw[24] + (pad_temp.shared[(threadIdx.x + 80)]*placeholder.shared[((threadIdx.z*6) + 24)]))\n",
              "    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp.shared[(threadIdx.x + 96)]*placeholder.shared[(threadIdx.z*6)]))\n",
              "    conv2d_nchw[26] = (conv2d_nchw[26] + (pad_temp.shared[(threadIdx.x + 96)]*placeholder.shared[((threadIdx.z*6) + 24)]))\n",
              "    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp.shared[threadIdx.x]*placeholder.shared[((threadIdx.z*6) + 3)]))\n",
              "    conv2d_nchw[15] = (conv2d_nchw[15] + (pad_temp.shared[threadIdx.x]*placeholder.shared[((threadIdx.z*6) + 27)]))\n",
              "    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp.shared[(threadIdx.x + 16)]*placeholder.shared[((threadIdx.z*6) + 3)]))\n",
              "    conv2d_nchw[17] = (conv2d_nchw[17] + (pad_temp.shared[(threadIdx.x + 16)]*placeholder.shared[((threadIdx.z*6) + 27)]))\n",
              "    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp.shared[(threadIdx.x + 32)]*placeholder.shared[((threadIdx.z*6) + 3)]))\n",
              "    conv2d_nchw[19] = (conv2d_nchw[19] + (pad_temp.shared[(threadIdx.x + 32)]*placeholder.shared[((threadIdx.z*6) + 27)]))\n",
              "    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp.shared[(threadIdx.x + 48)]*placeholder.shared[((threadIdx.z*6) + 3)]))\n",
              "    conv2d_nchw[21] = (conv2d_nchw[21] + (pad_temp.shared[(threadIdx.x + 48)]*placeholder.shared[((threadIdx.z*6) + 27)]))\n",
              "    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp.shared[(threadIdx.x + 64)]*placeholder.shared[((threadIdx.z*6) + 3)]))\n",
              "    conv2d_nchw[23] = (conv2d_nchw[23] + (pad_temp.shared[(threadIdx.x + 64)]*placeholder.shared[((threadIdx.z*6) + 27)]))\n",
              "    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp.shared[(threadIdx.x + 80)]*placeholder.shared[((threadIdx.z*6) + 3)]))\n",
              "    conv2d_nchw[25] = (conv2d_nchw[25] + (pad_temp.shared[(threadIdx.x + 80)]*placeholder.shared[((threadIdx.z*6) + 27)]))\n",
              "    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp.shared[(threadIdx.x + 96)]*placeholder.shared[((threadIdx.z*6) + 3)]))\n",
              "    conv2d_nchw[27] = (conv2d_nchw[27] + (pad_temp.shared[(threadIdx.x + 96)]*placeholder.shared[((threadIdx.z*6) + 27)]))\n",
              "    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp.shared[(threadIdx.x + 1)]*placeholder.shared[((threadIdx.z*6) + 1)]))\n",
              "    conv2d_nchw[14] = (conv2d_nchw[14] + (pad_temp.shared[(threadIdx.x + 1)]*placeholder.shared[((threadIdx.z*6) + 25)]))\n",
              "    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp.shared[(threadIdx.x + 17)]*placeholder.shared[((threadIdx.z*6) + 1)]))\n",
              "    conv2d_nchw[16] = (conv2d_nchw[16] + (pad_temp.shared[(threadIdx.x + 17)]*placeholder.shared[((threadIdx.z*6) + 25)]))\n",
              "    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp.shared[(threadIdx.x + 33)]*placeholder.shared[((threadIdx.z*6) + 1)]))\n",
              "    conv2d_nchw[18] = (conv2d_nchw[18] + (pad_temp.shared[(threadIdx.x + 33)]*placeholder.shared[((threadIdx.z*6) + 25)]))\n",
              "    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp.shared[(threadIdx.x + 49)]*placeholder.shared[((threadIdx.z*6) + 1)]))\n",
              "    conv2d_nchw[20] = (conv2d_nchw[20] + (pad_temp.shared[(threadIdx.x + 49)]*placeholder.shared[((threadIdx.z*6) + 25)]))\n",
              "    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp.shared[(threadIdx.x + 65)]*placeholder.shared[((threadIdx.z*6) + 1)]))\n",
              "    conv2d_nchw[22] = (conv2d_nchw[22] + (pad_temp.shared[(threadIdx.x + 65)]*placeholder.shared[((threadIdx.z*6) + 25)]))\n",
              "    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp.shared[(threadIdx.x + 81)]*placeholder.shared[((threadIdx.z*6) + 1)]))\n",
              "    conv2d_nchw[24] = (conv2d_nchw[24] + (pad_temp.shared[(threadIdx.x + 81)]*placeholder.shared[((threadIdx.z*6) + 25)]))\n",
              "    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp.shared[(threadIdx.x + 97)]*placeholder.shared[((threadIdx.z*6) + 1)]))\n",
              "    conv2d_nchw[26] = (conv2d_nchw[26] + (pad_temp.shared[(threadIdx.x + 97)]*placeholder.shared[((threadIdx.z*6) + 25)]))\n",
              "    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp.shared[(threadIdx.x + 1)]*placeholder.shared[((threadIdx.z*6) + 4)]))\n",
              "    conv2d_nchw[15] = (conv2d_nchw[15] + (pad_temp.shared[(threadIdx.x + 1)]*placeholder.shared[((threadIdx.z*6) + 28)]))\n",
              "    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp.shared[(threadIdx.x + 17)]*placeholder.shared[((threadIdx.z*6) + 4)]))\n",
              "    conv2d_nchw[17] = (conv2d_nchw[17] + (pad_temp.shared[(threadIdx.x + 17)]*placeholder.shared[((threadIdx.z*6) + 28)]))\n",
              "    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp.shared[(threadIdx.x + 33)]*placeholder.shared[((threadIdx.z*6) + 4)]))\n",
              "    conv2d_nchw[19] = (conv2d_nchw[19] + (pad_temp.shared[(threadIdx.x + 33)]*placeholder.shared[((threadIdx.z*6) + 28)]))\n",
              "    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp.shared[(threadIdx.x + 49)]*placeholder.shared[((threadIdx.z*6) + 4)]))\n",
              "    conv2d_nchw[21] = (conv2d_nchw[21] + (pad_temp.shared[(threadIdx.x + 49)]*placeholder.shared[((threadIdx.z*6) + 28)]))\n",
              "    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp.shared[(threadIdx.x + 65)]*placeholder.shared[((threadIdx.z*6) + 4)]))\n",
              "    conv2d_nchw[23] = (conv2d_nchw[23] + (pad_temp.shared[(threadIdx.x + 65)]*placeholder.shared[((threadIdx.z*6) + 28)]))\n",
              "    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp.shared[(threadIdx.x + 81)]*placeholder.shared[((threadIdx.z*6) + 4)]))\n",
              "    conv2d_nchw[25] = (conv2d_nchw[25] + (pad_temp.shared[(threadIdx.x + 81)]*placeholder.shared[((threadIdx.z*6) + 28)]))\n",
              "    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp.shared[(threadIdx.x + 97)]*placeholder.shared[((threadIdx.z*6) + 4)]))\n",
              "    conv2d_nchw[27] = (conv2d_nchw[27] + (pad_temp.shared[(threadIdx.x + 97)]*placeholder.shared[((threadIdx.z*6) + 28)]))\n",
              "    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp.shared[(threadIdx.x + 2)]*placeholder.shared[((threadIdx.z*6) + 2)]))\n",
              "    conv2d_nchw[14] = (conv2d_nchw[14] + (pad_temp.shared[(threadIdx.x + 2)]*placeholder.shared[((threadIdx.z*6) + 26)]))\n",
              "    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp.shared[(threadIdx.x + 18)]*placeholder.shared[((threadIdx.z*6) + 2)]))\n",
              "    conv2d_nchw[16] = (conv2d_nchw[16] + (pad_temp.shared[(threadIdx.x + 18)]*placeholder.shared[((threadIdx.z*6) + 26)]))\n",
              "    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp.shared[(threadIdx.x + 34)]*placeholder.shared[((threadIdx.z*6) + 2)]))\n",
              "    conv2d_nchw[18] = (conv2d_nchw[18] + (pad_temp.shared[(threadIdx.x + 34)]*placeholder.shared[((threadIdx.z*6) + 26)]))\n",
              "    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp.shared[(threadIdx.x + 50)]*placeholder.shared[((threadIdx.z*6) + 2)]))\n",
              "    conv2d_nchw[20] = (conv2d_nchw[20] + (pad_temp.shared[(threadIdx.x + 50)]*placeholder.shared[((threadIdx.z*6) + 26)]))\n",
              "    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp.shared[(threadIdx.x + 66)]*placeholder.shared[((threadIdx.z*6) + 2)]))\n",
              "    conv2d_nchw[22] = (conv2d_nchw[22] + (pad_temp.shared[(threadIdx.x + 66)]*placeholder.shared[((threadIdx.z*6) + 26)]))\n",
              "    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp.shared[(threadIdx.x + 82)]*placeholder.shared[((threadIdx.z*6) + 2)]))\n",
              "    conv2d_nchw[24] = (conv2d_nchw[24] + (pad_temp.shared[(threadIdx.x + 82)]*placeholder.shared[((threadIdx.z*6) + 26)]))\n",
              "    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp.shared[(threadIdx.x + 98)]*placeholder.shared[((threadIdx.z*6) + 2)]))\n",
              "    conv2d_nchw[26] = (conv2d_nchw[26] + (pad_temp.shared[(threadIdx.x + 98)]*placeholder.shared[((threadIdx.z*6) + 26)]))\n",
              "    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp.shared[(threadIdx.x + 2)]*placeholder.shared[((threadIdx.z*6) + 5)]))\n",
              "    conv2d_nchw[15] = (conv2d_nchw[15] + (pad_temp.shared[(threadIdx.x + 2)]*placeholder.shared[((threadIdx.z*6) + 29)]))\n",
              "    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp.shared[(threadIdx.x + 18)]*placeholder.shared[((threadIdx.z*6) + 5)]))\n",
              "    conv2d_nchw[17] = (conv2d_nchw[17] + (pad_temp.shared[(threadIdx.x + 18)]*placeholder.shared[((threadIdx.z*6) + 29)]))\n",
              "    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp.shared[(threadIdx.x + 34)]*placeholder.shared[((threadIdx.z*6) + 5)]))\n",
              "    conv2d_nchw[19] = (conv2d_nchw[19] + (pad_temp.shared[(threadIdx.x + 34)]*placeholder.shared[((threadIdx.z*6) + 29)]))\n",
              "    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp.shared[(threadIdx.x + 50)]*placeholder.shared[((threadIdx.z*6) + 5)]))\n",
              "    conv2d_nchw[21] = (conv2d_nchw[21] + (pad_temp.shared[(threadIdx.x + 50)]*placeholder.shared[((threadIdx.z*6) + 29)]))\n",
              "    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp.shared[(threadIdx.x + 66)]*placeholder.shared[((threadIdx.z*6) + 5)]))\n",
              "    conv2d_nchw[23] = (conv2d_nchw[23] + (pad_temp.shared[(threadIdx.x + 66)]*placeholder.shared[((threadIdx.z*6) + 29)]))\n",
              "    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp.shared[(threadIdx.x + 82)]*placeholder.shared[((threadIdx.z*6) + 5)]))\n",
              "    conv2d_nchw[25] = (conv2d_nchw[25] + (pad_temp.shared[(threadIdx.x + 82)]*placeholder.shared[((threadIdx.z*6) + 29)]))\n",
              "    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp.shared[(threadIdx.x + 98)]*placeholder.shared[((threadIdx.z*6) + 5)]))\n",
              "    conv2d_nchw[27] = (conv2d_nchw[27] + (pad_temp.shared[(threadIdx.x + 98)]*placeholder.shared[((threadIdx.z*6) + 29)]))\n",
              "    // attr [iter_var(threadIdx.z, , threadIdx.z)] thread_extent = 4\n",
              "    // attr [iter_var(threadIdx.y, , threadIdx.y)] thread_extent = 1\n",
              "    // attr [iter_var(threadIdx.x, , threadIdx.x)] thread_extent = 16\n",
              "    if ((((threadIdx.z*29) + (threadIdx.x*2)) < 114)) {\n",
              "      if ((threadIdx.x < 15)) {\n",
              "        pad_temp.shared[((threadIdx.z*29) + (threadIdx.x*2))] = tir.if_then_else(((1 <= (((blockIdx.x*112) + (threadIdx.z*29)) + (threadIdx.x*2))) && ((((blockIdx.x*112) + (threadIdx.z*29)) + (threadIdx.x*2)) < 225)), placeholder[((((((rc.outer*50176) + (blockIdx.y*224)) + (blockIdx.x*112)) + (threadIdx.z*29)) + (threadIdx.x*2)) - 1)], 0f)\n",
              "      }\n",
              "    }\n",
              "    if ((((threadIdx.z*29) + (threadIdx.x*2)) < 113)) {\n",
              "      if ((threadIdx.x < 14)) {\n",
              "        pad_temp.shared[(((threadIdx.z*29) + (threadIdx.x*2)) + 1)] = tir.if_then_else(((((blockIdx.x*112) + (threadIdx.z*29)) + (threadIdx.x*2)) < 224), placeholder[(((((rc.outer*50176) + (blockIdx.y*224)) + (blockIdx.x*112)) + (threadIdx.z*29)) + (threadIdx.x*2))], 0f)\n",
              "      }\n",
              "    }\n",
              "    // attr [iter_var(threadIdx.z, , threadIdx.z)] thread_extent = 4\n",
              "    // attr [iter_var(threadIdx.y, , threadIdx.y)] thread_extent = 1\n",
              "    // attr [iter_var(threadIdx.x, , threadIdx.x)] thread_extent = 16\n",
              "    if (((floordiv(threadIdx.x, 12) + threadIdx.z) < 4)) {\n",
              "      if ((threadIdx.x < 12)) {\n",
              "        placeholder.shared[((threadIdx.z*12) + threadIdx.x)] = placeholder[(((((threadIdx.z*108) + (floordiv(threadIdx.x, 3)*27)) + (rc.outer*9)) + floormod(threadIdx.x, 3)) + 3)]\n",
              "      }\n",
              "    }\n",
              "    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp.shared[threadIdx.x]*placeholder.shared[(threadIdx.z*6)]))\n",
              "    conv2d_nchw[14] = (conv2d_nchw[14] + (pad_temp.shared[threadIdx.x]*placeholder.shared[((threadIdx.z*6) + 24)]))\n",
              "    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp.shared[(threadIdx.x + 16)]*placeholder.shared[(threadIdx.z*6)]))\n",
              "    conv2d_nchw[16] = (conv2d_nchw[16] + (pad_temp.shared[(threadIdx.x + 16)]*placeholder.shared[((threadIdx.z*6) + 24)]))\n",
              "    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp.shared[(threadIdx.x + 32)]*placeholder.shared[(threadIdx.z*6)]))\n",
              "    conv2d_nchw[18] = (conv2d_nchw[18] + (pad_temp.shared[(threadIdx.x + 32)]*placeholder.shared[((threadIdx.z*6) + 24)]))\n",
              "    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp.shared[(threadIdx.x + 48)]*placeholder.shared[(threadIdx.z*6)]))\n",
              "    conv2d_nchw[20] = (conv2d_nchw[20] + (pad_temp.shared[(threadIdx.x + 48)]*placeholder.shared[((threadIdx.z*6) + 24)]))\n",
              "    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp.shared[(threadIdx.x + 64)]*placeholder.shared[(threadIdx.z*6)]))\n",
              "    conv2d_nchw[22] = (conv2d_nchw[22] + (pad_temp.shared[(threadIdx.x + 64)]*placeholder.shared[((threadIdx.z*6) + 24)]))\n",
              "    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp.shared[(threadIdx.x + 80)]*placeholder.shared[(threadIdx.z*6)]))\n",
              "    conv2d_nchw[24] = (conv2d_nchw[24] + (pad_temp.shared[(threadIdx.x + 80)]*placeholder.shared[((threadIdx.z*6) + 24)]))\n",
              "    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp.shared[(threadIdx.x + 96)]*placeholder.shared[(threadIdx.z*6)]))\n",
              "    conv2d_nchw[26] = (conv2d_nchw[26] + (pad_temp.shared[(threadIdx.x + 96)]*placeholder.shared[((threadIdx.z*6) + 24)]))\n",
              "    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp.shared[threadIdx.x]*placeholder.shared[((threadIdx.z*6) + 3)]))\n",
              "    conv2d_nchw[15] = (conv2d_nchw[15] + (pad_temp.shared[threadIdx.x]*placeholder.shared[((threadIdx.z*6) + 27)]))\n",
              "    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp.shared[(threadIdx.x + 16)]*placeholder.shared[((threadIdx.z*6) + 3)]))\n",
              "    conv2d_nchw[17] = (conv2d_nchw[17] + (pad_temp.shared[(threadIdx.x + 16)]*placeholder.shared[((threadIdx.z*6) + 27)]))\n",
              "    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp.shared[(threadIdx.x + 32)]*placeholder.shared[((threadIdx.z*6) + 3)]))\n",
              "    conv2d_nchw[19] = (conv2d_nchw[19] + (pad_temp.shared[(threadIdx.x + 32)]*placeholder.shared[((threadIdx.z*6) + 27)]))\n",
              "    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp.shared[(threadIdx.x + 48)]*placeholder.shared[((threadIdx.z*6) + 3)]))\n",
              "    conv2d_nchw[21] = (conv2d_nchw[21] + (pad_temp.shared[(threadIdx.x + 48)]*placeholder.shared[((threadIdx.z*6) + 27)]))\n",
              "    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp.shared[(threadIdx.x + 64)]*placeholder.shared[((threadIdx.z*6) + 3)]))\n",
              "    conv2d_nchw[23] = (conv2d_nchw[23] + (pad_temp.shared[(threadIdx.x + 64)]*placeholder.shared[((threadIdx.z*6) + 27)]))\n",
              "    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp.shared[(threadIdx.x + 80)]*placeholder.shared[((threadIdx.z*6) + 3)]))\n",
              "    conv2d_nchw[25] = (conv2d_nchw[25] + (pad_temp.shared[(threadIdx.x + 80)]*placeholder.shared[((threadIdx.z*6) + 27)]))\n",
              "    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp.shared[(threadIdx.x + 96)]*placeholder.shared[((threadIdx.z*6) + 3)]))\n",
              "    conv2d_nchw[27] = (conv2d_nchw[27] + (pad_temp.shared[(threadIdx.x + 96)]*placeholder.shared[((threadIdx.z*6) + 27)]))\n",
              "    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp.shared[(threadIdx.x + 1)]*placeholder.shared[((threadIdx.z*6) + 1)]))\n",
              "    conv2d_nchw[14] = (conv2d_nchw[14] + (pad_temp.shared[(threadIdx.x + 1)]*placeholder.shared[((threadIdx.z*6) + 25)]))\n",
              "    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp.shared[(threadIdx.x + 17)]*placeholder.shared[((threadIdx.z*6) + 1)]))\n",
              "    conv2d_nchw[16] = (conv2d_nchw[16] + (pad_temp.shared[(threadIdx.x + 17)]*placeholder.shared[((threadIdx.z*6) + 25)]))\n",
              "    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp.shared[(threadIdx.x + 33)]*placeholder.shared[((threadIdx.z*6) + 1)]))\n",
              "    conv2d_nchw[18] = (conv2d_nchw[18] + (pad_temp.shared[(threadIdx.x + 33)]*placeholder.shared[((threadIdx.z*6) + 25)]))\n",
              "    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp.shared[(threadIdx.x + 49)]*placeholder.shared[((threadIdx.z*6) + 1)]))\n",
              "    conv2d_nchw[20] = (conv2d_nchw[20] + (pad_temp.shared[(threadIdx.x + 49)]*placeholder.shared[((threadIdx.z*6) + 25)]))\n",
              "    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp.shared[(threadIdx.x + 65)]*placeholder.shared[((threadIdx.z*6) + 1)]))\n",
              "    conv2d_nchw[22] = (conv2d_nchw[22] + (pad_temp.shared[(threadIdx.x + 65)]*placeholder.shared[((threadIdx.z*6) + 25)]))\n",
              "    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp.shared[(threadIdx.x + 81)]*placeholder.shared[((threadIdx.z*6) + 1)]))\n",
              "    conv2d_nchw[24] = (conv2d_nchw[24] + (pad_temp.shared[(threadIdx.x + 81)]*placeholder.shared[((threadIdx.z*6) + 25)]))\n",
              "    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp.shared[(threadIdx.x + 97)]*placeholder.shared[((threadIdx.z*6) + 1)]))\n",
              "    conv2d_nchw[26] = (conv2d_nchw[26] + (pad_temp.shared[(threadIdx.x + 97)]*placeholder.shared[((threadIdx.z*6) + 25)]))\n",
              "    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp.shared[(threadIdx.x + 1)]*placeholder.shared[((threadIdx.z*6) + 4)]))\n",
              "    conv2d_nchw[15] = (conv2d_nchw[15] + (pad_temp.shared[(threadIdx.x + 1)]*placeholder.shared[((threadIdx.z*6) + 28)]))\n",
              "    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp.shared[(threadIdx.x + 17)]*placeholder.shared[((threadIdx.z*6) + 4)]))\n",
              "    conv2d_nchw[17] = (conv2d_nchw[17] + (pad_temp.shared[(threadIdx.x + 17)]*placeholder.shared[((threadIdx.z*6) + 28)]))\n",
              "    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp.shared[(threadIdx.x + 33)]*placeholder.shared[((threadIdx.z*6) + 4)]))\n",
              "    conv2d_nchw[19] = (conv2d_nchw[19] + (pad_temp.shared[(threadIdx.x + 33)]*placeholder.shared[((threadIdx.z*6) + 28)]))\n",
              "    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp.shared[(threadIdx.x + 49)]*placeholder.shared[((threadIdx.z*6) + 4)]))\n",
              "    conv2d_nchw[21] = (conv2d_nchw[21] + (pad_temp.shared[(threadIdx.x + 49)]*placeholder.shared[((threadIdx.z*6) + 28)]))\n",
              "    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp.shared[(threadIdx.x + 65)]*placeholder.shared[((threadIdx.z*6) + 4)]))\n",
              "    conv2d_nchw[23] = (conv2d_nchw[23] + (pad_temp.shared[(threadIdx.x + 65)]*placeholder.shared[((threadIdx.z*6) + 28)]))\n",
              "    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp.shared[(threadIdx.x + 81)]*placeholder.shared[((threadIdx.z*6) + 4)]))\n",
              "    conv2d_nchw[25] = (conv2d_nchw[25] + (pad_temp.shared[(threadIdx.x + 81)]*placeholder.shared[((threadIdx.z*6) + 28)]))\n",
              "    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp.shared[(threadIdx.x + 97)]*placeholder.shared[((threadIdx.z*6) + 4)]))\n",
              "    conv2d_nchw[27] = (conv2d_nchw[27] + (pad_temp.shared[(threadIdx.x + 97)]*placeholder.shared[((threadIdx.z*6) + 28)]))\n",
              "    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp.shared[(threadIdx.x + 2)]*placeholder.shared[((threadIdx.z*6) + 2)]))\n",
              "    conv2d_nchw[14] = (conv2d_nchw[14] + (pad_temp.shared[(threadIdx.x + 2)]*placeholder.shared[((threadIdx.z*6) + 26)]))\n",
              "    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp.shared[(threadIdx.x + 18)]*placeholder.shared[((threadIdx.z*6) + 2)]))\n",
              "    conv2d_nchw[16] = (conv2d_nchw[16] + (pad_temp.shared[(threadIdx.x + 18)]*placeholder.shared[((threadIdx.z*6) + 26)]))\n",
              "    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp.shared[(threadIdx.x + 34)]*placeholder.shared[((threadIdx.z*6) + 2)]))\n",
              "    conv2d_nchw[18] = (conv2d_nchw[18] + (pad_temp.shared[(threadIdx.x + 34)]*placeholder.shared[((threadIdx.z*6) + 26)]))\n",
              "    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp.shared[(threadIdx.x + 50)]*placeholder.shared[((threadIdx.z*6) + 2)]))\n",
              "    conv2d_nchw[20] = (conv2d_nchw[20] + (pad_temp.shared[(threadIdx.x + 50)]*placeholder.shared[((threadIdx.z*6) + 26)]))\n",
              "    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp.shared[(threadIdx.x + 66)]*placeholder.shared[((threadIdx.z*6) + 2)]))\n",
              "    conv2d_nchw[22] = (conv2d_nchw[22] + (pad_temp.shared[(threadIdx.x + 66)]*placeholder.shared[((threadIdx.z*6) + 26)]))\n",
              "    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp.shared[(threadIdx.x + 82)]*placeholder.shared[((threadIdx.z*6) + 2)]))\n",
              "    conv2d_nchw[24] = (conv2d_nchw[24] + (pad_temp.shared[(threadIdx.x + 82)]*placeholder.shared[((threadIdx.z*6) + 26)]))\n",
              "    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp.shared[(threadIdx.x + 98)]*placeholder.shared[((threadIdx.z*6) + 2)]))\n",
              "    conv2d_nchw[26] = (conv2d_nchw[26] + (pad_temp.shared[(threadIdx.x + 98)]*placeholder.shared[((threadIdx.z*6) + 26)]))\n",
              "    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp.shared[(threadIdx.x + 2)]*placeholder.shared[((threadIdx.z*6) + 5)]))\n",
              "    conv2d_nchw[15] = (conv2d_nchw[15] + (pad_temp.shared[(threadIdx.x + 2)]*placeholder.shared[((threadIdx.z*6) + 29)]))\n",
              "    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp.shared[(threadIdx.x + 18)]*placeholder.shared[((threadIdx.z*6) + 5)]))\n",
              "    conv2d_nchw[17] = (conv2d_nchw[17] + (pad_temp.shared[(threadIdx.x + 18)]*placeholder.shared[((threadIdx.z*6) + 29)]))\n",
              "    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp.shared[(threadIdx.x + 34)]*placeholder.shared[((threadIdx.z*6) + 5)]))\n",
              "    conv2d_nchw[19] = (conv2d_nchw[19] + (pad_temp.shared[(threadIdx.x + 34)]*placeholder.shared[((threadIdx.z*6) + 29)]))\n",
              "    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp.shared[(threadIdx.x + 50)]*placeholder.shared[((threadIdx.z*6) + 5)]))\n",
              "    conv2d_nchw[21] = (conv2d_nchw[21] + (pad_temp.shared[(threadIdx.x + 50)]*placeholder.shared[((threadIdx.z*6) + 29)]))\n",
              "    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp.shared[(threadIdx.x + 66)]*placeholder.shared[((threadIdx.z*6) + 5)]))\n",
              "    conv2d_nchw[23] = (conv2d_nchw[23] + (pad_temp.shared[(threadIdx.x + 66)]*placeholder.shared[((threadIdx.z*6) + 29)]))\n",
              "    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp.shared[(threadIdx.x + 82)]*placeholder.shared[((threadIdx.z*6) + 5)]))\n",
              "    conv2d_nchw[25] = (conv2d_nchw[25] + (pad_temp.shared[(threadIdx.x + 82)]*placeholder.shared[((threadIdx.z*6) + 29)]))\n",
              "    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp.shared[(threadIdx.x + 98)]*placeholder.shared[((threadIdx.z*6) + 5)]))\n",
              "    conv2d_nchw[27] = (conv2d_nchw[27] + (pad_temp.shared[(threadIdx.x + 98)]*placeholder.shared[((threadIdx.z*6) + 29)]))\n",
              "    // attr [iter_var(threadIdx.z, , threadIdx.z)] thread_extent = 4\n",
              "    // attr [iter_var(threadIdx.y, , threadIdx.y)] thread_extent = 1\n",
              "    // attr [iter_var(threadIdx.x, , threadIdx.x)] thread_extent = 16\n",
              "    if ((((threadIdx.z*29) + (threadIdx.x*2)) < 114)) {\n",
              "      if ((threadIdx.x < 15)) {\n",
              "        pad_temp.shared[((threadIdx.z*29) + (threadIdx.x*2))] = tir.if_then_else((((blockIdx.y < 223) && (1 <= (((blockIdx.x*112) + (threadIdx.z*29)) + (threadIdx.x*2)))) && ((((blockIdx.x*112) + (threadIdx.z*29)) + (threadIdx.x*2)) < 225)), placeholder[((((((rc.outer*50176) + (blockIdx.y*224)) + (blockIdx.x*112)) + (threadIdx.z*29)) + (threadIdx.x*2)) + 223)], 0f)\n",
              "      }\n",
              "    }\n",
              "    if ((((threadIdx.z*29) + (threadIdx.x*2)) < 113)) {\n",
              "      if ((threadIdx.x < 14)) {\n",
              "        pad_temp.shared[(((threadIdx.z*29) + (threadIdx.x*2)) + 1)] = tir.if_then_else(((blockIdx.y < 223) && ((((blockIdx.x*112) + (threadIdx.z*29)) + (threadIdx.x*2)) < 224)), placeholder[((((((rc.outer*50176) + (blockIdx.y*224)) + (blockIdx.x*112)) + (threadIdx.z*29)) + (threadIdx.x*2)) + 224)], 0f)\n",
              "      }\n",
              "    }\n",
              "    // attr [iter_var(threadIdx.z, , threadIdx.z)] thread_extent = 4\n",
              "    // attr [iter_var(threadIdx.y, , threadIdx.y)] thread_extent = 1\n",
              "    // attr [iter_var(threadIdx.x, , threadIdx.x)] thread_extent = 16\n",
              "    if (((floordiv(threadIdx.x, 12) + threadIdx.z) < 4)) {\n",
              "      if ((threadIdx.x < 12)) {\n",
              "        placeholder.shared[((threadIdx.z*12) + threadIdx.x)] = placeholder[(((((threadIdx.z*108) + (floordiv(threadIdx.x, 3)*27)) + (rc.outer*9)) + floormod(threadIdx.x, 3)) + 6)]\n",
              "      }\n",
              "    }\n",
              "    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp.shared[threadIdx.x]*placeholder.shared[(threadIdx.z*6)]))\n",
              "    conv2d_nchw[14] = (conv2d_nchw[14] + (pad_temp.shared[threadIdx.x]*placeholder.shared[((threadIdx.z*6) + 24)]))\n",
              "    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp.shared[(threadIdx.x + 16)]*placeholder.shared[(threadIdx.z*6)]))\n",
              "    conv2d_nchw[16] = (conv2d_nchw[16] + (pad_temp.shared[(threadIdx.x + 16)]*placeholder.shared[((threadIdx.z*6) + 24)]))\n",
              "    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp.shared[(threadIdx.x + 32)]*placeholder.shared[(threadIdx.z*6)]))\n",
              "    conv2d_nchw[18] = (conv2d_nchw[18] + (pad_temp.shared[(threadIdx.x + 32)]*placeholder.shared[((threadIdx.z*6) + 24)]))\n",
              "    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp.shared[(threadIdx.x + 48)]*placeholder.shared[(threadIdx.z*6)]))\n",
              "    conv2d_nchw[20] = (conv2d_nchw[20] + (pad_temp.shared[(threadIdx.x + 48)]*placeholder.shared[((threadIdx.z*6) + 24)]))\n",
              "    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp.shared[(threadIdx.x + 64)]*placeholder.shared[(threadIdx.z*6)]))\n",
              "    conv2d_nchw[22] = (conv2d_nchw[22] + (pad_temp.shared[(threadIdx.x + 64)]*placeholder.shared[((threadIdx.z*6) + 24)]))\n",
              "    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp.shared[(threadIdx.x + 80)]*placeholder.shared[(threadIdx.z*6)]))\n",
              "    conv2d_nchw[24] = (conv2d_nchw[24] + (pad_temp.shared[(threadIdx.x + 80)]*placeholder.shared[((threadIdx.z*6) + 24)]))\n",
              "    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp.shared[(threadIdx.x + 96)]*placeholder.shared[(threadIdx.z*6)]))\n",
              "    conv2d_nchw[26] = (conv2d_nchw[26] + (pad_temp.shared[(threadIdx.x + 96)]*placeholder.shared[((threadIdx.z*6) + 24)]))\n",
              "    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp.shared[threadIdx.x]*placeholder.shared[((threadIdx.z*6) + 3)]))\n",
              "    conv2d_nchw[15] = (conv2d_nchw[15] + (pad_temp.shared[threadIdx.x]*placeholder.shared[((threadIdx.z*6) + 27)]))\n",
              "    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp.shared[(threadIdx.x + 16)]*placeholder.shared[((threadIdx.z*6) + 3)]))\n",
              "    conv2d_nchw[17] = (conv2d_nchw[17] + (pad_temp.shared[(threadIdx.x + 16)]*placeholder.shared[((threadIdx.z*6) + 27)]))\n",
              "    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp.shared[(threadIdx.x + 32)]*placeholder.shared[((threadIdx.z*6) + 3)]))\n",
              "    conv2d_nchw[19] = (conv2d_nchw[19] + (pad_temp.shared[(threadIdx.x + 32)]*placeholder.shared[((threadIdx.z*6) + 27)]))\n",
              "    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp.shared[(threadIdx.x + 48)]*placeholder.shared[((threadIdx.z*6) + 3)]))\n",
              "    conv2d_nchw[21] = (conv2d_nchw[21] + (pad_temp.shared[(threadIdx.x + 48)]*placeholder.shared[((threadIdx.z*6) + 27)]))\n",
              "    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp.shared[(threadIdx.x + 64)]*placeholder.shared[((threadIdx.z*6) + 3)]))\n",
              "    conv2d_nchw[23] = (conv2d_nchw[23] + (pad_temp.shared[(threadIdx.x + 64)]*placeholder.shared[((threadIdx.z*6) + 27)]))\n",
              "    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp.shared[(threadIdx.x + 80)]*placeholder.shared[((threadIdx.z*6) + 3)]))\n",
              "    conv2d_nchw[25] = (conv2d_nchw[25] + (pad_temp.shared[(threadIdx.x + 80)]*placeholder.shared[((threadIdx.z*6) + 27)]))\n",
              "    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp.shared[(threadIdx.x + 96)]*placeholder.shared[((threadIdx.z*6) + 3)]))\n",
              "    conv2d_nchw[27] = (conv2d_nchw[27] + (pad_temp.shared[(threadIdx.x + 96)]*placeholder.shared[((threadIdx.z*6) + 27)]))\n",
              "    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp.shared[(threadIdx.x + 1)]*placeholder.shared[((threadIdx.z*6) + 1)]))\n",
              "    conv2d_nchw[14] = (conv2d_nchw[14] + (pad_temp.shared[(threadIdx.x + 1)]*placeholder.shared[((threadIdx.z*6) + 25)]))\n",
              "    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp.shared[(threadIdx.x + 17)]*placeholder.shared[((threadIdx.z*6) + 1)]))\n",
              "    conv2d_nchw[16] = (conv2d_nchw[16] + (pad_temp.shared[(threadIdx.x + 17)]*placeholder.shared[((threadIdx.z*6) + 25)]))\n",
              "    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp.shared[(threadIdx.x + 33)]*placeholder.shared[((threadIdx.z*6) + 1)]))\n",
              "    conv2d_nchw[18] = (conv2d_nchw[18] + (pad_temp.shared[(threadIdx.x + 33)]*placeholder.shared[((threadIdx.z*6) + 25)]))\n",
              "    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp.shared[(threadIdx.x + 49)]*placeholder.shared[((threadIdx.z*6) + 1)]))\n",
              "    conv2d_nchw[20] = (conv2d_nchw[20] + (pad_temp.shared[(threadIdx.x + 49)]*placeholder.shared[((threadIdx.z*6) + 25)]))\n",
              "    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp.shared[(threadIdx.x + 65)]*placeholder.shared[((threadIdx.z*6) + 1)]))\n",
              "    conv2d_nchw[22] = (conv2d_nchw[22] + (pad_temp.shared[(threadIdx.x + 65)]*placeholder.shared[((threadIdx.z*6) + 25)]))\n",
              "    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp.shared[(threadIdx.x + 81)]*placeholder.shared[((threadIdx.z*6) + 1)]))\n",
              "    conv2d_nchw[24] = (conv2d_nchw[24] + (pad_temp.shared[(threadIdx.x + 81)]*placeholder.shared[((threadIdx.z*6) + 25)]))\n",
              "    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp.shared[(threadIdx.x + 97)]*placeholder.shared[((threadIdx.z*6) + 1)]))\n",
              "    conv2d_nchw[26] = (conv2d_nchw[26] + (pad_temp.shared[(threadIdx.x + 97)]*placeholder.shared[((threadIdx.z*6) + 25)]))\n",
              "    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp.shared[(threadIdx.x + 1)]*placeholder.shared[((threadIdx.z*6) + 4)]))\n",
              "    conv2d_nchw[15] = (conv2d_nchw[15] + (pad_temp.shared[(threadIdx.x + 1)]*placeholder.shared[((threadIdx.z*6) + 28)]))\n",
              "    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp.shared[(threadIdx.x + 17)]*placeholder.shared[((threadIdx.z*6) + 4)]))\n",
              "    conv2d_nchw[17] = (conv2d_nchw[17] + (pad_temp.shared[(threadIdx.x + 17)]*placeholder.shared[((threadIdx.z*6) + 28)]))\n",
              "    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp.shared[(threadIdx.x + 33)]*placeholder.shared[((threadIdx.z*6) + 4)]))\n",
              "    conv2d_nchw[19] = (conv2d_nchw[19] + (pad_temp.shared[(threadIdx.x + 33)]*placeholder.shared[((threadIdx.z*6) + 28)]))\n",
              "    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp.shared[(threadIdx.x + 49)]*placeholder.shared[((threadIdx.z*6) + 4)]))\n",
              "    conv2d_nchw[21] = (conv2d_nchw[21] + (pad_temp.shared[(threadIdx.x + 49)]*placeholder.shared[((threadIdx.z*6) + 28)]))\n",
              "    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp.shared[(threadIdx.x + 65)]*placeholder.shared[((threadIdx.z*6) + 4)]))\n",
              "    conv2d_nchw[23] = (conv2d_nchw[23] + (pad_temp.shared[(threadIdx.x + 65)]*placeholder.shared[((threadIdx.z*6) + 28)]))\n",
              "    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp.shared[(threadIdx.x + 81)]*placeholder.shared[((threadIdx.z*6) + 4)]))\n",
              "    conv2d_nchw[25] = (conv2d_nchw[25] + (pad_temp.shared[(threadIdx.x + 81)]*placeholder.shared[((threadIdx.z*6) + 28)]))\n",
              "    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp.shared[(threadIdx.x + 97)]*placeholder.shared[((threadIdx.z*6) + 4)]))\n",
              "    conv2d_nchw[27] = (conv2d_nchw[27] + (pad_temp.shared[(threadIdx.x + 97)]*placeholder.shared[((threadIdx.z*6) + 28)]))\n",
              "    conv2d_nchw[0] = (conv2d_nchw[0] + (pad_temp.shared[(threadIdx.x + 2)]*placeholder.shared[((threadIdx.z*6) + 2)]))\n",
              "    conv2d_nchw[14] = (conv2d_nchw[14] + (pad_temp.shared[(threadIdx.x + 2)]*placeholder.shared[((threadIdx.z*6) + 26)]))\n",
              "    conv2d_nchw[2] = (conv2d_nchw[2] + (pad_temp.shared[(threadIdx.x + 18)]*placeholder.shared[((threadIdx.z*6) + 2)]))\n",
              "    conv2d_nchw[16] = (conv2d_nchw[16] + (pad_temp.shared[(threadIdx.x + 18)]*placeholder.shared[((threadIdx.z*6) + 26)]))\n",
              "    conv2d_nchw[4] = (conv2d_nchw[4] + (pad_temp.shared[(threadIdx.x + 34)]*placeholder.shared[((threadIdx.z*6) + 2)]))\n",
              "    conv2d_nchw[18] = (conv2d_nchw[18] + (pad_temp.shared[(threadIdx.x + 34)]*placeholder.shared[((threadIdx.z*6) + 26)]))\n",
              "    conv2d_nchw[6] = (conv2d_nchw[6] + (pad_temp.shared[(threadIdx.x + 50)]*placeholder.shared[((threadIdx.z*6) + 2)]))\n",
              "    conv2d_nchw[20] = (conv2d_nchw[20] + (pad_temp.shared[(threadIdx.x + 50)]*placeholder.shared[((threadIdx.z*6) + 26)]))\n",
              "    conv2d_nchw[8] = (conv2d_nchw[8] + (pad_temp.shared[(threadIdx.x + 66)]*placeholder.shared[((threadIdx.z*6) + 2)]))\n",
              "    conv2d_nchw[22] = (conv2d_nchw[22] + (pad_temp.shared[(threadIdx.x + 66)]*placeholder.shared[((threadIdx.z*6) + 26)]))\n",
              "    conv2d_nchw[10] = (conv2d_nchw[10] + (pad_temp.shared[(threadIdx.x + 82)]*placeholder.shared[((threadIdx.z*6) + 2)]))\n",
              "    conv2d_nchw[24] = (conv2d_nchw[24] + (pad_temp.shared[(threadIdx.x + 82)]*placeholder.shared[((threadIdx.z*6) + 26)]))\n",
              "    conv2d_nchw[12] = (conv2d_nchw[12] + (pad_temp.shared[(threadIdx.x + 98)]*placeholder.shared[((threadIdx.z*6) + 2)]))\n",
              "    conv2d_nchw[26] = (conv2d_nchw[26] + (pad_temp.shared[(threadIdx.x + 98)]*placeholder.shared[((threadIdx.z*6) + 26)]))\n",
              "    conv2d_nchw[1] = (conv2d_nchw[1] + (pad_temp.shared[(threadIdx.x + 2)]*placeholder.shared[((threadIdx.z*6) + 5)]))\n",
              "    conv2d_nchw[15] = (conv2d_nchw[15] + (pad_temp.shared[(threadIdx.x + 2)]*placeholder.shared[((threadIdx.z*6) + 29)]))\n",
              "    conv2d_nchw[3] = (conv2d_nchw[3] + (pad_temp.shared[(threadIdx.x + 18)]*placeholder.shared[((threadIdx.z*6) + 5)]))\n",
              "    conv2d_nchw[17] = (conv2d_nchw[17] + (pad_temp.shared[(threadIdx.x + 18)]*placeholder.shared[((threadIdx.z*6) + 29)]))\n",
              "    conv2d_nchw[5] = (conv2d_nchw[5] + (pad_temp.shared[(threadIdx.x + 34)]*placeholder.shared[((threadIdx.z*6) + 5)]))\n",
              "    conv2d_nchw[19] = (conv2d_nchw[19] + (pad_temp.shared[(threadIdx.x + 34)]*placeholder.shared[((threadIdx.z*6) + 29)]))\n",
              "    conv2d_nchw[7] = (conv2d_nchw[7] + (pad_temp.shared[(threadIdx.x + 50)]*placeholder.shared[((threadIdx.z*6) + 5)]))\n",
              "    conv2d_nchw[21] = (conv2d_nchw[21] + (pad_temp.shared[(threadIdx.x + 50)]*placeholder.shared[((threadIdx.z*6) + 29)]))\n",
              "    conv2d_nchw[9] = (conv2d_nchw[9] + (pad_temp.shared[(threadIdx.x + 66)]*placeholder.shared[((threadIdx.z*6) + 5)]))\n",
              "    conv2d_nchw[23] = (conv2d_nchw[23] + (pad_temp.shared[(threadIdx.x + 66)]*placeholder.shared[((threadIdx.z*6) + 29)]))\n",
              "    conv2d_nchw[11] = (conv2d_nchw[11] + (pad_temp.shared[(threadIdx.x + 82)]*placeholder.shared[((threadIdx.z*6) + 5)]))\n",
              "    conv2d_nchw[25] = (conv2d_nchw[25] + (pad_temp.shared[(threadIdx.x + 82)]*placeholder.shared[((threadIdx.z*6) + 29)]))\n",
              "    conv2d_nchw[13] = (conv2d_nchw[13] + (pad_temp.shared[(threadIdx.x + 98)]*placeholder.shared[((threadIdx.z*6) + 5)]))\n",
              "    conv2d_nchw[27] = (conv2d_nchw[27] + (pad_temp.shared[(threadIdx.x + 98)]*placeholder.shared[((threadIdx.z*6) + 29)]))\n",
              "  }\n",
              "  T_relu[((((threadIdx.z*100352) + (blockIdx.y*224)) + (blockIdx.x*112)) + threadIdx.x)] = max(((conv2d_nchw[0]*placeholder[(threadIdx.z*2)]) + placeholder[(threadIdx.z*2)]), 0f)\n",
              "  T_relu[(((((threadIdx.z*100352) + (blockIdx.y*224)) + (blockIdx.x*112)) + threadIdx.x) + 401408)] = max(((conv2d_nchw[14]*placeholder[((threadIdx.z*2) + 8)]) + placeholder[((threadIdx.z*2) + 8)]), 0f)\n",
              "  T_relu[(((((threadIdx.z*100352) + (blockIdx.y*224)) + (blockIdx.x*112)) + threadIdx.x) + 16)] = max(((conv2d_nchw[2]*placeholder[(threadIdx.z*2)]) + placeholder[(threadIdx.z*2)]), 0f)\n",
              "  T_relu[(((((threadIdx.z*100352) + (blockIdx.y*224)) + (blockIdx.x*112)) + threadIdx.x) + 401424)] = max(((conv2d_nchw[16]*placeholder[((threadIdx.z*2) + 8)]) + placeholder[((threadIdx.z*2) + 8)]), 0f)\n",
              "  T_relu[(((((threadIdx.z*100352) + (blockIdx.y*224)) + (blockIdx.x*112)) + threadIdx.x) + 32)] = max(((conv2d_nchw[4]*placeholder[(threadIdx.z*2)]) + placeholder[(threadIdx.z*2)]), 0f)\n",
              "  T_relu[(((((threadIdx.z*100352) + (blockIdx.y*224)) + (blockIdx.x*112)) + threadIdx.x) + 401440)] = max(((conv2d_nchw[18]*placeholder[((threadIdx.z*2) + 8)]) + placeholder[((threadIdx.z*2) + 8)]), 0f)\n",
              "  T_relu[(((((threadIdx.z*100352) + (blockIdx.y*224)) + (blockIdx.x*112)) + threadIdx.x) + 48)] = max(((conv2d_nchw[6]*placeholder[(threadIdx.z*2)]) + placeholder[(threadIdx.z*2)]), 0f)\n",
              "  T_relu[(((((threadIdx.z*100352) + (blockIdx.y*224)) + (blockIdx.x*112)) + threadIdx.x) + 401456)] = max(((conv2d_nchw[20]*placeholder[((threadIdx.z*2) + 8)]) + placeholder[((threadIdx.z*2) + 8)]), 0f)\n",
              "  T_relu[(((((threadIdx.z*100352) + (blockIdx.y*224)) + (blockIdx.x*112)) + threadIdx.x) + 64)] = max(((conv2d_nchw[8]*placeholder[(threadIdx.z*2)]) + placeholder[(threadIdx.z*2)]), 0f)\n",
              "  T_relu[(((((threadIdx.z*100352) + (blockIdx.y*224)) + (blockIdx.x*112)) + threadIdx.x) + 401472)] = max(((conv2d_nchw[22]*placeholder[((threadIdx.z*2) + 8)]) + placeholder[((threadIdx.z*2) + 8)]), 0f)\n",
              "  T_relu[(((((threadIdx.z*100352) + (blockIdx.y*224)) + (blockIdx.x*112)) + threadIdx.x) + 80)] = max(((conv2d_nchw[10]*placeholder[(threadIdx.z*2)]) + placeholder[(threadIdx.z*2)]), 0f)\n",
              "  T_relu[(((((threadIdx.z*100352) + (blockIdx.y*224)) + (blockIdx.x*112)) + threadIdx.x) + 401488)] = max(((conv2d_nchw[24]*placeholder[((threadIdx.z*2) + 8)]) + placeholder[((threadIdx.z*2) + 8)]), 0f)\n",
              "  T_relu[(((((threadIdx.z*100352) + (blockIdx.y*224)) + (blockIdx.x*112)) + threadIdx.x) + 96)] = max(((conv2d_nchw[12]*placeholder[(threadIdx.z*2)]) + placeholder[(threadIdx.z*2)]), 0f)\n",
              "  T_relu[(((((threadIdx.z*100352) + (blockIdx.y*224)) + (blockIdx.x*112)) + threadIdx.x) + 401504)] = max(((conv2d_nchw[26]*placeholder[((threadIdx.z*2) + 8)]) + placeholder[((threadIdx.z*2) + 8)]), 0f)\n",
              "  T_relu[(((((threadIdx.z*100352) + (blockIdx.y*224)) + (blockIdx.x*112)) + threadIdx.x) + 50176)] = max(((conv2d_nchw[1]*placeholder[((threadIdx.z*2) + 1)]) + placeholder[((threadIdx.z*2) + 1)]), 0f)\n",
              "  T_relu[(((((threadIdx.z*100352) + (blockIdx.y*224)) + (blockIdx.x*112)) + threadIdx.x) + 451584)] = max(((conv2d_nchw[15]*placeholder[((threadIdx.z*2) + 9)]) + placeholder[((threadIdx.z*2) + 9)]), 0f)\n",
              "  T_relu[(((((threadIdx.z*100352) + (blockIdx.y*224)) + (blockIdx.x*112)) + threadIdx.x) + 50192)] = max(((conv2d_nchw[3]*placeholder[((threadIdx.z*2) + 1)]) + placeholder[((threadIdx.z*2) + 1)]), 0f)\n",
              "  T_relu[(((((threadIdx.z*100352) + (blockIdx.y*224)) + (blockIdx.x*112)) + threadIdx.x) + 451600)] = max(((conv2d_nchw[17]*placeholder[((threadIdx.z*2) + 9)]) + placeholder[((threadIdx.z*2) + 9)]), 0f)\n",
              "  T_relu[(((((threadIdx.z*100352) + (blockIdx.y*224)) + (blockIdx.x*112)) + threadIdx.x) + 50208)] = max(((conv2d_nchw[5]*placeholder[((threadIdx.z*2) + 1)]) + placeholder[((threadIdx.z*2) + 1)]), 0f)\n",
              "  T_relu[(((((threadIdx.z*100352) + (blockIdx.y*224)) + (blockIdx.x*112)) + threadIdx.x) + 451616)] = max(((conv2d_nchw[19]*placeholder[((threadIdx.z*2) + 9)]) + placeholder[((threadIdx.z*2) + 9)]), 0f)\n",
              "  T_relu[(((((threadIdx.z*100352) + (blockIdx.y*224)) + (blockIdx.x*112)) + threadIdx.x) + 50224)] = max(((conv2d_nchw[7]*placeholder[((threadIdx.z*2) + 1)]) + placeholder[((threadIdx.z*2) + 1)]), 0f)\n",
              "  T_relu[(((((threadIdx.z*100352) + (blockIdx.y*224)) + (blockIdx.x*112)) + threadIdx.x) + 451632)] = max(((conv2d_nchw[21]*placeholder[((threadIdx.z*2) + 9)]) + placeholder[((threadIdx.z*2) + 9)]), 0f)\n",
              "  T_relu[(((((threadIdx.z*100352) + (blockIdx.y*224)) + (blockIdx.x*112)) + threadIdx.x) + 50240)] = max(((conv2d_nchw[9]*placeholder[((threadIdx.z*2) + 1)]) + placeholder[((threadIdx.z*2) + 1)]), 0f)\n",
              "  T_relu[(((((threadIdx.z*100352) + (blockIdx.y*224)) + (blockIdx.x*112)) + threadIdx.x) + 451648)] = max(((conv2d_nchw[23]*placeholder[((threadIdx.z*2) + 9)]) + placeholder[((threadIdx.z*2) + 9)]), 0f)\n",
              "  T_relu[(((((threadIdx.z*100352) + (blockIdx.y*224)) + (blockIdx.x*112)) + threadIdx.x) + 50256)] = max(((conv2d_nchw[11]*placeholder[((threadIdx.z*2) + 1)]) + placeholder[((threadIdx.z*2) + 1)]), 0f)\n",
              "  T_relu[(((((threadIdx.z*100352) + (blockIdx.y*224)) + (blockIdx.x*112)) + threadIdx.x) + 451664)] = max(((conv2d_nchw[25]*placeholder[((threadIdx.z*2) + 9)]) + placeholder[((threadIdx.z*2) + 9)]), 0f)\n",
              "  T_relu[(((((threadIdx.z*100352) + (blockIdx.y*224)) + (blockIdx.x*112)) + threadIdx.x) + 50272)] = max(((conv2d_nchw[13]*placeholder[((threadIdx.z*2) + 1)]) + placeholder[((threadIdx.z*2) + 1)]), 0f)\n",
              "  T_relu[(((((threadIdx.z*100352) + (blockIdx.y*224)) + (blockIdx.x*112)) + threadIdx.x) + 451680)] = max(((conv2d_nchw[27]*placeholder[((threadIdx.z*2) + 9)]) + placeholder[((threadIdx.z*2) + 9)]), 0f)\n",
              "}\n",
              "},\n",
              "  relay_primfuncs={cuda -keys=cuda,gpu -arch=sm_75 -max_num_threads=1024 -thread_warp_size=32: fn (%p0: Tensor[(1, 3, 224, 224), float32] /* ty=Tensor[(1, 3, 224, 224), float32] */, %p1: Tensor[(16, 3, 3, 3), float32] /* ty=Tensor[(16, 3, 3, 3), float32] */, %p2: Tensor[(16, 1, 1), float32] /* ty=Tensor[(16, 1, 1), float32] */, %p3: Tensor[(16, 1, 1), float32] /* ty=Tensor[(16, 1, 1), float32] */, target=meta[Target][0], prim_funcs={'tvmgen_default_fused_nn_conv2d_multiply_add_nn_relu'=meta[tir.PrimFunc][0]}, out_layout=\"\", data_layout=\"NCHW\", hash=\"97c4f8c60220fadf\", kernel_layout=\"OIHW\", prim_fn_var='tvmgen_default_fused_nn_conv2d_multiply_add_nn_relu', Primitive=1) -> Tensor[(1, 16, 224, 224), float32] {\n",
              "  %0 = nn.conv2d(%p0, %p1, padding=[1, 1, 1, 1], channels=16, kernel_size=[3, 3]) /* ty=Tensor[(1, 16, 224, 224), float32] */;\n",
              "  %1 = multiply(%0, %p2) /* ty=Tensor[(1, 16, 224, 224), float32] */;\n",
              "  %2 = add(%1, %p3) /* ty=Tensor[(1, 16, 224, 224), float32] */;\n",
              "  nn.relu(%2) /* ty=Tensor[(1, 16, 224, 224), float32] */\n",
              "} /* ty=fn (Tensor[(1, 3, 224, 224), float32], Tensor[(16, 3, 3, 3), float32], Tensor[(16, 1, 1), float32], Tensor[(16, 1, 1), float32]) -> Tensor[(1, 16, 224, 224), float32] */\n",
              "}), \"__tvm_main__\": FunctionInfoNode(\n",
              "workspace_sizes={cuda -keys=cuda,gpu -arch=sm_75 -max_num_threads=1024 -thread_warp_size=32: 0},\n",
              "  io_sizes={cuda -keys=cuda,gpu -arch=sm_75 -max_num_threads=1024 -thread_warp_size=32: 3813376},\n",
              "  constant_sizes={cuda -keys=cuda,gpu -arch=sm_75 -max_num_threads=1024 -thread_warp_size=32: 1856},\n",
              "  tir_primfuncs={},\n",
              "  relay_primfuncs={cuda -keys=cuda,gpu -arch=sm_75 -max_num_threads=1024 -thread_warp_size=32: fn (%data {virtual_device=VirtualDevice(device_type=2, virtual_device_id=0, target=Target(kind='cuda', keys={'cuda', 'gpu'}, attrs={'thread_warp_size': 32, 'max_num_threads': 1024, 'arch': \"sm_75\"}, host=Target(kind='llvm', keys={'cpu'}, attrs={'link-params': (bool)0})))}: Tensor[(1, 3, 224, 224), float32] /* ty=Tensor[(1, 3, 224, 224), float32] */, hash=\"9f4aa0477d7fec53\", executor=meta[Executor][0], kernel_layout=\"OIHW\", data_layout=\"NCHW\", out_layout=\"\", runtime=meta[Runtime][0], virtual_device=VirtualDevice(device_type=2, virtual_device_id=0, target=Target(kind='cuda', keys={'cuda', 'gpu'}, attrs={'thread_warp_size': 32, 'max_num_threads': 1024, 'arch': \"sm_75\"}, host=Target(kind='llvm', keys={'cpu'}, attrs={'link-params': (bool)0})))) -> Tensor[(1, 16, 224, 224), float32] {\n",
              "  %3 = fn (%p0: Tensor[(1, 3, 224, 224), float32] /* ty=Tensor[(1, 3, 224, 224), float32] */, %p1: Tensor[(16, 3, 3, 3), float32] /* ty=Tensor[(16, 3, 3, 3), float32] */, %p2: Tensor[(16, 1, 1), float32] /* ty=Tensor[(16, 1, 1), float32] */, %p3: Tensor[(16, 1, 1), float32] /* ty=Tensor[(16, 1, 1), float32] */, hash=\"97c4f8c60220fadf\", data_layout=\"NCHW\", kernel_layout=\"OIHW\", Primitive=1, out_layout=\"\") -> Tensor[(1, 16, 224, 224), float32] {\n",
              "    %0 = nn.conv2d(%p0, %p1, padding=[1, 1, 1, 1], channels=16, kernel_size=[3, 3]) /* ty=Tensor[(1, 16, 224, 224), float32] */;\n",
              "    %1 = multiply(%0, %p2) /* ty=Tensor[(1, 16, 224, 224), float32] */;\n",
              "    %2 = add(%1, %p3) /* ty=Tensor[(1, 16, 224, 224), float32] */;\n",
              "    nn.relu(%2) /* ty=Tensor[(1, 16, 224, 224), float32] */\n",
              "  } /* ty=fn (Tensor[(1, 3, 224, 224), float32], Tensor[(16, 3, 3, 3), float32], Tensor[(16, 1, 1), float32], Tensor[(16, 1, 1), float32]) -> Tensor[(1, 16, 224, 224), float32] */;\n",
              "  %3(%data, meta[relay.Constant][0] /* ty=Tensor[(16, 3, 3, 3), float32] */, meta[relay.Constant][1] /* ty=Tensor[(16, 1, 1), float32] */, meta[relay.Constant][2] /* ty=Tensor[(16, 1, 1), float32] */) /* ty=Tensor[(1, 16, 224, 224), float32] */\n",
              "} /* ty=fn (Tensor[(1, 3, 224, 224), float32]) -> Tensor[(1, 16, 224, 224), float32] */\n",
              "})}"
            ]
          },
          "execution_count": 5,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "lib.function_metadata"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 为卷积层使用 cuDNN\n",
        "\n",
        "可以用 cuDNN 提供的卷积核来代替 TVM 生成的卷积核。为此，需要做的只是将选项 `\" -libs=cudnn\"` 附加到目标字符串中。"
      ]
    },
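    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "顺便一提，如果想同时使用多个外部库（例如 cuDNN 和 cuBLAS），可以在目标字符串中用逗号分隔库名。下面是示意写法（假设构建 TVM 时已启用 `USE_CUDNN` 与 `USE_CUBLAS`）：\n",
        "\n",
        "```python\n",
        "# 示意：同时启用 cuDNN 与 cuBLAS（需要 TVM 构建时开启相应的 cmake 选项）\n",
        "target = \"cuda -libs=cudnn,cublas\"\n",
        "```"
      ]
    },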
    {
      "cell_type": "code",
      "execution_count": 6,
      "metadata": {
        "collapsed": false
      },
      "outputs": [
        {
          "name": "stderr",
          "output_type": "stream",
          "text": [
            "DEBUG:autotvm:Finish loading 825 records\n",
            "INFO:te_compiler:Using injective.cpu for add based on highest priority (10)\n",
            "/media/workspace/anaconda3/envs/mxnetx/lib/python3.10/site-packages/tvm/driver/build_module.py:263: UserWarning: target_host parameter is going to be deprecated. Please pass in tvm.target.Target(target, host=target_host) instead.\n",
            "  warnings.warn(\n",
            "INFO:te_compiler:Using injective.cpu for sqrt based on highest priority (10)\n",
            "INFO:te_compiler:Using injective.cpu for divide based on highest priority (10)\n",
            "INFO:te_compiler:Using injective.cpu for multiply based on highest priority (10)\n",
            "INFO:te_compiler:Using injective.cpu for expand_dims based on highest priority (10)\n",
            "INFO:te_compiler:Using injective.cpu for negative based on highest priority (10)\n",
            "INFO:te_compiler:Using injective.cpu for multiply based on highest priority (10)\n",
            "INFO:te_compiler:Using injective.cpu for add based on highest priority (10)\n",
            "INFO:te_compiler:Using injective.cpu for expand_dims based on highest priority (10)\n",
            "[09:19:32] /media/pc/data/4tb/lxw/books/tvm/src/runtime/contrib/cudnn/conv_forward.cc:135: \tCUDNN Found 8 fwd algorithms, choosing CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_GEMM\n",
            "[09:19:32] /media/pc/data/4tb/lxw/books/tvm/src/runtime/contrib/cudnn/conv_forward.cc:138: \t\t0) CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_GEMM - time: 0.046912 ms, Memory: 0\n",
            "[09:19:32] /media/pc/data/4tb/lxw/books/tvm/src/runtime/contrib/cudnn/conv_forward.cc:138: \t\t1) CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_PRECOMP_GEMM - time: 0.073984 ms, Memory: 304000\n",
            "[09:19:32] /media/pc/data/4tb/lxw/books/tvm/src/runtime/contrib/cudnn/conv_forward.cc:138: \t\t2) CUDNN_CONVOLUTION_FWD_ALGO_GEMM - time: 0.08064 ms, Memory: 5419008\n",
            "[09:19:32] /media/pc/data/4tb/lxw/books/tvm/src/runtime/contrib/cudnn/conv_forward.cc:138: \t\t3) CUDNN_CONVOLUTION_FWD_ALGO_WINOGRAD - time: 0.089344 ms, Memory: 19200\n",
            "[09:19:32] /media/pc/data/4tb/lxw/books/tvm/src/runtime/contrib/cudnn/conv_forward.cc:138: \t\t4) CUDNN_CONVOLUTION_FWD_ALGO_IMPLICIT_PRECOMP_GEMM - time: 0.129024 ms, Memory: 304000\n",
            "[09:19:32] /media/pc/data/4tb/lxw/books/tvm/src/runtime/contrib/cudnn/conv_forward.cc:138: \t\t5) CUDNN_CONVOLUTION_FWD_ALGO_FFT_TILING - time: 0.900576 ms, Memory: 374272\n",
            "[09:19:32] /media/pc/data/4tb/lxw/books/tvm/src/runtime/contrib/cudnn/conv_forward.cc:138: \t\t6) CUDNN_CONVOLUTION_FWD_ALGO_WINOGRAD_NONFUSED - time: 1.11293 ms, Memory: 137288448\n",
            "[09:19:32] /media/pc/data/4tb/lxw/books/tvm/src/runtime/contrib/cudnn/conv_forward.cc:138: \t\t7) CUDNN_CONVOLUTION_FWD_ALGO_WINOGRAD_NONFUSED - time: 1.11664 ms, Memory: 137288448\n",
            "DEBUG:autotvm:Cannot find tuning records for:\n",
            "    target=cuda -keys=cuda,gpu -arch=sm_75 -libs=cudnn -max_num_threads=1024 -thread_warp_size=32\n",
            "    key=('conv2d_cudnn.cuda', ('TENSOR', (1, 3, 224, 224), 'float32'), ('TENSOR', (16, 3, 3, 3), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 1, 'NCHW', 'float32')\n",
            "TVM will apply a default schedule which may negatively impact performance.\n",
            "INFO:te_compiler:Using conv2d_cudnn.cuda for nn.conv2d based on highest priority (25)\n",
            "INFO:te_compiler:Using injective.cuda for multiply based on highest priority (10)\n",
            "INFO:te_compiler:Using injective.cuda for add based on highest priority (10)\n",
            "INFO:te_compiler:Using injective.cuda for nn.relu based on highest priority (10)\n"
          ]
        }
      ],
      "source": [
        "net, params = testing.create_workload(simple_net)\n",
        "target = \"cuda -libs=cudnn\"  # use cudnn for convolution\n",
        "lib = relay.build_module.build(net, target, params=params)\n",
        "\n",
        "dev = tvm.device(target, 0)\n",
        "data = np.random.uniform(-1, 1, size=data_shape).astype(\"float32\")\n",
        "module = runtime.GraphModule(lib[\"default\"](dev))\n",
        "module.set_input(\"data\", data)\n",
        "module.run()\n",
        "out_shape = (batch_size, out_channels, 224, 224)\n",
        "out = module.get_output(0, tvm.nd.empty(out_shape))\n",
        "out_cudnn = out.numpy()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "```{note}\n",
        "如果你使用 cuDNN, Relay 不能融合后面的层的卷积。这是因为层融合发生在 TVM 内部表示 (IR) 级别。Relay 将外部库视为黑盒，因此没有办法将它们与 TVM IR 融合。\n",
        "```\n",
        "\n",
        "下面的伪代码显示，cuDNN 卷积 + bias add + batch norm + ReLU 分为两个计算阶段，一个用于 cuDNN 调用，另一个用于其余的运算。"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 19,
      "metadata": {},
      "outputs": [
        {
          "data": {
            "text/plain": [
              "#[version = \"0.0.5\"]\n",
              "def @main(%data: Tensor[(1, 3, 224, 224), float32] /* ty=Tensor[(1, 3, 224, 224), float32] */, %weight: Tensor[(16, 3, 3, 3), float32] /* ty=Tensor[(16, 3, 3, 3), float32] */, %bn_gamma: Tensor[(16), float32] /* ty=Tensor[(16), float32] */, %bn_beta: Tensor[(16), float32] /* ty=Tensor[(16), float32] */, %bn_mean: Tensor[(16), float32] /* ty=Tensor[(16), float32] */, %bn_var: Tensor[(16), float32] /* ty=Tensor[(16), float32] */) -> Tensor[(1, 16, 224, 224), float32] {\n",
              "  %0 = nn.conv2d(%data, %weight, padding=[1, 1, 1, 1], channels=16, kernel_size=[3, 3]) /* ty=Tensor[(1, 16, 224, 224), float32] */;\n",
              "  %1 = nn.batch_norm(%0, %bn_gamma, %bn_beta, %bn_mean, %bn_var) /* ty=(Tensor[(1, 16, 224, 224), float32], Tensor[(16), float32], Tensor[(16), float32]) */;\n",
              "  %2 = %1.0 /* ty=Tensor[(1, 16, 224, 224), float32] */;\n",
              "  nn.relu(%2) /* ty=Tensor[(1, 16, 224, 224), float32] */\n",
              "}"
            ]
          },
          "execution_count": 19,
          "metadata": {},
          "output_type": "execute_result"
        }
      ],
      "source": [
        "lib.ir_mod"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 验证结果\n",
        "\n",
        "可以检查两次运行的结果是否匹配。"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 7,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "tvm.testing.assert_allclose(out_cuda, out_cudnn, rtol=1e-5)"
      ]
    },
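    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "`tvm.testing.assert_allclose` compares the two outputs element-wise under a relative tolerance, in the same spirit as NumPy's `assert_allclose`. A minimal illustration of what `rtol=1e-5` tolerates, using NumPy directly so it runs without TVM:\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "a = np.array([1.0, 2.0, 3.0])\n",
        "b = a * (1 + 5e-6)  # relative error of 5e-6, within rtol=1e-5\n",
        "np.testing.assert_allclose(a, b, rtol=1e-5)  # passes silently\n",
        "```"
      ]
    },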
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## 结论\n",
        "\n",
        "本教程涵盖了 cuDNN 与 Relay 的使用。TVM 也支持 cuBLAS。如果 cuBLAS 被启用，它将在全连接的层(`relay.dense`)内使用。要使用 cuBLAS，设置目标字符串为 `\"cuda -libs=cublas\"`。\n",
        "\n",
        "也可以同时使用 cuDNN 和 cuBLAS：`\"cuda -libs=cudnn,cublas\"`。\n",
        "\n",
        "对于 ROCm 后端，支持 MIOpen 和 rocBLAS。它们可以通过 target `\"rocm -libs=miopen,rocblas\"` 来启用。\n",
        "\n",
        "能够使用外部库是很好的，但是需要记住一些注意事项。\n",
        "\n",
        "- 首先，使用外部库可能会限制 TVM 和 Relay 的使用。\n",
        "    \n",
        "    例如，MIOpen 目前只支持 NCHW 布局和 fp32 数据类型，所以在 TVM 中不能使用其他布局或数据类型。\n",
        "\n",
        "- 其次，更重要的是，外部库限制了 graph 编译过程中算子融合的可能性，如上所示。\n",
        "\n",
        "    TVM 和 Relay 的目标是实现在各种硬件上的最佳性能，通过联合算子级和图优化。\n",
        "    为了实现这一目标，应该继续为 TVM 和 Relay 开发更好的优化，同时在必要时使用外部库作为返回现有实现的好方法。"
      ]
    }
  ],
  "metadata": {
    "kernelspec": {
      "display_name": "Python 3.8.13 ('py38': conda)",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.8.13"
    },
    "vscode": {
      "interpreter": {
        "hash": "28558e8daad512806f5c536a1a04c119185f99f65b79002708a12162d02a79c7"
      }
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
