{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Learning Deep Learning Compilers from Scratch, Part 8: TVM Operator Fusion and Writing Custom Passes with the TVM Pass Infra\n",
    "\n",
    "https://cloud.tencent.com/developer/article/1840909"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "For an introduction to the TVM Pass Infra itself, see Part 7, the long-form introduction to TVM Passes. This section walks through how to use the Pass Infra; it is translated from https://tvm.apache.org/docs/tutorials/dev/use_pass_infra.html with some commentary of my own added.\n",
     "\n",
     "As the number of optimization passes in Relay/tir grows, applying them manually and maintaining their dependencies becomes intractable. A pass infrastructure was therefore introduced to manage the optimization passes and to work with the IRs at the different layers of the TVM stack.\n",
     "\n",
     "Relay/tir passes can be applied at various granularities: function-level passes via tvm.relay.transform.FunctionPass / tvm.tir.transform.PrimFuncPass, and module-level passes via tvm.transform.ModulePass. Alternatively, users can rely on tvm.transform.Sequential to apply a sequence of passes to a Relay/tir program, letting the Pass Infra resolve the dependencies between them.\n",
     "\n",
     "This section demonstrates how developers can use the Pass Infra to perform an optimization and build an optimization pipeline for a Relay program; the same approach applies to tir. First, import the necessary packages.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import tvm\n",
    "from tvm import te\n",
    "import tvm.relay as relay"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "def example():\n",
    "    shape = (1, 64, 54, 54)\n",
     "    # np.empty returns uninitialized memory, so the cast below may emit\n",
     "    # a harmless RuntimeWarning about overflow.\n",
     "    c_data = np.empty(shape).astype(\"float32\")\n",
    "    c = relay.const(c_data)\n",
    "    weight = relay.var(\"weight\", shape=(64, 64, 3, 3))\n",
    "    x = relay.var(\"x\", relay.TensorType((1, 64, 56, 56), \"float32\"))\n",
    "    conv = relay.nn.conv2d(x, weight)\n",
    "    y = relay.add(c, c)\n",
    "    y = relay.multiply(y, relay.const(2, \"float32\"))\n",
    "    y = relay.add(conv, y)\n",
    "    z = relay.add(y, c)\n",
    "    z1 = relay.add(y, c)\n",
    "    z2 = relay.add(z, z1)\n",
    "    return relay.Function([x, weight], z2)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Register a layout-alteration hook for nn.conv2d. level=101 gives this\n",
     "# registration higher priority than the built-in one, so the AlterOpLayout\n",
     "# pass used later will rewrite conv2d to the NCHW16c data layout.\n",
     "@relay.op.register_alter_op_layout(\"nn.conv2d\", level=101)\n",
    "def alter_conv2d(attrs, inputs, tinfos, out_type):\n",
    "    data, weight = inputs\n",
    "    new_attrs = dict(attrs)\n",
    "    new_attrs[\"data_layout\"] = \"NCHW16c\"\n",
    "    return relay.nn.conv2d(data, weight, **new_attrs)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Before applying any pass, let's see what the Relay program looks like:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "fn (%x: Tensor[(1, 64, 56, 56), float32], %weight: Tensor[(64, 64, 3, 3), float32]) {\n",
      "  %0 = add(meta[relay.Constant][0], meta[relay.Constant][0]);\n",
      "  %1 = nn.conv2d(%x, %weight, padding=[0, 0, 0, 0]);\n",
      "  %2 = multiply(%0, 2f);\n",
      "  %3 = add(%1, %2);\n",
      "  %4 = add(%3, meta[relay.Constant][0]);\n",
      "  %5 = add(%3, meta[relay.Constant][0]);\n",
      "  add(%4, %5)\n",
      "}\n",
      "\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/tmp/ipykernel_358129/2756789699.py:3: RuntimeWarning: overflow encountered in cast\n",
      "  c_data = np.empty(shape).astype(\"float32\")\n"
     ]
    }
   ],
   "source": [
    "print(example())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Now let's optimize the program. Relay ships with many optimization passes; we will apply a few of them to this example.\n",
     "\n",
     "First, apply a pass manually. Here we use the FoldConstant pass.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/tmp/ipykernel_358129/2756789699.py:3: RuntimeWarning: overflow encountered in cast\n",
      "  c_data = np.empty(shape).astype(\"float32\")\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "def @main(%x: Tensor[(1, 64, 56, 56), float32] /* ty=Tensor[(1, 64, 56, 56), float32] */, %weight: Tensor[(64, 64, 3, 3), float32] /* ty=Tensor[(64, 64, 3, 3), float32] */) -> Tensor[(1, 64, 54, 54), float32] {\n",
      "  %0 = nn.conv2d(%x, %weight, padding=[0, 0, 0, 0]) /* ty=Tensor[(1, 64, 54, 54), float32] */;\n",
      "  %1 = add(%0, meta[relay.Constant][0] /* ty=Tensor[(1, 64, 54, 54), float32] */) /* ty=Tensor[(1, 64, 54, 54), float32] */;\n",
      "  %2 = add(%1, meta[relay.Constant][1] /* ty=Tensor[(1, 64, 54, 54), float32] */) /* ty=Tensor[(1, 64, 54, 54), float32] */;\n",
      "  %3 = add(%1, meta[relay.Constant][1] /* ty=Tensor[(1, 64, 54, 54), float32] */) /* ty=Tensor[(1, 64, 54, 54), float32] */;\n",
      "  add(%2, %3) /* ty=Tensor[(1, 64, 54, 54), float32] */\n",
      "}\n",
      "\n",
      "\n"
     ]
    }
   ],
   "source": [
    "# Let's first create a relay Module which contains one or multiple Relay\n",
    "# functions for optimization.\n",
    "f = example()\n",
    "mod = tvm.IRModule.from_expr(f)\n",
    "\n",
    "# Now we can apply constant folding on the module.\n",
    "# fold_const here is a callback that doesn't take any parameters.\n",
    "fold_const = relay.transform.FoldConstant()\n",
    "# Then, we can invoke the pass on the given module. Note that the constant\n",
    "# folding pass works at the function-level. That being said, each function in\n",
    "# the module will be applied with the optimization. Users don't need to iterate\n",
    "# through individual functions manually to apply this pass.\n",
    "mod = fold_const(mod)\n",
    "# We can see from the updated program that the constants are folded.\n",
    "print(mod)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Compared with the IR before optimization, after applying the FoldConstant pass the expression %2 = multiply(%0, 2f); from the initial IR is a constant expression, so it is folded away into meta[relay.Constant][1].\n",
     "\n",
     "\n",
     "Some passes, such as fusion, also take configuration parameters. For example, opt level 0 does not allow operators to be fused together; users can enable fusion through the fuse_opt_level parameter.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "def @main(%x: Tensor[(1, 64, 56, 56), float32] /* ty=Tensor[(1, 64, 56, 56), float32] */, %weight: Tensor[(64, 64, 3, 3), float32] /* ty=Tensor[(64, 64, 3, 3), float32] */) -> Tensor[(1, 64, 54, 54), float32] {\n",
      "  %0 = fn (%p03: Tensor[(1, 64, 56, 56), float32] /* ty=Tensor[(1, 64, 56, 56), float32] */, %p13: Tensor[(64, 64, 3, 3), float32] /* ty=Tensor[(64, 64, 3, 3), float32] */, Primitive=1) -> Tensor[(1, 64, 54, 54), float32] {\n",
      "    nn.conv2d(%p03, %p13, padding=[0, 0, 0, 0]) /* ty=Tensor[(1, 64, 54, 54), float32] */\n",
      "  } /* ty=fn (Tensor[(1, 64, 56, 56), float32], Tensor[(64, 64, 3, 3), float32]) -> Tensor[(1, 64, 54, 54), float32] */;\n",
      "  %1 = %0(%x, %weight) /* ty=Tensor[(1, 64, 54, 54), float32] */;\n",
      "  %2 = fn (%p02: Tensor[(1, 64, 54, 54), float32] /* ty=Tensor[(1, 64, 54, 54), float32] */, %p12: Tensor[(1, 64, 54, 54), float32] /* ty=Tensor[(1, 64, 54, 54), float32] */, Primitive=1) -> Tensor[(1, 64, 54, 54), float32] {\n",
      "    add(%p02, %p12) /* ty=Tensor[(1, 64, 54, 54), float32] */\n",
      "  } /* ty=fn (Tensor[(1, 64, 54, 54), float32], Tensor[(1, 64, 54, 54), float32]) -> Tensor[(1, 64, 54, 54), float32] */;\n",
      "  %3 = %2(%1, meta[relay.Constant][0] /* ty=Tensor[(1, 64, 54, 54), float32] */) /* ty=Tensor[(1, 64, 54, 54), float32] */;\n",
      "  %4 = fn (%p01: Tensor[(1, 64, 54, 54), float32] /* ty=Tensor[(1, 64, 54, 54), float32] */, %p11: Tensor[(1, 64, 54, 54), float32] /* ty=Tensor[(1, 64, 54, 54), float32] */, Primitive=1) -> Tensor[(1, 64, 54, 54), float32] {\n",
      "    add(%p01, %p11) /* ty=Tensor[(1, 64, 54, 54), float32] */\n",
      "  } /* ty=fn (Tensor[(1, 64, 54, 54), float32], Tensor[(1, 64, 54, 54), float32]) -> Tensor[(1, 64, 54, 54), float32] */;\n",
      "  %5 = fn (%p04: Tensor[(1, 64, 54, 54), float32] /* ty=Tensor[(1, 64, 54, 54), float32] */, %p14: Tensor[(1, 64, 54, 54), float32] /* ty=Tensor[(1, 64, 54, 54), float32] */, Primitive=1) -> Tensor[(1, 64, 54, 54), float32] {\n",
      "    add(%p04, %p14) /* ty=Tensor[(1, 64, 54, 54), float32] */\n",
      "  } /* ty=fn (Tensor[(1, 64, 54, 54), float32], Tensor[(1, 64, 54, 54), float32]) -> Tensor[(1, 64, 54, 54), float32] */;\n",
      "  %6 = %4(%3, meta[relay.Constant][1] /* ty=Tensor[(1, 64, 54, 54), float32] */) /* ty=Tensor[(1, 64, 54, 54), float32] */;\n",
      "  %7 = %5(%3, meta[relay.Constant][1] /* ty=Tensor[(1, 64, 54, 54), float32] */) /* ty=Tensor[(1, 64, 54, 54), float32] */;\n",
      "  %8 = fn (%p0: Tensor[(1, 64, 54, 54), float32] /* ty=Tensor[(1, 64, 54, 54), float32] */, %p1: Tensor[(1, 64, 54, 54), float32] /* ty=Tensor[(1, 64, 54, 54), float32] */, Primitive=1) -> Tensor[(1, 64, 54, 54), float32] {\n",
      "    add(%p0, %p1) /* ty=Tensor[(1, 64, 54, 54), float32] */\n",
      "  } /* ty=fn (Tensor[(1, 64, 54, 54), float32], Tensor[(1, 64, 54, 54), float32]) -> Tensor[(1, 64, 54, 54), float32] */;\n",
      "  %8(%6, %7) /* ty=Tensor[(1, 64, 54, 54), float32] */\n",
      "}\n",
      "\n",
      "\n"
     ]
    }
   ],
   "source": [
    "mod = relay.transform.FuseOps(fuse_opt_level=0)(mod)\n",
    "\n",
    "# We can observe that the optimized module contains functions that only have\n",
     "# a single primitive op.\n",
    "print(mod)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Applying a sequence of passes with Sequential\n",
     "\n",
     "Applying passes one by one as above is tedious, and it requires the user to understand the dependencies between them. For example, fusion currently does not work well on let bindings: if relay.transform.ToANormalForm() were applied before fusion, we would no longer be able to fuse fusable operators, because that pass generates let bindings for every expression in order to canonicalize the Relay program.\n",
     "\n",
     "Relay therefore provides tvm.transform.Sequential, which relieves developers of handling these issues explicitly: they specify the passes they need, and Sequential packages them up to execute as a whole. For example, the same passes can now be applied in sequential style as shown below. tvm.transform.Sequential is analogous to torch.nn.Sequential and mxnet.gluon.Block: just as torch.nn.Sequential holds a sequence of PyTorch modules that build up a network, tvm.transform.Sequential in the Pass Infra holds a sequence of optimization passes.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "def @main(%x: Tensor[(1, 64, 56, 56), float32] /* ty=Tensor[(1, 64, 56, 56), float32] */, %weight: Tensor[(64, 64, 3, 3), float32] /* ty=Tensor[(64, 64, 3, 3), float32] */) -> Tensor[(1, 64, 54, 54), float32] {\n",
      "  %4 = fn (%p0: Tensor[(1, 64, 56, 56), float32] /* ty=Tensor[(1, 64, 56, 56), float32] */, %p1: Tensor[(64, 64, 3, 3), float32] /* ty=Tensor[(64, 64, 3, 3), float32] */, %p2: Tensor[(1, 64, 54, 54), float32] /* ty=Tensor[(1, 64, 54, 54), float32] */, %p3: Tensor[(1, 64, 54, 54), float32] /* ty=Tensor[(1, 64, 54, 54), float32] */, Primitive=1) -> Tensor[(1, 64, 54, 54), float32] {\n",
      "    %0 = nn.conv2d(%p0, %p1, padding=[0, 0, 0, 0]) /* ty=Tensor[(1, 64, 54, 54), float32] */;\n",
      "    %1 = add(%0, %p2) /* ty=Tensor[(1, 64, 54, 54), float32] */;\n",
      "    %2 = add(%1, %p3) /* ty=Tensor[(1, 64, 54, 54), float32] */;\n",
      "    %3 = add(%1, %p3) /* ty=Tensor[(1, 64, 54, 54), float32] */;\n",
      "    add(%2, %3) /* ty=Tensor[(1, 64, 54, 54), float32] */\n",
      "  } /* ty=fn (Tensor[(1, 64, 56, 56), float32], Tensor[(64, 64, 3, 3), float32], Tensor[(1, 64, 54, 54), float32], Tensor[(1, 64, 54, 54), float32]) -> Tensor[(1, 64, 54, 54), float32] */;\n",
      "  %4(%x, %weight, meta[relay.Constant][0] /* ty=Tensor[(1, 64, 54, 54), float32] */, meta[relay.Constant][1] /* ty=Tensor[(1, 64, 54, 54), float32] */) /* ty=Tensor[(1, 64, 54, 54), float32] */\n",
      "}\n",
      "\n",
      "\n"
     ]
    }
   ],
   "source": [
    "# Now let's execute some passes through :py:class:`tvm.transform.Sequential`\n",
    "f = example()\n",
    "mod = tvm.IRModule.from_expr(f)\n",
    "# Glob the interested passes.\n",
    "seq = tvm.transform.Sequential(\n",
    "    [\n",
    "        relay.transform.FoldConstant(),\n",
    "        relay.transform.EliminateCommonSubexpr(),\n",
    "        relay.transform.FuseOps(fuse_opt_level=2),\n",
    "    ]\n",
    ")\n",
    "mod1 = seq(mod)\n",
    "print(mod1)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "The passes applied so far are independent of the target device. The Pass Infra also provides hardware-aware passes; the layout alteration pass below is one example.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "def @main(%x: Tensor[(1, 64, 56, 56), float32] /* ty=Tensor[(1, 64, 56, 56), float32] */, %weight: Tensor[(64, 64, 3, 3), float32] /* ty=Tensor[(64, 64, 3, 3), float32] */) -> Tensor[(1, 64, 54, 54), float32] {\n",
      "  %3 = fn (%p0: Tensor[(1, 64, 56, 56), float32] /* ty=Tensor[(1, 64, 56, 56), float32] */, %p1: Tensor[(64, 64, 3, 3), float32] /* ty=Tensor[(64, 64, 3, 3), float32] */, %p2: Tensor[(1, 64, 54, 54), float32] /* ty=Tensor[(1, 64, 54, 54), float32] */, %p3: Tensor[(1, 64, 54, 54), float32] /* ty=Tensor[(1, 64, 54, 54), float32] */, Primitive=1) -> Tensor[(1, 64, 54, 54), float32] {\n",
      "    %0 = nn.conv2d(%p0, %p1, padding=[0, 0, 0, 0]) /* ty=Tensor[(1, 64, 54, 54), float32] */;\n",
      "    %1 = add(%0, %p2) /* ty=Tensor[(1, 64, 54, 54), float32] */;\n",
      "    %2 = add(%1, %p3) /* ty=Tensor[(1, 64, 54, 54), float32] */;\n",
      "    add(%2, %2) /* ty=Tensor[(1, 64, 54, 54), float32] */\n",
      "  } /* ty=fn (Tensor[(1, 64, 56, 56), float32], Tensor[(64, 64, 3, 3), float32], Tensor[(1, 64, 54, 54), float32], Tensor[(1, 64, 54, 54), float32]) -> Tensor[(1, 64, 54, 54), float32] */;\n",
      "  %3(%x, %weight, meta[relay.Constant][0] /* ty=Tensor[(1, 64, 54, 54), float32] */, meta[relay.Constant][1] /* ty=Tensor[(1, 64, 54, 54), float32] */) /* ty=Tensor[(1, 64, 54, 54), float32] */\n",
      "}\n",
      "\n",
      "\n",
      "def @main(%x: Tensor[(1, 64, 56, 56), float32] /* ty=Tensor[(1, 64, 56, 56), float32] */, %weight: Tensor[(64, 64, 3, 3), float32] /* ty=Tensor[(64, 64, 3, 3), float32] */) -> Tensor[(1, 64, 54, 54), float32] {\n",
      "  %0 = layout_transform(%x, src_layout=\"NCHW\", dst_layout=\"NCHW16c\") /* ty=Tensor[(1, 4, 56, 56, 16), float32] */;\n",
      "  %1 = add(meta[relay.Constant][0] /* ty=Tensor[(1, 64, 54, 54), float32] */, meta[relay.Constant][0] /* ty=Tensor[(1, 64, 54, 54), float32] */) /* ty=Tensor[(1, 64, 54, 54), float32] */;\n",
      "  %2 = multiply(%1, 2f /* ty=float32 */) /* ty=Tensor[(1, 64, 54, 54), float32] */;\n",
      "  %3 = nn.conv2d(%0, %weight, padding=[0, 0, 0, 0], data_layout=\"NCHW16c\") /* ty=Tensor[(1, 4, 54, 54, 16), float32] */;\n",
      "  %4 = layout_transform(%2, src_layout=\"NCHW\", dst_layout=\"NCHW16c\") /* ty=Tensor[(1, 4, 54, 54, 16), float32] */;\n",
      "  %5 = add(%3, %4) /* ty=Tensor[(1, 4, 54, 54, 16), float32] */;\n",
      "  %6 = layout_transform(meta[relay.Constant][0] /* ty=Tensor[(1, 64, 54, 54), float32] */, src_layout=\"NCHW\", dst_layout=\"NCHW16c\") /* ty=Tensor[(1, 4, 54, 54, 16), float32] */;\n",
      "  %7 = add(%5, %6) /* ty=Tensor[(1, 4, 54, 54, 16), float32] */;\n",
      "  %8 = add(%5, %6) /* ty=Tensor[(1, 4, 54, 54, 16), float32] */;\n",
      "  %9 = add(%7, %8) /* ty=Tensor[(1, 4, 54, 54, 16), float32] */;\n",
      "  layout_transform(%9, src_layout=\"NCHW16c\", dst_layout=\"NCHW\") /* ty=Tensor[(1, 64, 54, 54), float32] */\n",
      "}\n",
      "\n",
      "\n"
     ]
    }
   ],
   "source": [
     "# Under opt_level=3, Sequential also runs the passes that require a higher\n",
     "# opt level (e.g. EliminateCommonSubexpr), which the earlier default-context\n",
     "# run skipped.\n",
     "with tvm.transform.PassContext(opt_level=3):\n",
     "    mod4 = seq(mod)\n",
     "print(mod4)\n",
     "\n",
     "# AlterOpLayout is hardware-aware, so it must run under a target context;\n",
     "# it picks up the alter_conv2d hook registered earlier.\n",
     "seq1 = tvm.transform.Sequential([relay.transform.AlterOpLayout()])\n",
     "with tvm.transform.PassContext(opt_level=3):\n",
     "    with tvm.target.Target(\"llvm\"):\n",
     "        mod5 = seq1(mod)\n",
     "print(mod5)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "0x03. TVM Operator Fusion\n",
     "\n",
     "The TVM paper notes that for GPUs and specialized accelerators, fusing multiple operations together noticeably reduces execution time. The idea behind operator fusion is that computing a chain of operators in a single kernel saves the cost of writing intermediate results back to global memory. Concretely, graph operators fall into four categories:\n",
     "\n",
     "injective (one-to-one map): element-wise operators such as add and pointwise multiply.\n",
     "reduction: operators whose output has lower dimensionality than the input, such as sum/max/min.\n",
     "complex-out-fusable (can fuse element-wise maps to its output): operators with more involved computation, such as conv2d.\n",
     "opaque (cannot be fused): operators that cannot be fused at all, such as sort.\n",
     "Based on these operator categories, TVM provides three fusion rules:\n",
     "\n",
     "![](https://ask.qcloudimg.com/http-save/yehe-4941972/0c25093b813f7cee57c92bfc6e9de695.png)\n",
     "\n",
     "The operator fusion pass is implemented in tvm/src/relay/transforms/fuse_ops.cc. It consists of three steps:\n",
     "\n",
     "1. Traverse the Relay tree and build a DAG for post-dominator-tree analysis.\n",
     "2. Build the post-dominator tree.\n",
     "3. Apply the operator fusion algorithm.\n"
   ]
  },
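   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "Step 2 above can be sketched in plain Python. The cell below is a toy illustration, not TVM's actual implementation: it computes immediate post-dominators of a DAG by visiting nodes in reverse topological order and intersecting the post-dominators of each node's successors with an LCA-style walk, which is the scheme fuse_ops.cc follows when deciding how far an operator group may be fused. The graph encoding and the function name are invented for this sketch.\n"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# Toy sketch of step 2: immediate post-dominators of a DAG.\n",
     "# Assumption: succ maps each node to its successors, and topo_order\n",
     "# ends at a single sink node that every path reaches.\n",
     "def immediate_post_dominators(succ, topo_order):\n",
     "    sink = topo_order[-1]\n",
     "    ipdom = {sink: sink}   # node -> its immediate post-dominator\n",
     "    depth = {sink: 0}      # depth in the post-dominator tree\n",
     "\n",
     "    def lca(a, b):\n",
     "        # Walk the deeper node upward until the two walks meet.\n",
     "        while a != b:\n",
     "            if depth[a] < depth[b]:\n",
     "                b = ipdom[b]\n",
     "            else:\n",
     "                a = ipdom[a]\n",
     "        return a\n",
     "\n",
     "    # Reverse topological order guarantees successors are done first.\n",
     "    for node in reversed(topo_order[:-1]):\n",
     "        parent = succ[node][0]\n",
     "        for s in succ[node][1:]:\n",
     "            parent = lca(parent, s)\n",
     "        ipdom[node] = parent\n",
     "        depth[node] = depth[parent] + 1\n",
     "    return ipdom\n",
     "\n",
     "# Diamond a -> {b, c} -> d: d immediately post-dominates every node.\n",
     "succ = {\"a\": [\"b\", \"c\"], \"b\": [\"d\"], \"c\": [\"d\"], \"d\": []}\n",
     "print(immediate_post_dominators(succ, [\"a\", \"b\", \"c\", \"d\"]))"
    ]
   },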
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Building the DAG\n",
     "\n",
     "First, let's look at the pass registration interface:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "namespace transform {\n",
     "\n",
     "// fuse_opt_level == -1 means \"inherit the opt_level from the PassContext\".\n",
     "Pass FuseOps(int fuse_opt_level) {\n",
     "  runtime::TypedPackedFunc<Function(Function, IRModule, PassContext)> pass_func =\n",
     "      [=](Function f, IRModule m, PassContext pc) {\n",
     "        int opt_level = fuse_opt_level == -1 ? pc->opt_level : fuse_opt_level;\n",
     "        auto max_fuse_depth = pc->GetConfig(\"relay.FuseOps.max_depth\", Integer(kMaxFusedOps));\n",
     "        return Downcast<Function>(FuseOps(f, opt_level, max_fuse_depth.value(), m));\n",
     "      };\n",
     "  // A function-level pass at opt level 1 that requires InferType to run first.\n",
     "  return CreateFunctionPass(pass_func, 1, \"FuseOps\", {\"InferType\"});\n",
     "}\n",
     "\n",
     "TVM_REGISTER_GLOBAL(\"relay._transform.FuseOps\").set_body_typed(FuseOps);\n",
     "\n",
     "}  // namespace transform"
   ]
  }
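,
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "Finally, the Pass Infra also allows custom passes to be written directly in Python. The cell below is a sketch that follows the pattern from the official use_pass_infra tutorial: decorating a class with relay.transform.function_pass turns it into a function-level pass whose transform_function method is invoked for every function in the module. CustomPipeline here rewrites each constant c in the example function into multiplier * c.\n"
    ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# A sketch following the pattern from the official use_pass_infra\n",
     "# tutorial: relay.transform.function_pass turns this class into a\n",
     "# function-level pass.\n",
     "@relay.transform.function_pass(opt_level=1)\n",
     "class CustomPipeline:\n",
     "    \"\"\"Replace every constant c in the function with multiplier * c.\"\"\"\n",
     "\n",
     "    def __init__(self, multiplier):\n",
     "        self.multiplier = multiplier\n",
     "\n",
     "    # Required hook: invoked once for every function in the module.\n",
     "    def transform_function(self, func, mod, ctx):\n",
     "        obj = self\n",
     "\n",
     "        class ReplaceConstant(tvm.relay.ExprMutator):\n",
     "            def visit_constant(self, c):\n",
     "                return relay.multiply(obj.multiplier, c)\n",
     "\n",
     "        return ReplaceConstant().visit(func)\n",
     "\n",
     "f = example()\n",
     "mod = tvm.IRModule.from_expr(f)\n",
     "custom_pass = CustomPipeline(multiplier=relay.const(3, dtype=\"float32\"))\n",
     "mod3 = custom_pass(mod)\n",
     "print(mod3)"
    ]
   }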
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "tvm",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.17"
  },
  "orig_nbformat": 4
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
