{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "0a9a8945",
   "metadata": {},
   "source": [
    "# Normalization"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "2a334d59",
   "metadata": {},
   "outputs": [],
   "source": [
    "import pytest\n",
    "\n",
    "import tvm\n",
    "import tvm.testing\n",
    "from tvm import relax\n",
    "from tvm import tir\n",
    "from tvm.ir.base import assert_structural_equal\n",
    "\n",
    "import tvm.script\n",
    "from tvm.script import tir as T, relax as R"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ffb33cfe",
   "metadata": {},
   "source": [
    "## Testing normalization of a basic function"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "76df95fc",
   "metadata": {},
   "source": [
    "The `Normalize` transform decomposes nested operators into separate binding statements, flattening the IR.\n",
    "\n",
    "- Input: a function containing nested `add` and `multiply` operators\n",
    "- Expected output: the decomposed ANF (Administrative Normal Form), in which each operator result is bound to its own variable\n",
    "- Test focus: verifying normalization of the function body\n",
    "- ANF property: every complex expression is decomposed into a sequence of simple expressions bound to variables\n",
    "- Core steps:\n",
    "    1. Manually build a function with nested operators\n",
    "    2. Apply the `Normalize` transform\n",
    "    3. Verify that the result is in ANF\n",
    "- Test method: use `assert_structural_equal` to verify that the transformed IR matches the expected structure"
   ]
  },
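  {
   "cell_type": "markdown",
   "id": "anf-sketch-1",
   "metadata": {},
   "source": [
    "The flattening idea can be sketched in plain Python (illustrative only; this is not TVM's implementation). Nested calls are represented as tuples, leaves as variable-name strings, and every call gets bound to a fresh variable:\n",
    "\n",
    "```python\n",
    "import itertools\n",
    "\n",
    "def to_anf(expr, bindings, names):\n",
    "    \"\"\"Flatten a nested expression into ANF bindings; return its variable.\"\"\"\n",
    "    if isinstance(expr, str):\n",
    "        return expr  # a leaf is already a trivial expression\n",
    "    op, *args = expr\n",
    "    # recursively flatten arguments first, then bind this call\n",
    "    flat_args = [to_anf(a, bindings, names) for a in args]\n",
    "    var = next(names)\n",
    "    bindings.append((var, (op, *flat_args)))\n",
    "    return var\n",
    "\n",
    "names = (f\"gv{i}\" for i in itertools.count())\n",
    "bindings = []\n",
    "body = to_anf((\"multiply\", (\"add\", \"x\", \"x\"), (\"add\", \"x\", \"x\")), bindings, names)\n",
    "# bindings: [('gv0', ('add', 'x', 'x')), ('gv1', ('add', 'x', 'x')),\n",
    "#            ('gv2', ('multiply', 'gv0', 'gv1'))], body: 'gv2'\n",
    "```\n",
    "\n",
    "This mirrors the shape `Normalize` produces for `multiply(add(x, x), add(x, x))`: two `add` bindings followed by a `multiply` on the bound variables."
   ]
  },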
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "10919041",
   "metadata": {},
   "outputs": [],
   "source": [
    "m = tir.Var(\"m\", \"int64\")\n",
    "n = tir.Var(\"n\", \"int64\")\n",
    "x = relax.Var(\"x\", R.Tensor([m, n], \"float16\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "60c45e91",
   "metadata": {},
   "source": [
    "Note: the TVMScript parser automatically normalizes IR written in TVMScript, so we construct the function by hand. Here we build a function containing the nested operation `multiply(add(x, x), add(x, x))`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "08f593b3",
   "metadata": {},
   "outputs": [],
   "source": [
    "mul_add = relax.Function(\n",
    "    [x],\n",
    "    relax.op.multiply(relax.op.add(x, x), relax.op.add(x, x)),\n",
    "    ret_struct_info=R.Tensor(\"float16\", ndim=2),\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "98b0f3e9",
   "metadata": {},
   "source": [
    "Note: the `from_expr` API names a private function (one without a `global_symbol`) `\"main\"`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "806cfe19",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div class=\"highlight\" style=\"background: \"><pre style=\"line-height: 125%;\"><span></span><span style=\"color: #007979; font-style: italic\"># from tvm.script import ir as I</span>\n",
       "<span style=\"color: #007979; font-style: italic\"># from tvm.script import tir as T</span>\n",
       "<span style=\"color: #007979; font-style: italic\"># from tvm.script import relax as R</span>\n",
       "\n",
       "<span style=\"color: #A2F\">@I</span><span style=\"color: #A2F; font-weight: bold\">.</span>ir_module\n",
       "<span style=\"color: #008000; font-weight: bold\">class</span> <span style=\"color: #00F; font-weight: bold\">Module</span>:\n",
       "    <span style=\"color: #A2F\">@R</span><span style=\"color: #A2F; font-weight: bold\">.</span>function(private<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #008000; font-weight: bold\">True</span>)\n",
       "    <span style=\"color: #008000; font-weight: bold\">def</span> <span style=\"color: #00F\">main</span>(x: R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor((<span style=\"color: #BA2121\">&quot;m&quot;</span>, <span style=\"color: #BA2121\">&quot;n&quot;</span>), dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;float16&quot;</span>)) <span style=\"color: #A2F; font-weight: bold\">-&gt;</span> R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor(dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;float16&quot;</span>, ndim<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #008000\">2</span>):\n",
       "        m <span style=\"color: #A2F; font-weight: bold\">=</span> T<span style=\"color: #A2F; font-weight: bold\">.</span>int64()\n",
       "        n <span style=\"color: #A2F; font-weight: bold\">=</span> T<span style=\"color: #A2F; font-weight: bold\">.</span>int64()\n",
       "        <span style=\"color: #008000; font-weight: bold\">return</span> R<span style=\"color: #A2F; font-weight: bold\">.</span>multiply(R<span style=\"color: #A2F; font-weight: bold\">.</span>add(x, x), R<span style=\"color: #A2F; font-weight: bold\">.</span>add(x, x))\n",
       "</pre></div>\n"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "before_mod = tvm.IRModule.from_expr(mul_add)\n",
    "before_mod.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "17eef220",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div class=\"highlight\" style=\"background: \"><pre style=\"line-height: 125%;\"><span></span><span style=\"color: #007979; font-style: italic\"># from tvm.script import ir as I</span>\n",
       "<span style=\"color: #007979; font-style: italic\"># from tvm.script import tir as T</span>\n",
       "<span style=\"color: #007979; font-style: italic\"># from tvm.script import relax as R</span>\n",
       "\n",
       "<span style=\"color: #A2F\">@I</span><span style=\"color: #A2F; font-weight: bold\">.</span>ir_module\n",
       "<span style=\"color: #008000; font-weight: bold\">class</span> <span style=\"color: #00F; font-weight: bold\">Module</span>:\n",
       "    <span style=\"color: #A2F\">@R</span><span style=\"color: #A2F; font-weight: bold\">.</span>function(private<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #008000; font-weight: bold\">True</span>)\n",
       "    <span style=\"color: #008000; font-weight: bold\">def</span> <span style=\"color: #00F\">main</span>(x: R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor((<span style=\"color: #BA2121\">&quot;m&quot;</span>, <span style=\"color: #BA2121\">&quot;n&quot;</span>), dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;float16&quot;</span>)) <span style=\"color: #A2F; font-weight: bold\">-&gt;</span> R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor((<span style=\"color: #BA2121\">&quot;m&quot;</span>, <span style=\"color: #BA2121\">&quot;n&quot;</span>), dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;float16&quot;</span>):\n",
       "        m <span style=\"color: #A2F; font-weight: bold\">=</span> T<span style=\"color: #A2F; font-weight: bold\">.</span>int64()\n",
       "        n <span style=\"color: #A2F; font-weight: bold\">=</span> T<span style=\"color: #A2F; font-weight: bold\">.</span>int64()\n",
       "        gv: R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor((m, n), dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;float16&quot;</span>) <span style=\"color: #A2F; font-weight: bold\">=</span> R<span style=\"color: #A2F; font-weight: bold\">.</span>add(x, x)\n",
       "        gv1: R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor((m, n), dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;float16&quot;</span>) <span style=\"color: #A2F; font-weight: bold\">=</span> R<span style=\"color: #A2F; font-weight: bold\">.</span>add(x, x)\n",
       "        gv2: R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor((m, n), dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;float16&quot;</span>) <span style=\"color: #A2F; font-weight: bold\">=</span> R<span style=\"color: #A2F; font-weight: bold\">.</span>multiply(gv, gv1)\n",
       "        <span style=\"color: #008000; font-weight: bold\">return</span> gv2\n",
       "</pre></div>\n"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "after_mod = relax.transform.Normalize()(before_mod)\n",
    "after_mod.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8c0494d4",
   "metadata": {},
   "source": [
    "## Testing normalization of conditionals (If nodes)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8c47e9a2",
   "metadata": {},
   "source": [
    "- Input: a function containing an If node whose branches contain nested operations\n",
    "- Expected output: a normalized function in which each If branch becomes a SeqExpr and every operator is bound to its own variable\n",
    "- Test focus: verifying that the operations inside the conditional branches are normalized correctly\n",
    "- Mechanism: the Normalize transform ensures that both branches of an If node become normalized expression sequences"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "b4cf5802",
   "metadata": {},
   "outputs": [],
   "source": [
    "cond = relax.Var(\"cond\", R.Tensor([], \"bool\"))\n",
    "x = relax.Var(\"x\", R.Tensor([1], \"float32\"))\n",
    "# TODO(relax-team): add type and shape deduction for IfNode\n",
    "y = relax.Var(\"y\")\n",
    "\n",
    "# Note: the TVMScript parser normalizes IR written in TVMScript, so we construct the function and If node by hand\n",
    "f = relax.Function(\n",
    "    [cond, x],\n",
    "    relax.SeqExpr(\n",
    "        [\n",
    "            relax.BindingBlock(\n",
    "                [\n",
    "                    relax.VarBinding(\n",
    "                        y,\n",
    "                        relax.If(\n",
    "                            cond,\n",
    "                            relax.op.multiply(relax.op.add(x, x), relax.op.add(x, x)),\n",
    "                            relax.op.add(relax.op.multiply(x, x), relax.op.multiply(x, x)),\n",
    "                        ),\n",
    "                    )\n",
    "                ]\n",
    "            )\n",
    "        ],\n",
    "        y,\n",
    "    ),\n",
    "    ret_struct_info=R.Tensor(\"float32\", ndim=1),\n",
    ")\n",
    "\n",
    "before_mod = tvm.IRModule.from_expr(f)\n",
    "after_mod = relax.transform.Normalize()(before_mod)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "654b2c24",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div class=\"highlight\" style=\"background: \"><pre style=\"line-height: 125%;\"><span></span><span style=\"color: #007979; font-style: italic\"># from tvm.script import ir as I</span>\n",
       "<span style=\"color: #007979; font-style: italic\"># from tvm.script import relax as R</span>\n",
       "\n",
       "<span style=\"color: #A2F\">@I</span><span style=\"color: #A2F; font-weight: bold\">.</span>ir_module\n",
       "<span style=\"color: #008000; font-weight: bold\">class</span> <span style=\"color: #00F; font-weight: bold\">Module</span>:\n",
       "    <span style=\"color: #A2F\">@R</span><span style=\"color: #A2F; font-weight: bold\">.</span>function(private<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #008000; font-weight: bold\">True</span>)\n",
       "    <span style=\"color: #008000; font-weight: bold\">def</span> <span style=\"color: #00F\">main</span>(cond: R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor((), dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;bool&quot;</span>), x: R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor((<span style=\"color: #008000\">1</span>,), dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;float32&quot;</span>)) <span style=\"color: #A2F; font-weight: bold\">-&gt;</span> R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor(dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;float32&quot;</span>, ndim<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #008000\">1</span>):\n",
       "        <span style=\"color: #008000; font-weight: bold\">if</span> cond:\n",
       "            y: R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor((<span style=\"color: #008000\">1</span>,), dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;float32&quot;</span>) <span style=\"color: #A2F; font-weight: bold\">=</span> R<span style=\"color: #A2F; font-weight: bold\">.</span>multiply(R<span style=\"color: #A2F; font-weight: bold\">.</span>add(x, x), R<span style=\"color: #A2F; font-weight: bold\">.</span>add(x, x))\n",
       "        <span style=\"color: #008000; font-weight: bold\">else</span>:\n",
       "            y: R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor((<span style=\"color: #008000\">1</span>,), dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;float32&quot;</span>) <span style=\"color: #A2F; font-weight: bold\">=</span> R<span style=\"color: #A2F; font-weight: bold\">.</span>add(R<span style=\"color: #A2F; font-weight: bold\">.</span>multiply(x, x), R<span style=\"color: #A2F; font-weight: bold\">.</span>multiply(x, x))\n",
       "        <span style=\"color: #008000; font-weight: bold\">return</span> y\n",
       "</pre></div>\n"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "before_mod.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "ee2b9769",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div class=\"highlight\" style=\"background: \"><pre style=\"line-height: 125%;\"><span></span><span style=\"color: #007979; font-style: italic\"># from tvm.script import ir as I</span>\n",
       "<span style=\"color: #007979; font-style: italic\"># from tvm.script import relax as R</span>\n",
       "\n",
       "<span style=\"color: #A2F\">@I</span><span style=\"color: #A2F; font-weight: bold\">.</span>ir_module\n",
       "<span style=\"color: #008000; font-weight: bold\">class</span> <span style=\"color: #00F; font-weight: bold\">Module</span>:\n",
       "    <span style=\"color: #A2F\">@R</span><span style=\"color: #A2F; font-weight: bold\">.</span>function(private<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #008000; font-weight: bold\">True</span>)\n",
       "    <span style=\"color: #008000; font-weight: bold\">def</span> <span style=\"color: #00F\">main</span>(cond: R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor((), dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;bool&quot;</span>), x: R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor((<span style=\"color: #008000\">1</span>,), dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;float32&quot;</span>)) <span style=\"color: #A2F; font-weight: bold\">-&gt;</span> R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor((<span style=\"color: #008000\">1</span>,), dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;float32&quot;</span>):\n",
       "        <span style=\"color: #008000; font-weight: bold\">if</span> cond:\n",
       "            gv: R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor((<span style=\"color: #008000\">1</span>,), dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;float32&quot;</span>) <span style=\"color: #A2F; font-weight: bold\">=</span> R<span style=\"color: #A2F; font-weight: bold\">.</span>add(x, x)\n",
       "            gv1: R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor((<span style=\"color: #008000\">1</span>,), dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;float32&quot;</span>) <span style=\"color: #A2F; font-weight: bold\">=</span> R<span style=\"color: #A2F; font-weight: bold\">.</span>add(x, x)\n",
       "            gv2: R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor((<span style=\"color: #008000\">1</span>,), dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;float32&quot;</span>) <span style=\"color: #A2F; font-weight: bold\">=</span> R<span style=\"color: #A2F; font-weight: bold\">.</span>multiply(gv, gv1)\n",
       "            y: R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor((<span style=\"color: #008000\">1</span>,), dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;float32&quot;</span>) <span style=\"color: #A2F; font-weight: bold\">=</span> gv2\n",
       "        <span style=\"color: #008000; font-weight: bold\">else</span>:\n",
       "            gv3: R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor((<span style=\"color: #008000\">1</span>,), dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;float32&quot;</span>) <span style=\"color: #A2F; font-weight: bold\">=</span> R<span style=\"color: #A2F; font-weight: bold\">.</span>multiply(x, x)\n",
       "            gv4: R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor((<span style=\"color: #008000\">1</span>,), dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;float32&quot;</span>) <span style=\"color: #A2F; font-weight: bold\">=</span> R<span style=\"color: #A2F; font-weight: bold\">.</span>multiply(x, x)\n",
       "            gv5: R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor((<span style=\"color: #008000\">1</span>,), dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;float32&quot;</span>) <span style=\"color: #A2F; font-weight: bold\">=</span> R<span style=\"color: #A2F; font-weight: bold\">.</span>add(gv3, gv4)\n",
       "            y: R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor((<span style=\"color: #008000\">1</span>,), dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;float32&quot;</span>) <span style=\"color: #A2F; font-weight: bold\">=</span> gv5\n",
       "        <span style=\"color: #008000; font-weight: bold\">return</span> y\n",
       "</pre></div>\n"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "after_mod.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6c2b33b9",
   "metadata": {},
   "source": [
    "## Testing normalization of IR that is already in ANF"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "31916dbe",
   "metadata": {},
   "source": [
    "- Input: an IR module that is already in ANF\n",
    "- Expected output: unchanged; the Normalize transform has no effect on it\n",
    "- Test focus: verifying that the Normalize transform is idempotent on already-normalized IR\n",
    "- Idempotence: applying the same transform repeatedly yields the same result, and IR that already satisfies the invariants is left untouched"
   ]
  },
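  {
   "cell_type": "markdown",
   "id": "anf-idempotent-sketch",
   "metadata": {},
   "source": [
    "Idempotence can also be sketched in plain Python (illustrative only; not TVM's implementation): a normalizer that introduces fresh bindings only for *nested* calls leaves an already-flat program untouched, so a second application changes nothing:\n",
    "\n",
    "```python\n",
    "import itertools\n",
    "\n",
    "def flatten(expr, out, fresh):\n",
    "    \"\"\"Return `expr` with every nested call bound to a fresh variable in `out`.\"\"\"\n",
    "    if isinstance(expr, str):\n",
    "        return expr\n",
    "    op, *args = expr\n",
    "    flat_args = []\n",
    "    for a in args:\n",
    "        fa = flatten(a, out, fresh)\n",
    "        if isinstance(fa, tuple):  # a nested call: bind it first\n",
    "            v = next(fresh)\n",
    "            out.append((v, fa))\n",
    "            fa = v\n",
    "        flat_args.append(fa)\n",
    "    return (op, *flat_args)\n",
    "\n",
    "def normalize(bindings):\n",
    "    out = []\n",
    "    fresh = (f\"tmp{i}\" for i in itertools.count())\n",
    "    for var, expr in bindings:\n",
    "        out.append((var, flatten(expr, out, fresh)))\n",
    "    return out\n",
    "\n",
    "anf = [(\"gv\", (\"add\", \"x\", \"x\")), (\"gv1\", (\"add\", \"gv\", \"gv\"))]\n",
    "assert normalize(anf) == anf  # already in ANF: a no-op\n",
    "```"
   ]
  },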
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "9bea7290",
   "metadata": {},
   "outputs": [],
   "source": [
    "# The normalize pass should be a no-op on IR that is already in ANF\n",
    "@tvm.script.ir_module\n",
    "class ANFMod1:\n",
    "    @R.function\n",
    "    def f(x: R.Tensor(dtype=\"float32\")):\n",
    "        gv = R.add(x, x)\n",
    "        gv1 = R.add(gv, gv)\n",
    "        gv2 = R.add(gv, gv1)\n",
    "        return (gv, gv2)\n",
    "\n",
    "before_mod = ANFMod1\n",
    "after_mod = relax.transform.Normalize()(before_mod)\n",
    "assert_structural_equal(before_mod, after_mod, map_free_vars=True)\n",
    "\n",
    "# Also test the dataflow-block case\n",
    "@tvm.script.ir_module\n",
    "class ANFMod2:\n",
    "    @R.function\n",
    "    def foo(x: R.Tensor((\"m\", \"n\"), \"float32\")):\n",
    "        m, n = T.int64(), T.int64()\n",
    "        with R.dataflow():\n",
    "            lv0 = R.call_dps_packed(\"test.op.identity\", (x,), R.Tensor((m, n), dtype=\"float32\"))\n",
    "            gv0 = R.call_dps_packed(\n",
    "                \"test.op.identity\", (lv0,), R.Tensor((m, n), dtype=\"float32\")\n",
    "            )\n",
    "            R.output(gv0)\n",
    "        return gv0\n",
    "\n",
    "mod = ANFMod2\n",
    "mod_post = relax.transform.Normalize()(mod)\n",
    "\n",
    "assert_structural_equal(mod, mod_post)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "878608c1",
   "metadata": {},
   "source": [
    "## Testing normalization of a non-leaf SeqExpr body\n",
    "\n",
    "- Input: a sequence expression whose body is not a leaf node\n",
    "- Expected output: the non-leaf body is bound to a variable\n",
    "- Test focus: verifying the handling of non-leaf bodies in a SeqExpr\n",
    "- Detail: normalization binds complex expressions to variables, flattening the expression structure"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "08ef3333",
   "metadata": {},
   "outputs": [],
   "source": [
    "# A SeqExpr with a non-leaf body should also have that body bound to a variable\n",
    "x = relax.Var(\"x\", R.Tensor([], \"int32\"))\n",
    "y = relax.Var(\"y\", R.Tensor([], \"int32\"))\n",
    "seq = relax.SeqExpr([], relax.op.add(x, y))\n",
    "f = relax.Function(\n",
    "    [x, y],\n",
    "    seq,\n",
    "    ret_struct_info=R.Tensor([], \"int32\"),\n",
    ")\n",
    "\n",
    "before_mod = tvm.IRModule.from_expr(f)\n",
    "after_mod = relax.transform.Normalize()(before_mod)\n",
    "\n",
    "@R.function(private=True)\n",
    "def expected(\n",
    "    x: R.Tensor((), dtype=\"int32\"), y: R.Tensor((), dtype=\"int32\")\n",
    ") -> R.Tensor(ndim=0, dtype=\"int32\"):\n",
    "    # normalization inserts a binding like this\n",
    "    z = R.add(x, y)\n",
    "    return z\n",
    "\n",
    "assert_structural_equal(after_mod[\"main\"], expected)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "00673c52",
   "metadata": {},
   "outputs": [],
   "source": [
    "# A function whose body is not a SeqExpr should have its body wrapped in one\n",
    "x = relax.Var(\"x\", R.Tensor([], \"int32\"))\n",
    "y = relax.Var(\"y\", R.Tensor([], \"int32\"))\n",
    "f = relax.Function(\n",
    "    [x, y],\n",
    "    relax.op.add(x, y),\n",
    "    ret_struct_info=R.Tensor([], \"int32\"),\n",
    ")\n",
    "\n",
    "before_mod = tvm.IRModule.from_expr(f)\n",
    "after_mod = relax.transform.Normalize()(before_mod)\n",
    "\n",
    "@R.function(private=True)\n",
    "def expected(\n",
    "    x: R.Tensor((), dtype=\"int32\"), y: R.Tensor((), dtype=\"int32\")\n",
    ") -> R.Tensor(ndim=0, dtype=\"int32\"):\n",
    "    # the result will be a SeqExpr whose body is a variable\n",
    "    z = R.add(x, y)\n",
    "    return z\n",
    "\n",
    "assert_structural_equal(after_mod[\"main\"], expected)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "72470297",
   "metadata": {},
   "source": [
    "## Testing normalization of If-node branches\n",
    "- Input: an If node whose branches are not sequence expressions\n",
    "- Expected output: the branches of the If node are converted into sequence expressions\n",
    "- Test focus: verifying normalization of the internal structure of If nodes\n",
    "- Key point: normalization ensures that both branches of an If node are in ANF"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "c81e483f",
   "metadata": {},
   "outputs": [],
   "source": [
    "# The branches of an if node must be sequence expressions\n",
    "x = relax.Var(\"x\", R.Tensor([], \"int32\"))\n",
    "y = relax.Var(\"y\", R.Tensor([], \"int32\"))\n",
    "# TODO(@relax-team): z has shape () and type TensorType(ndim=0),\n",
    "# but normalization fails to infer these even though it should\n",
    "z = relax.Var(\"z\")\n",
    "cond = relax.Var(\"cond\", R.Tensor([], \"bool\"))\n",
    "plus = relax.op.add(x, y)\n",
    "mult = relax.op.multiply(x, y)\n",
    "if_node = relax.If(cond, plus, mult)\n",
    "seq = relax.SeqExpr([relax.BindingBlock([relax.VarBinding(z, if_node)])], z)\n",
    "f = relax.Function(\n",
    "    [cond, x, y],\n",
    "    seq,\n",
    "    ret_struct_info=R.Tensor([], \"int32\"),\n",
    ")\n",
    "\n",
    "before_mod = tvm.IRModule.from_expr(f)\n",
    "after_mod = relax.transform.Normalize()(before_mod)\n",
    "\n",
    "@R.function(private=True)\n",
    "def expected(\n",
    "    cond: R.Tensor((), dtype=\"bool\"),\n",
    "    x: R.Tensor((), dtype=\"int32\"),\n",
    "    y: R.Tensor((), dtype=\"int32\"),\n",
    ") -> R.Tensor(ndim=0, dtype=\"int32\"):\n",
    "    # the bodies of the branches will be SeqExprs with a binding\n",
    "    if cond:\n",
    "        w = R.add(x, y)\n",
    "        z = w\n",
    "    else:\n",
    "        w = R.multiply(x, y)\n",
    "        z = w\n",
    "    return z\n",
    "\n",
    "assert_structural_equal(after_mod[\"main\"], expected)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ada70e9e",
   "metadata": {},
   "source": [
    "## Testing normalization of If-node conditions\n",
    "- Input: an If node whose condition is a complex expression\n",
    "- Expected output: the complex condition is decomposed into a separate binding statement\n",
    "- Test focus: verifying normalization of the if condition expression\n",
    "- Detail: even the if condition is normalized, ensuring that all complex expressions are decomposed"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "6d320a01",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div class=\"highlight\" style=\"background: \"><pre style=\"line-height: 125%;\"><span></span><span style=\"color: #007979; font-style: italic\"># from tvm.script import ir as I</span>\n",
       "<span style=\"color: #007979; font-style: italic\"># from tvm.script import relax as R</span>\n",
       "\n",
       "<span style=\"color: #A2F\">@I</span><span style=\"color: #A2F; font-weight: bold\">.</span>ir_module\n",
       "<span style=\"color: #008000; font-weight: bold\">class</span> <span style=\"color: #00F; font-weight: bold\">Module</span>:\n",
       "    <span style=\"color: #A2F\">@R</span><span style=\"color: #A2F; font-weight: bold\">.</span>function(private<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #008000; font-weight: bold\">True</span>)\n",
       "    <span style=\"color: #008000; font-weight: bold\">def</span> <span style=\"color: #00F\">main</span>(cond: R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor((), dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;bool&quot;</span>), x: R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor((<span style=\"color: #008000\">1</span>,), dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;float32&quot;</span>)) <span style=\"color: #A2F; font-weight: bold\">-&gt;</span> R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor((<span style=\"color: #008000\">1</span>,), dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;float32&quot;</span>):\n",
       "        gv2: R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor((), dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;bool&quot;</span>) <span style=\"color: #A2F; font-weight: bold\">=</span> (cond,)[<span style=\"color: #008000\">0</span>]\n",
       "        <span style=\"color: #008000; font-weight: bold\">if</span> gv2:\n",
       "            gv: R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor((<span style=\"color: #008000\">1</span>,), dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;float32&quot;</span>) <span style=\"color: #A2F; font-weight: bold\">=</span> R<span style=\"color: #A2F; font-weight: bold\">.</span>add(x, x)\n",
       "            y: R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor((<span style=\"color: #008000\">1</span>,), dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;float32&quot;</span>) <span style=\"color: #A2F; font-weight: bold\">=</span> gv\n",
       "        <span style=\"color: #008000; font-weight: bold\">else</span>:\n",
       "            gv1: R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor((<span style=\"color: #008000\">1</span>,), dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;float32&quot;</span>) <span style=\"color: #A2F; font-weight: bold\">=</span> R<span style=\"color: #A2F; font-weight: bold\">.</span>multiply(x, x)\n",
       "            y: R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor((<span style=\"color: #008000\">1</span>,), dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;float32&quot;</span>) <span style=\"color: #A2F; font-weight: bold\">=</span> gv1\n",
       "        <span style=\"color: #008000; font-weight: bold\">return</span> y\n",
       "</pre></div>\n"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "cond = relax.Var(\"cond\", R.Tensor([], \"bool\"))\n",
    "x = relax.Var(\"x\", R.Tensor([1], \"float32\"))\n",
    "# TODO(relax-team): add type and shape deduction for IfNode\n",
    "y = relax.Var(\"y\")\n",
    "\n",
    "# the condition is wrapped in a tuple and then indexed\n",
    "f = relax.Function(\n",
    "    [cond, x],\n",
    "    relax.SeqExpr(\n",
    "        [\n",
    "            relax.BindingBlock(\n",
    "                [\n",
    "                    relax.VarBinding(\n",
    "                        y,\n",
    "                        relax.If(\n",
    "                            relax.TupleGetItem(relax.Tuple([cond]), 0),\n",
    "                            relax.op.add(x, x),\n",
    "                            relax.op.multiply(x, x),\n",
    "                        ),\n",
    "                    )\n",
    "                ]\n",
    "            )\n",
    "        ],\n",
    "        y,\n",
    "    ),\n",
    "    ret_struct_info=R.Tensor(\"float32\", ndim=1),\n",
    ")\n",
    "\n",
    "before_mod = tvm.IRModule.from_expr(f)\n",
    "after_mod = relax.transform.Normalize()(before_mod)\n",
    "\n",
    "after_mod.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8d05007b",
   "metadata": {},
   "source": [
    "## Testing normalization of TupleGetItem\n",
    "- Input: nested TupleGetItem operations\n",
    "- Expected output: the nested tuple-indexing operations are decomposed into separate binding statements\n",
    "- Test focus: verifying normalization of complex tuple operations\n",
    "- Detail: multi-level nested tuple indexing is broken into multiple simple indexing operations"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "80a69fcb",
   "metadata": {},
   "outputs": [],
   "source": [
    "x = relax.Var(\"x\", R.Tensor([], \"int32\"))\n",
    "f = relax.Function(\n",
    "    [x],\n",
    "    relax.TupleGetItem(\n",
    "        relax.TupleGetItem(\n",
    "            relax.Tuple([relax.Tuple([x])]),\n",
    "            0,\n",
    "        ),\n",
    "        0,\n",
    "    ),\n",
    "    ret_struct_info=R.Tensor([], \"int32\"),\n",
    ")\n",
    "\n",
    "before_mod = tvm.IRModule.from_expr(f)\n",
    "after_mod = relax.transform.Normalize()(before_mod)\n",
    "\n",
    "# TODO: revisit this once we normalize SeqExprs too (as part of normalization?)\n",
    "# We don't use the parser here because writing this out correctly yields\n",
    "# *one* binding block, whereas the normalized version has *two*\n",
    "idx_var = relax.Var(\"idx_var\", R.Tuple([R.Tensor([], \"int32\")]))\n",
    "ret_var = relax.Var(\"ret\", R.Tensor([], \"int32\"))\n",
    "expected_f = relax.Function(\n",
    "    [x],\n",
    "    relax.SeqExpr(\n",
    "        [\n",
    "            relax.BindingBlock(\n",
    "                [\n",
    "                    relax.VarBinding(\n",
    "                        idx_var, relax.TupleGetItem(relax.Tuple([relax.Tuple([x])]), 0)\n",
    "                    )\n",
    "                ]\n",
    "            ),\n",
    "            relax.BindingBlock([relax.VarBinding(ret_var, relax.TupleGetItem(idx_var, 0))]),\n",
    "        ],\n",
    "        ret_var,\n",
    "    ),\n",
    "    ret_struct_info=R.Tensor([], \"int32\"),\n",
    ")\n",
    "expected_mod = tvm.IRModule.from_expr(expected_f)\n",
    "# Apply normalization to fill in the type and shape annotations (tedious otherwise)\n",
    "final_mod = relax.transform.Normalize()(expected_mod)\n",
    "\n",
    "assert_structural_equal(after_mod, final_mod)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1970989d",
   "metadata": {},
   "source": [
    "## Testing normalization that merges adjacent blocks\n",
    "- Input: a function containing several adjacent dataflow and binding blocks\n",
    "- Expected output: adjacent blocks of the same kind are merged, and variable references are normalized\n",
    "- Test focus: verifying the block-merging logic\n",
    "- Strategy: the Normalize transform merges adjacent blocks of the same kind, reducing the number of blocks in the IR"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "e9df26ab",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div class=\"highlight\" style=\"background: \"><pre style=\"line-height: 125%;\"><span></span><span style=\"color: #007979; font-style: italic\"># from tvm.script import ir as I</span>\n",
       "<span style=\"color: #007979; font-style: italic\"># from tvm.script import relax as R</span>\n",
       "\n",
       "<span style=\"color: #A2F\">@I</span><span style=\"color: #A2F; font-weight: bold\">.</span>ir_module\n",
       "<span style=\"color: #008000; font-weight: bold\">class</span> <span style=\"color: #00F; font-weight: bold\">Module</span>:\n",
       "    <span style=\"color: #A2F\">@R</span><span style=\"color: #A2F; font-weight: bold\">.</span>function(private<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #008000; font-weight: bold\">True</span>)\n",
       "    <span style=\"color: #008000; font-weight: bold\">def</span> <span style=\"color: #00F\">main</span>(x: R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor((), dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;int32&quot;</span>)) <span style=\"color: #A2F; font-weight: bold\">-&gt;</span> R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor((), dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;int32&quot;</span>):\n",
       "        <span style=\"color: #008000; font-weight: bold\">with</span> R<span style=\"color: #A2F; font-weight: bold\">.</span>dataflow():\n",
       "            v0: R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor((), dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;int32&quot;</span>) <span style=\"color: #A2F; font-weight: bold\">=</span> x\n",
       "            v1: R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor((), dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;int32&quot;</span>) <span style=\"color: #A2F; font-weight: bold\">=</span> v0\n",
       "            R<span style=\"color: #A2F; font-weight: bold\">.</span>output(v0, v1)\n",
       "        v2: R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor((), dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;int32&quot;</span>) <span style=\"color: #A2F; font-weight: bold\">=</span> v1\n",
       "        v3: R<span style=\"color: #A2F; font-weight: bold\">.</span>Tensor((), dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;int32&quot;</span>) <span style=\"color: #A2F; font-weight: bold\">=</span> v2\n",
       "        <span style=\"color: #008000; font-weight: bold\">return</span> v3\n",
       "</pre></div>\n"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "x = relax.Var(\"x\", R.Tensor([], \"int32\"))\n",
    "v0 = relax.Var(\"v0\", R.Tensor([], \"int32\"))\n",
    "v1 = relax.Var(\"v1\", R.Tensor([], \"int32\"))\n",
    "v2 = relax.Var(\"v2\", R.Tensor([], \"int32\"))\n",
    "v3 = relax.Var(\"v3\", R.Tensor([], \"int32\"))\n",
    "f = relax.Function(\n",
    "    [x],\n",
    "    relax.SeqExpr(\n",
    "        [\n",
    "            relax.DataflowBlock([relax.VarBinding(v0, x)]),\n",
    "            relax.DataflowBlock([relax.VarBinding(v1, v0)]),\n",
    "            relax.BindingBlock([relax.VarBinding(v2, v1)]),\n",
    "            relax.BindingBlock([relax.VarBinding(v3, v2)]),\n",
    "        ],\n",
    "        v3,\n",
    "    ),\n",
    "    ret_struct_info=R.Tensor([], \"int32\"),\n",
    ")\n",
    "\n",
    "after_mod = relax.transform.Normalize()(tvm.IRModule.from_expr(f))\n",
    "after_mod.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "64c736a3",
   "metadata": {},
   "source": [
    "## Testing normalization of nested sequence expressions\n",
    "- Input: a function containing nested `SeqExpr`s\n",
    "- Expected output: the nesting is flattened and all bindings are hoisted to the top level\n",
    "- Focus: verify the flattening logic for nested sequences\n",
    "- Mechanism: the `Normalize` transform recursively processes nested sequence expressions, flattening them into a single level of bindings"
   ]
  },
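  {
   "cell_type": "markdown",
   "id": "a3f9b2c3",
   "metadata": {},
   "source": [
    "The flattening can be sketched with a toy representation (not TVM's implementation): a hypothetical `seq` is a `(bindings, result)` pair, and a binding whose value is itself a seq has its inner bindings hoisted first."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a3f9b2c4",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy model of flattening nested binding sequences (not TVM's code).\n",
    "# A seq is (bindings, result); a binding's value may itself be a seq.\n",
    "def flatten_seq(seq):\n",
    "    bindings, result = seq\n",
    "    flat = []\n",
    "    for name, value in bindings:\n",
    "        if isinstance(value, tuple):\n",
    "            # Nested seq: hoist its bindings, then bind its result.\n",
    "            inner_bindings, inner_result = flatten_seq(value)\n",
    "            flat.extend(inner_bindings)\n",
    "            flat.append((name, inner_result))\n",
    "        else:\n",
    "            flat.append((name, value))\n",
    "    return flat, result\n",
    "\n",
    "# Mirrors the test below: y is bound to a nested seq whose result is z.\n",
    "flatten_seq(([('x', 'const(1)'), ('y', ([('z', 'const(2)')], 'z'))], 'y'))"
   ]
  },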
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "466dfba5",
   "metadata": {},
   "outputs": [],
   "source": [
    "x = relax.Var(\"x\", R.Tensor([], \"int32\"))\n",
    "y = relax.Var(\"y\", R.Tensor([], \"int32\"))\n",
    "z = relax.Var(\"z\", R.Tensor([], \"int32\"))\n",
    "seq = relax.SeqExpr(\n",
    "    [\n",
    "        relax.BindingBlock(\n",
    "            [\n",
    "                relax.VarBinding(x, relax.const(1)),\n",
    "                relax.VarBinding(\n",
    "                    y,\n",
    "                    relax.SeqExpr(\n",
    "                        [relax.BindingBlock([relax.VarBinding(z, relax.const(2))])],\n",
    "                        z,\n",
    "                    ),\n",
    "                ),\n",
    "            ]\n",
    "        )\n",
    "    ],\n",
    "    y,\n",
    ")\n",
    "\n",
    "f = relax.Function(\n",
    "    [],\n",
    "    seq,\n",
    "    ret_struct_info=R.Tensor([], \"int32\"),\n",
    ")\n",
    "after_mod = relax.transform.Normalize()(tvm.IRModule.from_expr(f))\n",
    "\n",
    "@R.function(private=True)\n",
    "def expected():\n",
    "    x = relax.const(1)\n",
    "    z = relax.const(2)\n",
    "    y = z\n",
    "    return y\n",
    "\n",
    "assert_structural_equal(after_mod[\"main\"], expected)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d29fb339",
   "metadata": {},
   "source": [
    "## Testing normalization of nested sequence expressions with dataflow blocks\n",
    "- Input: a nested `SeqExpr` containing a `DataflowBlock`\n",
    "- Expected output: the nesting is flattened while the dataflow block is preserved\n",
    "- Focus: verify the handling of nested dataflow blocks\n",
    "- Challenge: the nesting must be flattened without breaking the semantics of the dataflow block"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "91321934",
   "metadata": {},
   "outputs": [],
   "source": [
    "x = relax.Var(\"x\", R.Tensor([], \"int32\"))\n",
    "y = relax.Var(\"y\", R.Tensor([], \"int32\"))\n",
    "z = relax.Var(\"z\", R.Tensor([], \"int32\"))\n",
    "q = relax.Var(\"q\", R.Tensor([], \"int32\"))\n",
    "w = relax.DataflowVar(\"w\", R.Tensor([], \"int32\"))\n",
    "u = relax.Var(\"u\", R.Tensor([], \"int32\"))\n",
    "seq = relax.SeqExpr(\n",
    "    [\n",
    "        relax.BindingBlock(\n",
    "            [\n",
    "                relax.VarBinding(x, relax.const(1)),\n",
    "                relax.VarBinding(\n",
    "                    y,\n",
    "                    relax.SeqExpr(\n",
    "                        [\n",
    "                            relax.BindingBlock([relax.VarBinding(q, relax.const(2))]),\n",
    "                            relax.DataflowBlock(\n",
    "                                [\n",
    "                                    relax.VarBinding(w, q),\n",
    "                                    relax.VarBinding(u, w),\n",
    "                                ]\n",
    "                            ),\n",
    "                            relax.BindingBlock([relax.VarBinding(z, u)]),\n",
    "                        ],\n",
    "                        z,\n",
    "                    ),\n",
    "                ),\n",
    "            ]\n",
    "        )\n",
    "    ],\n",
    "    y,\n",
    ")\n",
    "\n",
    "f = relax.Function(\n",
    "    [],\n",
    "    seq,\n",
    "    ret_struct_info=R.Tensor([], \"int32\"),\n",
    ")\n",
    "after_mod = relax.transform.Normalize()(tvm.IRModule.from_expr(f))\n",
    "\n",
    "@R.function(private=True)\n",
    "def expected():\n",
    "    x = relax.const(1)\n",
    "    q = relax.const(2)\n",
    "    with R.dataflow():\n",
    "        w = q\n",
    "        u = w\n",
    "        R.output(u)\n",
    "    z = u\n",
    "    y = z\n",
    "    return y\n",
    "\n",
    "assert_structural_equal(after_mod[\"main\"], expected)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6ad327e1",
   "metadata": {},
   "source": [
    "## Testing normalization of deeply nested sequence expressions\n",
    "- Input: a `SeqExpr` nested several levels deep\n",
    "- Expected output: the deep nesting is fully flattened\n",
    "- Focus: verify the handling of deep nesting\n",
    "- Boundary case: ensure the normalization can handle nesting of arbitrary depth"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "26e8a021",
   "metadata": {},
   "outputs": [],
   "source": [
    "x = relax.Var(\"x\", R.Tensor([], \"int32\"))\n",
    "y = relax.Var(\"y\", R.Tensor([], \"int32\"))\n",
    "z = relax.Var(\"z\", R.Tensor([], \"int32\"))\n",
    "u = relax.Var(\"u\", R.Tensor([], \"int32\"))\n",
    "v = relax.Var(\"v\", R.Tensor([], \"int32\"))\n",
    "w = relax.Var(\"w\", R.Tensor([], \"int32\"))\n",
    "_ = relax.Var(\"_\", R.Tensor([], \"int32\"))\n",
    "seq = relax.SeqExpr(\n",
    "    [\n",
    "        relax.BindingBlock(\n",
    "            [\n",
    "                relax.VarBinding(x, relax.const(1)),\n",
    "                relax.VarBinding(\n",
    "                    y,\n",
    "                    relax.SeqExpr(\n",
    "                        [\n",
    "                            relax.BindingBlock(\n",
    "                                [\n",
    "                                    relax.VarBinding(\n",
    "                                        z,\n",
    "                                        relax.SeqExpr(\n",
    "                                            [\n",
    "                                                relax.BindingBlock(\n",
    "                                                    [\n",
    "                                                        relax.VarBinding(u, relax.const(2)),\n",
    "                                                        relax.MatchCast(\n",
    "                                                            _, u, R.Tensor([], \"int32\")\n",
    "                                                        ),\n",
    "                                                        relax.VarBinding(v, u),\n",
    "                                                        relax.MatchCast(\n",
    "                                                            w, v, R.Tensor([], \"int32\")\n",
    "                                                        ),\n",
    "                                                    ]\n",
    "                                                )\n",
    "                                            ],\n",
    "                                            w,\n",
    "                                        ),\n",
    "                                    )\n",
    "                                ]\n",
    "                            )\n",
    "                        ],\n",
    "                        z,\n",
    "                    ),\n",
    "                ),\n",
    "            ]\n",
    "        )\n",
    "    ],\n",
    "    y,\n",
    ")\n",
    "\n",
    "f = relax.Function(\n",
    "    [],\n",
    "    seq,\n",
    "    ret_struct_info=R.Tensor([], \"int32\"),\n",
    ")\n",
    "after_mod = relax.transform.Normalize()(tvm.IRModule.from_expr(f))\n",
    "\n",
    "@R.function(private=True)\n",
    "def expected():\n",
    "    x = relax.const(1)\n",
    "    u = relax.const(2)\n",
    "    _ = R.match_cast(u, R.Tensor((), \"int32\"))\n",
    "    v = u\n",
    "    w = R.match_cast(v, R.Tensor((), \"int32\"))\n",
    "    z = w\n",
    "    y = z\n",
    "    return y\n",
    "\n",
    "assert_structural_equal(after_mod[\"main\"], expected)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f8136b79",
   "metadata": {},
   "source": [
    "## Testing that nesting a non-dataflow block inside a dataflow block is an error\n",
    "- This test is marked as expected to fail\n",
    "- Verifies: nesting an ordinary binding block inside a dataflow block must be rejected"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "400e3f4f",
   "metadata": {},
   "outputs": [],
   "source": [
    "@pytest.mark.xfail()\n",
    "# The xfail marker means this test is expected to fail: the current implementation\n",
    "# does not allow nesting an ordinary BindingBlock inside a DataflowBlock.\n",
    "# This test case verifies that structural constraint of the IR.\n",
    "def test_nesting_non_dataflow_in_dataflow_error():\n",
    "    x = relax.DataflowVar(\"x\", R.Tensor([], \"int32\"))\n",
    "    y = relax.Var(\"y\", R.Tensor([], \"int32\"))\n",
    "    z = relax.Var(\"z\", R.Tensor([], \"int32\"))\n",
    "    seq = relax.SeqExpr(\n",
    "        [\n",
    "            relax.DataflowBlock(\n",
    "                [\n",
    "                    relax.VarBinding(x, relax.const(1)),\n",
    "                    relax.VarBinding(\n",
    "                        y,\n",
    "                        relax.SeqExpr(\n",
    "                            [relax.BindingBlock([relax.VarBinding(z, relax.const(2))])],\n",
    "                            z,\n",
    "                        ),\n",
    "                    ),\n",
    "                ]\n",
    "            )\n",
    "        ],\n",
    "        y,\n",
    "    )\n",
    "    f = relax.Function(\n",
    "        [],\n",
    "        seq,\n",
    "        ret_struct_info=R.Tensor([], \"int32\"),\n",
    "    )\n",
    "    relax.transform.Normalize()(tvm.IRModule.from_expr(f))\n",
    "    # This should fail because there is an ordinary binding block inside the DataflowBlock."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ccea3b64",
   "metadata": {},
   "source": [
    "## Testing removal of void-typed variable uses\n",
    "- Verifies: all empty tuples should be constructed inline\n",
    "- Detail: for readability, TVMScript hides the bindings of void-typed variables, but Relax represents void with an empty tuple\n",
    "- Handling: normalization replaces every use of a void-typed variable with an inline `R.tuple()`"
   ]
  },
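  {
   "cell_type": "markdown",
   "id": "a3f9b2c5",
   "metadata": {},
   "source": [
    "The substitution can be sketched with a toy model (not TVM's implementation): a hypothetical binding is a `(name, value, type)` triple, and a result variable whose binding has void type is replaced by an inline unit value."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a3f9b2c6",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy model of inlining void-typed results (not TVM's code).\n",
    "def inline_void_result(bindings, result):\n",
    "    # Collect variables bound to a value of void (empty-tuple) type.\n",
    "    void_vars = {name for name, _value, ty in bindings if ty == 'void'}\n",
    "    # Replace a void-typed result variable with an inline unit value.\n",
    "    return '()' if result in void_vars else result\n",
    "\n",
    "inline_void_result([('x', 'assert_op(cond)', 'void')], 'x')"
   ]
  },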
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "69cec0b6",
   "metadata": {},
   "outputs": [],
   "source": [
    "def test_remove_usage_of_void_type_variables():\n",
    "    \"\"\"All empty tuples should be constructed inline.\n",
    "\n",
    "    For readability, TVMScript hides the bindings of void-typed variables:\n",
    "    it prints `R.assert_op(condition)` rather than\n",
    "    `void_var: R.Tuple([]) = R.assert_op(condition)`. However, Relax follows\n",
    "    the standard convention of functional languages and represents void as an\n",
    "    empty tuple. Because that empty tuple may legitimately be used later in\n",
    "    the function, `void_var` may still require a binding.\n",
    "\n",
    "    This is avoided by normalizing every use of a void-typed variable into an\n",
    "    inline `R.tuple()`.\n",
    "    \"\"\"\n",
    "    x = relax.Var(\"x\", R.Tuple([]))\n",
    "    bindings = [\n",
    "        relax.VarBinding(x, R.assert_op(R.const(True, \"bool\"))),\n",
    "    ]\n",
    "    seq = relax.SeqExpr([relax.BindingBlock(bindings)], x)\n",
    "    before = relax.Function([], seq, ret_struct_info=R.Tuple([]), is_pure=False)\n",
    "\n",
    "    after = relax.transform.Normalize()(tvm.IRModule({\"main\": before}))[\"main\"]\n",
    "\n",
    "    @R.function(private=True, pure=False)\n",
    "    def expected():\n",
    "        x = R.assert_op(R.const(True, \"bool\"))\n",
    "        return R.tuple()\n",
    "\n",
    "    assert_structural_equal(expected, after)\n",
    "\n",
    "test_remove_usage_of_void_type_variables()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "207025d2",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "py313",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.13.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
