{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "7c01e6d5",
   "metadata": {},
   "source": [
    "# `AnnotateTIROpPattern`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "9df44c8e",
   "metadata": {},
   "outputs": [],
   "source": [
    "import pytest\n",
    "import tvm\n",
    "import tvm.script\n",
    "import tvm.testing\n",
    "from tvm import relax\n",
    "from tvm.script import tir as T\n",
    "from tvm_book.op.attr_types import OpPatternKind"
   ]
  },
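  {
   "cell_type": "markdown",
   "id": "0a1b2c3d",
   "metadata": {},
   "source": [
    "The assertions below compare against `OpPatternKind` imported from `tvm_book.op.attr_types`. As orientation, the next cell sketches the pattern lattice; the enum `PatternKindSketch` is illustrative only, assuming the standard numbering of TVM's `OpPatternKind`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1b2c3d4e",
   "metadata": {},
   "outputs": [],
   "source": [
    "from enum import IntEnum\n",
    "\n",
    "# Illustrative mirror of TVM's OpPatternKind numbering (assumption:\n",
    "# tvm_book.op.attr_types.OpPatternKind follows the same convention).\n",
    "class PatternKindSketch(IntEnum):\n",
    "    kElemWise = 0         # out[i] depends only on in[i]\n",
    "    kBroadcast = 1        # lower-rank input broadcast, axes kept in order\n",
    "    kInjective = 2        # each output axis maps to a single input axis\n",
    "    kCommReduce = 3       # commutative reduction\n",
    "    kOutEWiseFusable = 4  # complex op; can fuse elementwise ops into its output\n",
    "    kTuple = 7            # tuple of independently annotated fields\n",
    "    kOpaque = 8           # cannot be fused\n",
    "\n",
    "# Lower levels are easier to fuse; AnnotateTIROpPattern assigns the\n",
    "# weakest level that soundly describes each PrimFunc.\n",
    "[(k.name, k.value) for k in PatternKindSketch]"
   ]
  },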
  {
   "cell_type": "markdown",
   "id": "d9478a5a",
   "metadata": {},
   "source": [
    "## `kOutEWiseFusable` 模式"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0278d5dc",
   "metadata": {},
   "source": [
    "测试复杂运算算子模式的注解：\n",
    "\n",
    "- 验证矩阵乘法等复杂算子是否被正确识别为 `kOutEWiseFusable` 模式。\n",
    "- 这类算子可以将元素级算子融合到其输出中，但不能链接另一个复杂算子。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "45924c94",
   "metadata": {},
   "outputs": [],
   "source": [
    "@tvm.script.ir_module\n",
    "class InputModule:\n",
    "    @T.prim_func\n",
    "    def tir_matmul(x: T.handle, y: T.handle, z: T.handle) -> None:\n",
    "        T.func_attr({\"global_symbol\": \"tir_matmul\"})\n",
    "        m = T.int32()\n",
    "        n = T.int32()\n",
    "        k = T.int32()\n",
    "        A = T.match_buffer(x, (m, n))\n",
    "        B = T.match_buffer(y, (n, k))\n",
    "        C = T.match_buffer(z, (m, k))\n",
    "\n",
    "        for i, j, k in T.grid(m, k, n):\n",
    "            with T.block(\"matmul\"):\n",
    "                vi, vj, vk = T.axis.remap(\"SSR\", [i, j, k])\n",
    "                with T.init():\n",
    "                    C[vi, vj] = T.float32(0)\n",
    "                C[vi, vj] = C[vi, vj] + A[vi, vk] * B[vk, vj]\n",
    "\n",
    "mod = InputModule\n",
    "new_mod = relax.transform.AnnotateTIROpPattern()(mod)\n",
    "assert new_mod[\"tir_matmul\"].attrs[\"op_pattern\"] == OpPatternKind.kOutEWiseFusable"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c3ede8aa",
   "metadata": {},
   "source": [
    "### 测试带类型转换的复杂运算算子模式注解\n",
    "\n",
    "验证带有不同类型转换模式的矩阵乘法是否仍被正确识别为 `kOutEWiseFusable` 模式。测试多种类型转换场景：直接变换乘积结果、变换输入后相乘、嵌套变换。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "a5cdb2b0",
   "metadata": {},
   "outputs": [],
   "source": [
    "def test(cast_pattern):\n",
    "    @tvm.script.ir_module\n",
    "    class InputModule:\n",
    "        @T.prim_func\n",
    "        def tir_matmul(x: T.handle, y: T.handle, z: T.handle) -> None:\n",
    "            T.func_attr({\"global_symbol\": \"tir_matmul\"})\n",
    "            m = T.int32()\n",
    "            n = T.int32()\n",
    "            k = T.int32()\n",
    "            A = T.match_buffer(x, (m, n), \"float16\")\n",
    "            B = T.match_buffer(y, (n, k), \"float16\")\n",
    "            C = T.match_buffer(z, (m, k), \"float32\")\n",
    "\n",
    "            for i, j, k in T.grid(m, k, n):\n",
    "                with T.block(\"matmul\"):\n",
    "                    vi, vj, vk = T.axis.remap(\"SSR\", [i, j, k])\n",
    "                    with T.init():\n",
    "                        C[vi, vj] = T.float32(0)\n",
    "                    C[vi, vj] = C[vi, vj] + cast_pattern(A[vi, vk], B[vk, vj])\n",
    "    return InputModule\n",
    "\n",
    "args = [\n",
    "    lambda a, b: T.Cast(\"float32\", a * b),\n",
    "    lambda a, b: T.Cast(\"float32\", a) * T.Cast(\"float32\", b),\n",
    "    lambda a, b: T.Cast(\"float32\", T.Cast(\"float16\", a * b)),\n",
    "]\n",
    "\n",
    "for cast_pattern in args:\n",
    "    mod = test(cast_pattern)\n",
    "    new_mod = relax.transform.AnnotateTIROpPattern()(mod)\n",
    "    assert new_mod[\"tir_matmul\"].attrs[\"op_pattern\"] == OpPatternKind.kOutEWiseFusable"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f527f4e4",
   "metadata": {},
   "source": [
    "### 测试带有整型变量签名的复杂运算算子模式注解"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5ffedcf3",
   "metadata": {},
   "source": [
    "验证带有显式整型变量参数的矩阵乘法是否被正确识别为`kOutEWiseFusable`模式。此测试确保即使函数签名中包含显式维度参数，模式识别仍能正常工作。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "5df6d2f1",
   "metadata": {},
   "outputs": [],
   "source": [
    "@tvm.script.ir_module\n",
    "class InputModule:\n",
    "    @T.prim_func\n",
    "    def tir_matmul(x: T.handle, y: T.handle, z: T.handle, m: T.int64, n: T.int64, k: T.int64):\n",
    "        T.func_attr({\"global_symbol\": \"tir_matmul\"})\n",
    "        A = T.match_buffer(x, (m, n))\n",
    "        B = T.match_buffer(y, (n, k))\n",
    "        C = T.match_buffer(z, (m, k))\n",
    "\n",
    "        for i, j, k in T.grid(m, k, n):\n",
    "            with T.block(\"matmul\"):\n",
    "                vi, vj, vk = T.axis.remap(\"SSR\", [i, j, k])\n",
    "                with T.init():\n",
    "                    C[vi, vj] = T.float32(0)\n",
    "                C[vi, vj] = C[vi, vj] + A[vi, vk] * B[vk, vj]\n",
    "\n",
    "mod = InputModule\n",
    "new_mod = relax.transform.AnnotateTIROpPattern()(mod)\n",
    "assert new_mod[\"tir_matmul\"].attrs[\"op_pattern\"] == OpPatternKind.kOutEWiseFusable"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2c6f265c",
   "metadata": {},
   "source": [
    "## `kCommReduce` 模式"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "672b6609",
   "metadata": {},
   "source": [
    "验证求和等归约操作是否被正确识别为`kCommReduce`模式。这类算子具有交换性，用于对输入数据进行汇总计算。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "f6cf9419",
   "metadata": {},
   "outputs": [],
   "source": [
    "@tvm.script.ir_module\n",
    "class InputModule:\n",
    "    @T.prim_func\n",
    "    def sum(x: T.handle, y: T.handle) -> None:\n",
    "        T.func_attr({\"global_symbol\": \"elemwise\"})\n",
    "        A = T.match_buffer(x, (16, 16))\n",
    "        B = T.match_buffer(y, (16,))\n",
    "\n",
    "        for i, j in T.grid(16, 16):\n",
    "            with T.block(\"matmul\"):\n",
    "                vi, vj = T.axis.remap(\"SR\", [i, j])\n",
    "                with T.init():\n",
    "                    B[vi] = 0.0\n",
    "                B[vi] += A[vi, vj]\n",
    "\n",
    "mod = InputModule\n",
    "new_mod = relax.transform.AnnotateTIROpPattern()(mod)\n",
    "assert new_mod[\"sum\"].attrs[\"op_pattern\"] == OpPatternKind.kCommReduce"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "00f59286",
   "metadata": {},
   "source": [
    "## `kElemWise` 模式"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "411bcab7",
   "metadata": {},
   "source": [
    "验证简单的元素级操作（如加法）是否被正确识别为`kElemWise`模式。这类算子对输入张量的每个元素进行独立计算。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "706913bc",
   "metadata": {},
   "outputs": [],
   "source": [
    "@tvm.script.ir_module\n",
    "class InputModule:\n",
    "    @T.prim_func\n",
    "    def elemwise(x: T.handle, y: T.handle) -> None:\n",
    "        T.func_attr({\"global_symbol\": \"elemwise\"})\n",
    "        A = T.match_buffer(x, (16, 16))\n",
    "        B = T.match_buffer(y, (16, 16))\n",
    "\n",
    "        for i, j in T.grid(16, 16):\n",
    "            with T.block(\"matmul\"):\n",
    "                vi, vj = T.axis.remap(\"SS\", [i, j])\n",
    "                B[vi, vj] = A[vi, vj] + 1.0\n",
    "\n",
    "mod = InputModule\n",
    "new_mod = relax.transform.AnnotateTIROpPattern()(mod)\n",
    "assert new_mod[\"elemwise\"].attrs[\"op_pattern\"] == OpPatternKind.kElemWise"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a9e69f8d",
   "metadata": {},
   "source": [
    "## `kBroadcast` 模式"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2ba0aa15",
   "metadata": {},
   "source": [
    "验证广播操作是否被正确识别为 `kBroadcast` 模式。这类算子可以将低维输入广播到高维输出，轴必须按顺序排列。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "0091b862",
   "metadata": {},
   "outputs": [],
   "source": [
    "@tvm.script.ir_module\n",
    "class InputModule:\n",
    "    @T.prim_func\n",
    "    def broadcast(x: T.handle, y: T.handle) -> None:\n",
    "        T.func_attr({\"global_symbol\": \"elemwise\"})\n",
    "        A = T.match_buffer(x, (16, 16))\n",
    "        B = T.match_buffer(y, (16, 16, 16, 16))\n",
    "\n",
    "        for i0, j0, i1, j1 in T.grid(16, 16, 16, 16):\n",
    "            with T.block(\"matmul\"):\n",
    "                vi0, vj0, vi1, vj1 = T.axis.remap(\"SSSS\", [i0, j0, i1, j1])\n",
    "                B[vi0, vj0, vi1, vj1] = A[vj0, vj1]\n",
    "\n",
    "mod = InputModule\n",
    "new_mod = relax.transform.AnnotateTIROpPattern()(mod)\n",
    "assert new_mod[\"broadcast\"].attrs[\"op_pattern\"] == OpPatternKind.kBroadcast"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5b2c7991",
   "metadata": {},
   "source": [
    "## `kInjective` 模式"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "50199189",
   "metadata": {},
   "source": [
    "验证单射算子是否被正确识别为 `kInjective` 模式。这类算子的输出轴可以单射映射到单个输入轴，可安全地与其他单射算子和归约算子融合。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "d88c645e",
   "metadata": {},
   "outputs": [],
   "source": [
    "@tvm.script.ir_module\n",
    "class InputModule:\n",
    "    @T.prim_func\n",
    "    def injective(x: T.handle, y: T.handle) -> None:\n",
    "        T.func_attr({\"global_symbol\": \"elemwise\"})\n",
    "        A = T.match_buffer(x, (4, 4, 4, 4))\n",
    "        B = T.match_buffer(y, (16, 16))\n",
    "\n",
    "        for i, j in T.grid(16, 16):\n",
    "            with T.block(\"matmul\"):\n",
    "                vi, vj = T.axis.remap(\"SS\", [i, j])\n",
    "                B[vi, vj] = A[vi // 4, vj // 4, vi % 4, vj % 4]\n",
    "\n",
    "mod = InputModule\n",
    "new_mod = relax.transform.AnnotateTIROpPattern()(mod)\n",
    "assert new_mod[\"injective\"].attrs[\"op_pattern\"] == OpPatternKind.kInjective"
   ]
  },
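  {
   "cell_type": "markdown",
   "id": "2c3d4e5f",
   "metadata": {},
   "source": [
    "The index map above, `B[vi, vj] = A[vi // 4, vj // 4, vi % 4, vj % 4]`, is a pure data rearrangement. A NumPy sketch of the same mapping (the array `a` is hypothetical demo data, not part of the test):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3d4e5f6a",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "a = np.arange(4 * 4 * 4 * 4, dtype=\"float32\").reshape(4, 4, 4, 4)\n",
    "\n",
    "# Element-by-element loop that mirrors the TIR block body.\n",
    "b_loop = np.empty((16, 16), dtype=\"float32\")\n",
    "for vi in range(16):\n",
    "    for vj in range(16):\n",
    "        b_loop[vi, vj] = a[vi // 4, vj // 4, vi % 4, vj % 4]\n",
    "\n",
    "# The same map as a transpose followed by a reshape: each output\n",
    "# element comes from exactly one input element, i.e. it is injective.\n",
    "b_vec = a.transpose(0, 2, 1, 3).reshape(16, 16)\n",
    "assert np.array_equal(b_loop, b_vec)"
   ]
  },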
  {
   "cell_type": "markdown",
   "id": "4fc15e4b",
   "metadata": {},
   "source": [
    "## 测试偏置加法算子模式的注解\n",
    "\n",
    "验证偏置加法算子是否被正确识别为 `kElemWise` 模式。偏置加法是一种特殊的元素级算子，其中一个输入被广播到另一个输入的维度。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "56d1ddec",
   "metadata": {},
   "outputs": [],
   "source": [
    "@tvm.script.ir_module\n",
    "class InputModule:\n",
    "    @T.prim_func\n",
    "    def tir_bias_add(\n",
    "        A: T.Buffer((1, 1000), \"float32\"),\n",
    "        B: T.Buffer((1000,), \"float32\"),\n",
    "        C: T.Buffer((1, 1000), \"float32\"),\n",
    "    ) -> None:\n",
    "        # 函数属性字典\n",
    "        T.func_attr({\"global_symbol\": \"tir_bias_add\", \"tir.noalias\": True})\n",
    "        # body\n",
    "        # with T.block(\"root\")\n",
    "        for i0, i1 in T.grid(1, 1000):\n",
    "            with T.block(\"T_add\"):\n",
    "                ax0, ax1 = T.axis.remap(\"SS\", [i0, i1])\n",
    "                T.reads(A[ax0, ax1], B[ax1])\n",
    "                T.writes(C[ax0, ax1])\n",
    "                C[ax0, ax1] = A[ax0, ax1] + B[ax1]\n",
    "\n",
    "mod = InputModule\n",
    "new_mod = relax.transform.AnnotateTIROpPattern()(mod)\n",
    "assert new_mod[\"tir_bias_add\"].attrs[\"op_pattern\"] == OpPatternKind.kElemWise"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "28abfe48",
   "metadata": {},
   "source": [
    "## 测试带单位维度形状的广播加法算子模式注解\n",
    "\n",
    "验证带有单位维度（`size=1`）的广播加法是否被正确识别为 `kElemWise` 模式。此测试确保优化器能够正确处理常见的广播场景。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "18715c1f",
   "metadata": {},
   "outputs": [],
   "source": [
    "@tvm.script.ir_module\n",
    "class InputModule:\n",
    "    @T.prim_func\n",
    "    def add_with_unit_dim_len_broadcast(\n",
    "        A: T.Buffer((1, 64, 112, 112), \"float32\"),\n",
    "        B: T.Buffer((64, 1, 1), \"float32\"),\n",
    "        C: T.Buffer((1, 64, 112, 112), \"float32\"),\n",
    "    ) -> None:\n",
    "        T.func_attr({\"global_symbol\": \"add5\", \"tir.noalias\": True})\n",
    "        for i0, i1, i2, i3 in T.grid(1, 64, 112, 112):\n",
    "            with T.block(\"T_add\"):\n",
    "                ax0, ax1, ax2, ax3 = T.axis.remap(\"SSSS\", [i0, i1, i2, i3])\n",
    "                T.reads(A[ax0, ax1, ax2, ax3], B[ax1, 0, 0])\n",
    "                T.writes(C[ax0, ax1, ax2, ax3])\n",
    "                C[ax0, ax1, ax2, ax3] = A[ax0, ax1, ax2, ax3] + B[ax1, 0, 0]\n",
    "\n",
    "mod = InputModule\n",
    "new_mod = relax.transform.AnnotateTIROpPattern()(mod)\n",
    "assert new_mod[\"add_with_unit_dim_len_broadcast\"].attrs[\"op_pattern\"] == OpPatternKind.kElemWise\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e24c20f8",
   "metadata": {},
   "source": [
    "## 测试零维元素级加法算子模式注解\n",
    "\n",
    "验证标量（零维张量）与向量的加法是否被正确识别为 `kElemWise` 模式。此测试确保优化器能够正确处理标量与数组的运算。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "a53fdc7d",
   "metadata": {},
   "outputs": [],
   "source": [
    "@tvm.script.ir_module\n",
    "class InputModule:\n",
    "    @T.prim_func\n",
    "    def add_zero_dim(\n",
    "        A: T.Buffer((128,), \"float32\"),\n",
    "        B: T.Buffer((), \"float32\"),\n",
    "        C: T.Buffer((128,), \"float32\"),\n",
    "    ) -> None:\n",
    "        T.func_attr({\"global_symbol\": \"add8\", \"tir.noalias\": True})\n",
    "        for i0 in T.serial(128):\n",
    "            with T.block(\"T_add\"):\n",
    "                ax0 = T.axis.spatial(128, i0)\n",
    "                T.reads(A[ax0], B[()])\n",
    "                T.writes(C[ax0])\n",
    "                C[ax0] = A[ax0] + B[()]\n",
    "\n",
    "mod = InputModule\n",
    "new_mod = relax.transform.AnnotateTIROpPattern()(mod)\n",
    "assert new_mod[\"add_zero_dim\"].attrs[\"op_pattern\"] == OpPatternKind.kElemWise"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4dee2cb1",
   "metadata": {},
   "source": [
    "## 测试池化算子模式的注解\n",
    "\n",
    "验证最大池化算子是否被正确识别为 `kOutEWiseFusable` 模式。池化是一种复杂运算，可以融合元素级算子到其输出中。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "103505a6",
   "metadata": {},
   "outputs": [],
   "source": [
    "@tvm.script.ir_module\n",
    "class InputModule:\n",
    "    @T.prim_func\n",
    "    def max_pool2d(\n",
    "        rxplaceholder_1: T.Buffer((1, 64, 112, 112), \"float32\"),\n",
    "        tensor_1: T.Buffer((1, 64, 56, 56), \"float32\"),\n",
    "    ) -> None:\n",
    "        # 函数属性字典\n",
    "        T.func_attr({\"global_symbol\": \"max_pool2d\", \"T.noalias\": True})\n",
    "        # body\n",
    "        # with T.block(\"root\")\n",
    "        pad_temp_1 = T.alloc_buffer([1, 64, 114, 114], dtype=\"float32\")\n",
    "        for i0, i1, i2, i3 in T.grid(1, 64, 114, 114):\n",
    "            with T.block(\"pad_temp\"):\n",
    "                ax0, ax1, ax2, ax3 = T.axis.remap(\"SSSS\", [i0, i1, i2, i3])\n",
    "                T.reads(rxplaceholder_1[ax0, ax1, ax2 - 1, ax3 - 1])\n",
    "                T.writes(pad_temp_1[ax0, ax1, ax2, ax3])\n",
    "                pad_temp_1[ax0, ax1, ax2, ax3] = T.if_then_else(\n",
    "                    1 <= ax2 and ax2 < 113 and 1 <= ax3 and ax3 < 113,\n",
    "                    rxplaceholder_1[ax0, ax1, ax2 - 1, ax3 - 1],\n",
    "                    T.float32(-3.4028234663852886e38),\n",
    "                    dtype=\"float32\",\n",
    "                )\n",
    "        for i0, i1, i2, i3, i4, i5 in T.grid(1, 64, 56, 56, 3, 3):\n",
    "            with T.block(\"tensor\"):\n",
    "                ax0, ax1, ax2, ax3, rv0, rv1 = T.axis.remap(\"SSSSRR\", [i0, i1, i2, i3, i4, i5])\n",
    "                T.reads(\n",
    "                    tensor_1[ax0, ax1, ax2, ax3],\n",
    "                    pad_temp_1[ax0, ax1, ax2 * 2 + rv0, ax3 * 2 + rv1],\n",
    "                )\n",
    "                T.writes(tensor_1[ax0, ax1, ax2, ax3])\n",
    "                with T.init():\n",
    "                    tensor_1[ax0, ax1, ax2, ax3] = T.float32(-3.4028234663852886e38)\n",
    "                tensor_1[ax0, ax1, ax2, ax3] = T.max(\n",
    "                    tensor_1[ax0, ax1, ax2, ax3],\n",
    "                    pad_temp_1[ax0, ax1, ax2 * 2 + rv0, ax3 * 2 + rv1],\n",
    "                )\n",
    "\n",
    "mod = InputModule\n",
    "new_mod = relax.transform.AnnotateTIROpPattern()(mod)\n",
    "assert new_mod[\"max_pool2d\"].attrs[\"op_pattern\"] == OpPatternKind.kOutEWiseFusable"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cc16c845",
   "metadata": {},
   "source": [
    "## 测试softmax算子模式的注解\n",
    "\n",
    "验证 softmax 算子是否被正确识别为 `kOutEWiseFusable` 模式。`softmax` 是一种复杂运算，包含多个步骤但仍可融合元素级算子。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "892c27e7",
   "metadata": {},
   "outputs": [],
   "source": [
    "@tvm.script.ir_module\n",
    "class InputModule:\n",
    "    @T.prim_func\n",
    "    def softmax(\n",
    "        rxplaceholder_1: T.Buffer((16, 16), \"float32\"),\n",
    "        T_softmax_norm_1: T.Buffer((16, 16), \"float32\"),\n",
    "    ) -> None:\n",
    "        # 函数属性字典\n",
    "        T.func_attr({\"global_symbol\": \"softmax\", \"T.noalias\": True})\n",
    "        # body\n",
    "        # with T.block(\"root\")\n",
    "        T_softmax_maxelem_1 = T.alloc_buffer([16], dtype=\"float32\")\n",
    "        T_softmax_exp_1 = T.alloc_buffer([16, 16], dtype=\"float32\")\n",
    "        T_softmax_expsum_1 = T.alloc_buffer([16], dtype=\"float32\")\n",
    "        for i0_7, i1_3 in T.grid(16, 16):\n",
    "            with T.block(\"T_softmax_maxelem\"):\n",
    "                i0_8, k = T.axis.remap(\"SR\", [i0_7, i1_3])\n",
    "                T.reads(T_softmax_maxelem_1[i0_8], rxplaceholder_1[i0_8, k])\n",
    "                T.writes(T_softmax_maxelem_1[i0_8])\n",
    "                with T.init():\n",
    "                    T_softmax_maxelem_1[i0_8] = T.float32(-3.4028234663852886e38)\n",
    "                T_softmax_maxelem_1[i0_8] = T.max(\n",
    "                    T_softmax_maxelem_1[i0_8], rxplaceholder_1[i0_8, k]\n",
    "                )\n",
    "        for i0_9, i1_4 in T.grid(16, 16):\n",
    "            with T.block(\"T_softmax_exp\"):\n",
    "                i0_10, i1_5 = T.axis.remap(\"SS\", [i0_9, i1_4])\n",
    "                T.reads(rxplaceholder_1[i0_10, i1_5], T_softmax_maxelem_1[i0_10])\n",
    "                T.writes(T_softmax_exp_1[i0_10, i1_5])\n",
    "                T_softmax_exp_1[i0_10, i1_5] = T.exp(\n",
    "                    rxplaceholder_1[i0_10, i1_5] - T_softmax_maxelem_1[i0_10], dtype=\"float32\"\n",
    "                )\n",
    "        for i0_11, i1_6 in T.grid(16, 16):\n",
    "            with T.block(\"T_softmax_expsum\"):\n",
    "                i0_12, k = T.axis.remap(\"SR\", [i0_11, i1_6])\n",
    "                T.reads(T_softmax_expsum_1[i0_12], T_softmax_exp_1[i0_12, k])\n",
    "                T.writes(T_softmax_expsum_1[i0_12])\n",
    "                with T.init():\n",
    "                    T_softmax_expsum_1[i0_12] = T.float32(0)\n",
    "                T_softmax_expsum_1[i0_12] = (\n",
    "                    T_softmax_expsum_1[i0_12] + T_softmax_exp_1[i0_12, k]\n",
    "                )\n",
    "        for i0_13, i1_7 in T.grid(16, 16):\n",
    "            with T.block(\"T_softmax_norm\"):\n",
    "                i0_14, i1_8 = T.axis.remap(\"SS\", [i0_13, i1_7])\n",
    "                T.reads(T_softmax_exp_1[i0_14, i1_8], T_softmax_expsum_1[i0_14])\n",
    "                T.writes(T_softmax_norm_1[i0_14, i1_8])\n",
    "                T.block_attr({\"axis\": 1})\n",
    "                T_softmax_norm_1[i0_14, i1_8] = (\n",
    "                    T_softmax_exp_1[i0_14, i1_8] / T_softmax_expsum_1[i0_14]\n",
    "                )\n",
    "\n",
    "mod = InputModule\n",
    "new_mod = relax.transform.AnnotateTIROpPattern()(mod)\n",
    "assert new_mod[\"softmax\"].attrs[\"op_pattern\"] == OpPatternKind.kOutEWiseFusable"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "040dafe3",
   "metadata": {},
   "source": [
    "## 测试多缓冲区存储的算子模式注解回退行为\n",
    "\n",
    "验证累积和（cumsum）算子是否被正确识别为 `kOpaque` 模式。当算子包含复杂的缓冲区存储模式时，优化器会将其视为不透明算子。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "839e0d6a",
   "metadata": {},
   "outputs": [],
   "source": [
    "@tvm.script.ir_module\n",
    "class CumsumModule:\n",
    "    @T.prim_func\n",
    "    def cumsum(var_rxplaceholder: T.handle, out_buf: T.Buffer(160, \"float32\")):\n",
    "        rxplaceholder = T.match_buffer(\n",
    "            var_rxplaceholder, [10, 16], dtype=\"float32\", offset_factor=1\n",
    "        )\n",
    "        with T.block(\"cumsum_generic\"):\n",
    "            T.reads(rxplaceholder[0:10, 0:16])\n",
    "            T.writes(out_buf[0:160])\n",
    "            for fused in T.parallel(1):\n",
    "                out_buf[fused * 160] = rxplaceholder[fused * 160 // 16, fused * 160 % 16]\n",
    "                for v_k in T.serial(159):\n",
    "                    out_buf[fused * 160 + (v_k + 1)] = (\n",
    "                        out_buf[fused * 160 + (v_k + 1 - 1)]\n",
    "                        + rxplaceholder[\n",
    "                            (fused * 160 + (v_k + 1)) // 16,\n",
    "                            (fused * 160 + (v_k + 1)) % 16,\n",
    "                        ]\n",
    "                    )\n",
    "\n",
    "mod = CumsumModule\n",
    "new_mod = relax.transform.AnnotateTIROpPattern()(mod)\n",
    "assert new_mod[\"cumsum\"].attrs[\"op_pattern\"] == OpPatternKind.kOpaque"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3aa87307",
   "metadata": {},
   "source": [
    "## 测试同时计算和与平方和的算子模式注解\n",
    "\n",
    "验证同时计算元素和与平方和的操作是否被正确识别为`kCommReduce`模式。此测试确保优化器能够正确处理多输出的归约操作。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "435c7604",
   "metadata": {},
   "outputs": [],
   "source": [
    "@tvm.script.ir_module\n",
    "class Module:\n",
    "    @T.prim_func\n",
    "    def sum_sqsum(\n",
    "        A: T.Buffer((32, 64), \"float32\"),\n",
    "        vsum: T.Buffer((32,), \"float32\"),\n",
    "        sqsum: T.Buffer((32,), \"float32\"),\n",
    "    ):\n",
    "        for ax0, k0 in T.grid(32, 64):\n",
    "            with T.block(\"block\"):\n",
    "                v_ax0, v_k0 = T.axis.remap(\"SR\", [ax0, k0])\n",
    "                T.reads(A[v_ax0, v_k0])\n",
    "                T.writes(vsum[v_ax0], sqsum[v_ax0])\n",
    "                with T.init():\n",
    "                    vsum[v_ax0] = T.float32(0)\n",
    "                    sqsum[v_ax0] = T.float32(0)\n",
    "                v_vsum: T.float32 = vsum[v_ax0] + A[v_ax0, v_k0]\n",
    "                v_sqsum: T.float32 = sqsum[v_ax0] + A[v_ax0, v_k0] * A[v_ax0, v_k0]\n",
    "                vsum[v_ax0] = v_vsum\n",
    "                sqsum[v_ax0] = v_sqsum\n",
    "\n",
    "mod = Module\n",
    "new_mod = relax.transform.AnnotateTIROpPattern()(mod)\n",
    "assert new_mod[\"sum_sqsum\"].attrs[\"op_pattern\"] == OpPatternKind.kCommReduce"
   ]
  },
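  {
   "cell_type": "markdown",
   "id": "4e5f6a7b",
   "metadata": {},
   "source": [
    "For reference, a NumPy sketch of the two reductions computed by `sum_sqsum` (the input `a` is hypothetical demo data):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5f6a7b8c",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "a = np.random.default_rng(0).random((32, 64)).astype(\"float32\")\n",
    "\n",
    "# Both outputs reduce over the same commutative axis, which is why the\n",
    "# whole block can still be annotated as a single kCommReduce pattern.\n",
    "vsum = a.sum(axis=1)\n",
    "sqsum = (a * a).sum(axis=1)\n",
    "assert vsum.shape == sqsum.shape == (32,)"
   ]
  },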
  {
   "cell_type": "markdown",
   "id": "3aca7144",
   "metadata": {},
   "source": [
    "## 测试无缓冲区存储的算子模式注解\n",
    "\n",
    "验证包含外部调用且缺乏明确缓冲区存储模式的算子是否被正确识别为 `kOpaque` 模式。当优化器无法确定算子的具体行为时，会将其视为不透明算子。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "id": "59df941a",
   "metadata": {},
   "outputs": [],
   "source": [
    "@tvm.script.ir_module\n",
    "class Module:\n",
    "    @T.prim_func\n",
    "    def no_buffer_stores(A: T.Buffer((32, 64), \"float32\"), vsum: T.Buffer((32,), \"float32\")):\n",
    "        for ax0, k0 in T.grid(32, 64):\n",
    "            with T.block(\"block\"):\n",
    "                v_ax0, v_k0 = T.axis.remap(\"SR\", [ax0, k0])\n",
    "                T.reads(A[v_ax0, v_k0])\n",
    "                T.writes(vsum[v_ax0])\n",
    "                # 无缓冲区存储通常发生在有外部计算调用的情况下\n",
    "                # 在这种情况下，我们将其视为不透明操作\n",
    "                T.call_packed(\"some_func\")\n",
    "\n",
    "mod = Module\n",
    "new_mod = relax.transform.AnnotateTIROpPattern()(mod)\n",
    "assert new_mod[\"no_buffer_stores\"].attrs[\"op_pattern\"] == OpPatternKind.kOpaque"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f37be3c0",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "py313",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.13.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
