{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "96713f14",
   "metadata": {},
   "source": [
    "# Relax 量化/反量化算子模块\n",
    "\n",
    "此模块提供了 Relax 框架中的量化(quantize)和反量化(dequantize)算子的 Python 前端接口。这些算子用于神经网络模型量化过程中的张量类型转换，支持通道级别的量化参数设置。量化可以将高精度浮点数模型转换为低精度整数模型，以减少模型大小和加速推理。\n",
    "\n",
    "qdq 在 TVM 的量化生态中扮演着关键角色，主要负责：\n",
    "\n",
    "1. **提供量化转换接口**：作为 Python 前端 API，连接用户代码与 TVM 底层量化实现\n",
    "2. **支持模型压缩与加速**：通过高精度到低精度的转换，实现模型大小减小和推理速度提升\n",
    "3. **保持量化精度**：实现了标准的量化/反量化数学公式，保证转换过程中的精度损失最小化"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "b9abf117",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 导入必要的模块和类\n",
    "import tvm\n",
    "from tvm import relax, tir\n",
    "from tvm.ir import Op\n",
    "from tvm.script import relax as R"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0f28668d",
   "metadata": {},
   "source": [
    "## 主要功能与数学原理\n",
    "\n",
    "该模块提供两个核心函数：\n",
    "\n",
    "### 1. `quantize` 函数\n",
    "\n",
    "- **功能**：将浮点类型张量（如 `float32`）转换为整数类型张量（如 `int8`）\n",
    "- 量化数学公式：`Q_output = clamp(round(input_tensor/scale) + zero_point, out_dtype::min, out_dtype::max)`\n",
    "    其中：\n",
    "    - `input_tensor/scale`: 将输入数据归一化到零点附近\n",
    "    - `round()`: 四舍五入到最近的整数\n",
    "    - `+ zero_point`: 添加零点偏移，使零点(通常是0)能够精确表示\n",
    "    - `clamp()`: 将结果限制在目标数据类型的最小和最大值之间\n",
    "- 过程：归一化 → 四舍五入 → 添加零点偏移 → 值域裁剪\n",
    "- **典型应用**：模型训练后的量化、量化感知训练中的量化运算\n",
    "\n",
    "```{note}\n",
    "该算子接收输入张量，并生成具有相同形状的量化输出。输入张量可以是任意形状。量化过程会将浮点数值映射到整数域，同时保留原始数据的相对关系。\n",
    "```"
   ]
  },
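  {
   "cell_type": "markdown",
   "id": "a1b2c3d4",
   "metadata": {},
   "source": [
    "The quantize formula above can be sketched with a minimal NumPy reference (the helper name `quantize_ref` is ours, not part of TVM; this is for intuition only, not TVM's actual implementation):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a1b2c3d5",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def quantize_ref(x, scale, zero_point, dtype=np.int8):\n",
    "    # Q = clamp(round(x / scale) + zero_point, dtype_min, dtype_max)\n",
    "    info = np.iinfo(dtype)\n",
    "    q = np.round(x / scale) + zero_point\n",
    "    return np.clip(q, info.min, info.max).astype(dtype)\n",
    "\n",
    "# [1.2, 2.3, -0.5] with scale=0.1, zero_point=128 maps to [140, 151, 123]\n",
    "# before clamping; the int8 clamp reduces the first two values to 127.\n",
    "quantize_ref(np.array([1.2, 2.3, -0.5], dtype=np.float32), 0.1, 128)"
   ]
  },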
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "6759b94a",
   "metadata": {
    "tags": [
     "hide-output"
    ]
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[31mSignature:\u001b[39m\n",
      "relax.op.quantize(\n",
      "    data: tvm.ir.expr.RelaxExpr,\n",
      "    scale: tvm.ir.expr.RelaxExpr,\n",
      "    zero_point: tvm.ir.expr.RelaxExpr,\n",
      "    axis: int = -\u001b[32m1\u001b[39m,\n",
      "    out_dtype: str = \u001b[33m'int8'\u001b[39m,\n",
      ")\n",
      "\u001b[31mDocstring:\u001b[39m\n",
      "Quantize op\n",
      "This operator takes input and produces quantized output. The input tensor can be of any shape.\n",
      "The output shape is the same as input shape.\n",
      "\n",
      "Q_output = clamp((round(input_tensor/scale) + zero_point), out_dtype::min, out_dtype::max)\n",
      "\n",
      "Parameters\n",
      "----------\n",
      "data : tvm.relax.Expr\n",
      "    The input tensor to be quantized.\n",
      "\n",
      "scale : tvm.relax.Expr\n",
      "    The output scale.\n",
      "\n",
      "zero_point : tvm.relax.Expr\n",
      "    The output zero_point.\n",
      "\n",
      "axis : int\n",
      "    The channel axis for quantization. Default value is -1 which corresponds to the last axis.\n",
      "\n",
      "out_dtype : str, optional\n",
      "    The data type of the output tensor.\n",
      "\n",
      "Returns\n",
      "-------\n",
      "result : tvm.relax.Expr\n",
      "    The computed result.\n",
      "\u001b[31mFile:\u001b[39m      /media/pc/data/lxw/ai/tvm/python/tvm/relax/op/qdq.py\n",
      "\u001b[31mType:\u001b[39m      function"
     ]
    }
   ],
   "source": [
    "relax.op.quantize?"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3c3d189e",
   "metadata": {},
   "source": [
    "参数描述：\n",
    "- `data : tvm.relax.Expr`\n",
    "    需要被量化的输入张量，通常为浮点类型\n",
    "\n",
    "- `scale : tvm.relax.Expr`\n",
    "    量化缩放因子，用于控制量化的精度。\n",
    "    当为张量时，可以实现通道级(per-channel)量化\n",
    "\n",
    "- `zero_point : tvm.relax.Expr`\n",
    "    量化零点，使得零点在整数域中能够精确表示。\n",
    "    当为张量时，可以实现通道级(per-channel)量化\n",
    "\n",
    "- `axis : int, 可选`\n",
    "    量化的通道轴。默认值为 `-1`，表示最后一个轴\n",
    "\n",
    "- `out_dtype : str, 可选`\n",
    "    输出张量的数据类型，默认为 `\"int8\"`\n",
    "\n",
    "返回值：\n",
    "- `result : tvm.relax.Expr`\n",
    "    量化后的整数张量，形状与输入相同"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a6e0c55e",
   "metadata": {},
   "source": [
    "### 2. `dequantize` 函数\n",
    "- **功能**：将整数类型张量转换回浮点类型张量\n",
    "- **数学原理**：`output = clamp(scale * (input_tensor - zero_point), out_dtype::min, out_dtype::max)`\n",
    "    其中：\n",
    "    - `input_tensor - zero_point`: 减去零点偏移，恢复归一化的值\n",
    "    - `* scale`: 乘以缩放因子，恢复原始数据范围\n",
    "    - `clamp()`: 将结果限制在目标数据类型的最小和最大值之间\n",
    "  - 减去零点偏移 → 乘以缩放因子 → 值域裁剪\n",
    "- **典型应用**：量化模型推理过程中的反量化操作、量化模型与浮点模型的结果比较\n",
    "\n",
    "```{note}\n",
    "该算子接收量化后的输入张量，并生成具有相同形状的反量化输出。反量化过程是量化的逆运算，用于在需要时恢复原始浮点数据的近似值。\n",
    "```\n",
    "\n",
    "参数：\n",
    "    \n",
    "- `data : tvm.relax.Expr`\n",
    "    需要被反量化的输入张量，通常为整数类型\n",
    "\n",
    "- `scale : tvm.relax.Expr`\n",
    "    量化缩放因子，必须与量化时使用的scale相同。\n",
    "    当为张量时，可以实现通道级(per-channel)反量化\n",
    "\n",
    "- `zero_point : tvm.relax.Expr`\n",
    "    量化零点，必须与量化时使用的zero_point相同。\n",
    "    当为张量时，可以实现通道级(per-channel)反量化\n",
    "\n",
    "- `axis : int, 可选`\n",
    "    反量化的通道轴。默认值为-1，表示最后一个轴\n",
    "\n",
    "- `out_dtype : str, 可选`\n",
    "    输出张量的数据类型，默认为\"float32\"\n",
    "\n",
    "返回值\n",
    "-------\n",
    "- `result : tvm.relax.Expr`\n",
    "    反量化后的浮点张量，形状与输入相同"
   ]
  },
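  {
   "cell_type": "markdown",
   "id": "b2c3d4e5",
   "metadata": {},
   "source": [
    "The dequantize formula can likewise be sketched in NumPy (the helper name `dequantize_ref` is ours, not part of TVM; note that rounding and clamping make quantization lossy, so a round trip only recovers an approximation):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b2c3d4e6",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def dequantize_ref(q, scale, zero_point):\n",
    "    # x_approx = scale * (q - zero_point)\n",
    "    return (scale * (q.astype(np.float32) - zero_point)).astype(np.float32)\n",
    "\n",
    "# [140, 151, 123] with scale=0.1, zero_point=128 recovers approximately\n",
    "# [1.2, 2.3, -0.5]\n",
    "dequantize_ref(np.array([140, 151, 123], dtype=np.uint8), 0.1, 128)"
   ]
  },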
  {
   "cell_type": "markdown",
   "id": "605a5b25",
   "metadata": {},
   "source": [
    "## 使用场景与优势\n",
    "\n",
    "该模块主要应用于以下场景：\n",
    "\n",
    "1. **模型压缩**：将FP32模型转换为INT8/UINT8模型，减少约75%的存储空间\n",
    "2. **推理加速**：在支持整数运算的硬件（如GPU、NPU）上显著提升推理性能\n",
    "3. **低精度推理**：在边缘设备等计算资源受限环境中部署深度学习模型\n",
    "4. **量化感知训练**：在训练过程中模拟量化效果，提高量化后模型的精度\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d9176b3b",
   "metadata": {},
   "source": [
    "## 关键参数与使用方法\n",
    "\n",
    "### 共用参数说明\n",
    "- `data`：输入张量（量化时为浮点型，反量化时为整型）\n",
    "- `scale`：缩放因子（可以是标量或张量，张量时实现通道级量化）\n",
    "- `zero_point`：零点值（可以是标量或张量）\n",
    "- `axis`：通道轴（默认为 `-1`，表示最后一维）\n",
    "- `out_dtype`：输出数据类型（量化默认int8，反量化默认float32）"
   ]
  },
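  {
   "cell_type": "markdown",
   "id": "c3d4e5f6",
   "metadata": {},
   "source": [
    "To see what per-channel quantization means in terms of `axis`, here is a small NumPy sketch (the helper name `quantize_per_channel_ref` is ours, not part of TVM): each channel along `axis` gets its own `scale` and `zero_point`, broadcast against the input:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c3d4e5f7",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def quantize_per_channel_ref(x, scale, zero_point, axis=-1, dtype=np.int8):\n",
    "    # Reshape the per-channel parameters so they broadcast along `axis`.\n",
    "    shape = [1] * x.ndim\n",
    "    shape[axis] = -1\n",
    "    s = np.reshape(scale, shape)\n",
    "    zp = np.reshape(zero_point, shape)\n",
    "    info = np.iinfo(dtype)\n",
    "    return np.clip(np.round(x / s) + zp, info.min, info.max).astype(dtype)\n",
    "\n",
    "x = np.array([[0.5, -1.0, 2.0], [1.5, 0.25, -2.0]], dtype=np.float32)\n",
    "# One (scale, zero_point) pair per channel of the last axis\n",
    "quantize_per_channel_ref(x, np.array([0.1, 0.05, 0.2]), np.array([0, 10, -5]))"
   ]
  },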
  {
   "cell_type": "markdown",
   "id": "ea4d8c4e",
   "metadata": {},
   "source": [
    "### 输入输出示例\n",
    "\n",
    "#### 量化算子示例"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "68276b08",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div class=\"highlight\" style=\"background: \"><pre style=\"line-height: 125%;\"><span></span>R<span style=\"color: #A2F; font-weight: bold\">.</span>quantize(metadata[<span style=\"color: #BA2121\">&quot;relax.expr.Constant&quot;</span>][<span style=\"color: #008000\">0</span>], R<span style=\"color: #A2F; font-weight: bold\">.</span>const(<span style=\"color: #008000\">0.10000000149011612</span>, <span style=\"color: #BA2121\">&quot;float32&quot;</span>), R<span style=\"color: #A2F; font-weight: bold\">.</span>const(<span style=\"color: #008000\">128</span>, <span style=\"color: #BA2121\">&quot;int32&quot;</span>), out_dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;int8&quot;</span>, axis<span style=\"color: #A2F; font-weight: bold\">=-</span><span style=\"color: #008000\">1</span>)\n",
       "<span style=\"color: #007979; font-style: italic\"># Metadata omitted. Use show_meta=True in script() method to show it.</span>\n",
       "</pre></div>\n"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "# 输入：float32张量、scale和zero_point参数\n",
    "input_tensor = relax.const([1.2, 2.3, -0.5], dtype=\"float32\")\n",
    "scale = relax.const(0.1, dtype=\"float32\")\n",
    "zero_point = relax.const(128, dtype=\"int32\")\n",
    "\n",
    "# 执行量化操作\n",
    "quantized_tensor = relax.op.quantize(input_tensor, scale, zero_point, out_dtype=\"int8\")\n",
    "# 输出：int8张量，值约为[140, 151, 123]\n",
    "quantized_tensor.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "604dd15b",
   "metadata": {},
   "source": [
    "#### 反量化算子示例"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "1749db34",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div class=\"highlight\" style=\"background: \"><pre style=\"line-height: 125%;\"><span></span>R<span style=\"color: #A2F; font-weight: bold\">.</span>dequantize(metadata[<span style=\"color: #BA2121\">&quot;relax.expr.Constant&quot;</span>][<span style=\"color: #008000\">0</span>], R<span style=\"color: #A2F; font-weight: bold\">.</span>const(<span style=\"color: #008000\">0.10000000149011612</span>, <span style=\"color: #BA2121\">&quot;float32&quot;</span>), R<span style=\"color: #A2F; font-weight: bold\">.</span>const(<span style=\"color: #008000\">128</span>, <span style=\"color: #BA2121\">&quot;int32&quot;</span>), out_dtype<span style=\"color: #A2F; font-weight: bold\">=</span><span style=\"color: #BA2121\">&quot;float32&quot;</span>, axis<span style=\"color: #A2F; font-weight: bold\">=-</span><span style=\"color: #008000\">1</span>)\n",
       "<span style=\"color: #007979; font-style: italic\"># Metadata omitted. Use show_meta=True in script() method to show it.</span>\n",
       "</pre></div>\n"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "# 输入：uint8张量、scale和zero_point参数\n",
    "input_tensor = relax.const([140, 151, 123], dtype=\"uint8\")\n",
    "scale = relax.const(0.1, dtype=\"float32\")\n",
    "zero_point = relax.const(128, dtype=\"int32\")\n",
    "\n",
    "# 执行反量化操作\n",
    "float_tensor = relax.op.dequantize(input_tensor, scale, zero_point)\n",
    "# 输出：float32张量\n",
    "float_tensor.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0dcbfdae",
   "metadata": {},
   "source": [
    "## 测试量化和反量化算子的正确性"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "9f3d4510",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 创建输入变量：输入张量、缩放因子和零点\n",
    "x = relax.Var(\"x\", R.Tensor((2, 3), \"float32\"))\n",
    "dx = relax.Var(\"dx\", R.Tensor((2, 3), \"uint8\"))\n",
    "s = relax.Var(\"s\", R.Tensor([3], \"float32\"))\n",
    "zp = relax.Var(\"zp\", R.Tensor([3], \"int8\"))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "f6bbf432",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 验证量化算子是否返回正确的算子\n",
    "assert relax.op.quantize(x, s, zp, 1, \"int8\").op == Op.get(\"relax.quantize\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "cd21950c",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 验证反量化操作是否返回正确的算子\n",
    "assert relax.op.dequantize(dx, s, zp, 1, \"float32\").op == Op.get(\"relax.dequantize\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "8ffc30a6",
   "metadata": {},
   "outputs": [],
   "source": [
    "def _check_inference(bb: relax.BlockBuilder, call: relax.Call, expected_sinfo: relax.StructInfo):\n",
    "    # 辅助函数：检查操作的结构信息推断是否正确\n",
    "    ret = bb.normalize(call)\n",
    "    tvm.ir.assert_structural_equal(ret.struct_info, expected_sinfo)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0a8dfd91",
   "metadata": {},
   "source": [
    "## 测试量化和反量化算子的结构信息推断"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "d2bf6cb1",
   "metadata": {},
   "outputs": [],
   "source": [
    "bb = relax.BlockBuilder()\n",
    "# 创建输入变量\n",
    "x = relax.Var(\"x\", R.Tensor((2, 3), \"float32\"))\n",
    "dx = relax.Var(\"dx\", R.Tensor((2, 3), \"uint8\"))\n",
    "s = relax.Var(\"s\", R.Tensor([3], \"float32\"))\n",
    "zp = relax.Var(\"zp\", R.Tensor([3], \"int8\"))\n",
    "\n",
    "# 检查量化操作的结构信息推断\n",
    "_check_inference(\n",
    "    bb, relax.op.quantize(x, s, zp, 1, \"int8\"), relax.TensorStructInfo((2, 3), \"int8\")\n",
    ")\n",
    "# 检查反量化操作的结构信息推断\n",
    "_check_inference(\n",
    "    bb,\n",
    "    relax.op.dequantize(dx, s, zp, 1, \"float32\"),\n",
    "    relax.TensorStructInfo((2, 3), \"float32\"),\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "895a4505",
   "metadata": {},
   "source": [
    "## 测试符号形状输入下量化和反量化算子的结构信息推断"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "81b99fdc",
   "metadata": {},
   "outputs": [],
   "source": [
    "bb = relax.BlockBuilder()\n",
    "# 创建符号变量表示维度\n",
    "n = tir.Var(\"n\", \"int64\")\n",
    "# 创建输入变量，其中第一个维度是符号变量\n",
    "x = relax.Var(\"x\", R.Tensor((n, 3), \"float32\"))\n",
    "dx = relax.Var(\"dx\", R.Tensor((n, 3), \"int8\"))\n",
    "s = relax.Var(\"s\", R.Tensor([3], \"float32\"))\n",
    "zp = relax.Var(\"zp\", R.Tensor([3], \"int8\"))\n",
    "\n",
    "# 检查符号形状下量化操作的结构信息推断\n",
    "_check_inference(\n",
    "    bb, relax.op.quantize(x, s, zp, 1, \"int8\"), relax.TensorStructInfo((n, 3), \"int8\")\n",
    ")\n",
    "# 检查符号形状下反量化操作的结构信息推断\n",
    "_check_inference(\n",
    "    bb,\n",
    "    relax.op.dequantize(dx, s, zp, 1, \"float32\"),\n",
    "    relax.TensorStructInfo((n, 3), \"float32\"),\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d691b8dd",
   "metadata": {},
   "source": [
    "## 测试 float8_e4m3fn 数据类型下量化和反量化算子的结构信息推断"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "9e162665",
   "metadata": {},
   "outputs": [],
   "source": [
    "bb = relax.BlockBuilder()\n",
    "n = tir.Var(\"n\", \"int64\")\n",
    "x = relax.Var(\"x\", R.Tensor((n, 3), \"float32\"))\n",
    "dx = relax.Var(\"dx\", R.Tensor((n, 3), \"float8_e4m3fn\"))\n",
    "s = relax.Var(\"s\", R.Tensor([3], \"float32\"))\n",
    "zp = relax.Var(\"zp\", R.Tensor([3], \"float16\"))\n",
    "\n",
    "# 检查 float8_e4m3fn 类型的量化操作结构信息推断\n",
    "_check_inference(\n",
    "    bb,\n",
    "    relax.op.quantize(x, s, zp, 1, \"float8_e4m3fn\"),\n",
    "    relax.TensorStructInfo((n, 3), \"float8_e4m3fn\"),\n",
    ")\n",
    "# 检查 float8_e4m3fn 类型的反量化操作结构信息推断\n",
    "_check_inference(\n",
    "    bb,\n",
    "    relax.op.dequantize(dx, s, zp, 1, \"float32\"),\n",
    "    relax.TensorStructInfo((n, 3), \"float32\"),\n",
    ")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cb895843",
   "metadata": {},
   "source": [
    "## 测试 float8_e5m2 数据类型下量化和反量化算子的结构信息推断"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "ce76b6ab",
   "metadata": {},
   "outputs": [],
   "source": [
    "dtype = \"float8_e5m2\"\n",
    "bb = relax.BlockBuilder()\n",
    "n = tir.Var(\"n\", \"int64\")\n",
    "x = relax.Var(\"x\", R.Tensor((n, 3), \"float32\"))\n",
    "dx = relax.Var(\"dx\", R.Tensor((n, 3), dtype))\n",
    "s = relax.Var(\"s\", R.Tensor([3], \"float32\"))\n",
    "zp = relax.Var(\"zp\", R.Tensor([3], \"float16\"))\n",
    "\n",
    "# 检查 float8_e5m2 类型的量化操作结构信息推断\n",
    "_check_inference(\n",
    "    bb, relax.op.quantize(x, s, zp, 1, dtype), relax.TensorStructInfo((n, 3), dtype)\n",
    ")\n",
    "# 检查 float8_e5m2 类型的反量化操作结构信息推断\n",
    "_check_inference(\n",
    "    bb,\n",
    "    relax.op.dequantize(dx, s, zp, 1, \"float32\"),\n",
    "    relax.TensorStructInfo((n, 3), \"float32\"),\n",
    ")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "py313",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.13.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
