{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# AnnotateUsedMemory C++ Source Code"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```cpp\n",
    "/*!\n",
    " * \\brief Annotates the minimum required memory of each primitive function callsite by analyzing\n",
    " * the liveness of the input/output tensors at each function callsite and calculating the total\n",
    " * amount of memory these tensors require. This is added as a \"used_memory\" annotation to the\n",
    " * function in question as a list of the number of bytes for each callsite. In addition, the\n",
    " * containing function is annotated with an \"io_used_memory\" annotation which refers to the total\n",
    " * memory required for the IO tensors.\n",
    " *\n",
    " * Note: This pass does not support dynamic shapes, it is the users responsibility to check this\n",
    " * pass isn't applied where dynamic shapes may be input.\n",
    " */\n",
    "TVM_DLL Pass AnnotateUsedMemory();\n",
    "```\n",
    "\n",
    "This pass annotates the minimum required memory at each primitive function callsite. It does so by analyzing the liveness of the input/output tensors at each callsite and summing the memory those tensors require. The result is attached to the function in question as a `\"used_memory\"` annotation, a list with one byte count per callsite. In addition, the containing function receives an `\"io_used_memory\"` annotation recording the total memory required for its IO tensors.\n",
    "\n",
    "Note that this pass does not support dynamic shapes; it is the user's responsibility to ensure the pass is not applied where dynamically shaped inputs may occur."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A simple example:\n",
    "\n",
    "Before the pass:\n",
    "```python\n",
    "def @main(%input: Tensor[(1, 2, 2, 4), int8]) -> Tensor[(1, 2, 2, 4), int8] {\n",
    "  let %x_0 = fn (%x: Tensor[(1, 2, 2, 4), int8], Primitive=1) -> Tensor[(1, 2, 2, 4), int8] {\n",
    "    nn.max_pool2d(%x, pool_size=[1, 1], padding=[0, 0, 0, 0])\n",
    "  };\n",
    "  let %x_1 = %x_0(%input);\n",
    "  %x_1\n",
    "}\n",
    "```\n",
    "\n",
    "After the pass:\n",
    "```python\n",
    "def @main(%input: Tensor[(1, 2, 2, 4), int8], io_used_memory=32) -> Tensor[(1, 2, 2, 4), int8] {\n",
    "  let %x_0: fn (%x: Tensor[(1, 2, 2, 4), int8], Primitive=1, used_memory=[32]) -> Tensor[(1, 2, 2, 4), int8] {\n",
    "    nn.max_pool2d(%x, pool_size=[1, 1], padding=[0, 0, 0, 0])\n",
    "  };\n",
    "  let %x_1: Tensor[(1, 2, 2, 4), int8] = %x_0(%input);\n",
    "  %x_1\n",
    "}\n",
    "```\n",
    "\n",
    "In the simple example above, `io_used_memory` and `used_memory` are the same because there is only one primitive function."
   ]
  },
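  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The byte counts in the example above can be reproduced with a short sketch. For a static shape, a tensor's size is the product of its dimensions times the width of its dtype, and `io_used_memory` is the sum over the IO tensors. The helper below is a hypothetical illustration, not TVM API:\n",
    "\n",
    "```python\n",
    "DTYPE_BYTES = {\"int8\": 1, \"int32\": 4, \"float32\": 4}\n",
    "\n",
    "def tensor_size_bytes(shape, dtype):\n",
    "    # Product of the static dimensions times the dtype width in bytes.\n",
    "    size = DTYPE_BYTES[dtype]\n",
    "    for dim in shape:\n",
    "        size *= dim\n",
    "    return size\n",
    "\n",
    "# One (1, 2, 2, 4) int8 input plus one identical output: 16 + 16 = 32,\n",
    "# matching io_used_memory=32 in the annotated example.\n",
    "io_used_memory = (tensor_size_bytes((1, 2, 2, 4), \"int8\")\n",
    "                  + tensor_size_bytes((1, 2, 2, 4), \"int8\"))\n",
    "print(io_used_memory)\n",
    "```"
   ]
  },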
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```cpp\n",
    "class AnnotateUsedMemoryMutator : public transform::DeviceAwareExprMutator {\n",
    " public:\n",
    "  AnnotateUsedMemoryMutator(const IRModule& module, const transform::ControlFlowGraph& cfg,\n",
    "                            const transform::LivenessAnalysis& lva)\n",
    "      : DeviceAwareExprMutator(module), control_flow_graph_(cfg), liveness_(lva) {}\n",
    "\n",
    "  /*!\n",
    "   * \\brief Mutates the input function. In addition, an \"io_used_memory\" annotation is\n",
    "   * added to the input function which refers to the total size required for the IO\n",
    "   * tensors.\n",
    "   */\n",
    "  Function operator()(const Function& func) {\n",
    "    uint64_t io_used_memory = 0;\n",
    "\n",
    "    // Inputs\n",
    "    for (const Var& param : func->params) {\n",
    "      Type type = param->checked_type();\n",
    "      ICHECK(type.defined()) << \"InferType pass should be run before AnnotateUsedMemory.\";\n",
    "      ICHECK(!IsDynamic(type)) << \"AnnotateUsedMemory does not support dynamic shapes.\";\n",
    "      io_used_memory += CalculateRelayExprSizeBytes(type);\n",
    "    }\n",
    "\n",
    "    // Outputs\n",
    "    Type type = func->body->checked_type();\n",
    "    ICHECK(type.defined()) << \"InferType pass should be run before AnnotateUsedMemory.\";\n",
    "    ICHECK(!IsDynamic(type)) << \"AnnotateUsedMemory does not support dynamic shapes.\";\n",
    "    io_used_memory += CalculateRelayExprSizeBytes(type);\n",
    "\n",
    "    Expr new_func_body = VisitExpr(func->body);\n",
    "    Function new_func = WithFields(func, func->params, new_func_body);\n",
    "    return WithAttr(std::move(new_func), \"io_used_memory\",\n",
    "                    tvm::IntImm(tvm::DataType::UInt(64), io_used_memory));\n",
    "  }\n",
    "\n",
    "  /*!\n",
    "   * \\brief Establish which let bindings have primitive function values.\n",
    "   */\n",
    "  std::pair<Var, Expr> PreVisitLetBinding_(const Var& var, const Expr& value) override {\n",
    "    if (const auto* func_node = value.as<FunctionNode>()) {\n",
    "      ICHECK(func_node->attrs.HasNonzeroAttr(attr::kPrimitive))\n",
    "          << \"Expect top-level functions to be primitive.\";\n",
    "      let_bound_prim_func_.insert(var);\n",
    "    }\n",
    "    return DeviceAwareExprMutator::PreVisitLetBinding_(var, value);\n",
    "  }\n",
    "\n",
    "  /*!\n",
    "   * \\brief Visit let nodes and perform one of two actions depending on their value:\n",
    "   *\n",
    "   * 1. CallNode - Calculate \"used_memory\" annotation value at the callsite of\n",
    "   *               primitive functions.\n",
    "   *\n",
    "   * 2. FunctionNode - Annotate functions with \"used_memory\" annotation based on the\n",
    "   *                   previous analysis at the callsite.\n",
    "   *\n",
    "   */\n",
    "  Expr PostVisitLet_(const LetNode* pre_let_node, const LetNode* post_let_node) override {\n",
    "    Var let_var = post_let_node->var;\n",
    "    Expr let_value = IgnoreOnDevice(post_let_node->value);\n",
    "\n",
    "    if (let_value->IsInstance<CallNode>()) {\n",
    "      Call callsite = Downcast<Call>(let_value);\n",
    "      if (CheckPrimitiveFunctionCall(callsite)) {\n",
    "        Var call_op = Downcast<Var>(callsite->op);\n",
    "\n",
    "        // Find all the vars that are live at the callsite. This is done by merging the\n",
    "        // in and out varset's and then removing the var that references the primitive\n",
    "        // function itself since we don't want this included in the calculation.\n",
    "        const transform::ControlFlowGraph::NodePtr cfg_node =\n",
    "            control_flow_graph_.let_map.at(GetRef<Let>(pre_let_node));\n",
    "        transform::VarSet live_tensors = liveness_.live_in.at(cfg_node);\n",
    "        const transform::VarSet& live_out = liveness_.live_out.at(cfg_node);\n",
    "        live_tensors.insert(live_out.begin(), live_out.end());\n",
    "        live_tensors.erase(call_op);\n",
    "\n",
    "        // Calculate size of live tensors and store to allow annotation when the function\n",
    "        // gets visited.\n",
    "        uint64_t used_memory = 0;\n",
    "        for (const auto& var : live_tensors) {\n",
    "          Type type = var->checked_type();\n",
    "          ICHECK(type.defined()) << \"InferType pass should be run before AnnotateUsedMemory.\";\n",
    "          ICHECK(!IsDynamic(type)) << \"AnnotateUsedMemory does not support dynamic shapes.\";\n",
    "          used_memory += CalculateRelayExprSizeBytes(type);\n",
    "        }\n",
    "        IntImm annotation(DataType::UInt(64), used_memory);\n",
    "        used_memory_annotations_[call_op].push_back(annotation);\n",
    "      }\n",
    "    } else if (let_value->IsInstance<FunctionNode>()) {\n",
    "      Function func = Downcast<Function>(let_value);\n",
    "      ICHECK(used_memory_annotations_.find(let_var) != used_memory_annotations_.end())\n",
    "          << \"Could not find used_memory value for primitive function bound at \"\n",
    "          << let_var->name_hint();\n",
    "      Array<IntImm> used_memory = used_memory_annotations_[let_var];\n",
    "      used_memory_annotations_.erase(let_var);\n",
    "\n",
    "      Function new_func = WithAttr(std::move(func), \"used_memory\",\n",
    "                                   Array<IntImm>(used_memory.rbegin(), used_memory.rend()));\n",
    "      return Let(let_var, new_func, post_let_node->body, post_let_node->span);\n",
    "    }\n",
    "\n",
    "    return DeviceAwareExprMutator::PostVisitLet_(pre_let_node, post_let_node);\n",
    "  }\n",
    "\n",
    " private:\n",
    "  /*!\n",
    "   * \\brief Check if a call is a primitive function callsite.\n",
    "   */\n",
    "  bool CheckPrimitiveFunctionCall(const Call& callsite) {\n",
    "    if (auto var = callsite->op.as<Var>()) {\n",
    "      if (let_bound_prim_func_.find(var.value()) != let_bound_prim_func_.end()) {\n",
    "        return true;\n",
    "      }\n",
    "    }\n",
    "    return false;\n",
    "  }\n",
    "\n",
    "  /*! \\brief Control flow graph representation of the main function. */\n",
    "  transform::ControlFlowGraph control_flow_graph_;\n",
    "  /*! \\brief Liveness analysis of the main function. */\n",
    "  transform::LivenessAnalysis liveness_;\n",
    "  /*! \\brief Var's that reference primitive functions. */\n",
    "  std::unordered_set<Var, ObjectPtrHash, ObjectPtrEqual> let_bound_prim_func_;\n",
    "  /*! \\brief Stores the calculated uint64 used_memory values so they can be annotated on the\n",
    "   * relevant function. */\n",
    "  std::unordered_map<Var, Array<IntImm>, ObjectPtrHash, ObjectPtrEqual> used_memory_annotations_;\n",
    "};\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This code defines a class named `AnnotateUsedMemoryMutator`, which inherits from `transform::DeviceAwareExprMutator`. Its main purpose is to attach an `\"io_used_memory\"` annotation to the input function, recording the total size required by the IO tensors.\n",
    "\n",
    "The `AnnotateUsedMemoryMutator` constructor takes three arguments: the IR module, a control flow graph, and a liveness analysis. These are used to initialize the class's private member variables.\n",
    "\n",
    "The `operator()` method takes a function as input and returns a new function carrying the `\"io_used_memory\"` annotation. It first sums the sizes of the input and output tensors, then traverses the function body, handling each `let` node. For a `call` node that invokes a primitive function, it computes the size of the tensors live at the callsite and stores the result; for a `function` node that binds a primitive function, it uses the previously stored results to attach the `\"used_memory\"` annotation.\n",
    "\n",
    "The class also contains helper methods such as `PreVisitLetBinding_`, `PostVisitLet_`, and `CheckPrimitiveFunctionCall`, which handle specific `let` nodes and callsites during the traversal."
   ]
  }
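  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The per-callsite calculation in `PostVisitLet_` can be sketched in plain Python: take the union of the live-in and live-out variable sets at the callsite, drop the variable that binds the primitive function itself, and sum the sizes of the remaining tensors. The names below (`%input`, `%x_0`, `%x_1`) follow the earlier example; the liveness sets and sizes are assumed for illustration, not computed by a real analysis:\n",
    "\n",
    "```python\n",
    "def used_memory_at_callsite(live_in, live_out, call_op, sizes):\n",
    "    # Union of live-in and live-out vars, minus the primitive function var,\n",
    "    # mirroring the merge-then-erase logic in PostVisitLet_.\n",
    "    live = set(live_in) | set(live_out)\n",
    "    live.discard(call_op)\n",
    "    return sum(sizes[v] for v in live)\n",
    "\n",
    "# At `let %x_1 = %x_0(%input)`: %input and %x_0 are live in, %x_1 is live out.\n",
    "sizes = {\"%input\": 16, \"%x_1\": 16}\n",
    "print(used_memory_at_callsite({\"%input\", \"%x_0\"}, {\"%x_1\"}, \"%x_0\", sizes))\n",
    "```\n",
    "\n",
    "This mirrors the `used_memory=[32]` annotation in the example: at the single callsite, the live tensors are the 16-byte input and the 16-byte output."
   ]
  }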
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "py312x",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
