{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "55304d2f",
   "metadata": {},
   "source": [
    "# Convolutional Neural Network Inference (3): A C++ Implementation of Winograd Convolution with L2-Cache Blocking"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "00057e67",
   "metadata": {},
   "source": [
    "> Published: 2022-09-04"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "825d7f1a",
   "metadata": {},
   "source": [
    ":::{note}\n",
    "\n",
    "This work was a **preliminary-round finalist** entry in the 5th Ubiquant Challenge | Parallel Program Design Competition, presented here with some additions and deletions. It was completed under the lead of Yicheng Qiang, captain of team 天之孔 (\"Hole in Heaven\").\n",
    "\n",
    "This document implements the inference pass of a neural network with $3 \\times 3$ convolution kernels efficiently using the Winograd algorithm, with supplementary discussion. The program differs from the version submitted in the preliminary round: readability and code style have been improved, but the algorithmic ideas are identical.\n",
    "\n",
    "On the team name 天之孔:\n",
    "1. In Fate/Grand Order (as a derivative of Fate/EXTRA CCC), it is the Noble Phantasm of [Sessyoin Kiara](https://zh.moegirl.org.cn/%E6%9D%80%E7%94%9F%E9%99%A2%E7%A5%88%E8%8D%92). When people are released from suffering and fall into the Hole in Heaven, Sessyoin feels supreme pleasure.\n",
    "2. A pun on 天坑 (\"dead-end majors\"), such as the chemistry and polymer-science majors of our team members.\n",
    "\n",
    ":::"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3a7b12f4",
   "metadata": {},
   "source": [
    ":::{danger}\n",
    "\n",
    "I wrote the preliminary-round program and most of its documentation; **this document covers only that preliminary-round work, and it is by no means the most efficient implementation**. Everything is collected at [gitee: ajz34/winograd6x3](https://gitee.com/ajz34/winograd6x3).\n",
    "\n",
    "The author has no formal background in computer science or high-performance computing. **Neither the correctness of the content nor the rigor of the terminology can be guaranteed.**\n",
    "\n",
    "In the parallel-programming final, the code our team (2nd place) used for the Winograd problem was written by Qiang (riding on his coattails ≥ω≤). It still implements Winograd $F(6, 3)$ and shares the image and filter transform code with the preliminary version, but the rest of the algorithm and program are entirely different and far more efficient. The key improvements are at the instruction-set level: multi-level cache blocking in the style of OpenBLAS's DGEMM, micro-kernel optimization, hiding load/store latency behind computation, and more general handling of boundary cases.\n",
    "\n",
    "For the contest baseline, see [gitee: benjie-miao/winograd-baseline](https://gitee.com/benjie-miao/winograd-baseline). For the work of the final-round champion team, see [github: Robslhc/ubiquant-winograd](http://github.com/Robslhc/ubiquant-winograd).\n",
    "\n",
    ":::"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "111811dc",
   "metadata": {},
   "source": [
    "In the previous document we walked through the Winograd algorithm and its implementation, but its strength was discussed only qualitatively.\n",
    "\n",
    "In this document we discuss an implementation of the Winograd algorithm and quantify the speedup of Winograd $F(6,3)$ over direct convolution. We will find that **using the cache and parallelism wisely** can improve performance dramatically, possibly even more than the Winograd algorithm itself does. We also **use the AVX instruction set wherever possible** and tune the compilation accordingly. In the end, at high levels of parallelism and with a fairly small amount of code (roughly 400 lines, with no assembly and no heavy macros), **our implementation's average throughput on VGG16 exceeds that of the oneDNN library**."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bef8a3c7-c8ba-4c7e-8721-d7a54c96e732",
   "metadata": {},
   "source": [
    "Although this document is not the most efficient implementation, I hope to take this opportunity to describe, from a non-specialist's perspective, some of the steps and lessons from my study of high-performance computing, and perhaps offer some inspiration to fellow non-specialists."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ebe9d02d-adaa-4c09-9054-09ee2edc5bf7",
   "metadata": {},
   "source": [
    ":::{warning}\n",
    "\n",
    "Note that this document runs in a Jupyter Notebook with the xeus-cling kernel. The kernel usually compiles and executes C++ code correctly, but it cannot match the performance of g++ with full optimizations. Timings are therefore not compared across Jupyter Notebook code blocks; comparisons are instead made with programs compiled by g++.\n",
    "\n",
    ":::"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "04c589a6",
   "metadata": {},
   "outputs": [],
   "source": [
    "// NOTE: the xeus-cling jupyter kernel automatically includes common headers\n",
    "//       such as vector, algorithm, chrono, ...\n",
    "#include <iostream>\n",
    "#include <chrono>\n",
    "#include <immintrin.h>\n",
    "#include <xmmintrin.h>\n",
    "\n",
    "// use an appropriate omp.h path for your own environment\n",
    "#include \"/share/Pub/zyzhu/miniconda3/lib/gcc/x86_64-conda-linux-gnu/9.3.0/include/omp.h\"\n",
    "\n",
    "// override default libgomp to avoid link failure in xcpp\n",
    "#pragma cling load(\"libiomp5\")\n",
    "\n",
    "using namespace std;"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "c94ff299",
   "metadata": {},
   "outputs": [],
   "source": [
    "omp_set_num_threads(8);"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6274457a",
   "metadata": {},
   "source": [
    "Some convenience definitions follow:"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "32d7bc02-f733-42aa-b035-e2749e01a4e6",
   "metadata": {},
   "source": [
    "- `allclose` checks element by element whether two vectors are close:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "a30ef8d4-56d9-4102-a5ac-22fd244fe6cb",
   "metadata": {
    "tags": [
     "hide_input"
    ]
   },
   "outputs": [],
   "source": [
    "/// Check if vector a[s] and b[s] are close by checking every element\n",
    "bool allclose(float * a, float * b, size_t s,\n",
    "              float rtol=1e-4, float atol=1e-4) {\n",
    "    for (size_t i = 0; i < s; ++i)\n",
    "        if (std::abs(a[i] - b[i]) > (atol + rtol * std::abs(b[i]))) return false;\n",
    "    return true;\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d0dc5ba9-deae-401f-a7b4-eaa4500380f0",
   "metadata": {},
   "source": [
    "- `ceildiv` performs ceiling integer division:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "fcb03b5b-692a-45e2-98f8-887abdd3e750",
   "metadata": {
    "tags": [
     "hide_input"
    ]
   },
   "outputs": [],
   "source": [
    "template<typename T>\n",
    "inline T ceildiv(T a, T b) {\n",
    "    return (a + b - 1) / b;\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ef4be196-60a1-491a-9c44-1e8d889c6fc9",
   "metadata": {},
   "source": [
    "- `product` returns the product of the elements of a vector:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "3dc0d8a9-9e1d-4345-9241-0b1eed10dca3",
   "metadata": {
    "tags": [
     "hide_input"
    ]
   },
   "outputs": [],
   "source": [
    "template<typename T>\n",
    "inline T product(const std::vector<T>& v) {\n",
    "    return std::accumulate(v.cbegin(), v.cend(), T(1), std::multiplies<T>());\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8f10d4cf-36bc-4747-88ef-9561b90cd149",
   "metadata": {},
   "source": [
    "- `rand_vec` fills a vector with pseudo-random numbers in the interval [0, 10):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "71c0af46-be75-4109-bafa-76076e6c42ec",
   "metadata": {
    "tags": [
     "hide_input"
    ]
   },
   "outputs": [],
   "source": [
    "template<typename T>\n",
    "void rand_vec(T* v, size_t n) {\n",
    "    for (size_t i = 0; i < n; ++i)\n",
    "        v[i] = (T(rand()) / T(RAND_MAX)) * 10;\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e9927cf4",
   "metadata": {},
   "source": [
    "- `Range` stores iteration-index information (members `start` and `end` give the beginning and end of the iteration; member function `size` gives the number of iterations):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "7de92d7a",
   "metadata": {
    "tags": [
     "hide_input"
    ]
   },
   "outputs": [],
   "source": [
    "struct Range {\n",
    "    int start;\n",
    "    int end;\n",
    "    Range(): start(-1), end(-1) {}\n",
    "    Range(int start, int end): start(start), end(end) {}\n",
    "    inline int size() const { return end - start; }\n",
    "};"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "767bb0d7",
   "metadata": {},
   "source": [
    "## Example Problem: VGG16 conv(3.2), Modified"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0bdd4c28",
   "metadata": {},
   "source": [
    "Throughout most of this document we use the VGG16 conv(3.2) convolution layer below as the running example. Only in the benchmark section at the end do we use the full VGG16 network [^Simonyan2015]."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "04bd9042",
   "metadata": {},
   "source": [
    "[^Simonyan2015]: Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. *ICLR 2015*. arXiv: [1409.1556](http://arxiv.org/abs/1409.1556)."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "27835f69",
   "metadata": {},
   "source": [
    "- Input image height `IH` $H_\\mathrm{in} = 56$, width `IW` $W_\\mathrm{in} = 56$;\n",
    "- input channel count `IC` $C_\\mathrm{in} = 256$, output channel count `OC` $C_\\mathrm{out} = 256$;\n",
    "- number of images per batch `N` $N = 8$."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cda38059",
   "metadata": {},
   "source": [
    "The derived quantities are\n",
    "\n",
    "- output image height `OH` $H_\\mathrm{out} = 54$, width `OW` $W_\\mathrm{out} = 54$;\n",
    "- the Winograd algorithm makes `TH` $\\tilde H = \\lceil H_\\mathrm{out} / 6 \\rceil = 9$ passes along the height and likewise `TW` $\\tilde W = 9$ along the width. In the present setting no boundary cases for indivisible sizes need to be handled."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e647e285",
   "metadata": {},
   "source": [
    "关键的输入、输出张量是\n",
    "\n",
    "- 输入图像 `image` $d_{i,c,x,y}$，维度 $(N, C_\\mathrm{in}, H_\\mathrm{in}, W_\\mathrm{in})$；\n",
    "- 卷积核 `filtr` $g_{k,c,u,v}$，维度 $(C_\\mathrm{out}, C_\\mathrm{in}, 3, 3)$\n",
    "- 输出图像 `result` $Y_{i,k,x,y}$，维度 $(N, C_\\mathrm{out}, H_\\mathrm{out}, W_\\mathrm{out})$；\n",
    "\n",
    "其中，输入图像与卷积核是输入量、输出图像是输出量。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "7538c9ce",
   "metadata": {},
   "outputs": [],
   "source": [
    "size_t IH = 56, IW = 56, IC = 256, OC = 256, N = 8;\n",
    "size_t OH = IH - 2, OW = IW - 2;\n",
    "size_t TH = ceildiv<size_t>(OH, 6), TW = ceildiv<size_t>(OW, 6);\n",
    "\n",
    "vector<size_t> dim_image  {  N, IC, IH, IW };\n",
    "vector<size_t> dim_filtr  { OC, IC,  3,  3 };\n",
    "vector<size_t> dim_result {  N, OC, OH, OW };\n",
    "\n",
    "size_t size_image  = product<size_t>(dim_image);\n",
    "size_t size_filtr  = product<size_t>(dim_filtr);\n",
    "size_t size_result = product<size_t>(dim_result);"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "89470de1",
   "metadata": {},
   "source": [
    "In the C++ program we deliberately avoid convenience libraries (such as Eigen) and do not store matrices or tensors in std::vector. We simply use `float *`.\n",
    "\n",
    "- The advantage of `float *` is that it is just a plain pointer; if the program is to expose a C-language interface, no conversion from `std::vector<float>` to `float *` is needed.\n",
    "- `float *` is also very convenient for controlling alignment. Aligned vectors usually have an advantage for loads and stores.\n",
    "- The obvious downside of `float *` is that it is unsafe to use, and indexing high-dimensional tensors with it is cumbersome."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "4f3548fb",
   "metadata": {},
   "outputs": [],
   "source": [
    "float * image, * filtr, * result;\n",
    "image  = (float *) aligned_alloc(64, size_image  * sizeof(float));\n",
    "filtr  = (float *) aligned_alloc(64, size_filtr  * sizeof(float));\n",
    "result = (float *) aligned_alloc(64, size_result * sizeof(float));"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c081ee47",
   "metadata": {},
   "source": [
    "We initialize with (pseudo-)random numbers:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "8c387066-1cae-47d6-bc9d-94d9e42e8065",
   "metadata": {},
   "outputs": [],
   "source": [
    "srand(0);\n",
    "rand_vec(image, size_image);\n",
    "rand_vec(filtr, size_filtr);"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c88e5d78",
   "metadata": {},
   "source": [
    "## Naive Direct Convolution"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "59593f92",
   "metadata": {},
   "source": [
    "In the first document we gave the formula and a [minimal Python implementation](cnn_direct.ipynb#卷积网络过程图示、公式与极简-Python-实现) of naive direct convolution, and showed its (poor) performance in the [simplest C++ implementation](cnn_direct.ipynb#真实环境的卷积效率-(2)：最简单的-C/C++-实现).\n",
    "\n",
    "The code here is run only to produce a reference result."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "66b0b870",
   "metadata": {},
   "outputs": [],
   "source": [
    "float * result_ref;\n",
    "result_ref = (float *) aligned_alloc(64, size_result * sizeof(float));\n",
    "for (size_t i = 0; i < size_result; ++i) result_ref[i] = 0;"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "5632709c",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "elapsed time: 71711.4 msec"
     ]
    }
   ],
   "source": [
    "auto start = chrono::steady_clock::now();\n",
    "\n",
    "for (size_t i = 0; i < N; ++i)\n",
    "for (size_t k = 0; k < OC; ++k)\n",
    "for (size_t x = 0; x < OH; ++x)\n",
    "for (size_t y = 0; y < OW; ++y)\n",
    "for (size_t c = 0; c < IC; ++c)\n",
    "for (size_t u = 0; u < 3; ++u)\n",
    "for (size_t v = 0; v < 3; ++v)\n",
    "    result_ref[((i * OC + k) * OH +   x) * OW +   y] += \\\n",
    "    image     [((i * IC + c) * IH + x+u) * IW + y+v] * \\\n",
    "    filtr     [((k * IC + c) *  3 +   u) *  3 +   v];\n",
    "\n",
    "auto end = chrono::steady_clock::now();\n",
    "chrono::duration<double> elapsed_seconds = end - start;\n",
    "cout << \"elapsed time: \" << elapsed_seconds.count() * 1000 << \" msec\";"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0bbc62f3-883f-45e5-b20f-806014ce3699",
   "metadata": {},
   "source": [
    "## Overview of the Winograd $F(6, 3)$ Optimization Strategy"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a9337889-e25d-4ae6-b00e-8bfd12f279b2",
   "metadata": {},
   "source": [
    "### Recap of the Winograd $F(6, 3)$ Algorithm"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "94ef87a1-e48c-4cfd-bdb8-084c5739bd51",
   "metadata": {},
   "source": [
    "First we recap the Winograd $F(6, 3)$ algorithm. The filter size before the transform is $3$, and the tile size after the transform is $\\mu = 6 + 2 = 8$. Throughout what follows, matrices written in italics carry their full sets of indices. In its simplest decomposition, the algorithm can be viewed as\n",
    "\n",
    "1. Input image transform (indices $i, c, \\tilde x, \\tilde y$, where $\\tilde x$ ranges over $\\{0, 6, 12, \\cdots, H_\\mathrm{in} - \\mu = 50\\}$ and $\\tilde y$ over $\\{0, 6, 12, \\cdots, W_\\mathrm{in} - \\mu = 50\\}$; an alternative indexing of the input image tensor is $d^{(\\tilde x, \\tilde y)}_{i,c,t,w} = d_{i,c,\\tilde x+t,\\tilde y+w}$)\n",
    "\n",
    "    $$\n",
    "    V_{i,c,r,s}^{(\\tilde x, \\tilde y)} = \\sum_t^\\mu \\sum_w^\\mu B_{tr} d^{(\\tilde x, \\tilde y)}_{i,c,t,w} B_{ws} \\;\\; \\text{or} \\;\\; \\mathrm{V}_{i,c}^{(\\tilde x, \\tilde y)} = \\mathrm{B}^\\dagger \\mathrm{d}^{(\\tilde x, \\tilde y)}_{i,c} \\mathrm{B}\n",
    "    $$\n",
    "\n",
    "2. Filter transform (indices $k, c$)\n",
    "\n",
    "    $$\n",
    "    U_{k,c,r,s} = \\sum_u^K \\sum_v^K G_{r,u} g_{k,c,u,v} G_{s,v} \\;\\; \\mathrm{or} \\;\\; \\mathrm{U}_{k,c} = \\mathrm{G} \\mathrm{g}_{k,c} \\mathrm{G}^\\dagger\n",
    "    $$\n",
    "    \n",
    "3. Element-wise multiplication (indices $i, k, \\tilde x, \\tilde y, r, s$)\n",
    "\n",
    "    $$\n",
    "    M_{i,k,r,s}^{(\\tilde x, \\tilde y)} = \\sum_c^{C_\\mathrm{in}} U_{k,c,r,s} V_{i,c,r,s}^{(\\tilde x, \\tilde y)} \\;\\; \\mathrm{or} \\;\\; \\mathrm{M}_{i,k}^{(\\tilde x, \\tilde y)} = \\sum_c^{C_\\mathrm{in}} \\mathrm{U}_{k,c} \\odot \\mathrm{V}_{i,c}^{(\\tilde x, \\tilde y)}\n",
    "    $$\n",
    "\n",
    "4. Output image transform\n",
    "\n",
    "    $$\n",
    "    Y_{i,k,\\tilde x+a, \\tilde y+b} = Y_{i,k,a,b}^{(\\tilde x, \\tilde y)} = \\sum_r^\\mu \\sum_s^\\mu A_{r,a} M^{(\\tilde x, \\tilde y)}_{i,k,r,s} A_{s,b} \\;\\; \\mathrm{or} \\;\\; \\mathrm{Y}_{i,k}^{(\\tilde x, \\tilde y)} = \\mathrm{A}^\\dagger \\mathrm{M}^{(\\tilde x, \\tilde y)}_{i,k} \\mathrm{A}\n",
    "    $$"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1728543b-6aa5-4cf6-9498-ae62b6438c95",
   "metadata": {},
   "source": [
    "### Optimization Strategy"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9d193775-549e-4565-a208-d5f0703d35d8",
   "metadata": {},
   "source": [
    "Within the whole convolution, or within the Winograd algorithm, the step with the largest floating-point operation count (FLOPs) should generally be the element-wise multiplication (this ought to be verified, but we assume it here). **Optimizing step 3, the multiplication, is therefore crucial.**"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9f1ee276-bd4e-474f-9e04-e4119b1b7c38",
   "metadata": {},
   "source": [
    "Basic performance optimization falls roughly into two categories: reducing floating-point operations (saving CPU compute time) and reducing memory traffic (saving time spent communicating between CPU cache levels and between cache and memory). For Winograd $F(6,3)$ the number of multiplications is fixed. Improving the multiplication count would require a larger Winograd filter; that is certainly a strategy, but we will soon see its limitations. **We focus on memory-traffic optimization.**"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "98cc373f-a01d-4d03-a287-e187f6eaed33",
   "metadata": {},
   "source": [
    "Note that if we place no size limits on the tensors involved, memory traffic has to go through the slower L3 cache, or even main memory.\n",
    "\n",
    "- The transformed filter $U_{k, c, r, s}$ has dimension $(C_\\mathrm{out}, C_\\mathrm{in}, \\mu, \\mu) = (256, 256, 8, 8)$ and therefore occupies 16 MB (one float being 32 bit or 4 Byte):\n",
    "    \n",
    "    $$\n",
    "    256 \\times 256 \\times 8 \\times 8 \\times 4 \\, \\text{Byte} = 16 \\, \\text{MB}\n",
    "    $$\n",
    "    \n",
    "- The transformed input image $V_{i,c,r,s}^{(\\tilde x, \\tilde y)}$ has dimension $(N, C_\\mathrm{in}, \\mu, \\mu, \\tilde H, \\tilde W) = (8, 256, 8, 8, 9, 9)$ and occupies 40.5 MB."
   ]
  },
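  {
   "cell_type": "markdown",
   "id": "a1f0c2d4",
   "metadata": {},
   "source": [
    "As a sanity check on these two figures, the footprints can be recomputed in a few lines (a minimal sketch; the constants are just the dimensions listed above):\n",
    "\n",
    "```cpp\n",
    "#include <cstdio>\n",
    "\n",
    "int main() {\n",
    "    // transformed filter U: (OC, IC, mu, mu), 4 bytes per float\n",
    "    double u_mb = 256.0 * 256 * 8 * 8 * 4 / (1024 * 1024);\n",
    "    // transformed image V: (N, IC, mu, mu, TH, TW)\n",
    "    double v_mb = 8.0 * 256 * 8 * 8 * 9 * 9 * 4 / (1024 * 1024);\n",
    "    printf(\"U: %.1f MB, V: %.1f MB\\n\", u_mb, v_mb);  // U: 16.0 MB, V: 40.5 MB\n",
    "    return 0;\n",
    "}\n",
    "```"
   ]
  },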
  {
   "cell_type": "markdown",
   "id": "2ff6fef2-9478-48b1-a1c0-285d3859bb4b",
   "metadata": {},
   "source": [
    "Our test machine is an Intel Xeon Gold 6150 (4 sockets), with 36 physical cores and 72 threads in total. On this machine the effective concurrency of our program is the 36 physical cores, not the thread count. Some details (bandwidths measured by Intel Advisor):\n",
    "\n",
    "| Cache or memory | Size | Bandwidth |\n",
    "|:--:|:--:|:--:|\n",
    "| L1 | 32 kB / core | 483 GB / sec·core |\n",
    "| L2 | 1024 kB / core | 228 GB / sec·core |\n",
    "| L3 | 24.75 MB / 18 cores | 213 GB / sec |\n",
    "| DRAM | | 101 GB / sec |"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "003fb957-70a4-4534-99dc-63c027cdf6bd",
   "metadata": {},
   "source": [
    "Note that the L2 bandwidth is per physical core, so its effective throughput advantage over L3 is enormous, and the L2 cache is also fairly large. The L1 cache, by contrast, is so small that exploiting it fully is quite difficult. Hence, among memory-traffic optimizations, blocking for the L2 cache is the first and most important strategy."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "efba0b9c-6aa5-4000-8c7e-7335f3a6640c",
   "metadata": {},
   "source": [
    "Since the transformed filter $U_{k, c, r, s}$ is relatively small and reused many times, we can **process it in batches sized to the L2 cache**.\n",
    "\n",
    "Specifically, let $\\tilde C, \\tilde K$ be the input and output channel counts of the transformed filter $\\mathrm{U}$ handled in a single batch. In our implementation, after some trial and error, an efficient choice is $\\tilde C = 32$ input channels and $\\tilde K = 64$ output channels per batch. The $\\mathrm{U}$ block used in each batch of multiplications then has dimension $(\\tilde C, \\tilde K, \\mu, \\mu) = (32, 64, 8, 8)$, i.e. 512 kB; this indeed stays below the L2 cache size of 1024 kB / core."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d7c80c48-20fb-4406-8635-87ccdb4afa77",
   "metadata": {},
   "source": [
    "To express this more rigorously, define the matrices $\\mathrm{U}_{k,c}^{(\\tilde k, \\tilde c)} = \\mathrm{U}_{\\tilde k + k, \\tilde c + c}$. Here $\\tilde k, \\tilde c$ take values in $\\{0, \\tilde K, 2 \\tilde K, \\cdots, C_\\mathrm{out} - \\tilde K \\}$ and $\\{0, \\tilde C, 2 \\tilde C, \\cdots, C_\\mathrm{in} - \\tilde C \\}$ respectively, indexing the batches; $k \\in [0, \\tilde K), c \\in [0, \\tilde C)$ then index the filter matrices within each batch.\n",
    "\n",
    "With this notation we can state our L2-cache-blocked Winograd algorithm fairly cleanly."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4f90298e-34ef-4bf4-b695-381fca6a167a",
   "metadata": {},
   "source": [
    "![winograd L2 batched algorithms](figures/alg-l2-batch.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a3919926-18e6-4418-9720-4f31edacf66e",
   "metadata": {},
   "source": [
    "The algorithm above looks clumsier than the one stated at the beginning. In the original algorithm, the input and output images are each read or written only once. In the blocked algorithm, the input image is read $C_\\mathrm{out} / \\tilde K = 4$ times, and the output image is written $C_\\mathrm{in} / \\tilde C = 8$ times. This may look like a huge waste of resources, but sacrificing memory traffic in the other steps is worth it in order to implement the most expensive multiplication step efficiently."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0111cf2a-1f0a-498b-bcd7-3a6cd752111f",
   "metadata": {},
   "source": [
    "Lines 4 and 8 of the pseudocode above are deliberately set apart to show the memory footprints of the transformed filter block $\\mathrm{U}^{(\\tilde k, \\tilde c)}_{k, c}$ and the transformed input image block $\\mathrm{V}^{(\\tilde x, \\tilde y, \\tilde c)}_{i, c}$ used in a single multiplication. As noted, $\\mathrm{U}$ occupies 512 kB, below the L2 cache size. We also find that $\\mathrm{V}$, of dimension $(\\tilde C, \\mu, \\mu) \\rightarrow (32, 8, 8)$, occupies 8 kB, which is below the 32 kB L1 cache. So, as a by-product of the L2 blocking, the multiplication step is also somewhat L1-friendly."
   ]
  },
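  {
   "cell_type": "markdown",
   "id": "b3e5d6f8",
   "metadata": {},
   "source": [
    "The per-batch block sizes quoted in lines 4 and 8 can likewise be checked directly (a minimal sketch with $\\tilde K = 64$, $\\tilde C = 32$, $\\mu = 8$ as chosen above):\n",
    "\n",
    "```cpp\n",
    "#include <cstdio>\n",
    "\n",
    "int main() {\n",
    "    const unsigned long mu = 8, KT = 64, CT = 32;       // mu, K-tilde, C-tilde\n",
    "    unsigned long u_kb = KT * CT * mu * mu * 4 / 1024;  // U block per batch -> fits L2\n",
    "    unsigned long v_kb = CT * mu * mu * 4 / 1024;       // V block per tile  -> fits L1\n",
    "    printf(\"U block: %lu kB, V block: %lu kB\\n\", u_kb, v_kb);  // 512 kB, 8 kB\n",
    "    return 0;\n",
    "}\n",
    "```"
   ]
  },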
  {
   "cell_type": "markdown",
   "id": "00ccb638-8f85-4c80-adf5-cb1bdd826deb",
   "metadata": {},
   "source": [
    "## Winograd $F(6,3)$ Building Blocks and Complexity Analysis"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c5a7d92b-9992-43dc-92b6-bf7de250dee5",
   "metadata": {},
   "source": [
    "### Filter-Style Transforms: General Remarks"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e89a30a2-f3cd-4880-88a9-bab44ba82087",
   "metadata": {},
   "source": [
    "Let us briefly revisit the transforms that appear in the pseudocode above:\n",
    "- filter transform: $\\mathrm{U}^{(\\tilde k, \\tilde c)}_{k,c} {}^\\dagger = \\mathrm{G} (\\mathrm{G} \\mathrm{g}^{(\\tilde k, \\tilde c)}_{k,c})^\\dagger$\n",
    "- input image transform: $\\mathrm{V}_{i,c}^{(\\tilde x, \\tilde y, \\tilde c)} {}^\\dagger = \\mathrm{B}^\\dagger (\\mathrm{B}^\\dagger \\mathrm{d}^{(\\tilde x, \\tilde y, \\tilde c)}_{i,c})^\\dagger$\n",
    "- output image transform: $\\mathrm{Y}_{i,k}^{(\\tilde x, \\tilde y)} \\mathrel{+}= \\mathrm{A}^\\dagger (\\mathrm{A}^\\dagger \\mathrm{M}^{(\\tilde x, \\tilde y, \\tilde c)}_{i,k} {}^\\dagger)^\\dagger$"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7a134f58-61df-42a4-b5ba-511edd1220ba",
   "metadata": {},
   "source": [
    "These three transforms share a common pattern: 1) multiply a matrix of fixed known values ($\\mathrm{G}, \\mathrm{B}^\\dagger, \\mathrm{A}^\\dagger$) by a matrix of variable values; 2) transpose the result; 3) multiply by the fixed matrix ($\\mathrm{G}, \\mathrm{B}^\\dagger, \\mathrm{A}^\\dagger$) once more.\n",
    "\n",
    "The subsequent implementation must therefore consider: 1) an efficient matrix transpose; since no matrix here exceeds $(\\mu, \\mu) = (8, 8)$, we only study the 8x8 transpose problem; 2) the products with the fixed matrices ($\\mathrm{G}, \\mathrm{B}^\\dagger, \\mathrm{A}^\\dagger$), which can be hand-written, specialized routines rather than generic matrix multiplication (such as Level 3 BLAS SGEMM), thereby minimizing the floating-point operations actually required."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "12c1c099-a238-49ec-80d4-e35ba0b6c2d4",
   "metadata": {},
   "source": [
    "For convenience, in the following sections we denote the variable-valued matrix uniformly by $\\mathrm{D}$."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1f2a9aa8-c7a3-4e3d-9551-de26adf5184b",
   "metadata": {},
   "source": [
    "### SIMD and Intrinsics"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "054a84d0-c375-4589-a6a3-489713a9c90f",
   "metadata": {},
   "source": [
    "Modern CPUs generally support SIMD (Single Instruction, Multiple Data), a vectorized programming model. On Skylake-class CPUs (which support AVX-512), an ordinary scalar float addition $b + c$ or multiplication $b \\cdot c$ takes one assembly instruction, `addss` or `mulss`. A fused multiply-add $\\boldsymbol{a} \\mathrel{+}= \\boldsymbol{b} \\odot \\boldsymbol{c}$ (FMA) over single-precision vectors of length 16 also takes just one instruction, `vfmadd***ps`. A 16-lane FMA performs 16 multiplications and 16 additions, i.e. 32 operations in total; yet on Skylake, the latency and throughput of one FMA instruction (32 adds and multiplies) are exactly the same as those of a single scalar add or multiply. Used well, SIMD instructions can thus speed computation up by as much as a factor of 32!\n",
    "\n",
    "To do as much work per instruction as possible, all computation in this document is written in terms of `__m512`. `__m512` is a vectorized type: one `__m512` value holds 16 single-precision floats, and its purpose is to let a single instruction operate on many data elements at once. The C++ language itself does not standardize it, but on most x86 CPUs one can program with it at the C/C++ level through intrinsics. The [Intel Intrinsics Guide](https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html) lists the available intrinsics very thoroughly."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c245df35-5ca7-4652-9dd1-fed29483692c",
   "metadata": {},
   "source": [
    "Note that although one intrinsic usually corresponds to one line of assembly (the lowest-level machine instructions), an intrinsic only **suggests** a suitable instruction to the compiler; it does **not require** the compiler to emit the assembly we hoped for. To know exactly how the machine executes a piece of code, it is best to read the assembly. Also, compiler flags such as `-O3` and `-march=native` auto-vectorize ordinary code (code written without intrinsics), so for relatively simple problems, sufficiently clean and efficient code can reach high performance without intrinsics at all."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "494b5749-7888-40b6-a97d-d1a7470c33ff",
   "metadata": {},
   "source": [
    "The assembly instructions that appear in this problem are listed below (latency and throughput figures are Intel's numbers for the Skylake family; compiling with `-S` produces an assembly file `*.s` for detailed inspection):\n",
    "\n",
    "| Instruction | Purpose | Latency | Throughput |\n",
    "|--|--|:--:|:--:|\n",
    "| `vmovaps`, `vmovups` | memory moves | 5~8 | 0.5~1 |\n",
    "| `vaddps`, `vsubps`, `vmulps` | add, subtract, multiply | 4 | 0.5 |\n",
    "| `vfmadd***ps`, `vfmsub***ps` | FMA | 4 | 0.5 |\n",
    "| `vunpcklps`, `vunpckhps`, `vshufps` | lane blending | 1 | 1 |\n",
    "| `vpermi2ps`, `vpermt2ps` | lane permutation | 3 | 1 |"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d6799348-c5dc-4e58-8dbe-b6ab017769d5",
   "metadata": {},
   "source": [
    "The table tells us at least two things (assuming Skylake):\n",
    "\n",
    "1. FMA has the same cost and latency as a plain add, subtract, or multiply; use FMA wherever possible. This can as much as double the arithmetic throughput.\n",
    "2. Memory moves usually cost more than arithmetic; minimize them."
   ]
  },
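  {
   "cell_type": "markdown",
   "id": "c7a9e0b2",
   "metadata": {},
   "source": [
    "As a small illustration of point 1 (a sketch, not part of the contest code): each iteration of the loop below is exactly one fused multiply-add, so with `-O3 -march=native` g++ typically compiles it into `vfmadd***ps` instructions even without intrinsics.\n",
    "\n",
    "```cpp\n",
    "#include <cstdio>\n",
    "\n",
    "// a[i] += b[i] * c[i]: one FMA per element; auto-vectorizes with -O3 -march=native\n",
    "void fma_loop(float *a, const float *b, const float *c, int n) {\n",
    "    for (int i = 0; i < n; ++i)\n",
    "        a[i] += b[i] * c[i];\n",
    "}\n",
    "\n",
    "int main() {\n",
    "    float a[16], b[16], c[16];\n",
    "    for (int i = 0; i < 16; ++i) { a[i] = 1.0f; b[i] = (float) i; c[i] = 2.0f; }\n",
    "    fma_loop(a, b, c, 16);\n",
    "    printf(\"a[3] = %.1f\\n\", a[3]);  // 1 + 3 * 2 = 7.0\n",
    "    return 0;\n",
    "}\n",
    "```"
   ]
  },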
  {
   "cell_type": "markdown",
   "id": "b8833e50-32ba-4ed5-aa9f-9b06f35ef5c1",
   "metadata": {},
   "source": [
    "### An 8x8 Transpose with `__m512`"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a816ed29-ea1e-4905-b208-2412b79640b7",
   "metadata": {},
   "source": [
    "Using the `__m512` vector type raises a problem: one `__m512` vector holds 16 floats, so it cannot be used directly for an 8x8 transpose.\n",
    "\n",
    "Since the number of matrix transposes required in this problem is very large, our solution is to use 8 `__m512` vectors to perform two 8x8 transposes at once.\n",
    "\n",
    "In the function `mm_transpose_8x8` below, the input `row` holds two side-by-side matrices to transpose as `__m512` values, and the output `tr` holds the two side-by-side transposed results. Note that the data in the input vectors `row` is clobbered."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "387a43a3-9b90-4bbd-8a61-435d9c9b3d3e",
   "metadata": {},
   "outputs": [],
   "source": [
    "/// +---+---+     +---+---+\n",
    "/// |   |   | --> | T | T |\n",
    "/// +---+---+     +---+---+\n",
    "inline void mm_transpose_8x8(__m512 row[8], __m512 tr[8]) {\n",
    "    const __m512i i0x20 = _mm512_set_epi32(033, 032, 031, 030, 013, 012, 011, 010, 023, 022, 021, 020, 003, 002, 001, 000);\n",
    "    const __m512i i0x31 = _mm512_set_epi32(037, 036, 035, 034, 017, 016, 015, 014, 027, 026, 025, 024, 007, 006, 005, 004);\n",
    "    tr[0] = _mm512_unpacklo_ps(row[0], row[1]);\n",
    "    tr[1] = _mm512_unpackhi_ps(row[0], row[1]);\n",
    "    tr[2] = _mm512_unpacklo_ps(row[2], row[3]);\n",
    "    tr[3] = _mm512_unpackhi_ps(row[2], row[3]);\n",
    "    tr[4] = _mm512_unpacklo_ps(row[4], row[5]);\n",
    "    tr[5] = _mm512_unpackhi_ps(row[4], row[5]);\n",
    "    tr[6] = _mm512_unpacklo_ps(row[6], row[7]);\n",
    "    tr[7] = _mm512_unpackhi_ps(row[6], row[7]);\n",
    "    row[0] = _mm512_shuffle_ps(tr[0], tr[2], _MM_SHUFFLE(1, 0, 1, 0));\n",
    "    row[1] = _mm512_shuffle_ps(tr[0], tr[2], _MM_SHUFFLE(3, 2, 3, 2));\n",
    "    row[2] = _mm512_shuffle_ps(tr[1], tr[3], _MM_SHUFFLE(1, 0, 1, 0));\n",
    "    row[3] = _mm512_shuffle_ps(tr[1], tr[3], _MM_SHUFFLE(3, 2, 3, 2));\n",
    "    row[4] = _mm512_shuffle_ps(tr[4], tr[6], _MM_SHUFFLE(1, 0, 1, 0));\n",
    "    row[5] = _mm512_shuffle_ps(tr[4], tr[6], _MM_SHUFFLE(3, 2, 3, 2));\n",
    "    row[6] = _mm512_shuffle_ps(tr[5], tr[7], _MM_SHUFFLE(1, 0, 1, 0));\n",
    "    row[7] = _mm512_shuffle_ps(tr[5], tr[7], _MM_SHUFFLE(3, 2, 3, 2));\n",
    "    tr[0] = _mm512_permutex2var_ps(row[0], i0x20, row[4]);\n",
    "    tr[1] = _mm512_permutex2var_ps(row[1], i0x20, row[5]);\n",
    "    tr[2] = _mm512_permutex2var_ps(row[2], i0x20, row[6]);\n",
    "    tr[3] = _mm512_permutex2var_ps(row[3], i0x20, row[7]);\n",
    "    tr[4] = _mm512_permutex2var_ps(row[0], i0x31, row[4]);\n",
    "    tr[5] = _mm512_permutex2var_ps(row[1], i0x31, row[5]);\n",
    "    tr[6] = _mm512_permutex2var_ps(row[2], i0x31, row[6]);\n",
    "    tr[7] = _mm512_permutex2var_ps(row[3], i0x31, row[7]);\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0f64e5c4-4360-42ee-bc22-4702b612b169",
   "metadata": {},
   "source": [
    "The transpose is demonstrated below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "891c20f3-0e5e-42d0-a361-f9d88ffc4829",
   "metadata": {
    "tags": [
     "hide_input"
    ]
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Original Matrix:\n",
      "   0   1   2   3   4   5   6   7  64  65  66  67  68  69  70  71\n",
      "   8   9  10  11  12  13  14  15  72  73  74  75  76  77  78  79\n",
      "  16  17  18  19  20  21  22  23  80  81  82  83  84  85  86  87\n",
      "  24  25  26  27  28  29  30  31  88  89  90  91  92  93  94  95\n",
      "  32  33  34  35  36  37  38  39  96  97  98  99 100 101 102 103\n",
      "  40  41  42  43  44  45  46  47 104 105 106 107 108 109 110 111\n",
      "  48  49  50  51  52  53  54  55 112 113 114 115 116 117 118 119\n",
      "  56  57  58  59  60  61  62  63 120 121 122 123 124 125 126 127\n",
      "\n",
      "Transposed Matrix (8x8):\n",
      "   0   8  16  24  32  40  48  56  64  72  80  88  96 104 112 120\n",
      "   1   9  17  25  33  41  49  57  65  73  81  89  97 105 113 121\n",
      "   2  10  18  26  34  42  50  58  66  74  82  90  98 106 114 122\n",
      "   3  11  19  27  35  43  51  59  67  75  83  91  99 107 115 123\n",
      "   4  12  20  28  36  44  52  60  68  76  84  92 100 108 116 124\n",
      "   5  13  21  29  37  45  53  61  69  77  85  93 101 109 117 125\n",
      "   6  14  22  30  38  46  54  62  70  78  86  94 102 110 118 126\n",
      "   7  15  23  31  39  47  55  63  71  79  87  95 103 111 119 127\n"
     ]
    }
   ],
   "source": [
    "// Initialize intrinsics\n",
    "float a[128], b[16]; __m512 t[8], r[8];\n",
    "for (size_t i = 0; i < 64; ++i) {\n",
    "    a[i / 8 * 16 + i % 8] = i;\n",
    "    a[i / 8 * 16 + i % 8 + 8] = i + 64;\n",
    "}\n",
    "for (size_t i = 0; i < 8; ++i) t[i] = _mm512_loadu_ps(&a[i * 16]);\n",
    "\n",
    "printf(\"Original Matrix:\\n\");\n",
    "for (size_t i = 0; i < 8; ++i) {\n",
    "    _mm512_store_ps(&b[0], t[i]);\n",
    "    for (size_t j = 0; j < 16; ++j) printf(\"%4.0f\", b[j]);\n",
    "    printf(\"\\n\");\n",
    "}\n",
    "\n",
    "// Transform 8x8 for intrinsics (t -> r)\n",
    "mm_transpose_8x8(&t[0], &r[0]);\n",
    "printf(\"\\nTransposed Matrix (8x8):\\n\");\n",
    "for (size_t i = 0; i < 8; ++i) {\n",
    "    _mm512_store_ps(&b[0], r[i]);\n",
    "    for (size_t j = 0; j < 16; ++j) printf(\"%4.0f\", b[j]);\n",
    "    printf(\"\\n\");\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0176442b-84af-41e0-8cd7-4fa88aae0f0e",
   "metadata": {},
   "source": [
    "### Repacking Contiguous Memory into a Computable Lane Layout"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "88ce19f3-64a7-48ce-802d-ae40c5724f75",
   "metadata": {},
   "source": [
    "One more problem arises here. Suppose a matrix has 8x8 elements and we want to add and subtract its row vectors. A `__m512` vector holds 16 floats, so in contiguous memory two 8x8 matrices actually end up stored as two 4x16 blocks of vectors, which is awkward to compute with.\n",
    "\n",
    "To make computation easier, we can repack 2 contiguous 8x8 matrices so that the first matrix sits in the left half of 8 `__m512` vectors and the second in the right half. Vector addition over the two side-by-side 8x8 matrices then becomes straightforward."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "76505f7c-48f1-499c-b77f-d4303e0222f4",
   "metadata": {},
   "source": [
    "Concretely, the conversion from contiguous memory into the compute layout is defined as `mm_transpose_8x16_row2col`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "3c738423-4548-46b7-8d50-f2d1de5dadaa",
   "metadata": {},
   "outputs": [],
   "source": [
    "//   +-------+-------+     +-------+-------+\n",
    "//   +       1       +     +       +       +\n",
    "//   +-------+-------+ --> +   1   +   2   +\n",
    "//   +       2       +     +       +       +\n",
    "//   +-------+-------+     +-------+-------+\n",
    "inline void mm_transpose_8x16_row2col(__m512 row[8], __m512 tr[8]) {\n",
    "    const __m512i ihi = _mm512_set_epi32(027, 026, 025, 024, 023, 022, 021, 020, 007, 006, 005, 004, 003, 002, 001, 000);\n",
    "    const __m512i ilo = _mm512_set_epi32(037, 036, 035, 034, 033, 032, 031, 030, 017, 016, 015, 014, 013, 012, 011, 010);\n",
    "    tr[0] = _mm512_permutex2var_ps(row[0], ihi, row[4]);\n",
    "    tr[1] = _mm512_permutex2var_ps(row[0], ilo, row[4]);\n",
    "    tr[2] = _mm512_permutex2var_ps(row[1], ihi, row[5]);\n",
    "    tr[3] = _mm512_permutex2var_ps(row[1], ilo, row[5]);\n",
    "    tr[4] = _mm512_permutex2var_ps(row[2], ihi, row[6]);\n",
    "    tr[5] = _mm512_permutex2var_ps(row[2], ilo, row[6]);\n",
    "    tr[6] = _mm512_permutex2var_ps(row[3], ihi, row[7]);\n",
    "    tr[7] = _mm512_permutex2var_ps(row[3], ilo, row[7]);\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a0fcebb8-8fce-4d3f-975d-b0accf03d000",
   "metadata": {},
   "source": [
    "Its effect is:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "39ab3716-7a8d-4191-b312-0969baa8e524",
   "metadata": {
    "tags": [
     "hide_input"
    ]
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Original Matrix:\n",
      "   0   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15\n",
      "  16  17  18  19  20  21  22  23  24  25  26  27  28  29  30  31\n",
      "  32  33  34  35  36  37  38  39  40  41  42  43  44  45  46  47\n",
      "  48  49  50  51  52  53  54  55  56  57  58  59  60  61  62  63\n",
      "  64  65  66  67  68  69  70  71  72  73  74  75  76  77  78  79\n",
      "  80  81  82  83  84  85  86  87  88  89  90  91  92  93  94  95\n",
      "  96  97  98  99 100 101 102 103 104 105 106 107 108 109 110 111\n",
      " 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127\n",
      "\n",
      "Transposed Matrix (8x16, row->col):\n",
      "   0   1   2   3   4   5   6   7  64  65  66  67  68  69  70  71\n",
      "   8   9  10  11  12  13  14  15  72  73  74  75  76  77  78  79\n",
      "  16  17  18  19  20  21  22  23  80  81  82  83  84  85  86  87\n",
      "  24  25  26  27  28  29  30  31  88  89  90  91  92  93  94  95\n",
      "  32  33  34  35  36  37  38  39  96  97  98  99 100 101 102 103\n",
      "  40  41  42  43  44  45  46  47 104 105 106 107 108 109 110 111\n",
      "  48  49  50  51  52  53  54  55 112 113 114 115 116 117 118 119\n",
      "  56  57  58  59  60  61  62  63 120 121 122 123 124 125 126 127\n"
     ]
    }
   ],
   "source": [
    "// Initialize intrinsics\n",
    "float a[128], b[16]; __m512 t[8], r[8];\n",
    "for (size_t i = 0; i < 128; ++i) a[i] = i;\n",
    "for (size_t i = 0; i < 8; ++i) t[i] = _mm512_loadu_ps(&a[i * 16]);\n",
    "\n",
    "printf(\"Original Matrix:\\n\");\n",
    "for (size_t i = 0; i < 8; ++i) {\n",
    "    _mm512_store_ps(&b[0], t[i]);\n",
    "    for (size_t j = 0; j < 16; ++j) printf(\"%4.0f\", b[j]);\n",
    "    printf(\"\\n\");\n",
    "}\n",
    "\n",
    "// Transform 8x16 row->col\n",
    "mm_transpose_8x16_row2col(&t[0], &r[0]);\n",
    "printf(\"\\nTransposed Matrix (8x16, row->col):\\n\");\n",
    "for (size_t i = 0; i < 8; ++i) {\n",
    "    _mm512_store_ps(&b[0], r[i]);\n",
    "    for (size_t j = 0; j < 16; ++j) printf(\"%4.0f\", b[j]);\n",
    "    printf(\"\\n\");\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "05758d5a-be0e-43be-9b55-b8cdf86715f0",
   "metadata": {},
   "source": [
    "如果希望从计算空间转换到连续内存，则反其道而行之。函数定义为 `mm_transpose_8x16_col2row`："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "9ac22a9c-ebab-470d-b5a3-34658083d86e",
   "metadata": {},
   "outputs": [],
   "source": [
    "//   +-------+-------+     +-------+-------+\n",
    "//   +       +       +     +       1       +\n",
    "//   +   1   +   2   + --> +-------+-------+\n",
    "//   +       +       +     +       2       +\n",
    "//   +-------+-------+     +-------+-------+\n",
    "inline void mm_transpose_8x16_col2row(__m512 row[8], __m512 tr[8]) {\n",
    "    const __m512i ihi = _mm512_set_epi32(027, 026, 025, 024, 023, 022, 021, 020, 007, 006, 005, 004, 003, 002, 001, 000);\n",
    "    const __m512i ilo = _mm512_set_epi32(037, 036, 035, 034, 033, 032, 031, 030, 017, 016, 015, 014, 013, 012, 011, 010);\n",
    "    tr[0] = _mm512_permutex2var_ps(row[0], ihi, row[1]);\n",
    "    tr[1] = _mm512_permutex2var_ps(row[2], ihi, row[3]);\n",
    "    tr[2] = _mm512_permutex2var_ps(row[4], ihi, row[5]);\n",
    "    tr[3] = _mm512_permutex2var_ps(row[6], ihi, row[7]);\n",
    "    tr[4] = _mm512_permutex2var_ps(row[0], ilo, row[1]);\n",
    "    tr[5] = _mm512_permutex2var_ps(row[2], ilo, row[3]);\n",
    "    tr[6] = _mm512_permutex2var_ps(row[4], ilo, row[5]);\n",
    "    tr[7] = _mm512_permutex2var_ps(row[6], ilo, row[7]);\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4bad54ca-eec4-4378-8e00-792e2c409353",
   "metadata": {},
   "source": [
    "其使用效果是"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "b9505367-a443-444c-91ce-38e5f7132d13",
   "metadata": {
    "tags": [
     "hide_input"
    ]
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "Transposed Matrix (8x16, col->row):\n",
      "   0   1   2   3   4   5   6   7   8   9  10  11  12  13  14  15\n",
      "  16  17  18  19  20  21  22  23  24  25  26  27  28  29  30  31\n",
      "  32  33  34  35  36  37  38  39  40  41  42  43  44  45  46  47\n",
      "  48  49  50  51  52  53  54  55  56  57  58  59  60  61  62  63\n",
      "  64  65  66  67  68  69  70  71  72  73  74  75  76  77  78  79\n",
      "  80  81  82  83  84  85  86  87  88  89  90  91  92  93  94  95\n",
      "  96  97  98  99 100 101 102 103 104 105 106 107 108 109 110 111\n",
      " 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127\n"
     ]
    }
   ],
   "source": [
    "// Transform 8x16 col->row\n",
    "mm_transpose_8x16_col2row(&r[0], &t[0]);\n",
    "printf(\"\\nTransposed Matrix (8x16, col->row):\\n\");\n",
    "for (size_t i = 0; i < 8; ++i) {\n",
    "    _mm512_store_ps(&b[0], t[i]);\n",
    "    for (size_t j = 0; j < 16; ++j) printf(\"%4.0f\", b[j]);\n",
    "    printf(\"\\n\");\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5fbe1e4d-cbd2-44b0-996b-d911ab64d282",
   "metadata": {},
   "source": [
    "### 输入图像变换"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "50ef788e-4c1d-42ca-a5d3-4ea8df379352",
   "metadata": {},
   "source": [
    "输入图像变换过程为\n",
    "\n",
    "$$\n",
    "V^\\dagger = B^\\dagger (B^\\dagger D)^\\dagger\n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "51a34bee-3099-42d1-9b51-161d5b5a6124",
   "metadata": {},
   "source": [
    "这个过程中的关键步骤是 $T = B^\\dagger D$。尽管直接按矩阵乘法实现非常简单，但会产生大量不必要的浮点运算；因此，这类矩阵乘法需要手动展开实现。\n",
    "\n",
    "若记向量 $T_i, D_i$ 分别是矩阵 $T, D$ 的第 $i$ 行，那么\n",
    "\n",
    "$$\n",
    "\\begin{align*}\n",
    "T_0 &= D_0 + 5.25 \\times (D_4 - D_2) - D_6 \\\\\n",
    "T_1 &= \\big( D_2 - 4.25 \\times D_4 + D_6 \\big) + \\big( D_1 - 4.25 \\times D_3 + D_5 \\big) \\\\\n",
    "T_2 &= \\big( D_2 - 4.25 \\times D_4 + D_6 \\big) - \\big( D_1 - 4.25 \\times D_3 + D_5 \\big) \\\\\n",
    "T_3 &= \\big( 0.25 \\times D_2 - 1.25 \\times D_4 + D_6 \\big) + \\big( 0.5 \\times D_1 - 2.5 \\times D_3 + 2 \\times D_5 \\big) \\\\\n",
    "T_4 &= \\big( 0.25 \\times D_2 - 1.25 \\times D_4 + D_6 \\big) - \\big( 0.5 \\times D_1 - 2.5 \\times D_3 + 2 \\times D_5 \\big) \\\\\n",
    "T_5 &= \\big( 4 \\times D_2 - 5 \\times D_4 + D_6 \\big) + \\big( 2 \\times D_1 - 2.5 \\times D_3 + 0.5 \\times D_5 \\big) \\\\\n",
    "T_6 &= \\big( 4 \\times D_2 - 5 \\times D_4 + D_6 \\big) - \\big( 2 \\times D_1 - 2.5 \\times D_3 + 0.5 \\times D_5 \\big) \\\\\n",
    "T_7 &= D_7 + 5.25 \\times (D_3 - D_5) - D_1\n",
    "\\end{align*}\n",
    "$$"
   ]
  },
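  {
   "cell_type": "markdown",
   "id": "check-btd-scalar-0001",
   "metadata": {},
   "source": [
    "上述手工化简的各行表达式，可以用一小段标量程序与显式写出的 $B^\\dagger$ 矩阵作对照核验。以下是一个假设性的草图：其中 `Bt` 矩阵由上面各行公式整理而来，并非正文程序的一部分。\n",
    "\n",
    "```cpp\n",
    "#include <cassert>\n",
    "#include <cmath>\n",
    "\n",
    "int main() {\n",
    "    // B^T of Winograd F(6,3), rows collected from the formulas above\n",
    "    const float Bt[8][8] = {\n",
    "        {1,  0,    -5.25f,  0,      5.25f,  0,     -1, 0},\n",
    "        {0,  1,     1,     -4.25f, -4.25f,  1,      1, 0},\n",
    "        {0, -1,     1,      4.25f, -4.25f, -1,      1, 0},\n",
    "        {0,  0.5f,  0.25f, -2.5f,  -1.25f,  2,      1, 0},\n",
    "        {0, -0.5f,  0.25f,  2.5f,  -1.25f, -2,      1, 0},\n",
    "        {0,  2,     4,     -2.5f,  -5,      0.5f,   1, 0},\n",
    "        {0, -2,     4,      2.5f,  -5,     -0.5f,   1, 0},\n",
    "        {0, -1,     0,      5.25f,  0,     -5.25f,  0, 1}};\n",
    "    float D[8], T[8] = {}, E[8], s0, s1;\n",
    "    for (int i = 0; i < 8; ++i) D[i] = 0.1f * i * i - 0.3f * i + 1.f;\n",
    "    // T = B^T D, explicit matrix-vector product on one column of D\n",
    "    for (int i = 0; i < 8; ++i)\n",
    "        for (int j = 0; j < 8; ++j) T[i] += Bt[i][j] * D[j];\n",
    "    // E = hand-derived expressions from the text\n",
    "    E[0] = D[0] + 5.25f * (D[4] - D[2]) - D[6];\n",
    "    s0 = D[1] - 4.25f * D[3] + D[5]; s1 = D[2] - 4.25f * D[4] + D[6];\n",
    "    E[1] = s0 + s1; E[2] = s1 - s0;\n",
    "    s0 = 0.5f * D[1] - 2.5f * D[3] + 2.f * D[5];\n",
    "    s1 = D[6] + 0.25f * D[2] - 1.25f * D[4];\n",
    "    E[3] = s0 + s1; E[4] = s1 - s0;\n",
    "    s0 = 2.f * D[1] - 2.5f * D[3] + 0.5f * D[5];\n",
    "    s1 = D[6] + 4.f * D[2] - 5.f * D[4];\n",
    "    E[5] = s0 + s1; E[6] = s1 - s0;\n",
    "    E[7] = D[7] - D[1] + 5.25f * (D[3] - D[5]);\n",
    "    for (int i = 0; i < 8; ++i) assert(std::fabs(T[i] - E[i]) < 1e-4f);\n",
    "    return 0;\n",
    "}\n",
    "```"
   ]
  },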
  {
   "cell_type": "markdown",
   "id": "ae64b1ed-57fb-4ecf-b912-6e0aba948f20",
   "metadata": {},
   "source": [
    "程序的实现为 `transform_BtD_6x3`。其运算是通过输入 8 行指令集向量 `D`、输出 8 行指令集向量 `BtD` 实现的。对于 GCC 编译器，其当前版本对指令集的支持程度可能好于 ICC 编译器；对于向量的加、减、乘与 FMA 运算，不需要使用 Intrinsic (譬如 `_mm512_fmadd_ps` 这类又长又难于看懂的指令函数)，编译器也可以正确地编译出高效的 SIMD 汇编。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "439c7ddc-6754-420a-8c25-9ee3126b8636",
   "metadata": {},
   "outputs": [],
   "source": [
    "inline void transform_BtD_6x3(const __m512 D[8], __m512 BtD[8]) {\n",
    "    __m512 s0, s1;\n",
    "    BtD[0] = D[0] + 5.25f * (D[4] - D[2]) - D[6];\n",
    "    s0 = D[1] - 4.25f * D[3] + D[5];\n",
    "    s1 = D[2] - 4.25f * D[4] + D[6];\n",
    "    BtD[1] = s0 + s1;\n",
    "    BtD[2] = s1 - s0;\n",
    "    s0 = 0.5f * D[1] - 2.5f * D[3] + 2.f * D[5];\n",
    "    s1 = D[6] + 0.25f * D[2] - 1.25f * D[4];\n",
    "    BtD[3] = s0 + s1;\n",
    "    BtD[4] = s1 - s0;\n",
    "    s0 = 2.f * D[1] - 2.5f * D[3] + 0.5f * D[5];\n",
    "    s1 = D[6] + 4.f * D[2] - 5.f * D[4];\n",
    "    BtD[5] = s0 + s1;\n",
    "    BtD[6] = s1 - s0;\n",
    "    BtD[7] = D[7] - D[1] + 5.25f * (D[3] - D[5]);\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7e541e16-bb7d-4d70-9243-543ad6644aa0",
   "metadata": {},
   "source": [
    "在实际的程序实现中，我们需要一次性对两个输入图像矩阵进行变换。这两个矩阵未必需要处在连续的内存空间中，但输出最好写入连续的内存。\n",
    "\n",
    "下述程序 `perform_image_transform` 将两个 8x8 输入图像矩阵 `im1`, `im2` 代入，并输出到连续内存空间的 8x16 指令集向量 `v`。该函数同时需要传入输入图像矩阵的行间跨度 (leading dimension) `IW`，即 $W_\\mathrm{in}$。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "64d2b3ec-dea5-409c-a093-946ae8b83f60",
   "metadata": {},
   "outputs": [],
   "source": [
    "inline void perform_image_transform(const float im1[], const float im2[], __m512 v[], int IW) {\n",
    "    __m512 zmm_a[8], zmm_b[8];\n",
    "    // Initialize __m512 of two input images\n",
    "    for (int i = 0; i < 8; ++i) {\n",
    "        zmm_b[i] = _mm512_loadu_ps(&im1[i * IW]);\n",
    "        zmm_b[i] = _mm512_insertf32x8(zmm_b[i], _mm256_loadu_ps(&im2[i * IW]), 1);\n",
    "    }\n",
    "    // Perform B.T @ D\n",
    "    transform_BtD_6x3(zmm_b, zmm_a);\n",
    "    // Perform (B.T @ D).T\n",
    "    mm_transpose_8x8(zmm_a, zmm_b);\n",
    "    // Perform V.T = B.T @ (B.T @ D).T\n",
    "    transform_BtD_6x3(zmm_b, zmm_a);\n",
    "    // Consequential memory for two transformed images\n",
    "    mm_transpose_8x16_col2row(zmm_a, v);\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5ebd913e-9e77-4346-b16c-d992c8096803",
   "metadata": {},
   "source": [
    "### 卷积核变换"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e06f70b1-c0fc-4d43-b03e-9e8c7072d369",
   "metadata": {},
   "source": [
    "卷积核变换过程为\n",
    "\n",
    "$$\n",
    "U^\\dagger = G (G D)^\\dagger\n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9a044f6e-75b4-4986-82e4-480b2eb27adc",
   "metadata": {},
   "source": [
    "这个过程中关键的步骤是 $T = GD$。该过程涉及到的输入 $D$ 或者 $(GD)^\\dagger$ 是 3 行的矩阵，但输出是 8 行的矩阵。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bf01799e-1acf-4c6c-a374-7ddb34925542",
   "metadata": {},
   "source": [
    "$$\n",
    "\\begin{align*}\n",
    "T_0 &= D_0 \\\\\n",
    "T_1 &= -2/9 \\times (D_0 + D_2) - 2/9 \\times D_1 \\\\\n",
    "T_2 &= -2/9 \\times (D_0 + D_2) + 2/9 \\times D_1 \\\\\n",
    "T_3 &= (1/90 \\times D_0 + 2/45 \\times D_2) + 1/45 \\times D_1 \\\\\n",
    "T_4 &= (1/90 \\times D_0 + 2/45 \\times D_2) - 1/45 \\times D_1 \\\\\n",
    "T_5 &= (32/45 \\times D_0 + 8/45 \\times D_2) + 16/45 \\times D_1 \\\\\n",
    "T_6 &= (32/45 \\times D_0 + 8/45 \\times D_2) - 16/45 \\times D_1 \\\\\n",
    "T_7 &= D_2\n",
    "\\end{align*}\n",
    "$$"
   ]
  },
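  {
   "cell_type": "markdown",
   "id": "check-gd-scalar-0001",
   "metadata": {},
   "source": [
    "同样地，上述各行表达式可以与显式写出的 $G$ 矩阵作标量对照核验 (假设性的草图，`G` 矩阵由公式整理而来，并非正文程序的一部分)：\n",
    "\n",
    "```cpp\n",
    "#include <cassert>\n",
    "#include <cmath>\n",
    "\n",
    "int main() {\n",
    "    // G of Winograd F(6,3), rows collected from the formulas above\n",
    "    const double G[8][3] = {\n",
    "        {1, 0, 0},\n",
    "        {-2./9, -2./9, -2./9},\n",
    "        {-2./9,  2./9, -2./9},\n",
    "        {1./90,  1./45, 2./45},\n",
    "        {1./90, -1./45, 2./45},\n",
    "        {32./45,  16./45, 8./45},\n",
    "        {32./45, -16./45, 8./45},\n",
    "        {0, 0, 1}};\n",
    "    double D[3] = {0.7, -1.3, 0.4}, T[8] = {};\n",
    "    for (int i = 0; i < 8; ++i)\n",
    "        for (int j = 0; j < 3; ++j) T[i] += G[i][j] * D[j];\n",
    "    // spot-check a few rows against the hand-derived expressions\n",
    "    assert(std::fabs(T[1] - (-2./9 * (D[0] + D[2]) - 2./9 * D[1])) < 1e-12);\n",
    "    assert(std::fabs(T[3] - (1./90 * D[0] + 2./45 * D[2] + 1./45 * D[1])) < 1e-12);\n",
    "    assert(std::fabs(T[5] - (32./45 * D[0] + 8./45 * D[2] + 16./45 * D[1])) < 1e-12);\n",
    "    return 0;\n",
    "}\n",
    "```"
   ]
  },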
  {
   "cell_type": "markdown",
   "id": "7e7ba120-cd50-4580-a3b7-ed4c01d85c0d",
   "metadata": {},
   "source": [
    "程序的实现为 `transform_GD_6x3`。其运算是通过输入 3 行指令集向量 `D`、输出 8 行指令集向量 `GD` 实现的。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "fca5748f-8226-45b6-a179-0e93d05cbe1b",
   "metadata": {},
   "outputs": [],
   "source": [
    "inline void transform_GD_6x3(const __m512 D[3], __m512 GD[8]) {\n",
    "    __m512 s0, s1;\n",
    "    GD[0] = D[0];\n",
    "    GD[7] = D[2];\n",
    "    s0 = -2.f/9.f * (D[0] + D[2]);\n",
    "    s1 = -2.f/9.f * D[1];\n",
    "    GD[1] = s0 + s1;\n",
    "    GD[2] = s0 - s1;\n",
    "    s0 = 1.f/90.f * D[0] + 2.f/45.f * D[2];\n",
    "    s1 = 1.f/45.f * D[1];\n",
    "    GD[3] = s0 + s1;\n",
    "    GD[4] = s0 - s1;\n",
    "    s0 = 32.f/45.f * D[0] + 8.f/45.f * D[2];\n",
    "    s1 = 16.f/45.f * D[1];\n",
    "    GD[5] = s0 + s1;\n",
    "    GD[6] = s0 - s1;\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3b95f110-00d4-43d7-a0bd-0abef32a4469",
   "metadata": {},
   "source": [
    "下述程序 `perform_filter_transform` 将两个 3x3 卷积核矩阵 `f[0:9], f[9:18]` 代入 (这里要求两个卷积核内存连续)，并输出到连续内存空间的 8x16 指令集向量 `zmm_u`。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "ffcfd120-d3b0-48a7-857e-988cd51ffa41",
   "metadata": {},
   "outputs": [],
   "source": [
    "inline void perform_filter_transform(const float f[], __m512 zmm_u[]) {\n",
    "    // Initialize __m512 of two input filters (convolution kernels)\n",
    "    __m512 zmm_a[8], zmm_b[8];\n",
    "    for (int i = 0; i < 3; ++i) {\n",
    "        zmm_b[i] = _mm512_loadu_ps(&f[3 * i]);\n",
    "        zmm_b[i] = _mm512_insertf32x8(zmm_b[i], _mm256_loadu_ps(&f[9 + 3 * i]), 1);\n",
    "    }\n",
    "    // Perform G @ D\n",
    "    transform_GD_6x3(zmm_b, zmm_a);\n",
    "    // Perform (G @ D).T\n",
    "    mm_transpose_8x8(zmm_a, zmm_b);\n",
    "    // Perform U.T = G @ (G @ D).T\n",
    "    transform_GD_6x3(zmm_b, zmm_a);\n",
    "    // Consequential memory for two transformed filters\n",
    "    mm_transpose_8x16_col2row(zmm_a, zmm_u);\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "364b7d85-9478-466e-96bc-ca3e71bd8c6e",
   "metadata": {},
   "source": [
    "### 输出图像变换"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f0a8d093-d25a-4ae0-916e-21727af0e50a",
   "metadata": {},
   "source": [
    "输出图像变换过程比较类似于输入图像变换：\n",
    "\n",
    "$$\n",
    "Y = A^\\dagger (A^\\dagger D)^\\dagger\n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "59aadfc1-3c2c-47d2-aad6-2b4d605915b3",
   "metadata": {},
   "source": [
    "这个过程中关键的步骤是 $T = A^\\dagger D$。该过程涉及到的输入 $D$ 或者 $(A^\\dagger D)^\\dagger$ 是 8 行的矩阵，输出是 6 行的矩阵。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "644609db-a27e-48df-a7c1-9955abb7e3c9",
   "metadata": {},
   "source": [
    "$$\n",
    "\\begin{align*}\n",
    "T_0 &= (D_1 + D_2) + (D_3 + D_4) + (D_5 + D_6) + D_0 \\\\\n",
    "T_2 &= (D_1 + D_2) + 4 \\times (D_3 + D_4) + \\frac{1}{4} \\times (D_5 + D_6) \\\\\n",
    "T_4 &= (D_1 + D_2) + 16 \\times (D_3 + D_4) + \\frac{1}{16} \\times (D_5 + D_6) \\\\\n",
    "T_1 &= (D_1 - D_2) + 2 \\times (D_3 - D_4) + \\frac{1}{2} \\times (D_5 - D_6) \\\\\n",
    "T_3 &= (D_1 - D_2) + 8 \\times (D_3 - D_4) + \\frac{1}{8} (D_5 - D_6) \\\\\n",
    "T_5 &= (D_1 - D_2) + 32 \\times (D_3 - D_4) + \\frac{1}{32} (D_5 - D_6) + D_7\n",
    "\\end{align*}\n",
    "$$"
   ]
  },
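  {
   "cell_type": "markdown",
   "id": "check-f6x3-1d-0001",
   "metadata": {},
   "source": [
    "至此三个变换矩阵都已出现。作为补充，可以用一段一维的标量程序整体核验 $F(6,3)$ 的正确性：对 8 个输入与 3 个卷积核元素，$y = A^\\dagger [(Gg) \\odot (B^\\dagger d)]$ 应当与直接一维卷积的 6 个输出一致。以下是假设性的草图，矩阵均由正文公式整理而来：\n",
    "\n",
    "```cpp\n",
    "#include <cassert>\n",
    "#include <cmath>\n",
    "\n",
    "int main() {\n",
    "    // The three F(6,3) matrices, collected from the formulas in the text\n",
    "    const double BT[8][8] = {\n",
    "        {1,  0,   -5.25,  0,     5.25,  0,    -1, 0},\n",
    "        {0,  1,    1,    -4.25, -4.25,  1,     1, 0},\n",
    "        {0, -1,    1,     4.25, -4.25, -1,     1, 0},\n",
    "        {0,  0.5,  0.25, -2.5,  -1.25,  2,     1, 0},\n",
    "        {0, -0.5,  0.25,  2.5,  -1.25, -2,     1, 0},\n",
    "        {0,  2,    4,    -2.5,  -5,     0.5,   1, 0},\n",
    "        {0, -2,    4,     2.5,  -5,    -0.5,   1, 0},\n",
    "        {0, -1,    0,     5.25,  0,    -5.25,  0, 1}};\n",
    "    const double G[8][3] = {\n",
    "        {1, 0, 0}, {-2./9, -2./9, -2./9}, {-2./9, 2./9, -2./9},\n",
    "        {1./90, 1./45, 2./45}, {1./90, -1./45, 2./45},\n",
    "        {32./45, 16./45, 8./45}, {32./45, -16./45, 8./45}, {0, 0, 1}};\n",
    "    const double AT[6][8] = {\n",
    "        {1, 1,  1,  1,   1,  1,        1,        0},\n",
    "        {0, 1, -1,  2,  -2,  0.5,     -0.5,      0},\n",
    "        {0, 1,  1,  4,   4,  0.25,     0.25,     0},\n",
    "        {0, 1, -1,  8,  -8,  0.125,   -0.125,    0},\n",
    "        {0, 1,  1, 16,  16,  0.0625,   0.0625,   0},\n",
    "        {0, 1, -1, 32, -32,  0.03125, -0.03125,  1}};\n",
    "    double d[8], g[3] = {0.3, -0.7, 1.1};\n",
    "    for (int i = 0; i < 8; ++i) d[i] = 0.2 * i - 0.05 * i * i + 0.4;\n",
    "    double V[8] = {}, U[8] = {}, M[8], y[6] = {};\n",
    "    for (int i = 0; i < 8; ++i)\n",
    "        for (int j = 0; j < 8; ++j) V[i] += BT[i][j] * d[j];   // V = B^T d\n",
    "    for (int i = 0; i < 8; ++i)\n",
    "        for (int j = 0; j < 3; ++j) U[i] += G[i][j] * g[j];    // U = G g\n",
    "    for (int i = 0; i < 8; ++i) M[i] = U[i] * V[i];            // elementwise\n",
    "    for (int i = 0; i < 6; ++i)\n",
    "        for (int j = 0; j < 8; ++j) y[i] += AT[i][j] * M[j];   // y = A^T M\n",
    "    for (int i = 0; i < 6; ++i) {\n",
    "        double direct = d[i] * g[0] + d[i + 1] * g[1] + d[i + 2] * g[2];\n",
    "        assert(std::fabs(y[i] - direct) < 1e-9);\n",
    "    }\n",
    "    return 0;\n",
    "}\n",
    "```"
   ]
  },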
  {
   "cell_type": "markdown",
   "id": "d9dbfeb5-dbae-4c4c-9f99-edc5e4947462",
   "metadata": {},
   "source": [
    "程序的实现为 `transform_AtD_6x3`。其运算是通过输入 8 行指令集向量 `D`、输出 6 行指令集向量 `AtD` 实现的。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "id": "fff82f20-267f-42dc-a3a2-1ee2c7587492",
   "metadata": {},
   "outputs": [],
   "source": [
    "inline void transform_AtD_6x3(const __m512 D[8], __m512 AtD[6]) {\n",
    "    __m512 s0, s1, s2;\n",
    "    s0 = D[1] + D[2];\n",
    "    s1 = D[3] + D[4];\n",
    "    s2 = D[5] + D[6];\n",
    "    AtD[0] = s0 + s1 + s2 + D[0];\n",
    "    AtD[2] = s0 + 4.f * s1 + 0.25f * s2;\n",
    "    AtD[4] = s0 + 16.f * s1 + 0.0625f * s2;\n",
    "    s0 = D[1] - D[2];\n",
    "    s1 = D[3] - D[4];\n",
    "    s2 = D[5] - D[6];\n",
    "    AtD[1] = s0 + 2.f * s1 + 0.5f * s2;\n",
    "    AtD[3] = s0 + 8.f * s1 + 0.125f * s2;\n",
    "    AtD[5] = s0 + 32.f * s1 + 0.03125f * s2 + D[7];\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6d5533b6-c2d0-4cd2-8927-5602a12d9cd0",
   "metadata": {},
   "source": [
    "下述程序 `perform_store_transform` 将两个横向并列的矩阵 $M^\\dagger$ 通过向量 `zmm_m` 代入，随后将结果累加写入到两个输出图像的指针 `r1, r2` 上。需要注意，输出的图像是 6x6 矩阵，因此不能简单地直接使用 `_mm256_storeu_ps` 将 8 个浮点数写入内存 (会将两个无效数据误写入可能存放有意义数据的位置上)，而是要使用带遮罩的向量存储方式。"
   ]
  },
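  {
   "cell_type": "markdown",
   "id": "mask-store-demo-0001",
   "metadata": {},
   "source": [
    "遮罩 `mask = 63` 即二进制 `0b00111111`，只允许低 6 个通道写入内存。其语义可以用标量程序模拟如下 (假设性的草图，仅用于说明遮罩行为，并非 intrinsic 本身)：\n",
    "\n",
    "```cpp\n",
    "#include <cassert>\n",
    "\n",
    "// Scalar emulation of a masked store: lane i is written only if bit i of mask is set\n",
    "void mask_storeu_emulated(float *dst, unsigned char mask, const float src[8]) {\n",
    "    for (int i = 0; i < 8; ++i)\n",
    "        if (mask >> i & 1) dst[i] = src[i];\n",
    "}\n",
    "\n",
    "int main() {\n",
    "    float dst[8] = {9, 9, 9, 9, 9, 9, 9, 9};\n",
    "    float src[8] = {0, 1, 2, 3, 4, 5, 6, 7};\n",
    "    mask_storeu_emulated(dst, 63, src);  // 63 = 0b00111111\n",
    "    for (int i = 0; i < 6; ++i) assert(dst[i] == float(i));  // low 6 lanes written\n",
    "    for (int i = 6; i < 8; ++i) assert(dst[i] == 9.f);       // high 2 lanes untouched\n",
    "    return 0;\n",
    "}\n",
    "```"
   ]
  },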
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "9c8ed938-9bba-4dd9-9fca-1470ff0fb266",
   "metadata": {},
   "outputs": [],
   "source": [
    "inline void perform_store_transform(__m512 zmm_m[8], float *r1, float *r2, int OW) {\n",
    "    __m512 zmm_a[8], zmm_b[8];\n",
    "    unsigned char mask = 63;\n",
    "    mm_transpose_8x16_row2col(zmm_m, zmm_b);\n",
    "    transform_AtD_6x3(zmm_b, zmm_a);\n",
    "    mm_transpose_8x8(zmm_a, zmm_b);\n",
    "    transform_AtD_6x3(zmm_b, zmm_a);\n",
    "    for (int i = 0; i < 6; ++i) {\n",
    "        _mm256_mask_storeu_ps(&r1[i * OW], mask, _mm512_extractf32x8_ps(zmm_a[i], 0) + _mm256_loadu_ps(&r1[i * OW]));\n",
    "        _mm256_mask_storeu_ps(&r2[i * OW], mask, _mm512_extractf32x8_ps(zmm_a[i], 1) + _mm256_loadu_ps(&r2[i * OW]));\n",
    "    }\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "63351655-cb14-4a21-a561-8429d1d6445f",
   "metadata": {},
   "source": [
    "### 数乘"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "be9dd936-cb01-459b-9e06-711e073b05c8",
   "metadata": {},
   "source": [
    "我们先回顾到数乘的公式表达是\n",
    "\n",
    "$$\n",
    "\\mathrm{M}_{i,k}^{(\\tilde x, \\tilde y, \\tilde c)} {}^\\dagger = \\sum_c^{\\tilde C} \\mathrm{U}^{(\\tilde k, \\tilde c)}_{k,c} {}^\\dagger \\odot \\mathrm{V}_{i,c}^{(\\tilde x, \\tilde y, \\tilde c)} {}^\\dagger\n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "53cb78ee-50b4-4555-a1c3-3fdc62df89bd",
   "metadata": {},
   "source": [
    "回顾到之前的伪代码，其中的 $i, \\tilde x, \\tilde y, \\tilde k, \\tilde c$ 是已经确定的数值；那么上述角标复杂的表达式可以简化为\n",
    "\n",
    "$$\n",
    "\\mathrm{M}_{k} {}^\\dagger = \\sum_c^{\\tilde C} \\mathrm{U}_{k,c} {}^\\dagger \\odot \\mathrm{V}_{c} {}^\\dagger\n",
    "$$"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b0165b94-cc49-4d2a-bf97-670af25b6ac5",
   "metadata": {},
   "source": [
    "该过程还需要角标 $k$ 的参与。下述的程序 `perform_mult` 中，\n",
    "\n",
    "- `u1`, `u2` 分别是两个 $k$ 取值下的 $\\mathrm{U}_{k,c}^\\dagger$ 或者写为张量元素的形式 $U_{k,c,s,r}$ (维度 $(c,s,r) \\rightarrow (\\tilde C, \\mu, \\mu)$)；\n",
    "- `v` 就是 $\\mathrm{V}_{c}^\\dagger$ 张量，维度与 `u1` 或 `u2` 相同；\n",
    "- `sizeIC` 是输入通道数的分割大小 $\\tilde C$；\n",
    "- `zmm_m` 是两个 $k$ 取值下的 $\\mathrm{M}_{k} {}^\\dagger$，维度是 $(2, \\mu, \\mu)$ 即 $(2, 8, 8)$；这两个 8x8 矩阵分别储存在两个 4x16 的指令集向量中。\n",
    "\n",
    "执行 `perform_mult` 程序时，还需要对 $k$ 进行循环。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "id": "cfee1c4b-547c-4b3c-aa54-bce82fe4eabf",
   "metadata": {},
   "outputs": [],
   "source": [
    "inline void perform_mult(const __m512 u1[], const __m512 u2[], const __m512 v[], __m512 zmm_m[8], int sizeIC) {\n",
    "    for (int i = 0; i < 8; ++i)\n",
    "        zmm_m[i] = _mm512_setzero_ps();\n",
    "    for (int ci = 0; ci < sizeIC; ++ci) {\n",
    "        for (int i = 0; i < 4; ++i) {\n",
    "            zmm_m[i] += u1[0] * v[0];\n",
    "            zmm_m[i + 4] += u2[0] * v[0];\n",
    "            ++u1; ++u2; ++v;\n",
    "        }\n",
    "    }\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "adca35e6-e7cd-493b-b1dd-957552c09414",
   "metadata": {},
   "source": [
    "## Winograd $F(6,3)$ 总程序"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d770e639-00de-417f-8876-b09471f31fca",
   "metadata": {},
   "source": [
    "有了所有单步程序后，我们可以回归到伪代码的运算过程。整体的 Winograd 伪代码难以再分割出子过程，因此下面用一段完整的代码实现最后统合的一步。\n",
    "\n",
    "对程序作一些补充说明：\n",
    "\n",
    "- 变量 `hintOC`, `hintIC` 分别是输出通道数的分割大小 $\\tilde K$ 与输入通道数分割大小 $\\tilde C$ 的初始设定值。这可以作为程序的常值超参数进行调整，用于确定哪一种分割会使程序运行得更快。但之所以说是“初始设定值”，是因为有可能会遇到输出通道数为 96、输入通道数为 3 等特殊情况。具体分割时需要考虑到边界情况。\n",
    "- 所有子程序都需要通过强制 inline 嵌入到 `winconv` 主程序中；否则效率可能会受到严重影响。\n",
    "- 程序最一开始需要对输出图像置零；这一步在 $C_\\mathrm{in}$ 较小时是相对耗时的。可以通过并行置零的方式提升效率，实测会比 `memset` 等做法要快很多。\n",
    "- 变量 `startOC`, `startIC` 分别是 $\\tilde k, \\tilde c$；变量 `x_`, `y_` 分别是 $\\tilde x, \\tilde y$。\n",
    "- 下述程序的 `//` 注释中的 `line` 是指上面伪代码的行号。\n",
    "- 下述程序尚没有对 $C_\\mathrm{in}$ 为奇数、$H_\\mathrm{in}, W_\\mathrm{in}$ 模 6 不余 2 的边界情况作实现。关于这些边界情况，请参考实际程序 [winograd_f6x3.cpp](winograd_f6x3.cpp) 的做法。"
   ]
  },
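  {
   "cell_type": "markdown",
   "id": "range-ceildiv-sketch-0001",
   "metadata": {},
   "source": [
    "下述 `winconv` 程序用到了辅助函数 `ceildiv` 与辅助结构 `Range`，正文中未给出其定义。下面是一种与其用法相吻合的最简写法 (假设性的草图，实际定义请以 [winograd_f6x3.cpp](winograd_f6x3.cpp) 为准)：\n",
    "\n",
    "```cpp\n",
    "#include <cassert>\n",
    "\n",
    "// Minimal helpers consistent with how winconv uses them (assumed sketch)\n",
    "inline int ceildiv(int a, int b) { return (a + b - 1) / b; }\n",
    "inline int min(int a, int b) { return a < b ? a : b; }\n",
    "\n",
    "struct Range {\n",
    "    int start, end;  // half-open interval [start, end)\n",
    "    Range(int s, int e) : start(s), end(e) {}\n",
    "    int size() const { return end - start; }\n",
    "};\n",
    "\n",
    "int main() {\n",
    "    assert(ceildiv(222, 6) == 37);    // e.g. tiles along OH = 224 - 2\n",
    "    Range rOC(64, min(64 + 64, 96));  // boundary slice when OC = 96, sliceOC = 64\n",
    "    assert(rOC.size() == 32);\n",
    "    return 0;\n",
    "}\n",
    "```"
   ]
  },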
  {
   "cell_type": "code",
   "execution_count": 26,
   "id": "41f28713-270a-4b41-bfe5-524f70c59f0d",
   "metadata": {},
   "outputs": [],
   "source": [
    "constexpr int hintOC = 64;\n",
    "constexpr int hintIC = 32;"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "id": "bdaa342b-8493-4c20-a142-732c597c8341",
   "metadata": {},
   "outputs": [],
   "source": [
    "void winconv(const float *__restrict__ image, const int IH,\n",
    "             const int IW, const int IC, const float *__restrict__ filter,\n",
    "             const int OC, const int N, float *__restrict__ result) {\n",
    "    const int OH = IH - 2;\n",
    "    const int OW = IW - 2;\n",
    "    const int TH = ceildiv(OH, 6);\n",
    "    const int TW = ceildiv(OW, 6);\n",
    "\n",
    "    const int sliceOC = min(OC, hintOC);\n",
    "    const int sliceIC = min(IC, hintIC);\n",
    "\n",
    "    const int size_result = N * OC * OH * OW;\n",
    "    \n",
    "    // line 1: zero initialize output image Y\n",
    "#pragma omp parallel\n",
    "#pragma omp for simd aligned(result) schedule(static)\n",
    "    for (int i = 0; i < size_result; ++i) result[i] = 0;\n",
    "    \n",
    "    // line 2: for tilde(k), tilde(c)\n",
    "    for (int startOC = 0; startOC < OC; startOC += sliceOC) {\n",
    "        Range rOC(startOC, min(startOC + sliceOC, OC));\n",
    "        for (int startIC = 0; startIC < IC; startIC += sliceIC) {\n",
    "            Range rIC(startIC, min(startIC + sliceIC, IC));\n",
    "\n",
    "// line 3: declare parallel for every CPU cores\n",
    "#pragma omp parallel default(shared)\n",
    "            {\n",
    "                // line 4, 8: allocate U in L2-cache, V in L1-cache\n",
    "                __m512 V[rIC.size() * 4];\n",
    "                __m512 U[rOC.size() * rIC.size() * 4];\n",
    "                \n",
    "                // line 5: compute U, transformation of convolutional kernel\n",
    "                for (int k = rOC.start; k < rOC.end; ++k) {\n",
    "                    for (int c = rIC.start; c < rIC.end; c += 2) {\n",
    "                        int ki = k - rOC.start, ci = c - rIC.start;\n",
    "                        const float *f = &filter[(k * IC + c) * 9];\n",
    "                        __m512 *u = &U[(ki * rIC.size() + ci) * 4];\n",
    "                        perform_filter_transform(f, u);\n",
    "                    }\n",
    "                }\n",
    "\n",
    "// line 6: embarrassingly parallel following for loop\n",
    "//         adding `collapse` directive if N is not comparable to available number of CPU cores\n",
    "#pragma omp for schedule(static)\n",
    "                // line 7: for i, tilde(x), tilde(y)\n",
    "                for (int i = 0; i < N; ++i) {\n",
    "                    for (int x_ = 0; x_ < TH; ++x_) {\n",
    "                        for (int y_ = 0; y_ < TW; ++y_) {\n",
    "                            int x = x_ * 6, y = y_ * 6;\n",
    "                            \n",
    "                            // line 9: compute V, transformation of input image\n",
    "                            for (int c = rIC.start; c < rIC.end; c += 2) {\n",
    "                                int ci = c - rIC.start;\n",
    "                                const float *im1 = &image[((i * IC + c) * IH + x) * IW + y];\n",
    "                                const float *im2 = &im1[IW * IH];\n",
    "                                __m512 *zmm_v = &V[ci * 4];\n",
    "                                perform_image_transform(im1, im2, zmm_v, IW);\n",
    "                            }\n",
    "                            \n",
    "                            // line 10: compute M, perform multiplication\n",
    "                            // line 11: compute Y, transformation of output image\n",
    "                            for (int k = rOC.start; k < rOC.end; k += 2) {\n",
    "                                int ki = k - rOC.start;\n",
    "                                __m512 *zmm_u1 = &U[ki * rIC.size() * 4], *zmm_u2 = &zmm_u1[rIC.size() * 4];\n",
    "                                __m512 *zmm_v = V;\n",
    "                                __m512 zmm_m[8];\n",
    "                                perform_mult(zmm_u1, zmm_u2, zmm_v, zmm_m, rIC.size());\n",
    "                                float *r1 = &result[((i * OC + k) * OH + x) * OW + y];\n",
    "                                float *r2 = &r1[OH * OW];\n",
    "                                perform_store_transform(zmm_m, r1, r2, OW);\n",
    "                            }\n",
    "                        }\n",
    "                    }\n",
    "                }\n",
    "            }\n",
    "        }\n",
    "    }\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f2750585-f210-4d74-ad9b-aba90b19c402",
   "metadata": {},
   "source": [
    "我们可以用下述程序检验运行是否正确，并考察效率。尽管当前的 Jupyter xeus-cling 引擎未必能达到很高效率，但我们可以看见程序运行时间从 Naive Direct 的大约 70 sec 锐减到大约 100 ms，**效率提升可以达到约 700 倍**。要注意到若只考虑 Winograd 算法相对 Direct 算法的算术运算数的减少量，对于 Winograd $F(6,3)$ 也不可能超过 5 倍。这确实能体现基于 L2 缓存的程序效率优化的重要意义了。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "id": "48bbae7c-ac18-4f27-9d8f-3684990c3735",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "elapsed time: 108.482 msec"
     ]
    }
   ],
   "source": [
    "auto start = chrono::steady_clock::now();\n",
    "\n",
    "winconv(image, IH, IW, IC, filtr, OC, N, result);\n",
    "\n",
    "auto end = chrono::steady_clock::now();\n",
    "chrono::duration<double> elapsed_seconds = end - start;\n",
    "cout << \"elapsed time: \" << elapsed_seconds.count() * 1000 << \" msec\";"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "id": "d3413f5d-42f4-40ba-a14d-85a9199c6230",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "true"
      ]
     },
     "execution_count": 29,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "allclose(result, result_ref, size_result)"
   ]
  },
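  {
   "cell_type": "markdown",
   "id": "mult-reduction-ratio-0001",
   "metadata": {},
   "source": [
    "其中“约 5 倍”的上限可以简单算出：对每个 6x6 输出 tile 与每对输入/输出通道，直接卷积需要 $6 \\times 6 \\times 3 \\times 3 = 324$ 次乘法，而 Winograd 的数乘部分只需 $8 \\times 8 = 64$ 次，比值约 5.06；再计入变换部分的运算后，实际的算术运算数减少量会更低。下述片段核验这一算术 (假设性的示例)：\n",
    "\n",
    "```cpp\n",
    "#include <cassert>\n",
    "\n",
    "int main() {\n",
    "    // per 6x6 output tile and one (input, output) channel pair\n",
    "    const int direct_mults = 6 * 6 * 3 * 3;  // 324 multiplications\n",
    "    const int wino_mults = 8 * 8;            // 64 elementwise multiplications\n",
    "    assert(direct_mults == 324 && wino_mults == 64);\n",
    "    // upper bound of arithmetic reduction, ignoring transform cost\n",
    "    assert(direct_mults / wino_mults == 5);  // ratio is about 5.06\n",
    "    return 0;\n",
    "}\n",
    "```"
   ]
  },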
  {
   "cell_type": "markdown",
   "id": "a55f3444-ff75-4fc4-afce-0cf9d9d2a308",
   "metadata": {},
   "source": [
    "## 实机测试"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5a6f6480-4cef-447f-b28a-37fe3cd17296",
   "metadata": {},
   "source": [
    "在这一节中，我们会对我们的程序进行效率测评。\n",
    "\n",
    "测评用设备与参数\n",
    "\n",
    "- CPU 为 Intel Xeon Gold 6154 (x4)；\n",
    "    - 物理内核 (core) 数 72，NUMA 节点数 4；\n",
    "    - L1d 32 kB / core, L1i 32 kB / core, L2 1024 kB / core；\n",
    "    - L3 24.75 MB；\n",
    "- GCC 10.2.0；\n",
    "- 编译选项 `-fopenmp -O3 -march=native`。\n",
    "\n",
    "关于具体的编译过程，参考 [CMakeLists.txt](CMakeLists.txt)。编译所用的程序为 `run_winograd_f6x3`。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "507b7b47-4e75-488c-b0ff-4ce6867d2305",
   "metadata": {},
   "source": [
    "运行的网络为完整的 16 层 VGG16 网络；网络参数的定义文件在 `vgg16.conf`，其中 Batch 大小 $N = 64$。以 10 次连续计算的平均时间记为运行时间。出于测评体量较小、内存对齐、预热效应等可能的影响，效率测评结果会有一定波动。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6a277569-16e0-41f1-83a3-63a1102438bd",
   "metadata": {},
   "source": [
    "算法的效率以平均 GFLOPS 给出。GFLOPS 可以看作每秒执行的浮点运算次数 (以 $10^9$ 为单位)；其运算数按 Direct Convolution 的运算量确定，而非当前 Winograd 算法的实际运算量。"
   ]
  },
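  {
   "cell_type": "markdown",
   "id": "gflops-estimate-0001",
   "metadata": {},
   "source": [
    "Direct Convolution 的浮点运算数可以按“每个输出元素需要 $3 \\times 3$ 次乘法与 $3 \\times 3$ 次加法”估算。下述片段是一个假设性的示意：层参数取自下文表格的 conv(3.2)，运行时间 0.8 sec 为虚构数值，仅用于演示 GFLOPS 的折算方式。\n",
    "\n",
    "```cpp\n",
    "#include <cassert>\n",
    "\n",
    "// FLOP count of direct 3x3 convolution (multiply and add each count as one)\n",
    "long long direct_conv_flops(long long N, long long IC, long long OC,\n",
    "                            long long OH, long long OW) {\n",
    "    return 2 * N * OC * IC * OH * OW * 3 * 3;\n",
    "}\n",
    "\n",
    "int main() {\n",
    "    // e.g. conv(3.2): 256 x 56 x 56 input, 256 output channels, batch 64\n",
    "    long long flops = direct_conv_flops(64, 256, 256, 54, 54);\n",
    "    assert(flops == 2LL * 64 * 256 * 256 * 54 * 54 * 9);\n",
    "    double gflops = flops / 1e9 / 0.8;  // hypothetical 0.8 sec elapsed time\n",
    "    assert(gflops > 0);\n",
    "    return 0;\n",
    "}\n",
    "```"
   ]
  },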
  {
   "cell_type": "markdown",
   "id": "954fa04f-c08c-4e4c-92e6-fcb68fa2d0b4",
   "metadata": {},
   "source": [
    "### 参数 $\\tilde K, \\tilde C$ 的选择"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d8319bf5-f739-4afc-bd92-b496457708bc",
   "metadata": {},
   "source": [
    "我们先前说到，输出通道分批数 $\\tilde K$、输入通道分批数 $\\tilde C$ 设置的大小应当要使矩阵 $\\mathrm{U}^{(\\tilde k, \\tilde c)}_{k, c}$ (维度 $(\\tilde K, \\tilde C, \\mu, \\mu)$) 契合 L2 缓存大小。下述表格就是调整 $\\tilde K$ (`hintOC`) 与 $\\tilde C$ (`hintIC`) 时所给出的 GFLOPS；每列最高效率的数值加粗。计算在 64 核并行下完成。下述表格的数据是五次重复运行后取的最大值。"
   ]
  },
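  {
   "cell_type": "markdown",
   "id": "u-footprint-check-0001",
   "metadata": {},
   "source": [
    "每批次 $\\mathrm{U}^{(\\tilde k, \\tilde c)}_{k, c}$ 的内存占用可以直接估为 $\\tilde K \\times \\tilde C \\times 8 \\times 8 \\times 4$ 字节。下述假设性片段核验：最优参数 $(\\tilde K, \\tilde C) = (64, 32)$ 恰好占用每核 1024 kB L2 缓存的一半。\n",
    "\n",
    "```cpp\n",
    "#include <cassert>\n",
    "\n",
    "// bytes occupied by one batch of transformed filters U\n",
    "long long u_bytes(int sliceOC, int sliceIC) {\n",
    "    return 1LL * sliceOC * sliceIC * 8 * 8 * sizeof(float);\n",
    "}\n",
    "\n",
    "int main() {\n",
    "    assert(u_bytes(64, 32) == 512 * 1024);   // half of 1024 kB L2 per core\n",
    "    assert(u_bytes(128, 64) > 1024 * 1024);  // would overflow L2\n",
    "    return 0;\n",
    "}\n",
    "```"
   ]
  },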
  {
   "cell_type": "markdown",
   "id": "00a27e62-9cb2-4d5d-bbe5-e2e79857c355",
   "metadata": {},
   "source": [
    "| $\\tilde C \\text{\\\\} \\tilde K$ | 8 | 16 | 24 | 32 | 48 | 64 | 96 | 128 |\n",
    "| --:| -- | -- | -- | -- | -- | -- | -- | -- |\n",
    "|   **8** |   1596   |   2465   |   2891   |   3251   |   3394   |   3603   |   3712   |   3477   |\n",
    "|  **16** |   2305   |   3413   |   3844   |   4210   |   4521   |   4674   |   4686   | **4766** |\n",
    "|  **24** |   2594   |   3762   |   4171   |   4678   |   4727   |   5236   | **5171** |   4327   |\n",
    "|  **32** |   2574   |   3756   |   4270   | **4734** | **5094** | **5389** |   4774   |   2824   |\n",
    "|  **48** |   2600   |   3595   | **4306** |   4557   |   4747   |   4365   |   2615   |   1894   |\n",
    "|  **64** |   2674   |   3703   |   4280   |   4266   |   3980   |   2676   |   1942   |      -   |\n",
    "|  **96** |   2682   | **3731** |   3978   |   3887   |   2574   |   1961   |      -   |      -   |\n",
    "| **128** | **2732** |   3711   |   3419   |   2514   |   1857   |      -   |      -   |      -   |"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9f55be9c-44ad-4984-94ce-e62b04a78268",
   "metadata": {},
   "source": [
    "从上表中，我们发现 $\\tilde K = 64$, $\\tilde C = 32$ 确实是最佳情况，算法效率达到 5389 GFLOPS。\n",
    "\n",
    "但这就遇到另一个问题：为何不能是 $\\tilde K = 32$, $\\tilde C = 64$ 呢？在这种情况下，每批次 $\\mathrm{U}^{(\\tilde k, \\tilde c)}_{k, c}$ 大小仍然是 512 kB 并小于 L2 缓存大小。这其中有几种无法断言的可能性：\n",
    "\n",
    "- 一般来说，高速缓存最好能最大化地利用。但或许每批次 $\\mathrm{U}^{(\\tilde k, \\tilde c)}_{k, c}$ 不能占用太大的缓存大小。因为图片输入、输出、转换等过程都需要 L2 缓存介入；如果其它繁杂的数据流被限制住了，那么就算与 $\\mathrm{U}^{(\\tilde k, \\tilde c)}_{k, c}$ 计算密集型任务算得再快，还是得经常等其它数据通过小水管载入到高速缓存中。所以尽管 L2 缓存是 1024 kB / core，但占用 512 kB / core 是比较合适的。这可能解释了为何 $\\tilde K = 96$, $\\tilde C = 32$ 效率相对较低的原因。\n",
    "- 我们先前也提及，在 $i, \\tilde x, \\tilde y, \\tilde c$ 角标确定的情况下，每批次 $\\mathrm{V}^{(\\tilde x, \\tilde y, \\tilde c)}_{i, c}$ 的内存占用是 $(\\tilde C, \\mu, \\mu)$ 大小；当 $\\tilde C = 32$ 时，可以使得 $\\mathrm{V}^{(\\tilde x, \\tilde y, \\tilde c)}_{i, c}$ 的内存占用为 8 kB，小于 L1d 缓存的 32 kB。基于许多 $\\tilde K$ 的取值下 $\\tilde C$ 都在 32 附近有较高效率，我尝试推测：如果 $\\tilde C$ 再大一些，就可能因为 L1d 缓存不能容纳其它计算的需求，而对效率有显著影响。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fe76bb85-d3bd-49c7-8c88-34ed9470f552",
   "metadata": {},
   "source": [
    "### 与 Intel DNNL 的效率比较"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "28c3a209-f8b9-448f-8f21-211c22c0c389",
   "metadata": {},
   "source": [
    "我们也将程序与 Intel DNNL 进行横向对比。使用 64 core 并行；网络结构相同的层会对 GFLOPS 取平均。Intel DNNL 是指 oneAPI 2022.2 的机器学习程序库。对比情况如下 (网络详情也可以参考 [^Lavin2016] Table 3)。效率以 GFLOPS 为单位呈现。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0708e52e-0511-48ac-b111-2a63cfd54df4",
   "metadata": {},
   "source": [
    "[^Lavin2016]: Lavin, A.; Gray, S. Fast Algorithms for Convolutional Neural Networks. In *2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*; IEEE: Las Vegas, NV, USA, **2016**; pp 4013–4021. doi: [10.1109/CVPR.2016.435](https://doi.org/10.1109/CVPR.2016.435)."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e61464a3-2bce-4ca4-92d2-7de68140cf11",
   "metadata": {},
   "source": [
    "| Layer     | Depth | $(C_\\mathrm{in} \\times H_\\mathrm{in} \\times W_\\mathrm{in})$ | $C_\\mathrm{out}$ | DNNL Direct | DNNL Winograd | My Winograd |\n",
    "|:----------|-------|:---------------:|----:|------------:|--------------:|-------------------:|\n",
    "| conv(1.1) | 1     | 3 x 224 x 224   | 64  |   160       |   352         |   828              |\n",
    "| conv(1.2) | 1     | 64 x 224 x 224  | 64  |   2423      |   4572        | **5332**           |\n",
    "| conv(2.1) | 1     | 64 x 112 x 112  | 128 |   3160      |   4458        | **5384**           |\n",
    "| conv(2.2) | 1     | 128 x 112 x 112 | 128 |   4676      | **6344**      |   5836             |\n",
    "| conv(3.1) | 1     | 128 x 56 x 56   | 256 |   5312      |   5183        | **5686**           |\n",
    "| conv(3.2) | 3     | 256 x 56 x 56   | 256 |   6524      |   4283        | **6565**           |\n",
    "| conv(4.1) | 1     | 256 x 28 x 28   | 512 | **5933**    |   3399        |   5769             |\n",
    "| conv(4.2) | 4     | 512 x 28 x 28   | 512 | **7264**    |   5466        |   5329             |\n",
    "| conv(5)   | 5     | 512 x 14 x 14   | 512 | **5112**    |   2588        |   4578             |\n",
    "| VGG16     |       |                 |     |   4420      |   4250        | **5497**           |"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b151724c-0191-4353-9afd-b27964fb48c7",
   "metadata": {},
   "source": [
    "可以见到，我们的实现并非在每个卷积层上都超过了 DNNL。\n",
    "\n",
    "- 可能是受限于 Winograd 算法本身，在网络的最后三四层，即图像很小、卷积核较大的情况，使用直接卷积计算有可能效率更高。\n",
    "- conv(2.2) 的效率也没有超过 DNNL Winograd。\n",
    "- DNNL Winograd 的实现可能基于 $F(2,3)$ 和 $F(4,3)$；我们上述的实现是 $F(6,3)$。\n",
    "- DNNL 的库文件很大；将程序载入内存的时间也考虑在了测评过程中。以及 DNNL 使用了 JIT 技术，有可能 JIT 处理时间也有一定影响。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3fbdf0fd-e259-4059-9943-e3f8a1ea1a55",
   "metadata": {},
   "source": [
    "我们也指出，上述代码思路并非决赛阶段所使用的思路；Winograd $F(6, 3)$ 还有可能更快的实现办法。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cb93e4d5-a340-481c-8240-daef96670df9",
   "metadata": {},
   "source": [
    "### 并行效率"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "67cda469-8089-4fd0-9898-45c85441f580",
   "metadata": {},
   "source": [
    "下图呈现了我们程序在不同核数下的并行效率。可以认为并行效率相对可观。\n",
    "\n",
    "也必须指出，上面的程序是对 Batch 数 $N$ 作简单并行的；如果 Batch 数并不是很友好的数字 (譬如 80)，或者 Batch 数为 64 却被要求在 48 核机器上并行，那么该程序的并行效率应当不会很好。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "919d0cbc-77c6-4daa-8b0f-db2d13e740e7",
   "metadata": {},
   "source": [
    "![Parallel Efficiency](figures/para.svg)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "80a9c365-4c69-4207-9f0c-6ee34e65596c",
   "metadata": {},
   "source": [
    "### Roofline 图"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7a7e6dcb-8796-45f6-b10e-0e51902b1e7c",
   "metadata": {},
   "source": [
    "Intel Advisor (`advixe-cl`) 可以绘制 Roofline 图。Roofline 图可以直观地分析程序片段的内存负载量与计算指令调用数量；它同时还可以交互式地给出程序片段的运行时间、以及联结程序与汇编代码，是相当强大的代码效率分析工具。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cce18d28-128e-4d6f-a744-32e6c81b4014",
   "metadata": {},
   "source": [
    "The figure below is the roofline plot for a run of the Winograd VGG16 program parallelized over 64 cores. I have not yet fully explored Advisor's features, but from simple interaction with the plot below, the following can be established:\n",
    "\n",
    "- The yellow dot is the elementwise multiplication step, taking about 129 sec; the memory bandwidth it achieves is close to the L2 bandwidth, which indeed matches our design intent.\n",
    "- The two green dots are the input-image transform and the kernel transform, taking 36 sec and 33 sec respectively.\n",
    "- The red dot is the output-image transform and memory write-back, taking about 154 sec. In fact, for the whole VGG16 network, the most time-consuming step of our program is the output-image part. This is characteristic of the Winograd algorithm: although it reduces the number of multiplications, the additional transforms and memory traffic elsewhere constrain the overall efficiency."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "76261914-ded0-42f2-b549-402a3d632aca",
   "metadata": {},
   "source": [
    "![roofline](figures/roof.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "504fae09-22a7-450c-a37e-133de0d8b20f",
   "metadata": {},
   "source": [
    "## Afterword and Acknowledgements"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0713bf0a-7899-4b9a-9a50-7f3d0639bf73",
   "metadata": {},
   "source": [
    "I have always been merely a user of high-performance computing; computational chemistry undoubtedly consumes enormous amounts of computational resources. Although I do write programs, I am at best a library caller: I previously knew nothing about the workings of high-performance computing, not even what caches or assembly code were.\n",
    "\n",
    "My thanks go to the organizer, Ubiquant, and to our team leader Qiang Yicheng. Through this competition I went from nothing to a genuine first glimpse of high-performance computing, and learned a great deal. Beyond lower-scaling (lower-complexity) algorithms, I now truly have an additional perspective on improving program efficiency. Thanks again to the organizers for the free food, prizes, and merchandise 0w0; and to Qiang, whose efforts earned our team second place in the finals, a result I got to share in =w=. I wish him all the best in his further studies."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2d3299e5-701c-4c23-9326-9ffa06c1ad4f",
   "metadata": {},
   "source": [
    "Precisely because computational chemistry consumes such vast resources, and because, among the chemistry-trained practitioners within my (admittedly narrow) field of view, few combine a deep understanding of low-level computer architecture with strong programming skills (I would regard Devin Matthews of Southern Methodist University as an exemplar of such a practitioner; writing something like Winograd truly efficiently, as in the champion's code or the code our team leader used in the finals, gets hard, lengthy, and dirty >.< and genuinely demands great skill), the prospects and headroom for high-performance computing in computational chemistry are perhaps foreseeable. Chemistry problems frequently involve large and complicated tensor contractions; limited by my own horizons, exactly how to leverage high-performance computing to improve computational chemistry algorithms is something I still need to think about."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "944a7ca1-519c-495c-a745-0ad99943c560",
   "metadata": {},
   "source": [
    "> By the time of writing this document I am already in my sixth year of the PhD, and my thesis project is still unfinished. So yes, plenty of time to think it over haha >.<"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "21605952-c740-45ab-95a8-b9887d5d6d8b",
   "metadata": {},
   "source": [
    "The competition also included a problem on HDF5 read/write efficiency. In my earlier projects I did encounter heavy disk interaction (an analytic second-derivative implementation that stores electron integrals or excitation tensors on disk). Studying the HDF5 problem taught me how to sensibly benchmark disk-based algorithms, and how, under the right conditions, to use chunking to improve disk I/O efficiency."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "53bcd31e-b554-4bdd-a958-cb8bcb463e73",
   "metadata": {},
   "source": [
    "In fact, around the time of the competition, external circumstances had left me somewhat low in spirits; this contest genuinely gave me some confidence. I am grateful for the tolerance of my fellow students and advisors, and for the idle computational resources of our research group."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c3deb9e5-9e9f-4a3e-9676-4963ce5374a6",
   "metadata": {},
   "source": [
    "There are still many problems worth pursuing and many techniques worth savoring. If time permits, I would quite like to learn more about GPU-like architectures."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4961c2cd-551d-45c4-8da2-4c55e074740f",
   "metadata": {},
   "source": [
    "For a first course in CPU-based high-performance computing, the third course of the [LAFF series](http://ulaff.net/), [LAFF-On Programming for High Performance](https://www.cs.utexas.edu/users/flame/laff/pfhp/), should be excellent introductory material. I discovered it after the finals; it covers fairly fundamental instruction-level optimization ideas and implementations for DGEMM matrix multiplication."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "C++14",
   "language": "C++14",
   "name": "xcpp14"
  },
  "language_info": {
   "codemirror_mode": "text/x-c++src",
   "file_extension": ".cpp",
   "mimetype": "text/x-c++src",
   "name": "c++",
   "version": "14"
  },
  "toc": {
   "base_numbering": 1,
   "nav_menu": {},
   "number_sections": true,
   "sideBar": true,
   "skip_h1_title": true,
   "title_cell": "Table of Contents",
   "title_sidebar": "Contents",
   "toc_cell": false,
   "toc_position": {},
   "toc_section_display": true,
   "toc_window_display": false
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
