{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "<div class=\"jumbotron\">\n",
     "    <p class=\"display-1 h1\">Attention Scoring Functions</p>\n",
     "    <hr class=\"my-4\">\n",
     "    <p>Lecturer: 李岩 (Li Yan)</p>\n",
     "    <p>School of Management</p>\n",
    "    <p>liyan@cumtb.edu.cn</p>\n",
    "</div>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "origin_pos": 0,
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
     "In an attention mechanism, we need to measure the similarity between a query and each key;\n",
     "this similarity determines how the attention weights are allocated.\n",
     "An **attention scoring function** is the function that quantifies this similarity."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
     "## The Basic Mathematical Framework of Attention\n",
     "\n",
     "In mathematical terms, suppose we have a query\n",
     "$\\mathbf{q} \\in \\mathbb{R}^q$ and\n",
     "$m$ key-value pairs\n",
     "$(\\mathbf{k}_1, \\mathbf{v}_1), \\ldots, (\\mathbf{k}_m, \\mathbf{v}_m)$,\n",
     "where $\\mathbf{k}_i \\in \\mathbb{R}^k$ and $\\mathbf{v}_i \\in \\mathbb{R}^v$.\n",
     "The attention pooling function $f$ is expressed as a weighted sum of the values:\n",
     "\n",
     "$$f(\\mathbf{q}, (\\mathbf{k}_1, \\mathbf{v}_1), \\ldots, (\\mathbf{k}_m, \\mathbf{v}_m)) = \\sum_{i=1}^m \\alpha(\\mathbf{q}, \\mathbf{k}_i) \\mathbf{v}_i \\in \\mathbb{R}^v,$$\n",
     ":eqlabel:`eq_attn-pooling`\n",
     "\n",
     "where the attention weight (a scalar) for the query $\\mathbf{q}$ and key $\\mathbf{k}_i$\n",
     "is obtained by mapping the two vectors to a scalar with an attention scoring function $a$\n",
     "and then applying a softmax operation:\n",
     "\n",
     "$$\\alpha(\\mathbf{q}, \\mathbf{k}_i) = \\mathrm{softmax}(a(\\mathbf{q}, \\mathbf{k}_i)) = \\frac{\\exp(a(\\mathbf{q}, \\mathbf{k}_i))}{\\sum_{j=1}^m \\exp(a(\\mathbf{q}, \\mathbf{k}_j))} \\in \\mathbb{R}.$$\n",
     ":eqlabel:`eq_attn-scoring-alpha`\n",
     "\n",
     "**Key point**: choosing a different attention scoring function $a$ leads to a different attention pooling operation."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
     "## The Three-Step Attention Pipeline\n",
     "\n",
     "![Computing the output of attention pooling as a weighted sum of the values](../img/9_attention_mechanisms/attention-output.svg)\n",
     ":label:`fig_attention_output`\n",
     "\n",
     "**Step 1: compute the attention scores**\n",
     "$$s_i = a(\\mathbf{q}, \\mathbf{k}_i)$$\n",
     "- Use the scoring function $a$ to measure the similarity between the query and each key\n",
     "- This yields a score vector $\\mathbf{s} = [s_1, s_2, \\ldots, s_m]$\n",
     "\n",
     "**Step 2: normalize with softmax**\n",
     "$$\\alpha_i = \\frac{\\exp(s_i)}{\\sum_{j=1}^m \\exp(s_j)}$$\n",
     "- Convert the scores into a probability distribution\n",
     "- This yields a weight vector $\\boldsymbol{\\alpha} = [\\alpha_1, \\alpha_2, \\ldots, \\alpha_m]$\n",
     "- The weights satisfy $\\sum_{i=1}^m \\alpha_i = 1$\n",
     "\n",
     "**Step 3: take the weighted sum**\n",
     "$$\\text{output} = \\sum_{i=1}^m \\alpha_i \\mathbf{v}_i$$\n",
     "- Weight the values by the attention weights\n",
     "- This yields the final attention output\n",
     "\n",
     "**Key insight**: the choice of scoring function $a$ directly shapes the behavior of the attention mechanism!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-18T06:58:57.046810Z",
     "iopub.status.busy": "2023-08-18T06:58:57.045981Z",
     "iopub.status.idle": "2023-08-18T06:59:00.174680Z",
     "shell.execute_reply": "2023-08-18T06:59:00.173514Z"
    },
    "origin_pos": 2,
    "slideshow": {
     "slide_type": "skip"
    },
    "tab": [
     "pytorch"
    ]
   },
   "outputs": [],
   "source": [
    "import math\n",
    "import torch\n",
    "from torch import nn\n",
    "from d2l import torch as d2l"
   ]
  },
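  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "Before introducing concrete scoring functions, the three-step pipeline above can be sketched on tiny tensors. This is a minimal illustration that plugs in a plain dot product as a stand-in for the scoring function $a$; the scoring functions developed below replace this choice."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "fragment"
    }
   },
   "outputs": [],
   "source": [
    "# A toy walk-through of the three steps, using a dot product as a\n",
    "# stand-in scoring function a(q, k) = q^T k\n",
    "q = torch.tensor([1.0, 0.0])                            # one query\n",
    "K = torch.tensor([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # m = 3 keys\n",
    "V = torch.tensor([[1.0], [2.0], [3.0]])                 # m = 3 values\n",
    "s = K @ q                        # step 1: scores s_i = a(q, k_i)\n",
    "alpha = torch.softmax(s, dim=0)  # step 2: softmax normalization\n",
    "output = alpha @ V               # step 3: weighted sum of the values\n",
    "alpha, output"
   ]
  },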
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
     "## Masked Softmax Operation\n",
     "\n",
     "### Why do we need masking?\n",
     "\n",
     "As mentioned above, the softmax operation outputs a probability distribution that serves as the attention weights.\n",
     "In some cases, however, not all values should be included in attention pooling.\n",
     "\n",
     "**Practical scenario**:\n",
     "- To process minibatches efficiently in :numref:`sec_machine_translation`,\n",
     "some text sequences are padded with special tokens that carry no meaning\n",
     "- To attend only over meaningful tokens as values,\n",
     "we can specify a valid sequence length (the number of real tokens)\n",
     "\n",
     "**Solution**:\n",
     "- Filter out positions beyond the valid length when computing the softmax\n",
     "- Any position beyond the valid length is masked so that its weight becomes 0"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
     "### Implementation: the masked_softmax function\n",
     "\n",
     "The `masked_softmax` function below implements the masked softmax operation,\n",
     "where any position beyond the valid length is masked and its weight is set to 0."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "outputs": [],
   "source": [
     "#@save\n",
     "def masked_softmax(X, valid_lens):\n",
     "    \"\"\"Perform softmax by masking elements along the last axis\"\"\"\n",
     "    # X: 3D tensor; valid_lens: 1D or 2D tensor\n",
     "    if valid_lens is None:\n",
     "        return nn.functional.softmax(X, dim=-1)\n",
     "    else:\n",
     "        shape = X.shape\n",
     "        if valid_lens.dim() == 1:\n",
     "            # One valid length per example: repeat it for every query row\n",
     "            valid_lens = torch.repeat_interleave(valid_lens, shape[1])\n",
     "        else:\n",
     "            valid_lens = valid_lens.reshape(-1)\n",
     "        # Replace masked elements on the last axis with a very large\n",
     "        # negative value, so their softmax outputs become 0\n",
     "        X = d2l.sequence_mask(X.reshape(-1, shape[-1]), valid_lens,\n",
     "                              value=-1e6)\n",
     "        return nn.functional.softmax(X.reshape(shape), dim=-1)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
     "### Demo: 1D valid lengths\n",
     "\n",
     "Consider a minibatch of two examples, each represented by a $2 \\times 4$ matrix,\n",
     "with valid lengths of $2$ and $3$, respectively.\n",
     "After the masked softmax operation, all entries beyond the valid length are masked to 0."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "fragment"
    }
   },
   "outputs": [],
   "source": [
    "masked_softmax(torch.rand(2, 2, 4), torch.tensor([2, 3]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "origin_pos": 15,
    "slideshow": {
     "slide_type": "fragment"
    }
   },
   "source": [
     "Likewise, we can use a 2D tensor to specify a valid length for every row of each matrix example.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "fragment"
    }
   },
   "outputs": [],
   "source": [
    "masked_softmax(torch.rand(2, 2, 4), torch.tensor([[1, 3], [2, 4]]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
     "## What Is an Attention Scoring Function?\n",
     "\n",
     "### Core concept\n",
     "\n",
     "The attention scoring function is the **core component** of an attention mechanism:\n",
     "it determines how the similarity between a query and a key is measured.\n",
     "\n",
     "**Role**:\n",
     "- Compute a **similarity score** between the query and each key\n",
     "- This score determines how the attention weights are allocated\n",
     "- Different scoring functions produce different attention patterns\n",
     "\n",
     "**Analogy**:\n",
     "- Like a search engine: the query is the search term, the keys are the document index, and the scoring function decides which documents are most relevant\n",
     "- The higher the score, the better the query matches the key, and the larger the attention weight"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
     "## Two Popular Attention Scoring Functions\n",
     "\n",
     "This section introduces two popular scoring functions:\n",
     "\n",
     "### 1. Additive attention\n",
     "\n",
     "**When to use**: when the query and key have **different** dimensions\n",
     "\n",
     "**Characteristics**:\n",
     "- Computes similarity with a feed-forward network\n",
     "- Has more learnable parameters\n",
     "- Is more expressive\n",
     "\n",
     "### 2. Scaled dot-product attention\n",
     "\n",
     "**When to use**: when the query and key have the **same** dimension\n",
     "\n",
     "**Characteristics**:\n",
     "- Computationally efficient (matrix multiplication)\n",
     "- No learnable parameters and fast to compute\n",
     "- The standard choice in the Transformer\n",
     "\n",
     "**Rule of thumb**:\n",
     "- If the query and key dimensions match, prefer scaled dot-product attention\n",
     "- If they differ, use additive attention"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
     "## Additive Attention\n",
     ":label:`subsec_additive-attention`\n",
     "\n",
     "### Why additive attention?\n",
     "\n",
     "In general, when the query and key are vectors of different lengths, additive attention can be used as the scoring function.\n",
     "\n",
     "**Problem setting**:\n",
     "- Query dimension: $d_q$ (e.g., a 256-dimensional decoder hidden state)\n",
     "- Key dimension: $d_k$ (e.g., a 128-dimensional encoder hidden state)\n",
     "- The **dimensions differ**, so the dot product cannot be computed directly\n",
     "\n",
     "**Solution**: use a feed-forward network to project the query and key into a shared hidden space"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
     "### The math of additive attention\n",
     "\n",
     "Given a query $\\mathbf{q} \\in \\mathbb{R}^q$ and\n",
     "a key $\\mathbf{k} \\in \\mathbb{R}^k$,\n",
     "the *additive attention* scoring function is\n",
     "\n",
     "$$a(\\mathbf q, \\mathbf k) = \\mathbf w_v^\\top \\text{tanh}(\\mathbf W_q\\mathbf q + \\mathbf W_k \\mathbf k) \\in \\mathbb{R},$$\n",
     ":eqlabel:`eq_additive-attn`\n",
     "\n",
     "where the learnable parameters are $\\mathbf W_q\\in\\mathbb R^{h\\times q}$,\n",
     "$\\mathbf W_k\\in\\mathbb R^{h\\times k}$, and\n",
     "$\\mathbf w_v\\in\\mathbb R^{h}$."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "outputs": [],
   "source": [
     "#@save\n",
     "class AdditiveAttention(nn.Module):\n",
     "    \"\"\"Additive attention\"\"\"\n",
     "    def __init__(self, key_size, query_size, num_hiddens, dropout, **kwargs):\n",
     "        super(AdditiveAttention, self).__init__(**kwargs)\n",
     "        self.W_k = nn.Linear(key_size, num_hiddens, bias=False)\n",
     "        self.W_q = nn.Linear(query_size, num_hiddens, bias=False)\n",
     "        self.w_v = nn.Linear(num_hiddens, 1, bias=False)\n",
     "        self.dropout = nn.Dropout(dropout)\n",
     "\n",
     "    def forward(self, queries, keys, values, valid_lens):\n",
     "        queries, keys = self.W_q(queries), self.W_k(keys)\n",
     "        # Broadcast the sum: (batch, no. of queries, 1, h) +\n",
     "        # (batch, 1, no. of key-value pairs, h)\n",
     "        features = queries.unsqueeze(2) + keys.unsqueeze(1)\n",
     "        features = torch.tanh(features)\n",
     "        # Squeeze the size-1 output axis of w_v: scores has shape\n",
     "        # (batch, no. of queries, no. of key-value pairs)\n",
     "        scores = self.w_v(features).squeeze(-1)\n",
     "        self.attention_weights = masked_softmax(scores, valid_lens)\n",
     "        return torch.bmm(self.dropout(self.attention_weights), values)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
     "### Parsing the formula\n",
     "\n",
     "**Step 1: linear projection**\n",
     "- $\\mathbf W_q\\mathbf q$: project the query into the $h$-dimensional hidden space\n",
     "- $\\mathbf W_k \\mathbf k$: project the key into the $h$-dimensional hidden space\n",
     "\n",
     "**Step 2: add and activate**\n",
     "- $\\mathbf W_q\\mathbf q + \\mathbf W_k \\mathbf k$: add the two in the hidden space\n",
     "- $\\tanh(\\cdot)$: a nonlinear activation function\n",
     "\n",
     "**Step 3: compute the score**\n",
     "- $\\mathbf w_v^\\top \\cdot$: map the hidden representation to a scalar score\n",
     "\n",
     "**Key characteristics**:\n",
     "- The query and key may have different dimensions\n",
     "- They are compared after being projected into the same space\n",
     "- $\\tanh$ serves as the activation function, and the bias terms are disabled"
   ]
  },
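  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "The three steps above can be checked directly on single vectors. This is a minimal sketch with hypothetical sizes (query dimension 20, key dimension 2, hidden dimension $h=8$); the `AdditiveAttention` class above performs the same computation in batched form with learned projections."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "fragment"
    }
   },
   "outputs": [],
   "source": [
    "# Evaluate a(q, k) = w_v^T tanh(W_q q + W_k k) on single vectors\n",
    "# (hypothetical sizes: query dim 20, key dim 2, hidden dim h = 8)\n",
    "torch.manual_seed(0)\n",
    "q_dim, k_dim, h = 20, 2, 8\n",
    "W_q, W_k, w_v = torch.randn(h, q_dim), torch.randn(h, k_dim), torch.randn(h)\n",
    "q, k = torch.randn(q_dim), torch.randn(k_dim)\n",
    "score = w_v @ torch.tanh(W_q @ q + W_k @ k)  # a single scalar score\n",
    "score.shape, score"
   ]
  },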
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "outputs": [],
   "source": [
     "# queries: (2, 1, 20); keys: (2, 10, 2) -- all ten keys are identical\n",
     "queries, keys = torch.normal(0, 1, (2, 1, 20)), torch.ones((2, 10, 2))\n",
     "# values: (2, 10, 4) -- both examples share the same value matrix\n",
     "values = torch.arange(40, dtype=torch.float32).reshape(1, 10, 4).repeat(2, 1, 1)\n",
     "valid_lens = torch.tensor([2, 6])\n",
     "\n",
     "attention = AdditiveAttention(key_size=2, query_size=20, num_hiddens=8, dropout=0.1)\n",
     "attention.eval()\n",
     "attention(queries, keys, values, valid_lens)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
     "### Complexity analysis\n",
     "\n",
     "- **Number of parameters**: $O(h \\times (d_q + d_k) + h)$\n",
     "- **Computational complexity**: $O(n \\times m \\times h)$, where $n$ is the number of queries and $m$ the number of keys\n",
     "\n",
     "**Advantages**:\n",
     "- ✅ Flexible: handles queries and keys of different dimensions\n",
     "- ✅ Relatively expressive\n",
     "\n",
     "**Disadvantages**:\n",
     "- ❌ More learnable parameters\n",
     "- ❌ Higher computational cost"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "outputs": [],
   "source": [
    "d2l.show_heatmaps(attention.attention_weights.reshape((1, 1, 2, 10)),\n",
    "                  xlabel='Keys', ylabel='Queries')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
     "### Demo: additive attention\n",
     "\n",
     "The small example above demonstrates the `AdditiveAttention` class,\n",
     "where queries, keys, and values have shape (batch size, number of steps or sequence length, feature size):\n",
     "here $(2, 1, 20)$, $(2, 10, 2)$, and $(2, 10, 4)$, respectively.\n",
     "The attention pooling output has shape (batch size, number of query steps, value dimension)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
     "Although additive attention contains learnable parameters, every key in this example is identical,\n",
     "so the attention weights are uniform over the valid positions, as determined by the specified valid lengths."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
     "## Scaled Dot-Product Attention\n",
     "\n",
     "### Why use the dot product?\n",
     "\n",
     "Using the dot product gives a computationally more efficient scoring function,\n",
     "but it requires the query and key to have the same length $d$.\n",
     "\n",
     "**Advantages**:\n",
     "- **Computationally efficient**: matrix multiplication is highly optimized (GPU-accelerated)\n",
     "- **Parameter-free**: no extra projection matrices are needed\n",
     "- **Simple to implement**: concise code that is easy to understand\n",
     "\n",
     "**Requirement**:\n",
     "- The query and key must have the **same dimension** $d$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
     "### The math of scaled dot-product attention\n",
     "\n",
     "The *scaled dot-product attention* scoring function is:\n",
     "\n",
     "$$a(\\mathbf q, \\mathbf k) = \\mathbf{q}^\\top \\mathbf{k}  /\\sqrt{d}.$$\n",
     "\n",
     "### The full minibatch form\n",
     "\n",
     "In practice, we usually work with minibatches for efficiency,\n",
     "e.g., computing attention for $n$ queries and $m$ key-value pairs,\n",
     "where queries and keys have length $d$ and values have length $v$.\n",
     "The scaled dot-product attention of queries $\\mathbf Q\\in\\mathbb R^{n\\times d}$,\n",
     "keys $\\mathbf K\\in\\mathbb R^{m\\times d}$, and\n",
     "values $\\mathbf V\\in\\mathbb R^{m\\times v}$ is:\n",
     "\n",
     "$$ \\mathrm{softmax}\\left(\\frac{\\mathbf Q \\mathbf K^\\top }{\\sqrt{d}}\\right) \\mathbf V \\in \\mathbb{R}^{n\\times v}.$$\n",
     ":eqlabel:`eq_softmax_QK_V`"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
     "### Why scale?\n",
     "\n",
     "Suppose all elements of the query and the key are independent random variables\n",
     "with zero mean and unit variance.\n",
     "Then the dot product of the two vectors has mean $0$ and variance $d$.\n",
     "\n",
     "**Problem**:\n",
     "- When $d$ is large (e.g., 512 or 1024), the variance of the dot product is large\n",
     "- This pushes the softmax into its **saturated region**, where gradients are tiny\n",
     "- Training becomes unstable and converges slowly\n",
     "\n",
     "**Solution**:\n",
     "to keep the variance of the dot product at $1$ regardless of the vector length,\n",
     "divide the dot product by $\\sqrt{d}$.\n",
     "\n",
     "**Verification**:\n",
     "- Variance of the dot product: $\\text{Var}(\\mathbf{q}^\\top \\mathbf{k}) = d$\n",
     "- Variance after scaling: $\\text{Var}\\left(\\frac{\\mathbf{q}^\\top \\mathbf{k}}{\\sqrt{d}}\\right) = \\frac{d}{d} = 1$\n",
     "- So the variance stays $1$ no matter how large $d$ is"
   ]
  },
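  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "This variance argument is easy to verify empirically: sample many pairs of standard-normal vectors and compare the variance of their dot products before and after dividing by $\\sqrt{d}$ (a minimal sanity check; the sample variances are only approximate)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "fragment"
    }
   },
   "outputs": [],
   "source": [
    "# Empirical check: for d-dimensional standard-normal vectors, the dot\n",
    "# product has variance close to d; dividing by sqrt(d) restores ~1\n",
    "torch.manual_seed(0)\n",
    "d, n = 512, 10000\n",
    "dots = (torch.randn(n, d) * torch.randn(n, d)).sum(dim=1)\n",
    "dots.var(), (dots / math.sqrt(d)).var()"
   ]
  },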
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
     "### Computation steps in detail\n",
     "\n",
     "**Step 1: compute the attention score matrix**\n",
     "$$\\mathbf{S} = \\frac{\\mathbf{Q}\\mathbf{K}^\\top}{\\sqrt{d}}$$\n",
     "- Shape: $(n, m)$, where $n$ is the number of queries and $m$ the number of keys\n",
     "- Each entry $S_{ij}$ is the similarity between query $i$ and key $j$\n",
     "\n",
     "**Step 2: normalize with softmax**\n",
     "$$\\mathbf{A} = \\text{softmax}(\\mathbf{S})$$\n",
     "- Apply softmax to each row (one query's weights over all keys)\n",
     "- Shape: $(n, m)$; each row sums to 1\n",
     "- Each row forms a probability distribution\n",
     "\n",
     "**Step 3: take the weighted sum**\n",
     "$$\\text{Output} = \\mathbf{A}\\mathbf{V}$$\n",
     "- Shape: $(n, v)$, where $v$ is the value dimension\n",
     "- Each query receives a weighted combination of the values"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "outputs": [],
   "source": [
    "#@save\n",
    "class DotProductAttention(nn.Module):\n",
    "    \"\"\"Scaled dot-product attention\"\"\"\n",
    "    def __init__(self, dropout, **kwargs):\n",
    "        super(DotProductAttention, self).__init__(**kwargs)\n",
    "        self.dropout = nn.Dropout(dropout)\n",
    "\n",
    "    def forward(self, queries, keys, values, valid_lens=None):\n",
    "        d = queries.shape[-1]\n",
    "        # scores shape: (batch, no. of queries, no. of key-value pairs)\n",
    "        scores = torch.bmm(queries, keys.transpose(1, 2)) / math.sqrt(d)\n",
    "        self.attention_weights = masked_softmax(scores, valid_lens)\n",
    "        return torch.bmm(self.dropout(self.attention_weights), values)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "outputs": [],
   "source": [
    "# Reuse keys, values, and valid_lens from the additive attention example;\n",
    "# here the query dimension must match the key dimension (2)\n",
    "queries = torch.normal(0, 1, (2, 1, 2))\n",
    "attention = DotProductAttention(dropout=0.5)\n",
    "attention.eval()\n",
    "attention(queries, keys, values, valid_lens)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "outputs": [],
   "source": [
    "d2l.show_heatmaps(attention.attention_weights.reshape((1, 1, 2, 10)),\n",
    "                  xlabel='Keys', ylabel='Queries')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
     "### Complexity analysis\n",
     "\n",
     "- **Time complexity**: $O(nmd)$\n",
     "  - $\\mathbf{Q}\\mathbf{K}^\\top$: $O(nmd)$\n",
     "  - softmax: $O(nm)$\n",
     "  - $\\mathbf{A}\\mathbf{V}$: $O(nmv)$\n",
     "  - Overall: $O(nmd)$ (assuming $d \\approx v$)\n",
     "\n",
     "- **Space complexity**: $O(nm)$ (storing the attention matrix)\n",
     "\n",
     "- **Number of parameters**: $0$ (parameter-free!)\n",
     "\n",
     "**Advantages**:\n",
     "- ✅ Computationally efficient (matrix multiplication parallelizes well)\n",
     "- ✅ Parameter-free and fast\n",
     "- ✅ The standard method used in the Transformer\n",
     "\n",
     "**Disadvantages**:\n",
     "- ❌ Requires the query and key dimensions to match\n",
     "- ❌ For long sequences, the $O(nm)$ space cost is significant"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
     "## Comparing the Two Attention Scoring Functions\n",
     "\n",
     "### Additive vs. scaled dot-product attention\n",
     "\n",
     "| Property | Additive attention | Scaled dot-product attention |\n",
     "|------|-----------|--------------|\n",
     "| **Query/key dimensions** | May differ | Must match |\n",
     "| **Computation** | Feed-forward network | Matrix multiplication |\n",
     "| **Parameters** | $O(h(d_q + d_k))$ | $0$ (parameter-free) |\n",
     "| **Computational complexity** | $O(nmh)$ | $O(nmd)$ |\n",
     "| **When to use** | Different dimensions | Same dimension |\n",
     "| **Typical application** | Bahdanau attention | Transformer |\n",
     "\n",
     "### Recommendations\n",
     "\n",
     "**Use additive attention when**:\n",
     "- The query and key dimensions differ\n",
     "- More expressive power is needed\n",
     "- The extra computational cost is acceptable\n",
     "\n",
     "**Use scaled dot-product attention when**:\n",
     "- The query and key dimensions match\n",
     "- Efficiency matters\n",
     "- Fewer parameters are desired\n",
     "\n",
     "**Modern trend**:\n",
     "- The Transformer and its variants mainly use **scaled dot-product attention**\n",
     "- It is efficient, and linear projections can unify the dimensions anyway\n",
     "\n",
     "### Applications in practice\n",
     "\n",
     "**Bahdanau attention (2014)**:\n",
     "- Uses additive attention\n",
     "- Query: decoder hidden state (e.g., 256-dimensional)\n",
     "- Key: encoder hidden state (e.g., 128-dimensional)\n",
     "- The dimensions differ, hence additive attention\n",
     "\n",
     "**Transformer (2017)**:\n",
     "- Uses scaled dot-product attention\n",
     "- Queries, keys, and values are first projected linearly to a common dimension\n",
     "- Attention is then computed with efficient matrix multiplications\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
     "## Summary\n",
     "\n",
     "* The output of attention pooling is computed as a weighted average of the values; different attention scoring functions lead to different attention pooling operations.\n",
     "* When queries and keys are vectors of different lengths, the additive attention scoring function can be used. When their lengths match, the scaled dot-product attention scoring function is computationally more efficient."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
     "## Exercises\n",
     "\n",
     "1. Modify the keys in the small example and visualize the attention weights. Do additive attention and scaled dot-product attention still produce the same attention weights? Why or why not?\n",
     "1. Using only matrix multiplications, can you design a new scoring function for queries and keys with different vector lengths?\n",
     "1. When queries and keys have the same vector length, is vector summation a better scoring function than the dot product? Why or why not?"
   ]
  }
 ],
 "metadata": {
  "celltoolbar": "幻灯片",
  "hide_input": false,
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.8"
  },
  "latex_envs": {
   "LaTeX_envs_menu_present": true,
   "autoclose": true,
   "autocomplete": true,
   "bibliofile": "biblio.bib",
   "cite_by": "apalike",
   "current_citInitial": 1,
   "eqLabelWithNumbers": true,
   "eqNumInitial": 1,
   "hotkeys": {
    "equation": "Ctrl-E",
    "itemize": "Ctrl-I"
   },
   "labels_anchors": false,
   "latex_user_defs": false,
   "report_style_numbering": false,
   "user_envs_cfg": false
  },
  "required_libs": [],
  "toc": {
   "base_numbering": 1,
   "nav_menu": {},
   "number_sections": true,
   "sideBar": true,
   "skip_h1_title": false,
   "title_cell": "Table of Contents",
   "title_sidebar": "Contents",
   "toc_cell": false,
   "toc_position": {},
   "toc_section_display": true,
   "toc_window_display": false
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
