{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "origin_pos": 0,
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "<div class=\"jumbotron\">\n",
    "    <p class=\"display-1 h1\">Multi-Head Attention</p>\n",
    "    <hr class=\"my-4\">\n",
    "    <p>Lecturer: Li Yan</p>\n",
    "    <p>School of Management</p>\n",
    "    <p>liyan@cumtb.edu.cn</p>\n",
    "</div>\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "## Why Multi-Head Attention?\n",
    "\n",
    "### Limitations of Single-Head Attention\n",
    "\n",
    "**The problem with a single head**:\n",
    "- Given the same set of queries, keys, and values,\n",
    "- it can learn only **one** attention pattern\n",
    "- and cannot capture **different kinds of dependencies** at the same time\n",
    "\n",
    "**Examples** (natural language processing):\n",
    "- Short-range dependencies: relations between adjacent words (e.g., \"the cat\")\n",
    "- Long-range dependencies: relations between distant words (e.g., subject-verb agreement)\n",
    "- Syntactic dependencies: relations in the sentence structure\n",
    "- Semantic dependencies: relations based on similarity of meaning\n",
    "\n",
    "**The limitation of a single head**:\n",
    "- It can focus on only one kind of dependency\n",
    "- and cannot handle several kinds at once\n",
    "\n",
    "### The Core Idea of Multi-Head Attention\n",
    "\n",
    "**Design motivation**:\n",
    "- Let the model learn **different behaviors** from the **same attention mechanism**\n",
    "- and **combine** those behaviors\n",
    "- to capture **dependencies over various ranges** within a sequence\n",
    "\n",
    "**How it is achieved**:\n",
    "- Transform the queries, keys, and values with $h$ sets of **different linear projections**\n",
    "- Each set of projections learns a different **subspace representation**\n",
    "- Each subspace can attend to a different kind of dependency\n",
    "\n",
    "**Analogy**:\n",
    "- Like having several \"experts\" analyze the same problem at once:\n",
    "- each expert focuses on a different aspect,\n",
    "- and their opinions are combined at the end"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "### What Is a Subspace Representation?\n",
    "\n",
    "**Subspace**:\n",
    "- A **lower-dimensional subset** of the original space,\n",
    "- obtained through a linear transformation,\n",
    "- that can capture specific features of the data\n",
    "\n",
    "**Example**:\n",
    "- Original query: a 100-dimensional vector\n",
    "- Subspace 1: 20 dimensions, attending to syntactic relations\n",
    "- Subspace 2: 20 dimensions, attending to semantic relations\n",
    "- Subspace 3: 20 dimensions, attending to positional relations\n",
    "- ...\n",
    "\n",
    "**Advantages**:\n",
    "- Each subspace specializes in one kind of feature\n",
    "- Multiple subspaces can be learned in parallel\n",
    "- Combined, they are more expressive\n"
   ]
  },
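  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "The example above can be sketched with two independently initialized linear maps (a minimal illustration; the names `proj_syntax` and `proj_semantics` are hypothetical labels, not learned roles):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    },
    "tab": [
     "pytorch"
    ]
   },
   "outputs": [],
   "source": [
    "import torch\n",
    "from torch import nn\n",
    "\n",
    "q = torch.randn(100)                             # a 100-dimensional query\n",
    "proj_syntax = nn.Linear(100, 20, bias=False)     # one 20-dim subspace\n",
    "proj_semantics = nn.Linear(100, 20, bias=False)  # another 20-dim subspace\n",
    "# Different initializations (and, after training, different weights) give\n",
    "# each projection a different 20-dimensional view of the same query\n",
    "proj_syntax(q).shape, proj_semantics(q).shape"
   ]
  },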
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "## How Multi-Head Attention Works\n",
    "\n",
    "![Multi-head attention: multiple heads are concatenated and then linearly transformed](../img/9_attention_mechanisms/multi-head-attention.svg)\n",
    "\n",
    "**Figure 1: The multi-head attention architecture**\n",
    "\n",
    "**Key components**:\n",
    "- $h$ independent sets of linear projections ($W_q, W_k, W_v$)\n",
    "- $h$ parallel attention heads\n",
    "- a concatenation operation\n",
    "- an output linear projection ($W_o$)\n",
    "\n",
    "**Steps**:\n",
    "\n",
    "1. **Linear projection**:\n",
    "   - Transform the queries, keys, and values with $h$ different sets of linear projections\n",
    "   - Each set projects into a different subspace\n",
    "\n",
    "2. **Parallel computation**:\n",
    "   - Feed the $h$ transformed sets of queries, keys, and values into attention pooling in parallel\n",
    "   - Each head computes attention independently\n",
    "\n",
    "3. **Concatenation**:\n",
    "   - Concatenate the outputs of the $h$ heads\n",
    "\n",
    "4. **Final transformation**:\n",
    "   - Apply another linear projection to obtain the final output\n",
    "\n",
    "**Key properties**:\n",
    "- Each head attends to different parts of the input\n",
    "- The result can represent more complex functions than a simple weighted average"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "## Mathematical Formulation of Multi-Head Attention\n",
    "\n",
    "### Inputs and Parameters\n",
    "\n",
    "**Inputs**:\n",
    "- Query: $\\mathbf{q} \\in \\mathbb{R}^{d_q}$\n",
    "- Key: $\\mathbf{k} \\in \\mathbb{R}^{d_k}$\n",
    "- Value: $\\mathbf{v} \\in \\mathbb{R}^{d_v}$\n",
    "\n",
    "**Parameters**:\n",
    "- $h$: number of attention heads\n",
    "- $p_o$: output dimension (`num_hiddens`)\n",
    "- $p_q = p_k = p_v = p_o / h$: dimension of each head\n",
    "\n",
    "### Computation in Each Head\n",
    "\n",
    "For head $i$:\n",
    "\n",
    "**Step 1: Linear projection into a subspace**\n",
    "- Query projection: $\\mathbf{q}_i = \\mathbf W_i^{(q)}\\mathbf q \\in \\mathbb{R}^{p_q}$\n",
    "- Key projection: $\\mathbf{k}_i = \\mathbf W_i^{(k)}\\mathbf k \\in \\mathbb{R}^{p_k}$\n",
    "- Value projection: $\\mathbf{v}_i = \\mathbf W_i^{(v)}\\mathbf v \\in \\mathbb{R}^{p_v}$\n",
    "\n",
    "**Step 2: Compute attention** (here, scaled dot-product attention)\n",
    "$$\\mathbf{h}_i = \\text{Attention}(\\mathbf{q}_i, \\mathbf{k}_i, \\mathbf{v}_i) = \\text{softmax}\\left(\\frac{\\mathbf{q}_i \\mathbf{k}_i^\\top}{\\sqrt{p_k}}\\right)\\mathbf{v}_i \\in \\mathbb{R}^{p_v}$$\n",
    "\n",
    "**Step 3: Concatenate all heads**\n",
    "$$\\mathbf{H} = [\\mathbf{h}_1; \\mathbf{h}_2; \\ldots; \\mathbf{h}_h] \\in \\mathbb{R}^{h p_v}$$\n",
    "\n",
    "**Step 4: Output projection**\n",
    "$$\\text{Output} = \\mathbf W_o \\mathbf{H} \\in \\mathbb{R}^{p_o}$$\n",
    "\n",
    "where $\\mathbf W_o \\in \\mathbb{R}^{p_o \\times h p_v}$ is the output projection matrix."
   ]
  },
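  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "Step 2 above can be checked numerically for one head with toy sizes (a sketch: `q`, `K`, `V` stand in for the projected $\\mathbf{q}_i$ and the stacked $\\mathbf{k}_i$, $\\mathbf{v}_i$ over several key-value pairs):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    },
    "tab": [
     "pytorch"
    ]
   },
   "outputs": [],
   "source": [
    "import math\n",
    "import torch\n",
    "\n",
    "p_k, p_v, n = 4, 4, 3            # toy head dimensions, 3 key-value pairs\n",
    "q = torch.randn(1, p_k)          # one projected query\n",
    "K = torch.randn(n, p_k)          # projected keys\n",
    "V = torch.randn(n, p_v)          # projected values\n",
    "weights = torch.softmax(q @ K.T / math.sqrt(p_k), dim=-1)\n",
    "h_i = weights @ V                # this head's output, shape (1, p_v)\n",
    "weights.sum(), h_i.shape         # the attention weights sum to 1"
   ]
  },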
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "### Why Does Each Head Attend to Different Parts?\n",
    "\n",
    "**Design principle**:\n",
    "- Each head uses a **different linear projection**\n",
    "- The projection matrices are **learned independently**\n",
    "- so different projections come to focus on different features\n",
    "\n",
    "**In practice**:\n",
    "- Head 1 may attend to syntactic relations\n",
    "- Head 2 may attend to semantic relations\n",
    "- Head 3 may attend to positional relations\n",
    "- Head 4 may attend to long-range dependencies\n",
    "- ...\n",
    "\n",
    "**Benefits of the combination**:\n",
    "- More powerful than a simple weighted average\n",
    "- Captures several kinds of dependencies at once\n",
    "- Greater expressive power\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "## A Key Design Choice: Setting the Dimensions\n",
    "\n",
    "### Why Set $p_q = p_k = p_v = p_o / h$?\n",
    "\n",
    "This is the standard setting in the Transformer, for the following reasons:\n",
    "\n",
    "### 1. Parameter Efficiency\n",
    "\n",
    "**Option 1: full dimension for every head** (not used)\n",
    "- Each head: $p_o$ dimensions\n",
    "- Total parameters: $O(h \\times p_o \\times (d_q + d_k + d_v))$\n",
    "- Problem: the parameter count grows linearly with the number of heads\n",
    "\n",
    "**Option 2: shared dimension** (used)\n",
    "- Each head: $p_o / h$ dimensions\n",
    "- Total parameters: $O(p_o \\times (d_q + d_k + d_v))$\n",
    "- Advantage: **the parameter count is independent of the number of heads**!\n",
    "\n",
    "**Example**:\n",
    "- With $p_o = 512$ and $h = 8$,\n",
    "- each head has $512 / 8 = 64$ dimensions,\n",
    "- and the total parameter count equals that of a single 512-dimensional head, with greater expressive power\n",
    "\n",
    "### 2. Parallel Computation\n",
    "\n",
    "- All heads can be computed in parallel\n",
    "- Each head has a small dimension, so it computes quickly\n",
    "- Shape transformations make efficient batched processing possible\n",
    "\n",
    "### 3. Expressive Power\n",
    "\n",
    "- Although each head is low-dimensional, the combination of heads remains highly expressive\n",
    "- The idea is similar in spirit to ensemble learning"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-18T07:01:32.189972Z",
     "iopub.status.busy": "2023-08-18T07:01:32.189240Z",
     "iopub.status.idle": "2023-08-18T07:01:34.516491Z",
     "shell.execute_reply": "2023-08-18T07:01:34.515475Z"
    },
    "origin_pos": 2,
    "slideshow": {
     "slide_type": "skip"
    },
    "tab": [
     "pytorch"
    ]
   },
   "outputs": [],
   "source": [
    "import math\n",
    "import torch\n",
    "from torch import nn\n",
    "from d2l import torch as d2l"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "## Implementation Design Choices\n",
    "\n",
    "### Choice of Attention Function\n",
    "\n",
    "In practice, **scaled dot-product attention** is usually chosen for each attention head,\n",
    "because it:\n",
    "- is computationally efficient (matrix multiplication)\n",
    "- has no parameters of its own (apart from the projection matrices)\n",
    "- is the standard choice in the Transformer\n",
    "\n",
    "### The Trick Behind Parallel Computation\n",
    "\n",
    "**Key idea**:\n",
    "- First apply the linear projections to obtain a $p_o$-dimensional representation\n",
    "- Then reshape it into $h$ subspaces of $p_o/h$ dimensions each\n",
    "- Compute attention for all heads in parallel\n",
    "- Finally concatenate and apply the output projection\n",
    "\n",
    "**In code**:\n",
    "- `transpose_qkv` \"unrolls\" the heads into the batch dimension\n",
    "- attention is computed for all heads in one call\n",
    "- `transpose_output` restores the original shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "## Shape Transformations: The Key to Parallel Computation\n",
    "\n",
    "### Why Transform Shapes?\n",
    "\n",
    "**Goal**: enable parallel computation\n",
    "\n",
    "**Problem**:\n",
    "- The heads must be computed independently,\n",
    "- yet we want to process all of them at once (for efficiency)\n",
    "\n",
    "**Solution**:\n",
    "- Use shape transformations to \"unroll\" the heads into the batch dimension\n",
    "- so that each head's data looks like an independent batch example\n",
    "- and attention for all heads can be computed in a single call\n",
    "\n",
    "### The transpose_qkv Function\n",
    "\n",
    "**Input**: `(batch_size, num_items, num_hiddens)`\n",
    "\n",
    "**Three steps**:\n",
    "1. **Reshape**: `reshape(batch_size, num_items, num_heads, num_hiddens/num_heads)`\n",
    "   - Split the `num_hiddens` axis into `num_heads` and `num_hiddens/num_heads`\n",
    "\n",
    "2. **Transpose**: `permute(0, 2, 1, 3)`\n",
    "   - Move the `num_heads` axis next to the batch axis\n",
    "\n",
    "3. **Reshape**: `reshape(-1, num_items, num_hiddens/num_heads)`\n",
    "   - Merge `batch_size` and `num_heads` into the first axis\n",
    "\n",
    "**Output**: `(batch_size*num_heads, num_items, num_hiddens/num_heads)`\n",
    "\n",
    "### The transpose_output Function\n",
    "\n",
    "**Input**: `(batch_size*num_heads, num_items, num_hiddens/num_heads)`\n",
    "\n",
    "**Three steps** (the inverse transformation):\n",
    "1. **Reshape**: `reshape(-1, num_heads, num_items, num_hiddens/num_heads)`\n",
    "   - Split the first axis into `batch_size` and `num_heads`\n",
    "\n",
    "2. **Transpose**: `permute(0, 2, 1, 3)`\n",
    "   - Move `num_heads` back to the third axis\n",
    "\n",
    "3. **Reshape**: `reshape(batch_size, num_items, num_hiddens)`\n",
    "   - Merge the `num_heads` and `num_hiddens/num_heads` axes\n",
    "\n",
    "**Output**: `(batch_size, num_items, num_hiddens)`\n",
    "\n",
    "### A Shape-Transformation Example\n",
    "\n",
    "**Assume**: `batch_size=2`, `num_queries=4`, `num_hiddens=100`, `num_heads=5`\n",
    "\n",
    "**transpose_qkv**:\n",
    "- Input: `(2, 4, 100)`\n",
    "- Step 1: `(2, 4, 5, 20)`\n",
    "- Step 2: `(2, 5, 4, 20)`\n",
    "- Output: `(10, 4, 20)` ← ready for parallel computation\n",
    "\n",
    "**transpose_output**:\n",
    "- Input: `(10, 4, 20)`\n",
    "- Step 1: `(2, 5, 4, 20)`\n",
    "- Step 2: `(2, 4, 5, 20)`\n",
    "- Output: `(2, 4, 100)` ← original shape restored"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-18T07:01:34.521491Z",
     "iopub.status.busy": "2023-08-18T07:01:34.521131Z",
     "iopub.status.idle": "2023-08-18T07:01:34.530492Z",
     "shell.execute_reply": "2023-08-18T07:01:34.529556Z"
    },
    "origin_pos": 7,
    "slideshow": {
     "slide_type": "slide"
    },
    "tab": [
     "pytorch"
    ]
   },
   "outputs": [],
   "source": [
    "#@save\n",
    "class MultiHeadAttention(nn.Module):\n",
    "    \"\"\"Multi-head attention\"\"\"\n",
    "    def __init__(self, key_size, query_size, value_size, num_hiddens,\n",
    "                 num_heads, dropout, bias=False, **kwargs):\n",
    "        super(MultiHeadAttention, self).__init__(**kwargs)\n",
    "        self.num_heads = num_heads\n",
    "        self.attention = d2l.DotProductAttention(dropout)\n",
    "        self.W_q = nn.Linear(query_size, num_hiddens, bias=bias)\n",
    "        self.W_k = nn.Linear(key_size, num_hiddens, bias=bias)\n",
    "        self.W_v = nn.Linear(value_size, num_hiddens, bias=bias)\n",
    "        self.W_o = nn.Linear(num_hiddens, num_hiddens, bias=bias)\n",
    "\n",
    "    def forward(self, queries, keys, values, valid_lens):\n",
    "        # Shape of queries, keys, values:\n",
    "        # (batch_size, no. of queries or key-value pairs, num_hiddens)\n",
    "        # Shape of valid_lens: (batch_size,) or (batch_size, no. of queries)\n",
    "        # After transposing, shape of output queries, keys, values:\n",
    "        # (batch_size*num_heads, no. of queries or key-value pairs,\n",
    "        # num_hiddens/num_heads)\n",
    "        queries = transpose_qkv(self.W_q(queries), self.num_heads)\n",
    "        keys = transpose_qkv(self.W_k(keys), self.num_heads)\n",
    "        values = transpose_qkv(self.W_v(values), self.num_heads)\n",
    "\n",
    "        if valid_lens is not None:\n",
    "            # On axis 0, copy the first item (scalar or vector) num_heads\n",
    "            # times, then copy the next item, and so on\n",
    "            valid_lens = torch.repeat_interleave(\n",
    "                valid_lens, repeats=self.num_heads, dim=0)\n",
    "\n",
    "        # Shape of output: (batch_size*num_heads, no. of queries,\n",
    "        # num_hiddens/num_heads)\n",
    "        output = self.attention(queries, keys, values, valid_lens)\n",
    "\n",
    "        # Shape of output_concat: (batch_size, no. of queries, num_hiddens)\n",
    "        output_concat = transpose_output(output, self.num_heads)\n",
    "        return self.W_o(output_concat)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-18T07:01:34.534820Z",
     "iopub.status.busy": "2023-08-18T07:01:34.534308Z",
     "iopub.status.idle": "2023-08-18T07:01:34.540852Z",
     "shell.execute_reply": "2023-08-18T07:01:34.539927Z"
    },
    "origin_pos": 12,
    "slideshow": {
     "slide_type": "slide"
    },
    "tab": [
     "pytorch"
    ]
   },
   "outputs": [],
   "source": [
    "#@save\n",
    "def transpose_qkv(X, num_heads):\n",
    "    \"\"\"Transposition for parallel computation of multiple attention heads\"\"\"\n",
    "    # Shape of input X: (batch_size, no. of queries or key-value pairs,\n",
    "    # num_hiddens). Shape of output X: (batch_size, no. of queries or\n",
    "    # key-value pairs, num_heads, num_hiddens/num_heads)\n",
    "    X = X.reshape(X.shape[0], X.shape[1], num_heads, -1)\n",
    "\n",
    "    # Shape of output X: (batch_size, num_heads, no. of queries or\n",
    "    # key-value pairs, num_hiddens/num_heads)\n",
    "    X = X.permute(0, 2, 1, 3)\n",
    "\n",
    "    # Shape of final output: (batch_size*num_heads, no. of queries or\n",
    "    # key-value pairs, num_hiddens/num_heads)\n",
    "    return X.reshape(-1, X.shape[2], X.shape[3])\n",
    "\n",
    "\n",
    "#@save\n",
    "def transpose_output(X, num_heads):\n",
    "    \"\"\"Reverse the operation of transpose_qkv\"\"\"\n",
    "    X = X.reshape(-1, num_heads, X.shape[1], X.shape[2])\n",
    "    X = X.permute(0, 2, 1, 3)\n",
    "    return X.reshape(X.shape[0], X.shape[1], -1)"
   ]
  },
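  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "The worked shape example can also be verified by spelling out the same reshape/permute steps directly (a standalone check that does not depend on the functions above):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    },
    "tab": [
     "pytorch"
    ]
   },
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "batch_size, num_queries, num_hiddens, num_heads = 2, 4, 100, 5\n",
    "X = torch.ones((batch_size, num_queries, num_hiddens))      # (2, 4, 100)\n",
    "\n",
    "# The three steps of transpose_qkv, written out explicitly\n",
    "step1 = X.reshape(batch_size, num_queries, num_heads, -1)   # (2, 4, 5, 20)\n",
    "step2 = step1.permute(0, 2, 1, 3)                           # (2, 5, 4, 20)\n",
    "unrolled = step2.reshape(-1, num_queries, num_hiddens // num_heads)\n",
    "\n",
    "# The inverse steps of transpose_output restore the original shape\n",
    "back = unrolled.reshape(batch_size, num_heads, num_queries, -1)\n",
    "restored = back.permute(0, 2, 1, 3).reshape(batch_size, num_queries, -1)\n",
    "unrolled.shape, restored.shape"
   ]
  },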
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "### Handling valid_lens\n",
    "\n",
    "**Problem**:\n",
    "- Every head needs the `valid_lens` information,\n",
    "- but `valid_lens` has shape `(batch_size,)` (or `(batch_size, num_queries)`)\n",
    "\n",
    "**Solution**:\n",
    "- Use `torch.repeat_interleave` to copy each entry `num_heads` times,\n",
    "- turning `(batch_size,)` into `(batch_size*num_heads,)`\n",
    "\n",
    "**Example**:\n",
    "- Original: `[3, 2]` (2 examples)\n",
    "- After copying: `[3, 3, 3, 3, 3, 2, 2, 2, 2, 2]` (10 entries, one per example-head pair: `batch_size*num_heads = 2*5`)\n",
    "\n",
    "**Why**:\n",
    "- Each head processes a different view of the data,\n",
    "- but the valid-length information is the same for every head\n"
   ]
  },
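  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "The copying above can be seen directly with `torch.repeat_interleave`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    },
    "tab": [
     "pytorch"
    ]
   },
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "valid_lens = torch.tensor([3, 2])  # one valid length per example\n",
    "num_heads = 5\n",
    "# Each entry is repeated num_heads times in place, so every head of an\n",
    "# example sees that example's valid length\n",
    "torch.repeat_interleave(valid_lens, repeats=num_heads, dim=0)\n",
    "# -> tensor([3, 3, 3, 3, 3, 2, 2, 2, 2, 2])"
   ]
  },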
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "## Testing the Model\n",
    "\n",
    "Below we test our `MultiHeadAttention` class with a small example in which the keys and values are identical.\n",
    "\n",
    "**Test setup**:\n",
    "- `num_hiddens = 100`: output dimension\n",
    "- `num_heads = 5`: number of attention heads\n",
    "- `batch_size = 2`: 2 examples\n",
    "- `num_queries = 4`: 4 queries\n",
    "- `num_kvpairs = 6`: 6 key-value pairs\n",
    "- `valid_lens = [3, 2]`: valid lengths\n",
    "\n",
    "**Expected output shape**: `(batch_size, num_queries, num_hiddens)` = `(2, 4, 100)`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-18T07:01:34.545405Z",
     "iopub.status.busy": "2023-08-18T07:01:34.544605Z",
     "iopub.status.idle": "2023-08-18T07:01:34.571251Z",
     "shell.execute_reply": "2023-08-18T07:01:34.570476Z"
    },
    "origin_pos": 17,
    "slideshow": {
     "slide_type": "slide"
    },
    "tab": [
     "pytorch"
    ]
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "MultiHeadAttention(\n",
       "  (attention): DotProductAttention(\n",
       "    (dropout): Dropout(p=0.5, inplace=False)\n",
       "  )\n",
       "  (W_q): Linear(in_features=100, out_features=100, bias=False)\n",
       "  (W_k): Linear(in_features=100, out_features=100, bias=False)\n",
       "  (W_v): Linear(in_features=100, out_features=100, bias=False)\n",
       "  (W_o): Linear(in_features=100, out_features=100, bias=False)\n",
       ")"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "num_hiddens, num_heads = 100, 5\n",
    "attention = MultiHeadAttention(num_hiddens, num_hiddens, num_hiddens,\n",
    "                               num_hiddens, num_heads, 0.5)\n",
    "attention.eval()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-18T07:01:34.574642Z",
     "iopub.status.busy": "2023-08-18T07:01:34.574021Z",
     "iopub.status.idle": "2023-08-18T07:01:34.588848Z",
     "shell.execute_reply": "2023-08-18T07:01:34.587945Z"
    },
    "origin_pos": 20,
    "slideshow": {
     "slide_type": "slide"
    },
    "tab": [
     "pytorch"
    ]
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "torch.Size([2, 4, 100])"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "batch_size, num_queries = 2, 4\n",
    "num_kvpairs, valid_lens =  6, torch.tensor([3, 2])\n",
    "X = torch.ones((batch_size, num_queries, num_hiddens))\n",
    "Y = torch.ones((batch_size, num_kvpairs, num_hiddens))\n",
    "attention(X, Y, Y, valid_lens).shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "## Multi-Head vs. Single-Head Attention\n",
    "\n",
    "| Property | Single-head | Multi-head |\n",
    "|------|-----------|-----------|\n",
    "| **Attention patterns** | 1 | $h$ (in parallel) |\n",
    "| **Dependencies** | one kind | several kinds |\n",
    "| **Expressive power** | weaker | stronger |\n",
    "| **Parameter count** | $O(p_o \\times d)$ | $O(p_o \\times d)$ (the same!) |\n",
    "| **Computational complexity** | $O(nmd)$ | $O(nmd)$ (the same, but parallelizable) |\n",
    "| **Parallelism** | partial | fully parallel |\n",
    "\n",
    "**Advantages**:\n",
    "- ✅ Greater expressive power (multiple subspaces)\n",
    "- ✅ Captures several kinds of dependencies at once\n",
    "- ✅ Parallel computation improves efficiency\n",
    "- ✅ No increase in parameter count (thanks to the dimension setting)\n",
    "\n",
    "**Costs**:\n",
    "- ❌ More memory (the outputs of all heads must be stored)\n",
    "- ❌ A more involved implementation (shape transformations)\n",
    "\n",
    "### Choosing the Number of Heads in Practice\n",
    "\n",
    "**Common configurations**:\n",
    "- **BERT-base**: 12 heads of 64 dimensions each ($p_o = 768$)\n",
    "- **BERT-large**: 16 heads of 64 dimensions each ($p_o = 1024$)\n",
    "- **GPT-2**: 12-25 heads, depending on model size\n",
    "- **GPT-3**: 96 heads ($p_o = 12288$)\n",
    "\n",
    "**Guidelines**:\n",
    "- The head count is usually a divisor of $p_o$ (so the dimensions divide evenly)\n",
    "- More heads: more expressive, but more costly to compute\n",
    "- 8-16 heads is usually a good trade-off"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "### Questions to Consider\n",
    "\n",
    "1. **Why multiple heads?**\n",
    "   - What are the limitations of single-head attention?\n",
    "   - How do multiple heads address them?\n",
    "\n",
    "2. **What does a subspace representation mean?**\n",
    "   - What is a subspace?\n",
    "   - Why can different subspaces attend to different features?\n",
    "\n",
    "3. **Why set $p_q = p_k = p_v = p_o / h$?**\n",
    "   - What would happen if every head used $p_o$ dimensions?\n",
    "   - What are the pros and cons of this setting?\n",
    "\n",
    "4. **What is the purpose of the shape transformations?**\n",
    "   - Why are `transpose_qkv` and `transpose_output` needed?\n",
    "   - How do they enable parallel computation?\n",
    "\n",
    "5. **Does each head really attend to different parts?**\n",
    "   - How could you verify that different heads learn different patterns?\n",
    "   - How could you visualize the attention weights of each head?\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "### Exercises\n",
    "\n",
    "1. **Visualize the attention weights of each head**:\n",
    "   - Modify the code to visualize each head's attention weights separately\n",
    "   - Observe whether different heads attend to different positions\n",
    "   - Analyze the functional differences between heads\n",
    "\n",
    "2. **Analyze the importance of each head**:\n",
    "   - Design experiments to measure how important each head is\n",
    "   - Method 1: remove heads one at a time and observe the change in performance\n",
    "   - Method 2: analyze the distribution of each head's attention weights\n",
    "   - Method 3: use gradient-based analysis\n",
    "\n",
    "3. **Choosing the number of heads**:\n",
    "   - Try different head counts (1, 2, 4, 8, 16)\n",
    "   - Observe the effect on model performance\n",
    "   - Find the best head count\n",
    "\n",
    "4. **Dimension-setting experiments**:\n",
    "   - Try different dimension settings\n",
    "   - Compare $p_q = p_k = p_v = p_o / h$ with other choices\n",
    "   - Analyze the trade-off between parameter count and performance\n",
    "\n",
    "5. **Efficiency of parallel computation**:\n",
    "   - Compare the efficiency of parallel and sequential computation\n",
    "   - Analyze the overhead of the shape transformations\n",
    "   - Optimize the parallel implementation\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "### What to Learn Next\n",
    "\n",
    "- **The Transformer architecture**: multi-head attention is its core component\n",
    "- **Self-attention**: the queries, keys, and values come from the same sequence\n",
    "- **Positional encoding**: how to inject position information into a sequence\n",
    "- **BERT and GPT**: pretrained models built on the Transformer"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "slideshow": {
     "slide_type": "slide"
    }
   },
   "source": [
    "## Summary\n",
    "\n",
    "* Multi-head attention combines knowledge from multiple attention-pooling operations; the knowledge differs because the same queries, keys, and values are mapped into different subspace representations.\n",
    "* With suitable tensor operations, multi-head attention can be computed in parallel.\n",
    "* Setting $p_q = p_k = p_v = p_o / h$ yields greater expressive power without increasing the parameter count.\n",
    "* Each head learns a different subspace representation, so syntactic, semantic, positional, and other dependencies can be captured simultaneously.\n",
    "\n",
    "## Key Takeaways\n",
    "\n",
    "1. **Why multiple heads?**\n",
    "   - A single head can learn only one attention pattern\n",
    "   - Multiple heads can learn several patterns at once\n",
    "\n",
    "2. **How is parallel computation achieved?**\n",
    "   - Shape transformations \"unroll\" the heads into the batch dimension\n",
    "   - so attention for all heads is computed in one call\n",
    "\n",
    "3. **The cleverness of the dimension setting**\n",
    "   - $p_q = p_k = p_v = p_o / h$ keeps the parameter count independent of the head count\n",
    "   - a key element of the Transformer design"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "origin_pos": 24,
    "slideshow": {
     "slide_type": "slide"
    },
    "tab": [
     "pytorch"
    ]
   },
   "source": [
    "[Discussions](https://discuss.d2l.ai/t/5758)\n"
   ]
  }
 ],
 "metadata": {
  "celltoolbar": "Slideshow",
  "hide_input": false,
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.8"
  },
  "latex_envs": {
   "LaTeX_envs_menu_present": true,
   "autoclose": true,
   "autocomplete": true,
   "bibliofile": "biblio.bib",
   "cite_by": "apalike",
   "current_citInitial": 1,
   "eqLabelWithNumbers": true,
   "eqNumInitial": 1,
   "hotkeys": {
    "equation": "Ctrl-E",
    "itemize": "Ctrl-I"
   },
   "labels_anchors": false,
   "latex_user_defs": false,
   "report_style_numbering": false,
   "user_envs_cfg": false
  },
  "required_libs": [],
  "toc": {
   "base_numbering": 1,
   "nav_menu": {},
   "number_sections": true,
   "sideBar": true,
   "skip_h1_title": false,
   "title_cell": "Table of Contents",
   "title_sidebar": "Contents",
   "toc_cell": false,
   "toc_position": {},
   "toc_section_display": true,
   "toc_window_display": false
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
