{
 "cells": [
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "# The Key Design of LLMs (Attention Is All You Need)\n",
    "\n",
    "Based on the video [LLM最关键的设计（Attention Is All You Need）](https://www.bilibili.com/video/BV1DW421R7rz). Starting with this chapter, we follow the instructor and implement GPT-2 from scratch.\n",
    "\n",
    "Note: the English term “token” corresponds to the Chinese “词元”.\n",
    "\n",
    "In the earlier lessons on using RNNs for translation, the encoder encodes a sentence and we take its final hidden state, which summarizes all of the preceding context; passing this hidden state to the decoder lets it decode the translated output. The problem is that a single hidden state has limited capacity: when the text is long it cannot capture all of the information, and tokens farther from the final hidden state contribute less to it, so long-text translation becomes inaccurate. The proposed fix is to use all of the hidden states together in the computation. At each step, the decoder computes a weight for every encoder hidden state based on its current input, then sums the hidden states under those weights to obtain a so-called “context vector”, which is passed to the decoder as an additional input. This is the attention mechanism.\n",
    "\n",
    "![](./images/注意力机制.png)\n",
    "\n",
    "![](./images/注意力机制的发展史.png)\n",
    "\n",
    "$I_i$ is the state of the decoder's current input token; the computation between $I_i$ and each of the encoder's hidden states serves to weight them (scoring how relevant each hidden state is to the current output)."
   ],
   "id": "d9cc41a2b1f76e45"
  },
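  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "A minimal sketch of the idea above, assuming simple dot-product scores (classic attention learns a small scoring network instead): the context vector is just a softmax-weighted sum of the encoder hidden states.\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import torch.nn.functional as F\n",
    "\n",
    "hidden = torch.randn(5, 8)  # encoder hidden states: (T, H), one per source token\n",
    "query = torch.randn(8)      # decoder state for the current input token: (H,)\n",
    "\n",
    "scores = hidden @ query              # one alignment score per hidden state, (T,)\n",
    "weights = F.softmax(scores, dim=-1)  # attention weights, sum to 1\n",
    "context = weights @ hidden           # context vector: weighted sum, (H,)\n",
    "print(weights.sum(), context.shape)\n",
    "```"
   ],
   "id": "3f1a9c2b7d8e4a01"
  },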
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-13T09:14:32.308313Z",
     "start_time": "2025-09-13T09:14:24.626088Z"
    }
   },
   "cell_type": "code",
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F"
   ],
   "id": "e31c8306d6bafacb",
   "outputs": [],
   "execution_count": 2
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-03T08:07:10.134548Z",
     "start_time": "2025-09-03T08:07:10.048282Z"
    }
   },
   "cell_type": "code",
   "source": [
    "sequence_len = 64\n",
    "device = 'cuda' if torch.cuda.is_available() else 'cpu'"
   ],
   "id": "bbb0ccd7e2e0a455",
   "outputs": [],
   "execution_count": 34
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-03T08:07:10.145915Z",
     "start_time": "2025-09-03T08:07:10.141887Z"
    }
   },
   "cell_type": "code",
   "source": [
    "# softmax property: a score of -inf becomes a weight of exactly 0\n",
    "a = torch.tensor((1, 2, float('-inf'))).float()\n",
    "print(F.softmax(a, dim=-1))"
   ],
   "id": "5d2e6c1eccbf9e8e",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([0.2689, 0.7311, 0.0000])\n"
     ]
    }
   ],
   "execution_count": 35
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-03T08:07:10.170726Z",
     "start_time": "2025-09-03T08:07:10.166049Z"
    }
   },
   "cell_type": "code",
   "source": [
    "# K: (B, T, H)  B -- batch size  T -- sequence length  H -- feature dimension\n",
    "# Q: (B, T, H)\n",
    "# alignment scores: K @ Q.transpose(-2, -1)  (batched inner products via matrix multiplication; transpose swaps the last two dims)\n",
    "# K @ Q.transpose(-2, -1) has shape (B, T, T)\n",
    "scores = torch.randn(1, 4, 4)\n",
    "scores"
   ],
   "id": "905162ab6e5b5b36",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[[ 0.6087,  0.8096,  0.6145, -0.2689],\n",
       "         [-1.6752,  1.1493, -0.4753, -1.9654],\n",
       "         [ 0.6156,  0.1876,  0.6369, -1.0195],\n",
       "         [-0.5846, -0.5445, -0.4847, -0.6873]]])"
      ]
     },
     "execution_count": 36,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 36
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-03T08:07:10.192847Z",
     "start_time": "2025-09-03T08:07:10.186281Z"
    }
   },
   "cell_type": "code",
   "source": [
    "# build a lower-triangular mask\n",
    "tril = torch.tril(torch.ones(4, 4))\n",
    "print(tril)\n",
    "s = scores.masked_fill(tril == 0, float('-inf'))\n",
    "s"
   ],
   "id": "8a8214f8f89d64ef",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[1., 0., 0., 0.],\n",
      "        [1., 1., 0., 0.],\n",
      "        [1., 1., 1., 0.],\n",
      "        [1., 1., 1., 1.]])\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "tensor([[[ 0.6087,    -inf,    -inf,    -inf],\n",
       "         [-1.6752,  1.1493,    -inf,    -inf],\n",
       "         [ 0.6156,  0.1876,  0.6369,    -inf],\n",
       "         [-0.5846, -0.5445, -0.4847, -0.6873]]])"
      ]
     },
     "execution_count": 37,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 37
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-13T10:07:29.053607Z",
     "start_time": "2025-09-13T10:07:29.044415Z"
    }
   },
   "cell_type": "code",
   "source": [
    "# the attention weight distribution\n",
    "print(F.softmax(s, dim=-1))"
   ],
   "id": "85e9af09b8aec7c1",
   "outputs": [],
   "execution_count": 15
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-03T08:07:10.239270Z",
     "start_time": "2025-09-03T08:07:10.231247Z"
    }
   },
   "cell_type": "code",
   "source": [
    "# softmax is sensitive to the standard deviation of its inputs\n",
    "x = torch.rand(1, 8)\n",
    "print(x.std(), F.softmax(x, dim=-1))\n",
    "print(F.softmax(x * 1000, dim=-1))  # very sensitive to the scale: the standard deviation should stay close to 1"
   ],
   "id": "172e182e6797f9e3",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor(0.2665) tensor([[0.1693, 0.1284, 0.1339, 0.1558, 0.0904, 0.0793, 0.1036, 0.1391]])\n",
      "tensor([[1.0000e+00, 0.0000e+00, 0.0000e+00, 1.0554e-36, 0.0000e+00, 0.0000e+00,\n",
      "         0.0000e+00, 0.0000e+00]])\n"
     ]
    }
   ],
   "execution_count": 39
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-03T08:07:10.276271Z",
     "start_time": "2025-09-03T08:07:10.269762Z"
    }
   },
   "cell_type": "code",
   "source": [
    "# standard deviation of the alignment scores\n",
    "B, T, H = 32, 100, 10\n",
    "K = torch.randn(B, T, H)\n",
    "Q = torch.randn(B, T, H)\n",
    "scores = K @ Q.transpose(-2, -1) / H ** 0.5  # scale so the std stays close to 1 (normalization)\n",
    "print(scores.std())"
   ],
   "id": "faf9cf9fa1066643",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor(0.9888)\n"
     ]
    }
   ],
   "execution_count": 40
  },
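  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "Why dividing by $\\sqrt{H}$ works: assuming the entries of $q$ and $k$ are independent with mean 0 and variance 1, each score is a sum of $H$ such products, so\n",
    "\n",
    "$$\\mathrm{Var}\\left(\\sum_{i=1}^{H} q_i k_i\\right) = \\sum_{i=1}^{H} \\mathrm{Var}(q_i)\\,\\mathrm{Var}(k_i) = H$$\n",
    "\n",
    "and the standard deviation of the raw scores is $\\sqrt{H}$. Dividing by $\\sqrt{H}$ brings it back to about 1, which matches the empirical `scores.std()` above."
   ],
   "id": "b2c4e6a8d0f13579"
  },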
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "![](./images/注意力机制Transformer.png)\n",
    "![](./images/注意力机制代码.png)"
   ],
   "id": "fd92b07e7bb10e02"
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-13T09:15:22.512387Z",
     "start_time": "2025-09-13T09:15:22.507805Z"
    }
   },
   "cell_type": "code",
   "source": [
    "def attention(query, key, value, dropout, mask=None):  # mask=None -> bidirectional attention; a lower-triangular mask -> causal (unidirectional) self-attention\n",
    "    # query, key, value: (B, T, H)\n",
    "    # mask: (T, T)\n",
    "    # output: (B, T, H)\n",
    "    B, T, H = query.shape\n",
    "    scores = query @ key.transpose(-2, -1) / H ** 0.5\n",
    "    if mask is not None:\n",
    "        scores = scores.masked_fill(mask == 0, float('-inf'))\n",
    "    w_att = F.softmax(scores, dim=-1)  # (B, T, T)\n",
    "    if dropout is not None:\n",
    "        w_att = dropout(w_att)\n",
    "\n",
    "    # compute the context vectors\n",
    "    out = w_att @ value  # (B, T, H)\n",
    "    return out"
   ],
   "id": "78b2d976f3bd4071",
   "outputs": [],
   "execution_count": 4
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-13T09:20:38.263048Z",
     "start_time": "2025-09-13T09:20:38.253370Z"
    }
   },
   "cell_type": "code",
   "source": [
    "x = torch.randn(1, 4, 3)\n",
    "q = nn.Linear(3, 4)(x)\n",
    "k = nn.Linear(3, 4)(x)\n",
    "v = nn.Linear(3, 4)(x)\n",
    "\n",
    "tril = torch.tril(torch.ones(4, 4))\n",
    "\n",
    "attention(q, k, v, None, tril)"
   ],
   "id": "69401de7fde265d3",
   "outputs": [],
   "execution_count": 12
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "This line is a common **PyTorch** idiom, used in particular when implementing a Transformer's **causal self-attention**. Let's break it down piece by piece:\n",
    "\n",
    "```python\n",
    "self.register_buffer('tril', torch.tril(torch.ones(sequence_len, sequence_len)))\n",
    "```\n",
    "\n",
    "---\n",
    "\n",
    "### 🔍 Step by step\n",
    "\n",
    "#### 1. `torch.ones(sequence_len, sequence_len)`\n",
    "- Creates an all-ones matrix of shape `(sequence_len, sequence_len)`.\n",
    "- For example, with `sequence_len = 4` the result is:\n",
    "  $$\n",
    "  \\begin{bmatrix}\n",
    "  1 & 1 & 1 & 1 \\\\\n",
    "  1 & 1 & 1 & 1 \\\\\n",
    "  1 & 1 & 1 & 1 \\\\\n",
    "  1 & 1 & 1 & 1 \\\\\n",
    "  \\end{bmatrix}\n",
    "  $$\n",
    "\n",
    "#### 2. `torch.tril(...)`\n",
    "- `tril` is short for **\"lower triangular\"**.\n",
    "- It keeps the **main diagonal and everything below it**, and zeroes out everything **above** the main diagonal.\n",
    "- Applying `torch.tril` to the matrix above gives:\n",
    "  $$\n",
    "  \\begin{bmatrix}\n",
    "  1 & 0 & 0 & 0 \\\\\n",
    "  1 & 1 & 0 & 0 \\\\\n",
    "  1 & 1 & 1 & 0 \\\\\n",
    "  1 & 1 & 1 & 1 \\\\\n",
    "  \\end{bmatrix}\n",
    "  $$\n",
    "  > ✅ Note: the default is `diagonal=0`, i.e. only the main diagonal and the entries below it are kept.\n",
    "\n",
    "  (Optional parameter: `torch.tril(matrix, diagonal=k)`; `k=0` is the main diagonal, `k=1` also keeps the first diagonal above it, and so on.)\n",
    "\n",
    "#### 3. `self.register_buffer(...)`\n",
    "This PyTorch method:\n",
    "- registers a tensor as a **\"buffer\"** of the module\n",
    "- the tensor then:\n",
    "  - is saved in the model's state dictionary (`state_dict`)\n",
    "  - moves together with the model on `to(device)` (e.g. from CPU to GPU)\n",
    "  - **is not treated as a trainable parameter (it receives no gradient updates)**\n",
    "\n",
    "> ✅ Use case: fixed or predefined auxiliary tensors, such as positional encodings or masks.\n",
    "\n",
    "---\n",
    "\n",
    "### 🎯 Overall purpose: build a **causal attention mask**\n",
    "\n",
    "This `tril` buffer is used to **prevent the model from seeing future tokens when making a prediction**.\n",
    "\n",
    "#### ✅ Scenario: autoregressive language generation\n",
    "\n",
    "Say the model is generating the sentence “我 爱 机 器 学 习” (“I love machine learning”). It should:\n",
    "\n",
    "- only see “我” when predicting “爱”\n",
    "- only see “我 爱” when predicting “机”\n",
    "- ...\n",
    "- never peek at future tokens!\n",
    "\n",
    "So when computing attention scores, the `tril` mask is used to block out the future positions.\n",
    "\n",
    "#### ✅ Code example (usage inside attention):\n",
    "\n",
    "```python\n",
    "# suppose attention_scores has shape (batch, heads, seq_len, seq_len)\n",
    "attention_scores = ...  # Q @ K.T\n",
    "\n",
    "# apply the mask: set the scores of future positions to -inf\n",
    "mask = self.tril[:seq_len, :seq_len]  # slice to the size actually needed\n",
    "attention_scores = attention_scores.masked_fill(mask == 0, float('-inf'))\n",
    "\n",
    "# then softmax\n",
    "attention_weights = F.softmax(attention_scores, dim=-1)\n",
    "```\n",
    "\n",
    "After the softmax, the masked (future) positions get a weight of exactly 0.\n",
    "\n",
    "---\n",
    "\n",
    "### 📌 Summary: what this line means\n",
    "\n",
    "> Register a `sequence_len × sequence_len` **lower-triangular matrix** (1 on and below the main diagonal, 0 elsewhere) as a **non-trainable buffer** named `'tril'`, used later to implement **causal attention**, preventing the model from attending to future time steps.\n",
    "\n",
    "---\n",
    "\n",
    "### 💡 Additional tips\n",
    "\n",
    "- In practice `sequence_len` need not be fixed; you can make the buffer generously large (e.g. 1024) and slice it dynamically at runtime.\n",
    "- A more flexible approach is to build the mask lazily in the forward pass, from the actual sequence length:\n",
    "\n",
    "```python\n",
    "# more dynamic approach (recommended)\n",
    "def forward(self, x):\n",
    "    T = x.size(1)\n",
    "    mask = torch.tril(torch.ones(T, T, device=x.device))\n",
    "    ...\n",
    "```\n",
    "\n",
    "The `register_buffer` approach works well when the length is fixed, or does not change during training.\n",
    "\n"
   ],
   "id": "f28efc3c4448b6af"
  },
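  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "A quick runnable check of the buffer semantics described above (`Causal` is a throwaway module defined only for this demo):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "class Causal(nn.Module):\n",
    "    def __init__(self, sequence_len=4):\n",
    "        super().__init__()\n",
    "        self.register_buffer('tril', torch.tril(torch.ones(sequence_len, sequence_len)))\n",
    "\n",
    "m = Causal()\n",
    "print('tril' in m.state_dict())  # True -- the buffer is saved with the model\n",
    "print(list(m.parameters()))      # [] -- but it is not a trainable parameter\n",
    "```"
   ],
   "id": "c7d9e1f3a5b70246"
  },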
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-03T08:07:10.316681Z",
     "start_time": "2025-09-03T08:07:10.311286Z"
    }
   },
   "cell_type": "code",
   "source": [
    "class MaskedAttention(nn.Module):\n",
    "    # causal (unidirectional) self-attention\n",
    "\n",
    "    def __init__(self, emb_size, head_size):\n",
    "        \"\"\"\n",
    "        :param emb_size: input embedding size -- C\n",
    "        :param head_size: context (head) vector size -- H\n",
    "        \"\"\"\n",
    "        super().__init__()\n",
    "        # one reason no bias terms are needed: LLM training always includes residual connections, which speed up training\n",
    "        self.key = nn.Linear(emb_size, head_size, bias=False)\n",
    "        self.query = nn.Linear(emb_size, head_size, bias=False)\n",
    "        self.value = nn.Linear(emb_size, head_size, bias=False)\n",
    "        self.register_buffer('tril', torch.tril(torch.ones(sequence_len, sequence_len)))\n",
    "        self.dp = nn.Dropout(0.4)\n",
    "\n",
    "    def forward(self, x):\n",
    "        # x: (B, T, C)\n",
    "        # out: (B, T, H)\n",
    "        B, T, C = x.shape\n",
    "        k = self.key(x)  # (B, T, H)\n",
    "        q = self.query(x)   # (B, T, H)\n",
    "        v = self.value(x)  # (B, T, H)\n",
    "\n",
    "        mask = self.tril[:T, :T]  # slice a mask matching the sequence length; if T exceeds sequence_len this raises an error -- one limitation of this attention setup\n",
    "        out = attention(q, k, v, self.dp, mask)\n",
    "        return out"
   ],
   "id": "19e21c1ebeb67d56",
   "outputs": [],
   "execution_count": 42
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-03T08:07:10.357384Z",
     "start_time": "2025-09-03T08:07:10.329539Z"
    }
   },
   "cell_type": "code",
   "source": [
    "m = MaskedAttention(3, 4)\n",
    "x = torch.randn(5, 10, 3)\n",
    "m(x).shape"
   ],
   "id": "6d5eeaa384b47584",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "torch.Size([5, 10, 4])"
      ]
     },
     "execution_count": 43,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 43
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-09-03T08:13:12.816715Z",
     "start_time": "2025-09-03T08:13:12.811035Z"
    }
   },
   "cell_type": "code",
   "source": "print(nn.Dropout(0.4)(F.softmax(torch.ones(5, 5), dim=-1)))",
   "id": "881c61c997fa384e",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[0.0000, 0.3333, 0.3333, 0.0000, 0.3333],\n",
      "        [0.3333, 0.0000, 0.0000, 0.3333, 0.3333],\n",
      "        [0.3333, 0.3333, 0.3333, 0.0000, 0.0000],\n",
      "        [0.3333, 0.3333, 0.3333, 0.3333, 0.0000],\n",
      "        [0.3333, 0.0000, 0.3333, 0.3333, 0.3333]])\n"
     ]
    }
   ],
   "execution_count": 66
  }
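  ,
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "Note that the surviving entries above are 0.3333 rather than 0.2: in training mode, `nn.Dropout(p)` rescales the kept values by $1/(1-p)$ so that the expected sum of the weights is preserved. A minimal sketch with the same `p = 0.4`:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "p = 0.4\n",
    "dp = nn.Dropout(p)\n",
    "out = dp(torch.ones(5, 5) / 5)  # uniform attention weights of 0.2\n",
    "kept = out[out > 0]             # dropped entries are 0; kept ones are 0.2 / (1 - p)\n",
    "print(kept.unique())\n",
    "```\n",
    "\n",
    "In `eval()` mode dropout is a no-op, so the weights would stay at 0.2."
   ],
   "id": "d4f6a8b0c2e13468"
  }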
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
