{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "3b065f0e-3302-4a07-9c76-080d8310df29",
   "metadata": {},
   "source": [
    "# 自注意力机制"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "44726596-cc9d-4ecc-b37e-9b0e06690c71",
   "metadata": {},
   "source": [
    "## 自注意力计算过程"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c395f977-7698-44f5-8ed0-85d492e0f2ea",
   "metadata": {},
   "source": [
    "在深入探讨自注意力机制之前，我们先通过一个示例句子\"The sun rises in the east\"来演示操作过程。与其他文本处理模型（如递归或卷积神经网络）类似，第一步是创建句子嵌入。\r\n",
    "为简化说明，我们的字典dc仅包含输入句子中的单词。在实际应用中，字典通常从更大的词汇表构建，一般包含30,000到50,000个单词。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "b07e8ac175b9eb2c",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-11-13T06:14:21.959008Z",
     "start_time": "2024-11-13T06:14:21.948136Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'The': 0, 'east': 1, 'in': 2, 'rises': 3, 'sun': 4, 'the': 5}\n"
     ]
    }
   ],
   "source": [
    " sentence = 'The sun rises in the east'  \n",
    " dc = {s:i for i,s in enumerate(sorted(sentence.split()))}  \n",
    " print(dc)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e77536a1-30e3-4427-ac46-d121cc16f65d",
   "metadata": {},
   "source": [
    "接下来，我们使用这个字典将句子中的每个单词转换为其对应的整数索引。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "162ae9b088c7110a",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-11-13T06:15:05.945422Z",
     "start_time": "2024-11-13T06:15:05.932719Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([0, 4, 3, 2, 5, 1])\n"
     ]
    }
   ],
   "source": [
    "import torch  \n",
    "sentence_int = torch.tensor([dc[s] for s in sentence.split()])\n",
    "print(sentence_int)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ee8e02b5-d815-4845-9f16-4e24ec0424f5",
   "metadata": {},
   "source": [
    "有了这个输入句子的整数表示，可以使用嵌入层将每个单词转换为向量。为简化演示，我们这里使用3维嵌入，但在实际应用中，嵌入维度通常要大得多（例如，Llama 2模型中使用4,096维）。较小的维度有助于直观理解向量而不会使页面充满数字。\n",
    "由于句子包含6个单词，嵌入将生成一个6×3维矩阵。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "e65a1712-5277-41f7-b61a-ebf437f8a4e6",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[ 0.3374, -0.1778, -0.3035],\n",
      "        [ 0.1794,  1.8951,  0.4954],\n",
      "        [-1.1925,  0.6984, -1.4097],\n",
      "        [-0.2196, -0.3792,  0.7671],\n",
      "        [ 0.2692, -0.0770, -1.0205],\n",
      "        [-0.5880,  0.3486,  0.6603]])\n",
      "torch.Size([6, 3])\n"
     ]
    }
   ],
   "source": [
    "vocab_size = 50_000  \n",
    "torch.manual_seed(123)   \n",
    "embed = torch.nn.Embedding(vocab_size, 3)   \n",
    "embedded_sentence = embed(sentence_int).detach()  \n",
    "print(embedded_sentence)\n",
    "print(embedded_sentence.shape) "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dc3c315f-0832-45e4-9194-8952997cf3f8",
   "metadata": {},
   "source": [
    "这个6×3矩阵表示输入句子的嵌入版本，每个单词被编码为一个3维向量。虽然实际模型中的嵌入维度通常更高，但这个简化示例有助于我们理解嵌入的工作原理。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9aa84e29-6d46-4a7e-9734-54bfb3e3cb09",
   "metadata": {},
   "source": [
    "**缩放点积注意力的权重矩阵**"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "aea22f24-4724-4623-8d64-399ada5d36ec",
   "metadata": {},
   "source": [
    "完成输入嵌入后，首先探讨自注意力机制，特别是广泛使用的缩放点积注意力，这是Transformer模型的核心元素。\n",
    "\n",
    "缩放点积注意力机制使用三个权重矩阵：Wq、Wk和Wv。这些矩阵在模型训练过程中优化，用于转换输入数据。\n",
    "\n",
    "**查询、键和值的转换**\n",
    "\n",
    "权重矩阵将输入数据投影到三个组成部分：\n",
    "\n",
    "查询 (q)\n",
    "\n",
    "键 (k)\n",
    "\n",
    "值 (v)\n",
    "\r\n",
    "这些组成部分通过矩阵乘法计算得\n",
    "\n",
    "：\r\n",
    "查询：q(i) = x(i)\n",
    "\n",
    "Wq键：k(i) = x(i)\n",
    "\n",
    "Wk值：v(i) = x(i)\n",
    "Wv\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "46fdd061-88b7-4fb5-bade-1848f9bca3b6",
   "metadata": {},
   "source": [
    "这个操作实际上是将每个输入token x(i)投影到这三个不同的空间中。\n",
    "\r\n",
    "关于维度，q(i)和k(i)都是具有dk个元素的向量。投影矩阵Wq和Wk的形状为d × dk，而Wv为d × dv。这里，d是每个词向量x的大小\n",
    "。\r\n",
    "需要注意的是q(i)和k(i)必须具有相同数量的元素（dq = dk），因为后续会计算它们的点积。许多大型语言模型为简化设置dq = dk = dv，但v(i)的大小可以根据需要不\n",
    "同。\r\n",
    "以下是一个代码示例："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "0e551f5b-7ce9-437f-a1b7-06c11c35b075",
   "metadata": {},
   "outputs": [],
   "source": [
    "torch.manual_seed(123)\n",
    "d = embedded_sentence.shape[1]\n",
    "d_q, d_k, d_v = 2, 2, 4 \n",
    "W_query = torch.nn.Parameter(torch.rand(d, d_q))  \n",
    "W_key = torch.nn.Parameter(torch.rand(d, d_k)) \n",
    "W_value = torch.nn.Parameter(torch.rand(d, d_v))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "354c5a23-4826-4491-b7b5-ba0722713983",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Query shape: torch.Size([2])\n",
      "Key shape: torch.Size([2])\n",
      "Value shape: torch.Size([4])\n"
     ]
    }
   ],
   "source": [
    "x_3 = embedded_sentence[2]  # 第三个元素（索引2）\n",
    "query_3 = x_3 @ W_query\n",
    "key_3 = x_3 @ W_key\n",
    "value_3 = x_3 @ W_value  \n",
    "print(\"Query shape:\", query_3.shape)\n",
    "print(\"Key shape:\", key_3.shape)\n",
    "print(\"Value shape:\", value_3.shape)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "b8b6622e-3f69-483e-973b-60a369971590",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "All keys shape: torch.Size([6, 2])\n",
      "All values shape: torch.Size([6, 4])\n"
     ]
    }
   ],
   "source": [
    "keys = embedded_sentence @ W_key  \n",
    "values = embedded_sentence @ W_value  \n",
    "print(\"All keys shape:\", keys.shape)  \n",
    "print(\"All values shape:\", values.shape)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "4253e8d9-992f-4395-8ab6-4682dccfa7a9",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Unnormalized attention weights for query 3:\n",
      "tensor([ 0.4344, -2.5037,  0.9265, -0.3509,  1.0740, -0.9315],\n",
      "       grad_fn=<SqueezeBackward4>)\n"
     ]
    }
   ],
   "source": [
    "omega_3 = query_3 @ keys.T  \n",
    "print(\"Unnormalized attention weights for query 3:\")  \n",
    "print(omega_3)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "86ac232c-c8af-4b0c-9e60-0422171325fc",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Highest compatibility: 1.0740 with input 5\n",
      "Lowest compatibility: -2.5037 with input 2\n"
     ]
    }
   ],
   "source": [
    "max_score = omega_3.max()  \n",
    "min_score = omega_3.min()  \n",
    "max_index = omega_3.argmax()  \n",
    "min_index = omega_3.argmin()  \n",
    "print(f\"Highest compatibility: {max_score:.4f} with input {max_index+1}\")  \n",
    "print(f\"Lowest compatibility: {min_score:.4f} with input {min_index+1}\")  "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "57ab3374-4594-4880-bfa5-6bdf997b7600",
   "metadata": {},
   "source": [
    "**注意力权重归一化与上下文向量计算**"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "06a577da-f748-4478-a6cc-d901eb1cd426",
   "metadata": {},
   "source": [
    "计算非归一化注意力权重（ω）后，自注意力机制的下一个关键步骤是对这些权重进行归一化，并利用它们计算上下文向量。\n",
    "\n",
    "这个过程使模型能够聚焦于输入序列中最相关的部分。\n",
    "\r\n",
    "我们首先对非归一化注意力权重进行归一化。使用softmax函数并按1/√(dk)进行缩放，其中dk是键向量的维度："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "1e469a00-c6b6-4763-ad7f-c0a1f7c399d6",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Normalized attention weights for input 3:\n",
      "tensor([0.1973, 0.0247, 0.2794, 0.1132, 0.3102, 0.0751],\n",
      "       grad_fn=<SoftmaxBackward0>)\n"
     ]
    }
   ],
   "source": [
    "import torch.nn.functional as F  \n",
    "d_k = 2  # 键向量的维度  omega_3 = query_3 @ keys.T  # 使用前面的例子  \n",
    "attention_weights_3 = F.softmax(omega_3 / d_k**0.5, dim=0)  \n",
    "print(\"Normalized attention weights for input 3:\")  \n",
    "print(attention_weights_3)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0e95fcee-9f5c-4b43-a19e-0666bb77f26f",
   "metadata": {},
   "source": [
    "缩放（1/√dk）至关有助于在模型深度增加时维持梯度的合适大小，促进稳定训练。如果没有这种缩放点积可能会变得过大，将softmax函数推入梯度极小的区域。"
   ]
  },
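  {
   "cell_type": "markdown",
   "id": "7c4a2d91-5e8f-4b3a-a6d0-2f1e9c8b7a65",
   "metadata": {},
   "source": [
    "To see the effect of the 1/√dk scaling concretely, the next cell (an added illustration, not part of the original walkthrough) applies softmax to the same set of simulated scores with and without the scaling factor. Multiplying scores by a constant greater than 1 sharpens the softmax, so the unscaled version pushes almost all of the weight onto a single position."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9b2e5f04-8a1c-4d76-b3e9-6c0d4a2f8e13",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn.functional as F\n",
    "\n",
    "torch.manual_seed(0)\n",
    "d_k_demo = 64  # a more realistic key dimension than our toy d_k = 2\n",
    "# simulated raw scores whose magnitude grows with the key dimension\n",
    "scores = torch.randn(6) * d_k_demo**0.5\n",
    "unscaled = F.softmax(scores, dim=0)\n",
    "scaled = F.softmax(scores / d_k_demo**0.5, dim=0)\n",
    "print(\"max weight without scaling:\", unscaled.max().item())\n",
    "print(\"max weight with scaling:   \", scaled.max().item())"
   ]
  },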
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "28e3f085-a36d-4806-bf31-836635216274",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Input 5 has the highest attention weight: 0.3102\n"
     ]
    }
   ],
   "source": [
    "max_weight = attention_weights_3.max()  \n",
    "max_weight_index = attention_weights_3.argmax()  \n",
    "print(f\"Input {max_weight_index+1} has the highest attention weight: {max_weight:.4f}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "a9295651-03b5-4cff-95e4-d799fefd8495",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Context vector shape: torch.Size([4])\n",
      "Context vector:\n",
      "tensor([-0.5296, -0.2799, -0.4107, -0.6006], grad_fn=<SqueezeBackward4>)\n"
     ]
    }
   ],
   "source": [
    "context_vector_3 = attention_weights_3 @ values  \n",
    "print(\"Context vector shape:\", context_vector_3.shape)\n",
    "print(\"Context vector:\")\n",
    "print(context_vector_3)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "38f54d36-7a7a-4fb4-8868-2d73bb6a5853",
   "metadata": {},
   "source": [
    "## 自注意力的Pytorch实现"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "af1b804f-0799-4f38-875f-d6a9f89819f8",
   "metadata": {},
   "source": [
    "为了便于集成到更大的神经网络架构中，可以将自注意力机制封装为一个PyTorch模块。\n",
    "\n",
    "以下是SelfAttention类的实现，它包含了我们之前讨论的整个自注意力过程："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "e10af9c4-38f5-4b0a-9033-a667fc066d18",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "class SelfAttention(nn.Module):\n",
    "    def __init__(self, d_in, d_out_kq, d_out_v):\n",
    "        super().__init__()\n",
    "        self.d_out_kq = d_out_kq\n",
    "        self.W_query = nn.Parameter(torch.rand(d_in, d_out_kq))\n",
    "        self.W_key = nn.Parameter(torch.rand(d_in, d_out_kq))\n",
    "        self.W_value = nn.Parameter(torch.rand(d_in, d_out_v))\n",
    "    def forward(self, x):\n",
    "        keys = x @ self.W_key          \n",
    "        queries = x @ self.W_query          \n",
    "        values = x @ self.W_value  \n",
    "        attn_scores = queries @ keys.T          \n",
    "        attn_weights = torch.softmax(attn_scores / self.d_out_kq**0.5, dim=-1)  \n",
    "        context_vec = attn_weights @ values\n",
    "        return context_vec        "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "c08ad2d7-bfe4-48b6-b851-043c78f03797",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[-0.1564,  0.1028, -0.0763, -0.0764],\n",
      "        [ 0.5313,  1.3607,  0.7891,  1.3110],\n",
      "        [-0.5296, -0.2799, -0.4107, -0.6006],\n",
      "        [ 0.0071,  0.3345,  0.0969,  0.1998],\n",
      "        [-0.3542, -0.1234, -0.2626, -0.3706],\n",
      "        [ 0.1008,  0.4780,  0.2021,  0.3674]], grad_fn=<MmBackward0>)\n"
     ]
    }
   ],
   "source": [
    "torch.manual_seed(123)  \n",
    "d_in, d_out_kq, d_out_v = 3, 2, 4  \n",
    "sa = SelfAttention(d_in, d_out_kq, d_out_v)  \n",
    "# 假设embedded_sentence是我们的输入张量  \n",
    "output = sa(embedded_sentence)  \n",
    "print(output)"
   ]
  }
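  ,
  {
   "cell_type": "markdown",
   "id": "3f7d1a28-6b4e-4c90-8d25-e19a0b7c3f54",
   "metadata": {},
   "source": [
    "As a final sanity check (an added sketch, not in the original walkthrough): because this run and the earlier step-by-step computation both called torch.manual_seed(123) and created weight matrices of the same shapes in the same order, row 3 of the batched output should match the context vector we computed for x_3 by hand."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8e5c2b91-4d07-4f6a-9c38-b21f7e6a0d49",
   "metadata": {},
   "outputs": [],
   "source": [
    "# same seed and weight-creation order as the manual walkthrough,\n",
    "# so the batched result should agree with context_vector_3\n",
    "print(torch.allclose(output[2], context_vector_3, atol=1e-5))"
   ]
  }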
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "corrformer",
   "language": "python",
   "name": "corrformer"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.14"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
