{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In your question you asked: \"why can a word index be 7, shouldn't the maximum be 5?\" This mixes up two different concepts: **vocabulary size** and **sequence length**. Let's explain both in detail and clear up the confusion.\n",
    "\n",
    "### 1. Vocabulary size\n",
    "\n",
    "- **Vocabulary size** is the number of distinct words (tokens) your model can recognise.\n",
    "- In your code, `max_num_src_words = 8` sets the source vocabulary size to 8. This means the vocabulary contains words with indices 1 through 7 (index 0 is conventionally reserved for padding, i.e. the PAD token).\n",
    "\n",
    "So when you generate random integers with `torch.randint(1, max_num_src_words, (L,))`, the resulting word indices range from 1 to 7 inclusive, because `torch.randint(low, high, size)` samples from the half-open interval `[low, high)`: `low` is included but `high` is excluded.\n",
    "\n",
    "### 2. Sequence length\n",
    "\n",
    "- **Sequence length** is the number of words in a sentence (sequence). In your example, `src_len` stores the actual length of each sentence:\n",
    "  ```python\n",
    "  src_len = torch.Tensor([2, 4]).to(torch.int32)\n",
    "  ```\n",
    "  This means the first sentence has 2 words and the second sentence has 4 words.\n",
    "\n",
    "### Telling the two apart\n",
    "\n",
    "- **Vocabulary size** determines the largest word index you may use. Here, since `max_num_src_words = 8`, word indices can range from 1 to 7.\n",
    "- **Sequence length** determines how many words each sentence contains. It is unrelated to the range of word indices; it only reflects how long the sentence actually is.\n",
    "\n",
    "### Worked example\n",
    "\n",
    "Suppose the source vocabulary has size 8:\n",
    "```\n",
    "vocab = {\"<PAD>\": 0, \"I\": 1, \"like\": 2, \"cats\": 3, \"the\": 4, \"weather\": 5, \"is\": 6, \"nice\": 7}\n",
    "```\n",
    "\n",
    "Now take two sentences:\n",
    "1. Sentence A: \"I like cats\" (index sequence: [1, 2, 3])\n",
    "2. Sentence B: \"the weather is nice\" (index sequence: [4, 5, 6, 7])\n",
    "\n",
    "In this example:\n",
    "- Sentence A has length 3 (it contains 3 words).\n",
    "- Sentence B has length 4 (it contains 4 words).\n",
    "\n",
    "Turning these sentences into index sequences gives:\n",
    "```python\n",
    "# index sequence of sentence A\n",
    "sentence_A_indices = torch.tensor([1, 2, 3])\n",
    "\n",
    "# index sequence of sentence B\n",
    "sentence_B_indices = torch.tensor([4, 5, 6, 7])\n",
    "```\n",
    "\n",
    "We then pad both sequences to the same maximum length (say 5):\n",
    "```python\n",
    "# sentence A after padding\n",
    "padded_sentence_A = F.pad(sentence_A_indices, (0, 2), \"constant\", 0)  # result: [1, 2, 3, 0, 0]\n",
    "\n",
    "# sentence B after padding\n",
    "padded_sentence_B = F.pad(sentence_B_indices, (0, 1), \"constant\", 0)  # result: [4, 5, 6, 7, 0]\n",
    "```\n",
    "\n",
    "### Summary\n",
    "\n",
    "- **Vocabulary size** bounds the word indices. Here the vocabulary size is 8, so word indices range from 1 to 7.\n",
    "- **Sequence length** is the number of words in each sentence, unrelated to the range of word indices.\n",
    "\n",
    "A word index can therefore be 7 because the vocabulary size allows indices in that range, while the sequence length only says how many words a sentence actually contains; the two are not directly related."
   ]
  },
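  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick check of the half-open interval behaviour described above, here is a minimal self-contained sketch (assuming only `torch` is installed):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "# torch.randint(low, high, size) samples from [low, high):\n",
    "# with low=1 and high=8 every generated index is between 1 and 7.\n",
    "idx = torch.randint(1, 8, (10000,))\n",
    "assert idx.min().item() >= 1\n",
    "assert idx.max().item() <= 7\n",
    "```"
   ]
  },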
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`torch.nn.Embedding(num_embeddings, embedding_dim, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False, _weight=None, _freeze=False, device=None, dtype=None)`\n",
    "\n",
    "- `num_embeddings`: size of the vocabulary, i.e. the largest word index you want to embed + 1.\n",
    "- `embedding_dim`: the dimension of each embedding vector.\n",
    "- `padding_idx`: if set, the input at this index is mapped to the zero vector; typically used for sequence padding.\n",
    "- `max_norm`: if given, each embedding vector is renormalised to have at most this norm (measured with `norm_type`).\n",
    "- `scale_grad_by_freq`: if True, gradients are scaled by the inverse frequency of the words in the mini-batch, which helps with rare words."
   ]
  },
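  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A small sketch of the `padding_idx` behaviour described above (the vocabulary size and embedding dimension here are made up for illustration):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "# With padding_idx=0, row 0 of the weight table is initialised to zeros\n",
    "# and receives no gradient, so PAD tokens embed to the zero vector.\n",
    "emb = nn.Embedding(num_embeddings=9, embedding_dim=4, padding_idx=0)\n",
    "out = emb(torch.tensor([0, 3]))\n",
    "assert out.shape == (2, 4)\n",
    "assert out[0].abs().sum().item() == 0.0  # the PAD row is all zeros\n",
    "```"
   ]
  },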
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([2, 4], dtype=torch.int32)\n",
      "tensor([[4, 7, 0, 0],\n",
      "        [2, 6, 7, 3]])\n",
      "Parameter containing:\n",
      "tensor([[ 0.0027, -0.9173, -0.4126,  0.1158, -1.0723,  1.9146,  1.0969, -0.0413],\n",
      "        [ 1.2695, -1.6966, -0.5900,  1.0706, -0.7344, -0.7224, -0.5242, -0.5882],\n",
      "        [ 0.2120, -2.4835,  0.3720,  0.5789,  0.3024,  0.5754,  1.1135,  0.2663],\n",
      "        [ 0.0136, -0.8387,  0.2929,  0.3138, -0.4878,  0.5982,  0.6665,  1.0627],\n",
      "        [ 0.2653, -1.0858, -0.4068,  0.2723, -0.0267, -0.1753, -1.6374, -1.8708],\n",
      "        [ 0.4199, -1.1263, -0.4636, -0.4845,  0.1169,  1.0382, -0.3410,  0.5866],\n",
      "        [-1.0481, -1.3050,  1.8562,  0.0512,  0.3327, -0.8710, -0.9517, -0.4696],\n",
      "        [ 0.3821,  0.6837, -0.7025, -1.3921, -1.0907,  0.1540,  0.3708, -0.1863],\n",
      "        [ 1.0948,  1.9590,  0.0356, -0.1560, -0.9694, -0.8922,  0.4475, -0.0445]],\n",
      "       requires_grad=True)\n",
      "tensor([[[ 0.2653, -1.0858, -0.4068,  0.2723, -0.0267, -0.1753, -1.6374,\n",
      "          -1.8708],\n",
      "         [ 0.3821,  0.6837, -0.7025, -1.3921, -1.0907,  0.1540,  0.3708,\n",
      "          -0.1863],\n",
      "         [ 0.0027, -0.9173, -0.4126,  0.1158, -1.0723,  1.9146,  1.0969,\n",
      "          -0.0413],\n",
      "         [ 0.0027, -0.9173, -0.4126,  0.1158, -1.0723,  1.9146,  1.0969,\n",
      "          -0.0413]],\n",
      "\n",
      "        [[ 0.2120, -2.4835,  0.3720,  0.5789,  0.3024,  0.5754,  1.1135,\n",
      "           0.2663],\n",
      "         [-1.0481, -1.3050,  1.8562,  0.0512,  0.3327, -0.8710, -0.9517,\n",
      "          -0.4696],\n",
      "         [ 0.3821,  0.6837, -0.7025, -1.3921, -1.0907,  0.1540,  0.3708,\n",
      "          -0.1863],\n",
      "         [ 0.0136, -0.8387,  0.2929,  0.3138, -0.4878,  0.5982,  0.6665,\n",
      "           1.0627]]], grad_fn=<EmbeddingBackward0>)\n"
     ]
    }
   ],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "# Word embedding, illustrated with sequence modelling:\n",
    "# consider a source sentence and a target sentence.\n",
    "# Build the sequences; each token is represented by its index in the vocabulary.\n",
    "# src_seq is the source sentence; tgt_seq is the target sentence.\n",
    "batch_size = 2  # number of sentences (samples) per batch\n",
    "\n",
    "# Vocabulary sizes of the source and target sequences.\n",
    "max_num_src_words = 8  # 8 words in total\n",
    "max_num_tgt_words = 8\n",
    "model_dim = 8  # 512 in the original paper\n",
    "\n",
    "# Maximum sequence length: the length of a source sequence is the\n",
    "# number of words the source sentence actually contains.\n",
    "max_src_seq_len = 5\n",
    "max_tgt_seq_len = 5\n",
    "\n",
    "# src_len = torch.randint(2, 5, (batch_size,))  # random sentence lengths (word counts)\n",
    "# Fixed lengths are used instead so the example is reproducible:\n",
    "src_len = torch.Tensor([2, 4]).to(torch.int32)\n",
    "tgt_len = torch.Tensor([4, 3]).to(torch.int32)\n",
    "\n",
    "# Build source/target sentences from word indices, padded with 0 on the right.\n",
    "src_seq = torch.cat([torch.unsqueeze(F.pad(torch.randint(1, max_num_src_words, (L,)),\n",
    "                                           (0, max(src_len) - L)), 0) for L in src_len])\n",
    "tgt_seq = torch.cat([torch.unsqueeze(F.pad(torch.randint(1, max_num_tgt_words, (L,)),\n",
    "                                           (0, max(tgt_len) - L)), 0) for L in tgt_len])\n",
    "\n",
    "# Build the embedding tables (+1 because index 0 is reserved for padding).\n",
    "src_embedding_table = nn.Embedding(max_num_src_words + 1, model_dim)\n",
    "tgt_embedding_table = nn.Embedding(max_num_tgt_words + 1, model_dim)\n",
    "src_embedding = src_embedding_table(src_seq)  # look up each word index in the table\n",
    "tgt_embedding = tgt_embedding_table(tgt_seq)\n",
    "\n",
    "print(src_len)  # first sentence has length 2, second has length 4\n",
    "print(src_seq)  # ids of the words of each sentence in the vocabulary\n",
    "print(src_embedding_table.weight)\n",
    "print(src_embedding)  # e.g. id 4 selects row 4 of table.weight (counting from 0)\n",
    "# In the printed src_seq, the first sentence is words 4 and 7 followed by two pads,\n",
    "# and the second sentence is words 2, 6, 7, 3 with no padding (it already has max length).\n"
   ]
  },
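  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The lookup above can be summarised shape-wise; here is a minimal self-contained sketch (vocabulary size 8 plus one PAD row, as in the cell above, with a hard-coded `seq` instead of random ids):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "table = nn.Embedding(8 + 1, 8)            # 9 rows: indices 0..8, row 0 for PAD\n",
    "seq = torch.tensor([[4, 7, 0, 0],\n",
    "                    [2, 6, 7, 3]])        # (batch_size, max_len) of word ids\n",
    "emb = table(seq)                          # each id is replaced by its table row\n",
    "assert emb.shape == (2, 4, 8)             # (batch_size, max_len, model_dim)\n",
    "assert torch.equal(emb[0, 0], table.weight[4])  # id 4 -> row 4 of the table\n",
    "```"
   ]
  },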
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Source sentence**\n",
    "\n",
    "- Definition: the original sentence or text you want to translate, i.e. the input given to the translation model.\n",
    "- Example: for English-to-Chinese translation, the source sentence is the original English sentence.\n",
    "\n",
    "**Target sentence**\n",
    "\n",
    "- Definition: the sentence you hope to obtain after translation, i.e. the output generated by the translation model.\n",
    "- Example: continuing the example above, the target sentence is the translated Chinese sentence."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],\n",
      "         [0.8415, 0.8415, 0.0998, 0.0998, 0.0100, 0.0100, 0.0010, 0.0010],\n",
      "         [0.9093, 0.9093, 0.1987, 0.1987, 0.0200, 0.0200, 0.0020, 0.0020],\n",
      "         [0.1411, 0.1411, 0.2955, 0.2955, 0.0300, 0.0300, 0.0030, 0.0030]],\n",
      "\n",
      "        [[0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],\n",
      "         [0.8415, 0.8415, 0.0998, 0.0998, 0.0100, 0.0100, 0.0010, 0.0010],\n",
      "         [0.9093, 0.9093, 0.1987, 0.1987, 0.0200, 0.0200, 0.0020, 0.0020],\n",
      "         [0.1411, 0.1411, 0.2955, 0.2955, 0.0300, 0.0300, 0.0030, 0.0030]]])\n"
     ]
    }
   ],
   "source": [
    "# Build the positional embedding\n",
    "max_position_len = 5  # maximum position is 5\n",
    "pos_mat = torch.arange(max_position_len)  # currently a 1-D vector of positions\n",
    "pos_mat = pos_mat.reshape(-1, 1)  # reshape into a column matrix: this is the position matrix\n",
    "i_mat = torch.pow(10000, torch.arange(0, 8, 2).reshape(1, -1) / model_dim)  # shape (1, 4)\n",
    "pe_embedding_table = torch.zeros(max_position_len, model_dim)\n",
    "pe_embedding_table[:, 0::2] = torch.sin(pos_mat / i_mat)  # even columns use sin\n",
    "pe_embedding_table[:, 1::2] = torch.cos(pos_mat / i_mat)  # odd columns use cos\n",
    "\n",
    "pe_embedding = nn.Embedding(max_position_len, model_dim)\n",
    "pe_embedding.weight = nn.Parameter(pe_embedding_table, requires_grad=False)\n",
    "# print(pe_embedding_table)  # identical to pe_embedding.weight\n",
    "# print(pe_embedding.weight)\n",
    "src_pos = torch.cat([torch.unsqueeze(torch.arange(max(src_len)), 0) for _ in src_len]).to(torch.int32)  # position indices\n",
    "tgt_pos = torch.cat([torch.unsqueeze(torch.arange(max(tgt_len)), 0) for _ in tgt_len]).to(torch.int32)  # position indices\n",
    "\n",
    "src_pe_embedding = pe_embedding(src_pos)\n",
    "tgt_pe_embedding = pe_embedding(tgt_pos)\n",
    "\n",
    "print(src_pe_embedding)\n",
    "\n",
    "# Sequence length here is 4 and each positional vector has model_dim=8 entries;\n",
    "# the two blocks printed above correspond to the first and second sentence."
   ]
  },
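  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, a compact self-contained version of the sinusoidal table from \"Attention Is All You Need\" (sin on even columns, cos on odd columns), with the same small sizes as the cell above:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "max_len, d = 5, 8\n",
    "pos = torch.arange(max_len).reshape(-1, 1).float()      # positions as a column\n",
    "div = torch.pow(10000, torch.arange(0, d, 2).float() / d)  # per-pair frequency divisors\n",
    "pe = torch.zeros(max_len, d)\n",
    "pe[:, 0::2] = torch.sin(pos / div)  # even columns\n",
    "pe[:, 1::2] = torch.cos(pos / div)  # odd columns\n",
    "assert pe.shape == (5, 8)\n",
    "assert pe[0, 0] == 0.0 and pe[0, 1] == 1.0  # sin(0)=0, cos(0)=1\n",
    "```"
   ]
  },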
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([ 0.9561,  0.4840, -0.1792,  0.4493, -0.0805])\n",
      "tensor([0.3446, 0.2149, 0.1107, 0.2076, 0.1222])\n",
      "tensor([0.2128, 0.2030, 0.1900, 0.2023, 0.1919])\n",
      "tensor([9.8499e-01, 8.7689e-03, 1.1560e-05, 6.2025e-03, 3.1009e-05])\n",
      "tensor([0.3446, 0.2149, 0.1107, 0.2076, 0.1222])\n"
     ]
    }
   ],
   "source": [
    "# Softmax demo: why scaling matters\n",
    "alpha1 = 0.1\n",
    "alpha2 = 10\n",
    "score = torch.randn(5)\n",
    "print(score)\n",
    "prob = F.softmax(score, -1)\n",
    "prob1 = F.softmax(score * alpha1, -1)\n",
    "prob2 = F.softmax(score * alpha2, -1)  # multiplying by a large factor makes the largest probability dominate\n",
    "print(prob)\n",
    "print(prob1)\n",
    "print(prob2)\n",
    "# In the Transformer, scores are divided by sqrt(d_k) to keep their variance small,\n",
    "# so the softmax distribution is not too sharp and the Jacobian (the first derivative)\n",
    "# does not collapse to zero.\n",
    "print(prob)  # printed again for comparison"
   ]
  },
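  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The effect of dividing by sqrt(d_k) can be checked numerically; a minimal sketch (d_k = 64 and the sample count are arbitrary choices for the demo):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "torch.manual_seed(0)\n",
    "d_k = 64\n",
    "q = torch.randn(10000, d_k)\n",
    "k = torch.randn(10000, d_k)\n",
    "raw = (q * k).sum(-1)          # dot products: variance grows like d_k\n",
    "scaled = raw / d_k ** 0.5      # rescaled: variance back to about 1\n",
    "assert abs(raw.var().item() - d_k) < 10\n",
    "assert abs(scaled.var().item() - 1.0) < 0.2\n",
    "```"
   ]
  },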
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This code **builds the mask used in encoder self-attention**, which screens out invalid positions (i.e. padding). The mask guarantees that the model never attends to padded positions while processing a sequence, so padding cannot affect training or prediction.\n",
    "\n",
    "Let's walk through what the code does and why.\n",
    "\n",
    "---\n",
    "\n",
    "### **Overview**\n",
    "1. **Build the valid encoder positions**:\n",
    "   - From the actual source lengths `src_len`, generate a mask marking the valid positions.\n",
    "2. **Purpose of the mask**:\n",
    "   - In self-attention, the mask screens out padding so the model only attends to real words.\n",
    "\n",
    "---\n",
    "\n",
    "### **Line-by-line walkthrough**\n",
    "\n",
    "#### **1. Define the source sequence lengths**\n",
    "```python\n",
    "src_len = torch.Tensor([2, 4]).to(torch.int32)\n",
    "```\n",
    "- `src_len` holds the actual length of each sentence in the source batch:\n",
    "  - the first sentence has length `2`;\n",
    "  - the second sentence has length `4`.\n",
    "\n",
    "#### **2. Build the valid encoder positions**\n",
    "```python\n",
    "valid_encoder_pos = [torch.ones(L) for L in src_len]\n",
    "```\n",
    "- `valid_encoder_pos` is a list with one validity mask per sentence:\n",
    "  - `torch.ones(L)` creates a length-`L` tensor filled with `1`, marking valid positions;\n",
    "  - `for L in src_len` iterates over `src_len`, producing a mask for each sentence.\n",
    "- For example:\n",
    "  - the first sentence (length `2`) gets the mask `tensor([1., 1.])`;\n",
    "  - the second sentence (length `4`) gets the mask `tensor([1., 1., 1., 1.])`.\n",
    "\n",
    "#### **3. Print the valid encoder positions**\n",
    "```python\n",
    "print(valid_encoder_pos)\n",
    "```\n",
    "- Prints the per-sentence validity masks.\n",
    "\n",
    "---\n",
    "\n",
    "### **Example**\n",
    "\n",
    "With `src_len = torch.Tensor([2, 4]).to(torch.int32)`, the code prints:\n",
    "```\n",
    "[tensor([1., 1.]), tensor([1., 1., 1., 1.])]\n",
    "```\n",
    "\n",
    "#### **Interpretation**\n",
    "- The first sentence's mask `tensor([1., 1.])` says its first two positions are valid.\n",
    "- The second sentence's mask `tensor([1., 1., 1., 1.])` says its first four positions are valid.\n",
    "\n",
    "---\n",
    "\n",
    "### **What the mask is for**\n",
    "In self-attention the mask screens out padding so the model attends only to real words. Concretely:\n",
    "1. **Valid positions**: mask value `1`; the model may attend to them.\n",
    "2. **Invalid positions**: mask value `-inf` (or a very large negative number); they are padding and the model must not attend to them.\n",
    "\n",
    "---\n",
    "\n",
    "### **Summary**\n",
    "- The code derives a validity mask per sentence from the actual lengths in `src_len`.\n",
    "- The mask ensures self-attention attends only to real words and ignores padding.\n",
    "- The final `valid_encoder_pos` is a list containing one validity mask per sentence."
   ]
  },
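  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The construction described above fits in a few runnable lines (a sketch, assuming the same `src_len` as in the text):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "src_len = torch.tensor([2, 4], dtype=torch.int32)\n",
    "valid_encoder_pos = [torch.ones(int(L)) for L in src_len]\n",
    "assert [v.tolist() for v in valid_encoder_pos] == [[1.0, 1.0], [1.0, 1.0, 1.0, 1.0]]\n",
    "```"
   ]
  },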
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The shape of the mask matrix is determined by the **batch size** and the **maximum sequence length**. In self-attention the mask matrix usually has shape `(batch_size, max_seq_len, max_seq_len)`.\n",
    "\n",
    "Below we walk through how that shape arises and what the mask does.\n",
    "\n",
    "---\n",
    "\n",
    "### **1. Shape of the mask matrix**\n",
    "The mask matrix has shape `(batch_size, max_seq_len, max_seq_len)`, where:\n",
    "- `batch_size`: the number of sentences processed together;\n",
    "- `max_seq_len`: the maximum sequence length, i.e. the largest number of words per sentence (including padding).\n",
    "\n",
    "---\n",
    "\n",
    "### **2. Purpose of the mask matrix**\n",
    "The mask matrix **screens out invalid positions** (e.g. padding) so that self-attention attends only to real words. Concretely:\n",
    "- **Valid positions**: mask value `1`; the model may attend to them.\n",
    "- **Invalid positions**: mask value `-inf` (or a very large negative number); they are padding and the model must not attend to them.\n",
    "\n",
    "---\n",
    "\n",
    "### **3. Computing the mask matrix**\n",
    "\n",
    "#### **(1) Generate a mask from the sequence lengths**\n",
    "Suppose `src_len` holds the actual length of each sentence:\n",
    "```python\n",
    "src_len = torch.Tensor([2, 4]).to(torch.int32)  # the two sentences have lengths 2 and 4\n",
    "max_seq_len = 5  # maximum sequence length\n",
    "```\n",
    "\n",
    "We can build a mask of shape `(batch_size, max_seq_len)` marking the valid positions of each sentence:\n",
    "```python\n",
    "batch_size = len(src_len)\n",
    "mask = torch.zeros(batch_size, max_seq_len)  # initialise the mask\n",
    "\n",
    "for i, L in enumerate(src_len):\n",
    "    mask[i, :L] = 1  # set the valid positions to 1\n",
    "```\n",
    "\n",
    "The resulting mask is:\n",
    "```\n",
    "tensor([[1., 1., 0., 0., 0.],\n",
    "        [1., 1., 1., 1., 0.]])\n",
    "```\n",
    "\n",
    "#### **(2) Expand it into a self-attention mask**\n",
    "Self-attention needs a mask of shape `(batch_size, max_seq_len, max_seq_len)`, which we can get by broadcasting:\n",
    "```python\n",
    "# expand the mask\n",
    "mask = mask.unsqueeze(1)  # shape becomes (batch_size, 1, max_seq_len)\n",
    "mask = mask.expand(batch_size, max_seq_len, max_seq_len)  # shape becomes (batch_size, max_seq_len, max_seq_len)\n",
    "```\n",
    "\n",
    "The expanded mask is:\n",
    "```\n",
    "tensor([[[1., 1., 0., 0., 0.],\n",
    "         [1., 1., 0., 0., 0.],\n",
    "         [1., 1., 0., 0., 0.],\n",
    "         [1., 1., 0., 0., 0.],\n",
    "         [1., 1., 0., 0., 0.]],\n",
    "\n",
    "        [[1., 1., 1., 1., 0.],\n",
    "         [1., 1., 1., 1., 0.],\n",
    "         [1., 1., 1., 1., 0.],\n",
    "         [1., 1., 1., 1., 0.],\n",
    "         [1., 1., 1., 1., 0.]]])\n",
    "```\n",
    "\n",
    "#### **(3) Convert it for use in attention scores**\n",
    "In self-attention the mask values should be `1` (valid) or `-inf` (invalid). We can convert as follows:\n",
    "```python\n",
    "mask = mask.float().masked_fill(mask == 0, float('-inf'))  # replace 0 with -inf\n",
    "```\n",
    "\n",
    "The final mask matrix is:\n",
    "```\n",
    "tensor([[[1., 1., -inf, -inf, -inf],\n",
    "         [1., 1., -inf, -inf, -inf],\n",
    "         [1., 1., -inf, -inf, -inf],\n",
    "         [1., 1., -inf, -inf, -inf],\n",
    "         [1., 1., -inf, -inf, -inf]],\n",
    "\n",
    "        [[1., 1., 1., 1., -inf],\n",
    "         [1., 1., 1., 1., -inf],\n",
    "         [1., 1., 1., 1., -inf],\n",
    "         [1., 1., 1., 1., -inf],\n",
    "         [1., 1., 1., 1., -inf]]])\n",
    "```\n",
    "\n",
    "---\n",
    "\n",
    "### **4. Summary**\n",
    "The mask matrix has shape `(batch_size, max_seq_len, max_seq_len)` and is computed in three steps:\n",
    "1. From `src_len`, build a validity mask of shape `(batch_size, max_seq_len)`.\n",
    "2. Expand it to shape `(batch_size, max_seq_len, max_seq_len)`.\n",
    "3. Set invalid positions to `-inf` and keep valid positions at `1`.\n",
    "\n",
    "The mask matrix's job is to make self-attention attend only to real words and ignore padding."
   ]
  },
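  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "An equivalent way to build the same pairwise validity mask, sketched with `torch.arange` broadcasting instead of an explicit loop:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "src_len = torch.tensor([2, 4])\n",
    "max_len = int(src_len.max())\n",
    "valid = torch.arange(max_len).unsqueeze(0) < src_len.unsqueeze(1)  # (batch, max_len)\n",
    "attn_valid = valid.unsqueeze(2) & valid.unsqueeze(1)               # (batch, max_len, max_len)\n",
    "assert attn_valid.shape == (2, 4, 4)\n",
    "assert attn_valid[0].sum().item() == 4    # only the 2x2 top-left block is valid\n",
    "assert attn_valid[1].all()                # the length-4 sentence is fully valid\n",
    "```"
   ]
  },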
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([2, 4], dtype=torch.int32)\n",
      "tensor([[[-0.8195, -0.4718, -1.3777, -0.9108],\n",
      "         [-0.0414,  1.0849,  1.8055,  1.0161],\n",
      "         [ 1.8322,  0.2114, -1.4238, -0.5893],\n",
      "         [-1.7043, -1.0197, -0.9280,  0.4581]],\n",
      "\n",
      "        [[ 0.3570,  0.8853,  1.6351,  1.0016],\n",
      "         [ 2.3463,  0.7714, -0.3302, -0.1246],\n",
      "         [-0.1752,  1.4587,  0.5550,  0.1569],\n",
      "         [-0.1296, -0.6685, -1.7335,  0.9100]]])\n",
      "tensor([[[-0.8195, -0.4718,    -inf,    -inf],\n",
      "         [-0.0414,  1.0849,    -inf,    -inf],\n",
      "         [   -inf,    -inf,    -inf,    -inf],\n",
      "         [   -inf,    -inf,    -inf,    -inf]],\n",
      "\n",
      "        [[ 0.3570,  0.8853,  1.6351,  1.0016],\n",
      "         [ 2.3463,  0.7714, -0.3302, -0.1246],\n",
      "         [-0.1752,  1.4587,  0.5550,  0.1569],\n",
      "         [-0.1296, -0.6685, -1.7335,  0.9100]]])\n",
      "tensor([[[0.4139, 0.5861, 0.0000, 0.0000],\n",
      "         [0.2449, 0.7551, 0.0000, 0.0000],\n",
      "         [   nan,    nan,    nan,    nan],\n",
      "         [   nan,    nan,    nan,    nan]],\n",
      "\n",
      "        [[0.1221, 0.2071, 0.4383, 0.2326],\n",
      "         [0.7351, 0.1522, 0.0506, 0.0621],\n",
      "         [0.1042, 0.5341, 0.2163, 0.1453],\n",
      "         [0.2168, 0.1265, 0.0436, 0.6131]]])\n",
      "torch.Size([2, 4, 1])\n",
      "tensor([[[1., 1., 0., 0.],\n",
      "         [1., 1., 0., 0.],\n",
      "         [0., 0., 0., 0.],\n",
      "         [0., 0., 0., 0.]],\n",
      "\n",
      "        [[1., 1., 1., 1.],\n",
      "         [1., 1., 1., 1.],\n",
      "         [1., 1., 1., 1.],\n",
      "         [1., 1., 1., 1.]]])\n",
      "tensor([2, 4], dtype=torch.int32)\n",
      "tensor([[[False, False,  True,  True],\n",
      "         [False, False,  True,  True],\n",
      "         [ True,  True,  True,  True],\n",
      "         [ True,  True,  True,  True]],\n",
      "\n",
      "        [[False, False, False, False],\n",
      "         [False, False, False, False],\n",
      "         [False, False, False, False],\n",
      "         [False, False, False, False]]])\n"
     ]
    }
   ],
   "source": [
    "# Step 4: build the encoder self-attention mask\n",
    "# Q has shape (batch, t, embedding); K also has shape (batch, t, embedding)\n",
    "# mask shape: (batch_size, max_src_len, max_src_len); values are 1 or -inf\n",
    "# src_len holds the length of each sequence\n",
    "import numpy as np\n",
    "valid_encoder_pos = torch.cat(\n",
    "    [torch.unsqueeze(  # add a 0th dimension\n",
    "        F.pad(torch.ones(L), (0, max(src_len) - L)),\n",
    "        0) for L in src_len])  # valid encoder positions: the first sentence has 2 words, the second has 4\n",
    "valid_encoder_pos = torch.unsqueeze(valid_encoder_pos, 2)  # shape (2, 4, 1): batch_size 2, padded sentence length 4\n",
    "valid_encoder_pos_matrix = torch.bmm(valid_encoder_pos, valid_encoder_pos.transpose(1, 2))  # outer product gives pairwise position validity\n",
    "invalid_encoder_pos_matrix = 1 - valid_encoder_pos_matrix\n",
    "mask_encoder_self_attention = invalid_encoder_pos_matrix.to(torch.bool)\n",
    "\n",
    "# Example\n",
    "score = torch.randn(batch_size, max(src_len), max(src_len))\n",
    "masked_score = score.masked_fill(mask_encoder_self_attention, -np.inf)\n",
    "prob = F.softmax(masked_score, -1)\n",
    "print(src_len)\n",
    "print(score)\n",
    "print(masked_score)\n",
    "print(prob)\n",
    "\n",
    "print(valid_encoder_pos.shape)\n",
    "print(valid_encoder_pos_matrix)\n",
    "print(src_len)  # first sentence has 2 words; second sentence has 4 words\n",
    "print(mask_encoder_self_attention)\n"
   ]
  },
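  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `nan` rows in the output above come from softmaxing a row in which every score is `-inf`; a minimal sketch of that edge case:\n",
    "\n",
    "```python\n",
    "import math\n",
    "import torch\n",
    "import torch.nn.functional as F\n",
    "\n",
    "# A row that is entirely masked has no finite score left,\n",
    "# so softmax divides 0 by 0 and yields NaN for every entry.\n",
    "row = torch.tensor([-math.inf, -math.inf, -math.inf])\n",
    "assert torch.isnan(F.softmax(row, dim=-1)).all()\n",
    "```"
   ]
  },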
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[[False, False,  True,  True],\n",
      "         [False, False,  True,  True],\n",
      "         [False, False,  True,  True],\n",
      "         [False, False,  True,  True]],\n",
      "\n",
      "        [[False, False, False, False],\n",
      "         [False, False, False, False],\n",
      "         [False, False, False, False],\n",
      "         [ True,  True,  True,  True]]])\n",
      "tensor([[[1., 1., 0., 0.],\n",
      "         [1., 1., 0., 0.],\n",
      "         [1., 1., 0., 0.],\n",
      "         [1., 1., 0., 0.]],\n",
      "\n",
      "        [[1., 1., 1., 1.],\n",
      "         [1., 1., 1., 1.],\n",
      "         [1., 1., 1., 1.],\n",
      "         [0., 0., 0., 0.]]])\n",
      "tensor([[[1.],\n",
      "         [1.],\n",
      "         [0.],\n",
      "         [0.]],\n",
      "\n",
      "        [[1.],\n",
      "         [1.],\n",
      "         [1.],\n",
      "         [1.]]]) tensor([[[1.],\n",
      "         [1.],\n",
      "         [1.],\n",
      "         [1.]],\n",
      "\n",
      "        [[1.],\n",
      "         [1.],\n",
      "         [1.],\n",
      "         [0.]]])\n",
      "torch.Size([2, 4, 1])\n"
     ]
    }
   ],
   "source": [
    "# Step 5: mask for cross attention (encoder-decoder attention)\n",
    "# Q @ K^T shape: (batch_size, tgt_seq_len, src_seq_len)\n",
    "valid_encoder_pos = torch.cat(\n",
    "    [torch.unsqueeze(  # add a 0th dimension\n",
    "        F.pad(torch.ones(L), (0, max(src_len) - L)),\n",
    "        0) for L in src_len])  # valid encoder positions: the first sentence has 2 words, the second has 4\n",
    "valid_encoder_pos = torch.unsqueeze(valid_encoder_pos, 2)  # shape (2, 4, 1): batch_size 2, padded sentence length 4\n",
    "\n",
    "valid_decoder_pos = torch.cat(\n",
    "    [torch.unsqueeze(  # add a 0th dimension\n",
    "        F.pad(torch.ones(L), (0, max(tgt_len) - L)),\n",
    "        0) for L in tgt_len])  # valid decoder positions: the first target has 4 words, the second has 3\n",
    "valid_decoder_pos = torch.unsqueeze(valid_decoder_pos, 2)  # shape (2, 4, 1)\n",
    "valid_cross_pos_matrix = torch.bmm(valid_decoder_pos, valid_encoder_pos.transpose(1, 2))  # (2,4,1) @ (2,1,4) = (2,4,4); 1 marks valid; the batch dim is not multiplied\n",
    "invalid_cross_pos_matrix = 1 - valid_cross_pos_matrix  # 1 marks invalid\n",
    "mask_cross_attention = invalid_cross_pos_matrix.to(torch.bool)\n",
    "print(mask_cross_attention)\n",
    "\n",
    "print(valid_cross_pos_matrix)  # validity of each target position with respect to each source position\n",
    "print(valid_encoder_pos, valid_decoder_pos)  # valid positions of the source and target sequences\n",
    "print(valid_decoder_pos.shape)\n",
    "# 2 is the batch size, 4 the max sequence length, 1 an extra dimension added for the matrix product."
   ]
  },
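  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `(B, T, 1) @ (B, 1, S)` batched outer product at the heart of the cell above, sketched on a single pair of hand-written validity vectors:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "dec = torch.tensor([[1., 1., 1., 0.]]).unsqueeze(2)  # (1, T, 1): last target position is padding\n",
    "enc = torch.tensor([[1., 1., 0., 0.]]).unsqueeze(1)  # (1, 1, S): source has 2 valid positions\n",
    "cross = torch.bmm(dec, enc)                          # (1, T, S): pairwise validity\n",
    "assert cross.shape == (1, 4, 4)\n",
    "assert cross[0, 0].tolist() == [1.0, 1.0, 0.0, 0.0]  # a valid target row sees the valid source cols\n",
    "assert cross[0, 3].tolist() == [0.0, 0.0, 0.0, 0.0]  # a padded target row sees nothing\n",
    "```"
   ]
  },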
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[[False,  True,  True,  True],\n",
      "         [False, False,  True,  True],\n",
      "         [False, False, False,  True],\n",
      "         [False, False, False, False]],\n",
      "\n",
      "        [[False,  True,  True,  True],\n",
      "         [False, False,  True,  True],\n",
      "         [False, False, False,  True],\n",
      "         [ True,  True,  True,  True]]])\n",
      "tensor([4, 3], dtype=torch.int32)\n",
      "tensor([[[1.0000, 0.0000, 0.0000, 0.0000],\n",
      "         [0.3358, 0.6642, 0.0000, 0.0000],\n",
      "         [0.6111, 0.0401, 0.3488, 0.0000],\n",
      "         [0.3272, 0.3146, 0.1661, 0.1921]],\n",
      "\n",
      "        [[1.0000, 0.0000, 0.0000, 0.0000],\n",
      "         [0.1649, 0.8351, 0.0000, 0.0000],\n",
      "         [0.0400, 0.7546, 0.2054, 0.0000],\n",
      "         [0.2500, 0.2500, 0.2500, 0.2500]]])\n"
     ]
    }
   ],
   "source": [
    "# Step 6: mask for decoder self-attention (causal mask)\n",
    "\n",
    "valid_decoder_tri_matrix = torch.cat([torch.unsqueeze(F.pad(\n",
    "    torch.tril(torch.ones((L, L))),  # lower-triangular matrix\n",
    "    (0, max(tgt_len) - L, 0, max(tgt_len) - L)), 0) for L in tgt_len], 0)  # pad (left, right, top, bottom); hides future tokens from the network\n",
    "invalid_decoder_tri_matrix = 1 - valid_decoder_tri_matrix\n",
    "invalid_decoder_tri_matrix = invalid_decoder_tri_matrix.to(torch.bool)\n",
    "\n",
    "print(invalid_decoder_tri_matrix)  # first target sequence has length 4, the second length 3; after padding the shapes match\n",
    "# To predict the 2nd token, the decoder sees one special start token plus the 1st token;\n",
    "# to predict the 3rd token, it sees the start token plus the first 2 tokens.\n",
    "\n",
    "score = torch.randn(batch_size, max(tgt_len), max(tgt_len))\n",
    "masked_score = score.masked_fill(invalid_decoder_tri_matrix, -1e9)  # a very large negative number\n",
    "prob = F.softmax(masked_score, -1)\n",
    "print(tgt_len)\n",
    "print(prob)\n"
   ]
  },
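  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The lower-triangular (causal) structure used above, sketched in isolation:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "L = 4\n",
    "causal = torch.tril(torch.ones(L, L)).bool()  # position i may attend to positions <= i\n",
    "assert causal[0].tolist() == [True, False, False, False]\n",
    "assert causal[3].tolist() == [True, True, True, True]\n",
    "```"
   ]
  },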
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Step: build scaled dot-product self-attention\n",
    "import math\n",
    "\n",
    "def scaled_dot_product_attention(Q, K, V, attn_mask):\n",
    "    # shape of Q, K, V: (batch_size * num_head, seq_len, model_dim / num_head)\n",
    "    # scale by sqrt(d_k), the size of the last dimension of Q\n",
    "    score = torch.bmm(Q, K.transpose(-2, -1)) / math.sqrt(Q.size(-1))\n",
    "    masked_score = score.masked_fill(attn_mask, -1e9)\n",
    "    prob = F.softmax(masked_score, -1)\n",
    "    context = torch.bmm(prob, V)\n",
    "    return context"
   ]
  },
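  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A self-contained usage sketch of scaled dot-product attention (here d_k is taken from the last dimension of Q rather than a global `model_dim`, and the shapes are made up for illustration):\n",
    "\n",
    "```python\n",
    "import math\n",
    "import torch\n",
    "import torch.nn.functional as F\n",
    "\n",
    "def scaled_dot_product_attention(Q, K, V, attn_mask):\n",
    "    # Q, K, V: (batch, seq_len, d_k); attn_mask: bool, True = masked out\n",
    "    score = torch.bmm(Q, K.transpose(-2, -1)) / math.sqrt(Q.size(-1))\n",
    "    masked_score = score.masked_fill(attn_mask, -1e9)\n",
    "    prob = F.softmax(masked_score, -1)\n",
    "    return torch.bmm(prob, V)\n",
    "\n",
    "Q = torch.randn(2, 4, 8)\n",
    "K = torch.randn(2, 4, 8)\n",
    "V = torch.randn(2, 4, 8)\n",
    "mask = torch.zeros(2, 4, 4, dtype=torch.bool)  # nothing masked\n",
    "out = scaled_dot_product_attention(Q, K, V, mask)\n",
    "assert out.shape == (2, 4, 8)\n",
    "```"
   ]
  },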
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "pytorch",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
