{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 注意力机制的思想与原理\n",
    "\n",
    "文字，全流程详细讲解，无代码 https://blog.csdn.net/benzhujie1245com/article/details/117173090\n",
    "\n",
    "视频，原因讲解 https://www.bilibili.com/video/BV1dt4y1J7ov/\n",
    "\n",
    "视频，讲解详细 https://www.bilibili.com/video/BV1v3411r78R/\n",
    "\n",
    "文字，简单讲解，有代码 https://blog.csdn.net/qq_52785473/article/details/124537101\n",
    "\n",
    "文字，简单讲解，有代码 https://blog.csdn.net/Datawhale/article/details/120320116\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<img src=\"33.png\"/>"
      ],
      "text/plain": [
       "<IPython.core.display.Image object>"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from IPython.display import Image\n",
    "Image(url= \"33.png\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<img src=\"34.png\"/>"
      ],
      "text/plain": [
       "<IPython.core.display.Image object>"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "Image(url= \"34.png\")"
   ]
  },
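  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The figures above illustrate scaled dot-product attention. In formula form (the standard Transformer definition, matching the `attention` function implemented below):\n",
    "\n",
    "$$\\\\mathrm{Attention}(Q, K, V) = \\\\mathrm{softmax}\\\\left(\\\\frac{QK^T}{\\\\sqrt{d_k}}\\\\right)V$$"
   ]
  },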
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# transformer中的注意力机制的代码实现"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "import torch.optim as optim\n",
    "\n",
    "import math\n",
    "from torch.autograd import Variable\n",
    "import matplotlib.pyplot as plt\n",
    "import numpy as np\n",
    "import copy"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## transformer几个重要的功能函数\n",
    "\n",
    "### masked_fill 掩码\n",
    "\n",
    "mask（掩码、掩膜）是深度学习中的常见操作。简单而言，其相当于在原始张量上盖上一层掩膜，从而屏蔽或选择一些特定元素，因此常用于构建张量的过滤器（见下图）\n",
    "\n",
    "按照上述定义，非线性激活函数Relu（根据输出的正负区间进行简单粗暴的二分）、dropout机制（根据概率进行二分）都可以理解为泛化的mask操作。\n",
    "\n",
    "从任务适应性上，mask在图像和自然语言处理中都广为应用，其应用包括但不局限于：图像兴趣区提取、图像屏蔽、图像结构特征提取、语句padding对齐的mask、语言模型中sequence mask等。\n",
    "\n",
    "从使用mask的具体流程上，其可以作用于数据的预处理（如原始数据的过滤）、模型中间层（如relu、drop等）和模型损失计算上（如padding序列的损失忽略）\n",
    "\n",
    "https://aistudio.csdn.net/63aaf7f90d4fc52e3cfc4359.html"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 64,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<img src=\"36.png\"/>"
      ],
      "text/plain": [
       "<IPython.core.display.Image object>"
      ]
     },
     "execution_count": 64,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "Image(url= \"36.png\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "origin tensor:\n",
      "tensor([[ 0,  1,  2,  3],\n",
      "        [ 4,  5,  6,  7],\n",
      "        [ 8,  9, 10, 11],\n",
      "        [12, 13, 14, 15]])\n",
      "\n",
      "mask tensor:\n",
      "tensor([[ True, False, False, False],\n",
      "        [False,  True, False, False],\n",
      "        [False, False,  True, False],\n",
      "        [False, False, False,  True]])\n",
      "\n",
      "filled tensor:\n",
      "tensor([[100,   1,   2,   3],\n",
      "        [  4, 100,   6,   7],\n",
      "        [  8,   9, 100,  11],\n",
      "        [ 12,  13,  14, 100]])\n"
     ]
    }
   ],
   "source": [
    "tensor = torch.arange(0,16).view(4,4)\n",
    "print('origin tensor:\\n{}\\n'.format(tensor))\n",
    "\n",
    "mask = torch.eye(4,dtype=torch.bool)\n",
    "print('mask tensor:\\n{}\\n'.format(mask))\n",
    "\n",
    "tensor = tensor.masked_fill(mask,100)\n",
    "print('filled tensor:\\n{}'.format(tensor))\n"
   ]
  },
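  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A minimal sketch of the sequence (causal) mask mentioned above:\n",
    "# entries above the diagonal are masked out, so each position can only\n",
    "# attend to itself and earlier positions, as in a Transformer decoder.\n",
    "seq_mask = torch.triu(torch.ones(4, 4, dtype=torch.bool), diagonal=1)\n",
    "scores = torch.arange(0, 16, dtype=torch.float).view(4, 4)\n",
    "print(scores.masked_fill(seq_mask, float('-inf')))"
   ]
  },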
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Dropout\n",
    "\n",
    "Dropout是一种常用的正则化方法，通过随机将部分神经元的输出置为0来减少过拟合"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([-1.6486,  0.9010, -2.5143,  0.2972,  2.9130, -0.6835,  1.0512,  1.4107,\n",
      "         0.1304,  1.6109, -0.0490, -3.6601, -0.3592,  0.8066, -2.2930, -0.7336])\n",
      "tensor([-2.0607,  1.1262, -3.1429,  0.3716,  3.6412, -0.8543,  1.3140,  0.0000,\n",
      "         0.1630,  2.0137, -0.0613, -0.0000, -0.4490,  1.0083, -2.8662, -0.9170])\n"
     ]
    }
   ],
   "source": [
    "m = nn.Dropout(p=0.2)\n",
    "input = torch.randn(20, 16)\n",
    "output = m(input)\n",
    "print(input[0])\n",
    "print(output[0])\n",
    "\n",
    "#有一部分的值变为了0，这些值大约占据总数的0.2。\n",
    "#其它非0参数都除以0.8，使得值变大了。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "-0.854375"
      ]
     },
     "execution_count": 36,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "-0.6835/0.8"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### view\n",
    "\n",
    "view squeeze transpose比较\n",
    "\n",
    "https://blog.csdn.net/lsb2002/article/details/132905346\n",
    "\n",
    "* 通过手工指定，将一个一维tensor变换为3*8维的tensor\n",
    "\n",
    "* 如果某个参数为-1，则表示该维度取决于其它维度，由Pytorch自己补充\n",
    "\n",
    "* 将tensor展平成一维"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16, 17, 18,\n",
      "        19, 20, 21, 22, 23, 24])\n",
      "tensor([[ 1,  2,  3,  4,  5,  6],\n",
      "        [ 7,  8,  9, 10, 11, 12],\n",
      "        [13, 14, 15, 16, 17, 18],\n",
      "        [19, 20, 21, 22, 23, 24]])\n",
      "torch.Size([24])\n",
      "torch.Size([4, 6])\n"
     ]
    }
   ],
   "source": [
    "#通过手工指定，将一个一维tensor变换为3*8维的tensor\n",
    "\n",
    "a1 = torch.tensor([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, \n",
    "                   13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24])\n",
    " \n",
    "a2 = a1.view(4, 6)\n",
    "print(a1)\n",
    "print(a2)\n",
    "print(a1.shape)\n",
    "print(a2.shape)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16, 17, 18,\n",
      "        19, 20, 21, 22, 23, 24])\n",
      "tensor([[ 1,  2,  3,  4,  5,  6],\n",
      "        [ 7,  8,  9, 10, 11, 12],\n",
      "        [13, 14, 15, 16, 17, 18],\n",
      "        [19, 20, 21, 22, 23, 24]])\n",
      "tensor([[[ 1,  2,  3,  4],\n",
      "         [ 5,  6,  7,  8],\n",
      "         [ 9, 10, 11, 12]],\n",
      "\n",
      "        [[13, 14, 15, 16],\n",
      "         [17, 18, 19, 20],\n",
      "         [21, 22, 23, 24]]])\n",
      "tensor([[[ 1,  2],\n",
      "         [ 3,  4],\n",
      "         [ 5,  6]],\n",
      "\n",
      "        [[ 7,  8],\n",
      "         [ 9, 10],\n",
      "         [11, 12]],\n",
      "\n",
      "        [[13, 14],\n",
      "         [15, 16],\n",
      "         [17, 18]],\n",
      "\n",
      "        [[19, 20],\n",
      "         [21, 22],\n",
      "         [23, 24]]])\n",
      "torch.Size([24])\n",
      "torch.Size([4, 6])\n",
      "torch.Size([2, 3, 4])\n",
      "torch.Size([4, 3, 2])\n"
     ]
    }
   ],
   "source": [
    "#如果某个参数为-1，则表示该维度取决于其它维度，由Pytorch自己补充\n",
    "a3 = torch.tensor([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,\n",
    "                   13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24])\n",
    " \n",
    "a4 = a3.view(4, -1)\n",
    "a5 = a3.view(2, 3, -1)\n",
    "a6 = a3.view(-1, 3, 2)\n",
    " \n",
    "print(a3)\n",
    "print(a4)\n",
    "print(a5)\n",
    "print(a6)\n",
    "print(a3.shape)\n",
    "print(a4.shape)\n",
    "print(a5.shape)\n",
    "print(a6.shape)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12],\n",
      "        [13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]])\n",
      "tensor([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16, 17, 18,\n",
      "        19, 20, 21, 22, 23, 24])\n",
      "torch.Size([2, 12])\n",
      "torch.Size([24])\n"
     ]
    }
   ],
   "source": [
    "# 将tensor展平成一维\n",
    " \n",
    "a7 = torch.tensor([[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],\n",
    "                   [13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]])\n",
    "a8 = a6.view(-1)\n",
    "print(a7)\n",
    "print(a8)\n",
    "print(a7.shape)\n",
    "print(a8.shape)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### register_buffer\n",
    "\n",
    "https://blog.csdn.net/devil_son1234/article/details/130699031\n",
    "\n",
    "* Parameter与Buffer\n",
    "\n",
    "模型保存下来的参数有两种：一种是需要更新的Parameter，另一种是不需要更新的buffer。在模型中，利用backward反向传播，可以通过requires_grad来得到buffer和parameter的梯度信息，但是利用optimizer进行更新的是parameter，buffer不会更新，这也是两者最重要的区别。这两种参数都存在于model.state_dict()的OrderedDict中，也会随着模型“移动”（model.cuda()）。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "OrderedDict([('my_buffer', tensor([[[[-1.4281, -2.0015,  0.5312,  1.1952, -0.7626],\n",
      "          [ 0.0796, -1.5467,  1.0079, -0.4127, -0.0910],\n",
      "          [ 0.3273, -0.5740,  0.7187,  0.6561, -1.1638],\n",
      "          [-0.1531,  1.1297,  0.7064,  1.2909,  0.3901],\n",
      "          [ 0.7776,  0.5719,  0.8909,  0.0940,  0.8989]]]], device='cuda:0')), ('conv.weight', tensor([[[[ 0.2759, -0.3194,  0.2303],\n",
      "          [ 0.0626, -0.3022,  0.3288],\n",
      "          [-0.1622,  0.0643, -0.2257]]]], device='cuda:0')), ('conv.bias', tensor([0.1275], device='cuda:0'))])\n",
      "..........\n",
      "tensor([[[[-1.4281, -2.0015,  0.5312,  1.1952, -0.7626],\n",
      "          [ 0.0796, -1.5467,  1.0079, -0.4127, -0.0910],\n",
      "          [ 0.3273, -0.5740,  0.7187,  0.6561, -1.1638],\n",
      "          [-0.1531,  1.1297,  0.7064,  1.2909,  0.3901],\n",
      "          [ 0.7776,  0.5719,  0.8909,  0.0940,  0.8989]]]])\n",
      "tensor([[[[-1.4281, -2.0015,  0.5312,  1.1952, -0.7626],\n",
      "          [ 0.0796, -1.5467,  1.0079, -0.4127, -0.0910],\n",
      "          [ 0.3273, -0.5740,  0.7187,  0.6561, -1.1638],\n",
      "          [-0.1531,  1.1297,  0.7064,  1.2909,  0.3901],\n",
      "          [ 0.7776,  0.5719,  0.8909,  0.0940,  0.8989]]]], device='cuda:0')\n"
     ]
    }
   ],
   "source": [
    "class my_model(nn.Module):\n",
    "    def __init__(self):\n",
    "        super(my_model, self).__init__()\n",
    "        self.conv = nn.Conv2d(1, 1, 3, 1, 1)\n",
    "        self.tensor = torch.randn(size=(1, 1, 5, 5))\n",
    "        self.register_buffer('my_buffer', self.tensor)\n",
    " \n",
    "    def forward(self, x):\n",
    "        return self.conv(x) + self.my_buffer  # 这里不再是self.tensor\n",
    " \n",
    " \n",
    "x = torch.randn(size=(1, 1, 5, 5))\n",
    "x = x.to('cuda')\n",
    "model = my_model().cuda()\n",
    "model(x)\n",
    "print(model.state_dict())\n",
    "print('..........')\n",
    "print(model.tensor)\n",
    "print(model.my_buffer)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Variable\n",
    "\n",
    "https://blog.csdn.net/Mr_zhuo_/article/details/108132061\n",
    "\n",
    "pytorch两个基本对象：Tensor（张量）和Variable（变量）其中，tensor不能反向传播，variable可以反向传播。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [],
   "source": [
    "#Variable(torch.zeros(8, 4, 4))"
   ]
  },
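  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A minimal sketch: a plain tensor with requires_grad=True\n",
    "# participates in backpropagation without any Variable wrapper.\n",
    "t = torch.ones(2, 2, requires_grad=True)\n",
    "loss = (t * 3).sum()\n",
    "loss.backward()\n",
    "print(t.grad)  # every element is 3\n"
   ]
  },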
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### torch.matmul\n",
    "\n",
    "https://blog.csdn.net/weixin_44225182/article/details/126655303\n",
    "\n",
    "\n",
    "各个相乘函数的比较\n",
    "\n",
    "https://blog.csdn.net/jizhidexiaoming/article/details/82502724\n",
    "\n",
    "如果两个参数都是二维的，则返回矩阵-矩阵乘积\n",
    "也就是 正常的矩阵乘法 (m * n) * (n * k) = (m * k)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "ans : tensor([[24., 30.],\n",
      "        [24., 30.]])\n",
      "ans.size : torch.Size([2, 2])\n"
     ]
    }
   ],
   "source": [
    "tensor1 = torch.Tensor([[1,2,3],\n",
    "                        [1,2,3]])\n",
    "tensor2 =torch.Tensor([[4,5],\n",
    "                       [4,5],\n",
    "                       [4,5]])\n",
    "ans = torch.matmul(tensor1, tensor2)\n",
    "\n",
    "print('ans :', ans)\n",
    "print('ans.size :', ans.size())\n"
   ]
  },
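  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A minimal sketch: with higher-dimensional inputs torch.matmul batches\n",
    "# over the leading dimensions. This is exactly the shape pattern used in\n",
    "# multi-head attention below: (batch, head, seq_len, d_k).\n",
    "q = torch.randn(2, 8, 4, 64)\n",
    "k = torch.randn(2, 8, 4, 64)\n",
    "scores = torch.matmul(q, k.transpose(-2, -1))\n",
    "print(scores.shape)  # torch.Size([2, 8, 4, 4])\n"
   ]
  },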
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### nn.ModuleList()\n",
    "\n",
    "https://blog.csdn.net/AdamCY888/article/details/131270295\n",
    "\n",
    "nn.ModuleList() 是 PyTorch 中的一个类，用于管理神经网络模型中的子模块列表。它允许我们将多个子模块组织在一起，并将它们作为整个模型的一部分进行管理和操作。\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [],
   "source": [
    "class MyModel(nn.Module):\n",
    "    def __init__(self):\n",
    "        super(MyModel, self).__init__()\n",
    "\n",
    "        self.module_list = nn.ModuleList([\n",
    "            nn.Linear(10, 20),\n",
    "            nn.ReLU(),\n",
    "            nn.Linear(20, 10),\n",
    "        ])\n",
    "\n",
    "    def forward(self, x):\n",
    "        for module in self.module_list:\n",
    "            x = module(x)\n",
    "        return x\n",
    "\n",
    "model = MyModel()\n",
    "input_tensor = torch.randn(32, 10)\n",
    "output_tensor = model(input_tensor)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### contiguous\n",
    "\n",
    "https://blog.csdn.net/m0_48241022/article/details/132804698\n",
    "\n",
    "张量的连续性、contiguous函数\n",
    "\n",
    "在pytorch中，tensor的实际数据以一维数组（storage）的形式存储于某个连续的内存中，以“行优先”进行存储\n",
    " tensor连续（contiguous）是指tensor的storage元素排列顺序与其按行优先时的元素排列顺序相同\n",
    "\n",
    "\n",
    " tensor不连续会导致某些操作无法进行，比如view()就无法进行。在上面的例子中：由于 b 是不连续的，所以对其进行view()操作会报错；b.view(3,3)没报错，因为b本身的shape就是(3,3)。\n",
    "\n",
    "  tensor.contiguous()返回一个与原始tensor有相同元素的 “连续”tensor，如果原始tensor本身就是连续的，则返回原始tensor。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<img src=\"37.png\"/>"
      ],
      "text/plain": [
       "<IPython.core.display.Image object>"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "Image(url= \"37.png\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[1, 2, 3],\n",
      "        [4, 5, 6],\n",
      "        [7, 8, 9]])\n",
      " 1\n",
      " 2\n",
      " 3\n",
      " 4\n",
      " 5\n",
      " 6\n",
      " 7\n",
      " 8\n",
      " 9\n",
      "[torch.storage.TypedStorage(dtype=torch.int64, device=cpu) of size 9]\n",
      "True\n",
      "tensor([[1, 4, 7],\n",
      "        [2, 5, 8],\n",
      "        [3, 6, 9]])\n",
      " 1\n",
      " 2\n",
      " 3\n",
      " 4\n",
      " 5\n",
      " 6\n",
      " 7\n",
      " 8\n",
      " 9\n",
      "[torch.storage.TypedStorage(dtype=torch.int64, device=cpu) of size 9]\n",
      "False\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "C:\\Users\\Administrator\\AppData\\Local\\Temp\\ipykernel_3616\\4290763692.py:5: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.  To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()\n",
      "  print(a.storage())\n"
     ]
    }
   ],
   "source": [
    "a = torch.tensor([[1, 2, 3],\n",
    "                  [4, 5, 6],\n",
    "                  [7, 8, 9]])\n",
    "print(a)\n",
    "print(a.storage())\n",
    "print(a.is_contiguous())  # a是连续的\n",
    "\n",
    " \n",
    "b = a.t()  # b是a的转置\n",
    "print(b)\n",
    "print(b.storage())\n",
    "print(b.is_contiguous())  # b是不连续的"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[1, 4, 7],\n",
      "        [2, 5, 8],\n",
      "        [3, 6, 9]])\n",
      "tensor([[1, 4, 7],\n",
      "        [2, 5, 8],\n",
      "        [3, 6, 9]])\n",
      " 1\n",
      " 2\n",
      " 3\n",
      " 4\n",
      " 5\n",
      " 6\n",
      " 7\n",
      " 8\n",
      " 9\n",
      "[torch.storage.TypedStorage(dtype=torch.int64, device=cpu) of size 9]\n",
      " 1\n",
      " 4\n",
      " 7\n",
      " 2\n",
      " 5\n",
      " 8\n",
      " 3\n",
      " 6\n",
      " 9\n",
      "[torch.storage.TypedStorage(dtype=torch.int64, device=cpu) of size 9]\n"
     ]
    }
   ],
   "source": [
    "c = b.contiguous()\n",
    "print(b)\n",
    "print(c)\n",
    "print(b.storage())\n",
    "print(c.storage())"
   ]
  },
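  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A minimal sketch of the error mentioned above: view() with a new shape\n",
    "# fails on the non-contiguous b, while reshape() (or contiguous().view())\n",
    "# succeeds by copying the data when necessary.\n",
    "try:\n",
    "    b.view(-1)\n",
    "except RuntimeError as e:\n",
    "    print('view failed:', e)\n",
    "print(b.reshape(-1))\n"
   ]
  },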
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## transformer 主要类\n",
    "\n",
    "### 词嵌入\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "metadata": {},
   "outputs": [],
   "source": [
    "#词嵌入\n",
    "class Embeddings(nn.Module):\n",
    "    def __init__(self, d_model, vocab):\n",
    "    # d_model:词嵌入维度\n",
    "    # vocab:字典大小\n",
    "        super(Embeddings, self).__init__()\n",
    "        self.lut = nn.Embedding(vocab, d_model)\n",
    "        self.d_model = d_model\n",
    "    def forward(self, x):\n",
    "        return self.lut(x) * math.sqrt(self.d_model)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "torch.Size([2, 4, 512])\n"
     ]
    }
   ],
   "source": [
    "d_model = 512  # embedding_size\n",
    "vocab = 1000  # 词典大小\n",
    "x=torch.tensor([[100, 2, 421, 508], [491, 998, 1, 221]], dtype=torch.long)\n",
    "emb = Embeddings(d_model, vocab)\n",
    "embr = emb(x)\n",
    "print(embr.shape)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[[  4.4606,  34.4301, -24.5994,  ...,  17.1663,  28.1237,  16.9237],\n",
       "         [  5.1524,   9.8639,  31.0702,  ...,   3.0191,   9.8241,  26.0486],\n",
       "         [ -6.2616,   7.7158,  23.8920,  ...,  -9.6690,  17.4992,  22.7776],\n",
       "         [ 14.7686,  -5.9697,  17.2761,  ..., -19.2139, -36.0296, -19.4070]],\n",
       "\n",
       "        [[-15.1041,  16.8019,  11.4768,  ...,  17.9029,  27.7799,   1.4143],\n",
       "         [-17.6204,   1.9429,  16.7623,  ..., -51.7329, -12.3380,   1.6418],\n",
       "         [ 13.7802,  -3.5118,   5.1110,  ...,   9.2191, -26.6751,  28.3472],\n",
       "         [-21.7961, -42.0103, -23.3687,  ...,   6.7964, -30.1595,  46.5827]]],\n",
       "       grad_fn=<MulBackward0>)"
      ]
     },
     "execution_count": 42,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "embr"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 位置编码"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "metadata": {},
   "outputs": [],
   "source": [
    "#位置编码\n",
    "class PositionalEncoding(nn.Module):\n",
    "    def __init__(self, d_model, dropout, max_len=5000):\n",
    "    # d_model:词嵌入维度\n",
    "    # dropout:置零比率\n",
    "    # max_len:每个句子最大的长度\n",
    "        super(PositionalEncoding, self).__init__()\n",
    "        self.dropout = nn.Dropout(p=dropout)\n",
    "        pe = torch.zeros(max_len, d_model)\n",
    "        position = torch.arange(0,  max_len).unsqueeze(1)\n",
    "        div_term = torch.exp(torch.arange(0, d_model, 2) * -(math.log(1000.0) / d_model))\n",
    "        pe[:, 0::2] = torch.sin(position * div_term)\n",
    "        pe[:, 1::2] = torch.cos(position * div_term)\n",
    "        pe = pe.unsqueeze(0)\n",
    "        self.register_buffer(\"pe\", pe)\n",
    "\n",
    "    def forward(self, x):\n",
    "        x = x + Variable(self.pe[:, :x.size(1)], requires_grad=False)\n",
    "        return self.dropout(x)"
   ]
  },
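  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A quick sanity check (a sketch): even indices of pe hold sin values and\n",
    "# odd indices hold cos values, so at position 0 the encoding is [0, 1, 0, 1, ...].\n",
    "pe_demo = PositionalEncoding(d_model=16, dropout=0.0, max_len=10)\n",
    "print(pe_demo.pe[0, 0, :8])\n"
   ]
  },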
  {
   "cell_type": "code",
   "execution_count": 45,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "torch.Size([2, 4, 512])\n"
     ]
    }
   ],
   "source": [
    "dropout = 0.1\n",
    "max_len = 60\n",
    "pe = PositionalEncoding(d_model, dropout, max_len)\n",
    "pe_result = pe(embr)\n",
    "print(pe_result.shape)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[[  4.9562,  39.3668, -27.3327,  ...,  20.1848,  31.2486,  19.9152],\n",
       "         [  6.6598,  11.5602,  35.4411,  ...,   4.4657,  10.9168,   0.0000],\n",
       "         [ -5.9470,   8.1107,  27.5802,  ...,  -9.6322,   0.0000,  26.4196],\n",
       "         [ 16.5664,  -7.7330,  19.4397,  ..., -20.2377, -40.0294,  -0.0000]],\n",
       "\n",
       "        [[-16.7823,  19.7799,  12.7519,  ...,  21.0032,   0.0000,   0.0000],\n",
       "         [ -0.0000,   2.7591,  19.5435,  ...,  -0.0000, -13.7077,   2.9353],\n",
       "         [ 16.3216,  -4.3644,   6.7124,  ...,  11.3546, -29.6367,  32.6080],\n",
       "         [-24.0611, -47.7781, -25.7211,  ...,   8.6627, -33.5071,  52.8697]]],\n",
       "       grad_fn=<MulBackward0>)"
      ]
     },
     "execution_count": 46,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "pe_result"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 多头自注意力机制"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "metadata": {},
   "outputs": [],
   "source": [
    "def attention(query, key, value, mask=None, dropout=None):\n",
    "    d_k = query.size(-1)\n",
    "    scores = torch.matmul(query, key.transpose(-2, -1)) / math.sqrt(d_k)\n",
    "    if mask is not None:\n",
    "        scores = scores.masked_fill(mask == 0, -1e9)\n",
    "    p_attn = F.softmax(scores, dim = -1)\n",
    "\n",
    "    if dropout is not None:\n",
    "        p_attn = dropout(p_attn)\n",
    "    \n",
    "    return torch.matmul(p_attn, value), p_attn\n",
    "\n",
    "# 深层拷贝\n",
    "def clones(module, N):\n",
    "    return nn.ModuleList([copy.deepcopy(module) for _ in range(N)])\n",
    "\n",
    "class MultiHeadedAttention(nn.Module):\n",
    "    def __init__(self, head, embedding_dim, dropout=0.1):\n",
    "        # head:代表几个头\n",
    "        # embedding_dim:词嵌入维度\n",
    "        # dropout:置0比率\n",
    "        super(MultiHeadedAttention, self).__init__()\n",
    "\n",
    "        # 确认embedding_dim能够被head整除\n",
    "        assert embedding_dim % head == 0\n",
    "        self.head = head\n",
    "        self.d_k = embedding_dim // head\n",
    "        # 获得4个线性层， 分别是Q、K、V、以及最终的输出的线形层\n",
    "        self.linears = clones(nn.Linear(embedding_dim, embedding_dim), 4)\n",
    "        self.attn = None\n",
    "        self.dropout = nn.Dropout(p=dropout)\n",
    "\n",
    "    def forward(self, query, key, value, mask=None):\n",
    "        if mask is not None:\n",
    "            mask = mask.unsqueeze(0)\n",
    "        \n",
    "        batch_size = query.size(0)\n",
    "\n",
    "        print(query.shape)\n",
    "        \n",
    "        print(len(self.linears))\n",
    "\n",
    "        # 经过线性层投影后分成head个注意力头\n",
    "        query, key, value = [model(x).view(batch_size, -1, self.head, self.d_k).transpose(1, 2) for model, x in zip(self.linears, (query, key, value))]\n",
    "        # 各自计算每个头的注意力\n",
    "        print(query.shape)\n",
    "        \n",
    "        x, self.attn = attention(query, key, value, mask=mask, dropout=self.dropout)\n",
    "        # 转换回来\n",
    "        print(x.shape)\n",
    "        \n",
    "        x = x.transpose(1, 2).contiguous().view(batch_size, -1, self.head * self.d_k)\n",
    "        # 经过最后一个线性层得到最终多头注意力机制的结果\n",
    "        return self.linears[-1](x)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(1, 2)\n",
      "(2, 2)\n",
      "(3, 2)\n"
     ]
    }
   ],
   "source": [
    "for i in zip([1,2,3,4],[2,2,2]):\n",
    "    print(i)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 53,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "64.0"
      ]
     },
     "execution_count": 53,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "512/8"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 51,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "torch.Size([2, 4, 512])\n",
      "4\n",
      "torch.Size([2, 8, 4, 64])\n",
      "torch.Size([2, 8, 4, 64])\n",
      "tensor([[[-8.2004,  0.5804,  1.3639,  ...,  1.7793, -2.9470, -1.8913],\n",
      "         [-6.3798,  2.6990,  3.7775,  ...,  1.6971, -2.1978, -0.7955],\n",
      "         [-6.0106,  2.9720,  4.2135,  ..., -2.0315, -5.0306, -4.9978],\n",
      "         [-6.6885,  0.0595,  1.7017,  ...,  2.5037, -2.7890,  2.4970]],\n",
      "\n",
      "        [[-2.7299, -2.4048,  3.6923,  ..., -7.7733,  1.3931,  1.7657],\n",
      "         [ 3.8099, -1.5517,  0.7698,  ..., -3.5534,  0.2886,  2.9241],\n",
      "         [-2.1036, -3.9115, -4.4982,  ..., -4.2154,  2.1434,  2.2444],\n",
      "         [ 1.0188, -1.2110,  0.7608,  ..., -3.6656,  6.5101,  4.7269]]],\n",
      "       grad_fn=<ViewBackward0>)\n",
      "torch.Size([2, 4, 512])\n"
     ]
    }
   ],
   "source": [
    "head = 8\n",
    "embedding_dim = 512\n",
    "dropout = 0.2\n",
    "query = key = value = pe_result\n",
    "mask = Variable(torch.zeros(8, 4, 4))\n",
    "mha = MultiHeadedAttention(head, embedding_dim, dropout)\n",
    "mha_result = mha(query, key, value, mask)\n",
    "print(mha_result)\n",
    "print(mha_result.shape)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [],
   "source": [
    "query = key = value = pe_result\n",
    "mask = Variable(torch.zeros(2, 4, 4))\n",
    "attn, p_attn = attention(query, key, value,mask=mask)\n",
    "# print(attn)\n",
    "# print(attn.shape)\n",
    "# print(p_attn)\n",
    "# print(p_attn.shape)\n"
   ]
  },
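  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A quick check (a sketch): the attention weights p_attn form a probability\n",
    "# distribution over the keys -- every row sums to 1 after the softmax.\n",
    "q = torch.randn(2, 4, 8)\n",
    "out, weights = attention(q, q, q)\n",
    "print(out.shape)\n",
    "print(weights.sum(-1))  # all ones\n"
   ]
  },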
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 前馈全连接层（PositionwiseFeedForward）\n",
    "\n",
    "考虑注意力机制可能对复杂的情况拟合程度不够，因此增加两层网络来增强模型的能力。\n",
    "\n",
    "前馈全连接层就是两次线性层+Relu激活"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 61,
   "metadata": {},
   "outputs": [],
   "source": [
    "class PositionwiseFeedForward(nn.Module):\n",
    "    def __init__(self, d_model, d_ff, dropout=0.1):\n",
    "        super(PositionwiseFeedForward, self).__init__()\n",
    "        self.w1 = nn.Linear(d_model, d_ff)\n",
    "        self.w2 = nn.Linear(d_ff, d_model)\n",
    "        self.dropout = nn.Dropout(dropout)\n",
    "    \n",
    "    def forward(self, x):\n",
    "        return self.w2(self.dropout(F.relu(self.w1(x))))\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 62,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[[ 1.1700e+00, -8.7976e-02, -1.8591e+00,  ..., -7.9013e-01,\n",
      "          -1.3219e+00, -9.0196e-01],\n",
      "         [-3.7141e-01,  6.4589e-01, -9.5103e-01,  ...,  7.5773e-01,\n",
      "          -1.1638e+00, -2.1573e-01],\n",
      "         [ 1.3626e+00, -4.1240e-01, -8.5201e-01,  ...,  1.1944e+00,\n",
      "          -9.4036e-01,  4.4812e-01],\n",
      "         [ 1.2279e-01, -5.7696e-01, -1.8156e+00,  ..., -3.2551e-01,\n",
      "          -1.9518e+00,  6.0685e-01]],\n",
      "\n",
      "        [[-1.6618e+00, -2.1277e+00, -1.0500e+00,  ..., -5.7305e-01,\n",
      "           5.0484e-01, -4.1263e-01],\n",
      "         [-1.0694e+00, -1.1956e+00, -1.0528e+00,  ..., -3.3320e-01,\n",
      "          -7.3139e-01, -9.6569e-01],\n",
      "         [-2.0961e+00, -2.2061e-01, -8.0619e-01,  ..., -2.7707e-03,\n",
      "          -9.8822e-01, -4.2281e-01],\n",
      "         [-2.9646e+00, -8.1585e-01, -1.3249e+00,  ...,  6.8580e-01,\n",
      "          -6.6974e-01,  8.2907e-01]]], grad_fn=<ViewBackward0>)\n",
      "torch.Size([2, 4, 512])\n"
     ]
    }
   ],
   "source": [
    "d_model = 512\n",
    "d_ff = 64\n",
    "dropout = 0.2\n",
    "x = mha_result\n",
    "ff = PositionwiseFeedForward(d_model, d_ff, dropout=dropout)\n",
    "ff_result = ff(x)\n",
    "print(ff_result)\n",
    "print(ff_result.shape)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### LayerNorm 规范化层\n",
    "\n",
    "BatchNorm简单来说就是对一批样本按照每个特征维度进行归一化\n",
    "\n",
    "Layer Norm是对每个单词的Embedding做归一化\n",
    "\n",
    "https://blog.csdn.net/qq_43827595/article/details/121877901\n",
    "\n",
    "https://liumin.blog.csdn.net/article/details/85075706"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 61,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<img src=\"35.png\"/>"
      ],
      "text/plain": [
       "<IPython.core.display.Image object>"
      ]
     },
     "execution_count": 61,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "Image(url= \"35.png\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 64,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[[-8.2004,  0.5804,  1.3639,  ...,  1.7793, -2.9470, -1.8913],\n",
       "         [-6.3798,  2.6990,  3.7775,  ...,  1.6971, -2.1978, -0.7955],\n",
       "         [-6.0106,  2.9720,  4.2135,  ..., -2.0315, -5.0306, -4.9978],\n",
       "         [-6.6885,  0.0595,  1.7017,  ...,  2.5037, -2.7890,  2.4970]],\n",
       "\n",
       "        [[-2.7299, -2.4048,  3.6923,  ..., -7.7733,  1.3931,  1.7657],\n",
       "         [ 3.8099, -1.5517,  0.7698,  ..., -3.5534,  0.2886,  2.9241],\n",
       "         [-2.1036, -3.9115, -4.4982,  ..., -4.2154,  2.1434,  2.2444],\n",
       "         [ 1.0188, -1.2110,  0.7608,  ..., -3.6656,  6.5101,  4.7269]]],\n",
       "       grad_fn=<ViewBackward0>)"
      ]
     },
     "execution_count": 64,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "x"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 71,
   "metadata": {},
   "outputs": [],
   "source": [
    "class LayerNorm(nn.Module):\n",
    "    def __init__(self, features, eps=1e-6):\n",
    "        super(LayerNorm, self).__init__()\n",
    "        self.a2 = nn.Parameter(torch.ones(features))\n",
    "        self.b2 = nn.Parameter(torch.zeros(features))\n",
    "        self.eps = eps\n",
    "    \n",
    "    def forward(self, x):\n",
    "        mean = x.mean(-1, keepdim = True)\n",
    "        std = x.std(-1, keepdim = True)\n",
    "        return self.a2 * (x - mean) / (std + self.eps) + self.b2"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 70,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[[-1.9218,  0.0708,  0.2487,  ...,  0.3429, -0.7296, -0.4901],\n",
      "         [-1.5170,  0.5434,  0.7881,  ...,  0.3160, -0.5679, -0.2497],\n",
      "         [-1.5732,  0.7361,  1.0552,  ..., -0.5502, -1.3212, -1.3128],\n",
      "         [-1.5733, -0.0602,  0.3080,  ...,  0.4879, -0.6989,  0.4864]],\n",
      "\n",
      "        [[-0.6454, -0.5691,  0.8614,  ..., -1.8287,  0.3220,  0.4094],\n",
      "         [ 0.8695, -0.3313,  0.1886,  ..., -0.7796,  0.0808,  0.6711],\n",
      "         [-0.5172, -0.9272, -1.0602,  ..., -0.9961,  0.4457,  0.4686],\n",
      "         [ 0.1812, -0.2806,  0.1277,  ..., -0.7889,  1.3183,  0.9490]]],\n",
      "       grad_fn=<AddBackward0>)\n",
      "tensor([[[-1.9237,  0.0709,  0.2489,  ...,  0.3433, -0.7304, -0.4906],\n",
      "         [-1.5185,  0.5439,  0.7889,  ...,  0.3163, -0.5685, -0.2499],\n",
      "         [-1.5747,  0.7368,  1.0563,  ..., -0.5508, -1.3225, -1.3141],\n",
      "         [-1.5748, -0.0603,  0.3083,  ...,  0.4883, -0.6996,  0.4868]],\n",
      "\n",
      "        [[-0.6460, -0.5697,  0.8623,  ..., -1.8305,  0.3223,  0.4098],\n",
      "         [ 0.8704, -0.3317,  0.1888,  ..., -0.7804,  0.0809,  0.6718],\n",
      "         [-0.5178, -0.9281, -1.0612,  ..., -0.9970,  0.4461,  0.4690],\n",
      "         [ 0.1814, -0.2809,  0.1279,  ..., -0.7896,  1.3196,  0.9500]]],\n",
      "       grad_fn=<NativeLayerNormBackward0>)\n"
     ]
    }
   ],
   "source": [
    "ln = LayerNorm(512)\n",
    "lnn = nn.LayerNorm(512)\n",
    "ln_result = ln(x)\n",
    "lnn_result = lnn(x)\n",
    "print(ln_result)\n",
    "print(lnn_result)"
   ]
  },
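  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The two printouts above are close but not identical. A small check (added for illustration) showing the two sources of the gap: `x.std()` applies Bessel's correction (divides by n-1) while `nn.LayerNorm` uses the biased variance (divides by n), and the custom class adds `eps` to the standard deviation rather than to the variance.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Reproduce the small gap between the custom LayerNorm and nn.LayerNorm\n",
    "import torch\n",
    "\n",
    "torch.manual_seed(0)\n",
    "x_demo = torch.randn(2, 4, 512)\n",
    "eps = 1e-6\n",
    "mu = x_demo.mean(-1, keepdim=True)\n",
    "\n",
    "# custom-style: unbiased std, eps added to the std\n",
    "custom = (x_demo - mu) / (x_demo.std(-1, keepdim=True) + eps)\n",
    "# nn.LayerNorm-style: biased variance, eps added to the variance\n",
    "builtin = (x_demo - mu) / torch.sqrt(x_demo.var(-1, unbiased=False, keepdim=True) + eps)\n",
    "\n",
    "print((custom - builtin).abs().max())  # small but nonzero"
   ]
  },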
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 子层连接结构(SublayerConnection)\n",
    "\n",
    "Add&Norm"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 59,
   "metadata": {},
   "outputs": [],
   "source": [
    "class SublayerConnection(nn.Module):\n",
    "    def __init__(self, size, dropout=0.1):\n",
    "        super(SublayerConnection, self).__init__()\n",
    "        self.norm = LayerNorm(size)\n",
    "        self.dropout = nn.Dropout(p=dropout) \n",
    "        self.size = size\n",
    "    \n",
    "    def forward(self, x, sublayer):\n",
    "        return x + self.dropout(sublayer(self.norm(x)))\n"
   ]
  },
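  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A self-contained sketch (added for illustration; it swaps in `nn.LayerNorm` so it runs on its own) of the pre-norm residual pattern, using a plain linear layer as the sublayer:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Pre-norm residual sketch: x + dropout(sublayer(norm(x))).\n",
    "# nn.LayerNorm stands in here for the custom LayerNorm defined above.\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "class SublayerConnectionDemo(nn.Module):\n",
    "    def __init__(self, size, dropout=0.1):\n",
    "        super().__init__()\n",
    "        self.norm = nn.LayerNorm(size)\n",
    "        self.dropout = nn.Dropout(p=dropout)\n",
    "\n",
    "    def forward(self, x, sublayer):\n",
    "        # normalize, run the sublayer, apply dropout, then add the residual\n",
    "        return x + self.dropout(sublayer(self.norm(x)))\n",
    "\n",
    "torch.manual_seed(0)\n",
    "x_demo = torch.randn(2, 4, 512)\n",
    "sc_demo = SublayerConnectionDemo(512, dropout=0.2)\n",
    "sc_demo.eval()  # disable dropout so the residual is easy to check\n",
    "\n",
    "out = sc_demo(x_demo, nn.Linear(512, 512))  # any shape-preserving sublayer works\n",
    "print(out.shape)"
   ]
  },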
  {
   "cell_type": "code",
   "execution_count": 60,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[[ 0.3507],\n",
      "         [ 0.6553],\n",
      "         [-1.0369],\n",
      "         [ 0.2879]],\n",
      "\n",
      "        [[ 0.2382],\n",
      "         [-0.2214],\n",
      "         [ 0.7459],\n",
      "         [-0.4876]]], grad_fn=<MeanBackward1>)\n",
      "tensor([[[24.9129],\n",
      "         [23.3701],\n",
      "         [24.4109],\n",
      "         [22.9386]],\n",
      "\n",
      "        [[24.8099],\n",
      "         [24.7052],\n",
      "         [24.3432],\n",
      "         [23.9425]]], grad_fn=<StdBackward0>)\n",
      "Parameter containing:\n",
      "tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n",
      "        1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n",
      "        1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n",
      "        1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n",
      "        1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n",
      "        1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n",
      "        1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n",
      "        1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n",
      "        1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n",
      "        1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n",
      "        1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n",
      "        1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n",
      "        1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n",
      "        1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n",
      "        1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n",
      "        1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n",
      "        1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n",
      "        1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n",
      "        1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n",
      "        1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n",
      "        1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n",
      "        1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n",
      "        1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n",
      "        1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n",
      "        1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n",
      "        1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n",
      "        1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n",
      "        1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,\n",
      "        1., 1., 1., 1., 1., 1., 1., 1.], requires_grad=True)\n",
      "Parameter containing:\n",
      "tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
      "        0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
      "        0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
      "        0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
      "        0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
      "        0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
      "        0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
      "        0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
      "        0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
      "        0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
      "        0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
      "        0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
      "        0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
      "        0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
      "        0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
      "        0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
      "        0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
      "        0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
      "        0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
      "        0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
      "        0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,\n",
      "        0., 0., 0., 0., 0., 0., 0., 0.], requires_grad=True)\n",
      "torch.Size([2, 4, 512])\n",
      "4\n",
      "torch.Size([2, 8, 4, 64])\n",
      "torch.Size([2, 8, 4, 64])\n",
      "tensor([[[ 4.8513e+00,  3.9640e+01, -2.7079e+01,  ...,  1.9967e+01,\n",
      "           3.1491e+01,  1.9921e+01],\n",
      "         [ 6.6452e+00,  1.1801e+01,  3.5593e+01,  ...,  4.4657e+00,\n",
      "           1.1116e+01, -2.0346e-01],\n",
      "         [-6.0026e+00,  8.2205e+00,  2.7977e+01,  ..., -9.8631e+00,\n",
      "           8.9508e-02,  2.6420e+01],\n",
      "         [ 1.6558e+01, -7.5412e+00,  1.9632e+01,  ..., -2.0488e+01,\n",
      "          -3.9828e+01,  0.0000e+00]],\n",
      "\n",
      "        [[-1.6782e+01,  1.9508e+01,  1.2768e+01,  ...,  2.1003e+01,\n",
      "          -5.9011e-02, -2.7535e-02],\n",
      "         [-3.6174e-01,  2.5330e+00,  1.9437e+01,  ..., -3.8678e-01,\n",
      "          -1.3685e+01,  2.8580e+00],\n",
      "         [ 1.6073e+01, -4.5942e+00,  6.7385e+00,  ...,  1.1040e+01,\n",
      "          -2.9748e+01,  3.2529e+01],\n",
      "         [-2.4367e+01, -4.7931e+01, -2.5721e+01,  ...,  8.3428e+00,\n",
      "          -3.3544e+01,  5.2852e+01]]], grad_fn=<AddBackward0>)\n",
      "torch.Size([2, 4, 512])\n"
     ]
    }
   ],
   "source": [
    "size = 512\n",
    "dropout = 0.2\n",
    "head = 8\n",
    "d_model = 512\n",
    "x = pe_result\n",
    "mask = Variable(torch.zeros(8, 4, 4))\n",
    "self_attn = MultiHeadedAttention(head, d_model)\n",
    "sublayer = lambda x: self_attn(x, x, x, mask)\n",
    "sc = SublayerConnection(size, dropout)\n",
    "sc_result = sc(x, sublayer)\n",
    "print(sc_result)\n",
    "print(sc_result.shape)\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.16"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
