{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 多头注意力\n",
    "\n",
    "在实践中，当给定相同的查询、键和值的集合时，我们希望模型可以基于相同的注意力机制学习到不同的行为，然后将不同的行为知识组合起来。\n",
    "\n",
    "例如捕获序列内各种范围的依赖关系（例如，短距离依赖和长距离依赖）。因此，允许注意力机制组合实用查询、键和值得不同 子空间表示（representation subspaces）可能是有益的。\n",
    "\n",
    "为此，与使用单独一个注意力汇聚不同，我们可以用独立学习得到的h组不同的线性投影（linear projections）来变换查询、键和值。然后，这h组变换后的查询、键和值将并行地送到注意力汇聚中。最后，将这h个注意力汇聚的输出拼接在一起，并且通过另一个可以学习的线性投影进行变换，以产生最终输出。这种设计被称为多头注意力，其中h个注意力汇聚输出中的每一个输出都被称作为头（head）。图（10.5.1)展示了使用全连接层来实现可学习的线性变换的多头注意力。\n",
    "\n",
    "![10.5.1](../img/multi-head-attention.svg)  \n",
    "10.5.1 多头注意力，多个头连结然后线性变换"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 模型\n",
    "\n",
    "在实现多头注意力之前，让我们用数学语言将这个模型形式化描述出来，给定查询 ${q \\in R^{d_q}}$、键${k \\in R^{d_k}}$和值${v \\in R^{d_v}}$，每个注意力头${h_i(i=1,...,h)}$的计算方法为：\n",
    "\n",
    "${\\mathbf{h}_i = f(\\mathbf W_i^{(q)}\\mathbf q, \\mathbf W_i^{(k)}\\mathbf k,\\mathbf W_i^{(v)}\\mathbf v) \\in \\mathbb R^{p_v},}$\n",
    "\n",
    "其中，可学习的参数包括${W_i^{(q)} \\in R^{p_q \\times d_q}}$ 、${W_i^{(k)} \\in R^{p_k \\times d_k}}$ 和${W_i^{(v))} \\in R^{p_v \\times d_v}}$以及代表注意力汇聚的函数${f}$。${f}$可以是加性注意力和缩放点积注意力。多头注意力的输出需要经过另一个线性变换，它对应着h个头连结后的结果，因此其可学习参数是${W_o \\in R^{p_o \\times hp_v}}$\n",
    "\n",
    "${ \\begin{matrix}\\mathbf W_o \\begin{bmatrix}\\mathbf h_1\\\\\\vdots\\\\\\mathbf h_h\\end{bmatrix} \\in \\mathbb{R}^{p_o}.\\end{matrix} }$\n",
    "\n",
    "基于这种设计，每个头都可能关注输入的不同部分。可以表示比加权平均值更复杂的函数。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "import torch\n",
    "from torch import nn\n",
    "import d2lzh_pytorch as d2l"
   ]
  },
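  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check of the per-head formula in the model section above, the following minimal sketch (the dimensions `d = 8` and `p = 4` and the five key-value pairs are arbitrary choices for illustration, not from the text) applies one head's projections to a single query, using scaled dot-product attention as $f$, and verifies that the head output lies in $\\mathbb{R}^{p_v}$:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# One attention head: h_i = f(W_q q, W_k k, W_v v), with f chosen as\n",
    "# scaled dot-product attention over five key-value pairs.\n",
    "d, p = 8, 4                        # input dim and per-head dim (arbitrary)\n",
    "W_q, W_k, W_v = (torch.randn(p, d) for _ in range(3))\n",
    "q = torch.randn(d)                 # one query\n",
    "keys, values = torch.randn(5, d), torch.randn(5, d)\n",
    "scores = (keys @ W_k.T) @ (W_q @ q) / math.sqrt(p)   # (5,) attention scores\n",
    "weights = torch.softmax(scores, dim=0)               # nonnegative, sum to 1\n",
    "h_i = weights @ (values @ W_v.T)                     # head output in R^p\n",
    "h_i.shape                          # torch.Size([4])"
   ]
  },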
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 实现\n",
    "\n",
    "在实现过程中，我们选择缩放点积注意力作为每一个注意力头。为了避免计算成本和参数数量的大幅增长，我们设定 ${p_q=p_k=p_v=p_o/h}$。\n",
    "\n",
    "值得注意的是，如果我们将查询、键和值得线性变换的输出数量设置为 ${ p_{q} h=p_{k} h=p_{v} h=p_o }$，则可以并行计算 ${h}$个头。\n",
    "\n",
    "在下面的实现中，${p_o}$ 是通过参数num_hiddens指定的。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [],
   "source": [
    "class MultiHeadAttention(nn.Module):\n",
    "    def __init__(self,key_size,query_size,value_size,num_hiddens,num_heads,dropout,bias=False,debug=False,**kwargs):\n",
    "        super(MultiHeadAttention, self).__init__(**kwargs)\n",
    "        self.num_heads=num_heads\n",
    "        self.attention=d2l.DotProductAttention(dropout)\n",
    "        self.W_q=nn.Linear(query_size,num_hiddens,bias=bias)\n",
    "        self.W_k=nn.Linear(key_size,num_hiddens,bias=bias)\n",
    "        self.W_v=nn.Linear(value_size,num_hiddens,bias=bias)\n",
    "        self.W_o=nn.Linear(num_hiddens,num_hiddens,bias=bias)\n",
    "        self.log=d2l.logger(debug)\n",
    "\n",
    "    def forward(self, queries, keys, values, vaild_lens):\n",
    "        self.log.print(queries.shape)\n",
    "        self.log.print(keys.shape)\n",
    "        self.log.print(values.shape)\n",
    "        self.log.print(vaild_len.shape)\n",
    "\n",
    "        queries=d2l.transpose_qkv(self.W_q(queries),self.num_heads)\n",
    "        keys=d2l.transpose_qkv(self.W_q(keys),self.num_heads)\n",
    "        values=d2l.transpose_qkv(self.W_q(values),self.num_heads)\n",
    "\n",
    "        # 在轴0，将第一项（标量或者矢量）复制`num_heads`次\n",
    "        # 然后如此复制第二项，然后诸如此类。\n",
    "        if vaild_lens is not None:\n",
    "            # repeat和repeat_interleave之间的区别\n",
    "            # repeat_interleave() 在原有的tensor，按每一个tensor复制\n",
    "            # repeat 根据原有的tensor复制n个，然后拼接到一起\n",
    "\n",
    "            vaild_lens=torch.repeat_interleave(vaild_lens, repeats=self.num_heads, dim=0)\n",
    "        \n",
    "        # `output`的形状：（`batch_size`*`num_heads`,查询的个数,\n",
    "        # `num_hiddens`/ `num_heads`\n",
    "        output=self.attention(queries,keys,values,vaild_lens)\n",
    "        output_concat=transpose_output(output,self.num_heads)\n",
    "        return self.W_o(output_concat)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# def forward(self, queries, keys, values, vaild_lens=None):\n",
    "#     # self.log.print_arg_shape(queries, keys, values, vaild_lens)\n",
    "\n",
    "#     d = queries.shape[-1]\n",
    "\n",
    "#     scores = torch.bmm(queries, keys.transpose(1, 2)) / math.sqrt(d)\n",
    "\n",
    "#     self.attention_weights = d2l.masked_softmax(scores, vaild_lens)\n",
    "#     return torch.bmm(self.dropout(self.attention_weights), values)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`queries`, `keys`, or `values` 的形状:  \n",
    "* (`batch_size`, 查询或者“键－值”对的个数, `num_hiddens`)  \n",
    "\n",
    "\n",
    "\n",
    "`valid_lens`　的形状:  \n",
    "* (`batch_size`,) or (`batch_size`, 查询的个数)  \n",
    "\n",
    "经过变换后，输出的 `queries`, `keys`, or `values`　的形状:  \n",
    "* (`batch_size` * `num_heads`, 查询或者“键－值”对的个数,  \n",
    "* `num_hiddens` / `num_heads`)  "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "为了能够使多个头并行计算，上面的MultiHeadAttention类使用了下面定义的两个转置函数。具体来说，transpose_output函数翻转了transpose_qkv函数的操作。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "def transpose_qkv(X,num_heads):\n",
    "    X=X.reshape(X.shape[0],X.shape[1],num_heads,-1)\n",
    "    X=X.permute(0,2,1,3)\n",
    "    return X.reshape(-1,X.shape[2],X.shape[3])\n",
    "\n",
    "def transpose_output(X,num_heads):\n",
    "    \"\"\"逆转transpose_qkv函数的操作\"\"\"\n",
    "    X=X.reshape(-1,num_heads,X.shape[1],X.shape[2])\n",
    "    X=X.permute(0,2,1,3)\n",
    "    return X.reshape(X.shape[0],X.shape[1],-1)"
   ]
  },
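  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick check (with arbitrary example dimensions), transpose_output should recover the original tensor after transpose_qkv:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X = torch.randn(2, 4, 100)          # (batch_size, no. of queries, num_hiddens)\n",
    "Xt = transpose_qkv(X, num_heads=5)  # (2 * 5, 4, 100 / 5)\n",
    "print(Xt.shape)                     # torch.Size([10, 4, 20])\n",
    "print(torch.equal(transpose_output(Xt, num_heads=5), X))  # True"
   ]
  },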
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "让我们使用键和值相同的小例子来测试我们编写的MultiHeadAttention类。\n",
    "\n",
    "多头注意力输出的形状是（batch_size,num_queries,num_hiddens)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "MultiHeadAttention(\n",
       "  (attention): DotProductAttention(\n",
       "    (dropout): Dropout(p=0.5, inplace=False)\n",
       "  )\n",
       "  (W_q): Linear(in_features=100, out_features=100, bias=False)\n",
       "  (W_k): Linear(in_features=100, out_features=100, bias=False)\n",
       "  (W_v): Linear(in_features=100, out_features=100, bias=False)\n",
       "  (W_o): Linear(in_features=100, out_features=100, bias=False)\n",
       ")"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "num_hiddens,num_heads=100,5\n",
    "attention=MultiHeadAttention(num_hiddens,num_hiddens,num_hiddens,num_hiddens,num_heads,0.5)\n",
    "attention.eval()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO: X.shape > torch.Size([2, 4, 100])\n",
      "INFO: Y.shape > torch.Size([2, 6, 100])\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "torch.Size([2, 4, 100])"
      ]
     },
     "execution_count": 19,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "batch_size,num_queries,num_kvpairs,vaild_len=2,4,6,torch.tensor([3,2])\n",
    "\n",
    "# X.shape --> [2,4,100]\n",
    "X=torch.ones((batch_size,num_queries,num_hiddens))\n",
    "\n",
    "print(\"INFO: X.shape >\",X.shape)\n",
    "\n",
    "\n",
    "Y=torch.ones((batch_size,num_kvpairs,num_hiddens))\n",
    "print(\"INFO: Y.shape >\",Y.shape)\n",
    "\n",
    "\n",
    "attention(X,Y,Y,vaild_len).shape"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "pytorch_gpu",
   "language": "python",
   "name": "pytorch_gpu"
  },
  "orig_nbformat": 4
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
