{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 50,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "from torch import nn\n",
    "\n",
    "from labml.logger import inspect\n",
    "from py.attention_impl import MultiHeadAttention\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The original Transformer uses absolute positional encoding: every position gets a fixed encoding vector. This can fail to capture relative positional relationships, especially on long sequences. RoPE (Rotary Position Embedding) instead injects position information into the attention mechanism through rotation matrices, which models relative positions directly.\n",
    "\n",
    "The core idea of RoPE is to fold position information into the token embedding by transforming it with a position-dependent rotation matrix. Assuming the input dimension $d$ is even (in practice it virtually always is):\n",
    "\n",
    "$$\n",
    "\\begin{align}\n",
    "RoPE(\\bf{x}_m,m) &=\\bf{R_m}\\bf{x}_m\\\\\n",
    "&=\\begin{pmatrix}\n",
    "\\cos m\\theta_1 & -\\sin m\\theta_1 & 0 &\\cdots & \\cdots & 0 \\\\\n",
    "\\sin m\\theta_1 & \\cos m\\theta_1 & 0 &\\cdots & \\cdots & 0 \\\\\n",
    "0 & 0 & \\cos m\\theta_2 & -\\sin m\\theta_2 & \\cdots & 0\\\\\n",
    "0 & 0 & \\sin m\\theta_2 & \\cos m\\theta_2& \\cdots & 0 \\\\\n",
    "\\vdots & \\vdots & \\cdots &\\ddots & \\cos m\\theta_{\\frac{d}{2}} & -\\sin m\\theta_{\\frac{d}{2}}\\\\\n",
    "0 & 0 & \\cdots &\\cdots & \\sin m\\theta_{\\frac{d}{2}} & \\cos m\\theta_{\\frac{d}{2}}\n",
    "\\end{pmatrix}\n",
    "\\begin{pmatrix}\n",
    "x_m^{(1)} \\\\\n",
    "x_m^{(1+\\frac{d}{2})} \\\\\n",
    "\\vdots\\\\\n",
    "x_m^{(\\frac{d}{2})}\\\\\n",
    "x_m^{(d)}\n",
    "\\end{pmatrix}\n",
    "\\end{align}\n",
    "\n",
    "$$\n",
    "where $\\theta_i=10000^{-\\frac{2(i-1)}{d}}$ and $m$ is the 0-based index of the token in the sequence. Each rotated pair lands back in its original dimensions, so the transform can be computed in place on the original values.\n",
    "\n",
    "For the $i$-th pair (counting from 1), the output is:\n",
    "$$\n",
    "\\begin{pmatrix}\n",
    "x_m^{(i)}\\cos m\\theta_i -x_m^{(i+\\frac{d}{2})}\\sin m\\theta_i \\\\\n",
    "x_m^{(i)}\\sin m\\theta_i + x_m^{(i+\\frac{d}{2})}\\cos m\\theta_i\n",
    "\\end{pmatrix}\n",
    "$$\n",
    "Note that this variant does not rotate adjacent dimensions together: dimension $i$ is paired with dimension $i+\\frac{d}{2}$. This lets the rotation be implemented as a cheap split-and-concatenate over the two halves of the vector rather than an interleaving of elements, and spreads each rotation across the whole vector so that no dimension's information is singled out.\n",
    "\n",
    "In short, RoPE treats every pair of components as coordinates in a 2-D plane and rotates each plane independently. The key reason for this design is that it turns the dot product of two vectors at different absolute positions into a function of their relative position only:\n",
    "$$\n",
    "\\begin{align}\n",
    "&\\langle RoPE(\\vec{x}_m, m), RoPE(\\vec{x}_n , n) \\rangle\\\\\n",
    "&=\\langle RoPE(\\vec{x}_m, m-n), RoPE(\\vec{x}_n , 0) \\rangle\n",
    "\\end{align}\n",
    "$$\n"
   ]
  },
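  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check (a throwaway sketch, not part of the derivation; `rope_matrix` is a helper written just for this cell), we can build the full $d\\times d$ rotation matrix $\\bf{R_m}$ for a small $d$ and confirm it matches the pairwise formula above.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "\n",
    "def rope_matrix(m: int, d: int, base: float = 10000.0) -> torch.Tensor:\n",
    "    # Full d x d rotation matrix R_m, pairing dimension i with i + d/2\n",
    "    R = torch.zeros(d, d)\n",
    "    half = d // 2\n",
    "    for i in range(half):\n",
    "        angle = torch.tensor(m * base ** (-2 * i / d))\n",
    "        c, s = angle.cos(), angle.sin()\n",
    "        R[i, i] = c\n",
    "        R[i, i + half] = -s\n",
    "        R[i + half, i] = s\n",
    "        R[i + half, i + half] = c\n",
    "    return R\n",
    "\n",
    "\n",
    "d, m = 8, 3\n",
    "half = d // 2\n",
    "x = torch.randn(d)\n",
    "# Pairwise formula: out = x * cos(m*theta) + rotate_half(x) * sin(m*theta)\n",
    "theta = 10000.0 ** (-2 * torch.arange(half, dtype=torch.float32) / d)\n",
    "angle = m * torch.cat([theta, theta])\n",
    "rotate_half = torch.cat([-x[half:], x[:half]])\n",
    "pairwise = x * angle.cos() + rotate_half * angle.sin()\n",
    "print(torch.allclose(rope_matrix(m, d) @ x, pairwise, atol=1e-5))"
   ]
  },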
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class RotaryPositionalEmbeddings(nn.Module):\n",
    "    def __init__(self, d: int, base: int = 10_000):\n",
    "        super().__init__()\n",
    "        self.base = base\n",
    "        self.d = d\n",
    "        self._cos_cached = self._sin_cached = None\n",
    "\n",
    "    def _build_cached(self, x: torch.Tensor):\n",
    "        # Reuse the cache if it already covers this sequence length\n",
    "        if self._cos_cached is not None and x.shape[0] <= self._cos_cached.shape[0]:\n",
    "            return\n",
    "        seq_len = x.shape[0]\n",
    "        # theta_i = base^(-2(i-1)/d), see the formula above\n",
    "        theta = (1.0 / self.base **\n",
    "                 (torch.arange(0, self.d, 2).float()/self.d)).to(x.device)\n",
    "\n",
    "        seq_idx = torch.arange(seq_len, device=x.device, dtype=torch.float32)\n",
    "        # A sequence of length n with d/2 frequencies needs n*d/2 angles,\n",
    "        # shape = (seq_len, d//2); the einsum below is just an outer product:\n",
    "        # idx_angle[i][j] = i * theta_j\n",
    "        idx_angle = torch.einsum(\"n,d->nd\", seq_idx, theta)\n",
    "        # shape = (seq_len, d)\n",
    "        both_idx_angle = torch.cat([idx_angle, idx_angle], dim=1)\n",
    "        # shape = (seq_len, 1, 1, d), so the values broadcast over batch and heads\n",
    "        self._sin_cached = both_idx_angle.sin()[:, None, None, :]\n",
    "        self._cos_cached = both_idx_angle.cos()[:, None, None, :]\n",
    "\n",
    "    def _neg_half(self, x: torch.Tensor) -> torch.Tensor:\n",
    "        \"\"\"Along the last dimension, swap the two halves and negate the new\n",
    "        first half: (x1, x2) -> (-x2, x1). Multiplied by sin and added to\n",
    "        x * cos, this realizes the pairwise rotation formula above.\n",
    "        \"\"\"\n",
    "        d_2 = self.d // 2\n",
    "        return torch.cat([-x[..., d_2:], x[..., :d_2]], dim=-1)\n",
    "\n",
    "    def forward(self, x: torch.Tensor):\n",
    "        self._build_cached(x)\n",
    "        seq_len = x.shape[0]\n",
    "        assert x.shape[-1] % self.d == 0\n",
    "        # Another RoPE trait: the rotation can be applied per group of dimensions.\n",
    "        # ChatGLM, for example, rotates only part of the vector and passes the\n",
    "        # rest through unchanged to save compute.\n",
    "        x_chunk = torch.chunk(x, x.shape[-1] // self.d, dim=-1)\n",
    "        x_rope_chunk = []\n",
    "        for x_rope in x_chunk:\n",
    "            neg_half_x = self._neg_half(x_rope)\n",
    "            # Apply the rotation formula. Rather than a row-times-vector dot\n",
    "            # product per output component, this scales whole matrix columns by\n",
    "            # their vector components and sums the column contributions.\n",
    "            x_rope_chunk.append(x_rope * self._cos_cached[:seq_len] + neg_half_x * self._sin_cached[:seq_len])\n",
    "        return torch.cat(x_rope_chunk, dim=-1)"
   ]
  },
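  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick numerical check of the relative-position property stated above (a standalone sketch; `apply_rope` is a minimal helper written for this test, not the class API):\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "\n",
    "def apply_rope(x: torch.Tensor, pos: int, base: float = 10000.0) -> torch.Tensor:\n",
    "    # Rotate a length-d vector (d even) to absolute position pos,\n",
    "    # pairing dimension i with i + d/2 as in the formula above\n",
    "    d = x.shape[-1]\n",
    "    theta = base ** (-2 * torch.arange(d // 2, dtype=torch.float32) / d)\n",
    "    angle = pos * torch.cat([theta, theta])\n",
    "    rotate_half = torch.cat([-x[d // 2:], x[:d // 2]])\n",
    "    return x * angle.cos() + rotate_half * angle.sin()\n",
    "\n",
    "\n",
    "q, k = torch.randn(8), torch.randn(8)\n",
    "m, n = 7, 3\n",
    "# The dot product should depend only on the relative offset m - n\n",
    "lhs = apply_rope(q, m) @ apply_rope(k, n)\n",
    "rhs = apply_rope(q, m - n) @ apply_rope(k, 0)\n",
    "print(torch.allclose(lhs, rhs, atol=1e-4))"
   ]
  },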
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "True\n"
     ]
    }
   ],
   "source": [
    "import torch\n",
    "\n",
    "\n",
    "class MyRoPE(nn.Module):\n",
    "    def __init__(self, d: int, base: int = 10000):\n",
    "        super().__init__()\n",
    "        self.d = d\n",
    "        self.base = base\n",
    "        self._sin_cached = self._cos_cached = None\n",
    "\n",
    "    def _build_cache(self, x: torch.Tensor):\n",
    "        # Round the cached length up to an even number of positions\n",
    "        build_len = (x.shape[0] + 1) // 2 * 2\n",
    "        if self._sin_cached is not None and self._sin_cached.shape[0] >= build_len:\n",
    "            return\n",
    "\n",
    "        exponent = -torch.arange(0, self.d, 2, device=x.device).float() / self.d\n",
    "        theta = torch.pow(self.base, exponent)\n",
    "        seq_idx = torch.arange(build_len, dtype=x.dtype, device=x.device)\n",
    "\n",
    "        # Outer product: angle[i][j] = i * theta_j, then duplicated to width d\n",
    "        angle = torch.einsum(\"n,d->nd\", seq_idx, theta)\n",
    "        angle = torch.cat([angle, angle], dim=-1)\n",
    "\n",
    "        self._sin_cached = angle.sin()[:, None, None, :]\n",
    "        self._cos_cached = angle.cos()[:, None, None, :]\n",
    "\n",
    "    def forward(self, x: torch.Tensor, inplace: bool = True):\n",
    "        self._build_cache(x)\n",
    "        seq_len, *_, model_dim = x.shape\n",
    "        chunked_x = x.chunk(model_dim // self.d, -1)\n",
    "        chunks = []\n",
    "        for i, chunk in enumerate(chunked_x):\n",
    "            neg_half = torch.cat([-chunk[..., self.d//2:], chunk[..., :self.d//2]], dim=-1)\n",
    "            mid_val = chunk * self._cos_cached[:seq_len] + neg_half * self._sin_cached[:seq_len]\n",
    "            if inplace:\n",
    "                x[..., i*self.d:(i+1)*self.d] = mid_val\n",
    "            else:\n",
    "                chunks.append(mid_val)\n",
    "        return x if inplace else torch.cat(chunks, dim=-1)\n",
    "\n",
    "\n",
    "heads, d_model, seq_len_q, batch = 1, 4, 4, 1\n",
    "my_rope = MyRoPE(d_model // heads)\n",
    "x = torch.randn((seq_len_q, batch, heads, d_model // heads))\n",
    "\n",
    "rope = RotaryPositionalEmbeddings(d_model // heads)\n",
    "# print(rope(x).shape)\n",
    "# print(my_rope(x).shape)\n",
    "print(torch.allclose(rope(x), my_rope(x, inplace=False)))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [],
   "source": [
    "class RotaryPEMultiHeadAttention(MultiHeadAttention):\n",
    "    def __init__(self, heads: int, d_model: int, rope_chunk: int = 2, dropout_prob: float = 0.0):\n",
    "        super().__init__(heads, d_model, dropout_prob)\n",
    "        self.query_rope = RotaryPositionalEmbeddings(d_model//rope_chunk)\n",
    "        self.key_rope = RotaryPositionalEmbeddings(d_model//rope_chunk)\n",
    "\n",
    "    def get_score(self, query: torch.Tensor, key: torch.Tensor):\n",
    "        return torch.einsum(\"qbhd,kbhd->qkbh\", self.query_rope(query), self.key_rope(key))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[[[ 1.,  2.,  3.,  4.]]],\n",
      "\n",
      "\n",
      "        [[[ 4.,  5.,  6.,  7.]]],\n",
      "\n",
      "\n",
      "        [[[ 7.,  8.,  9., 10.]]]])\n",
      "tensor([[[[  1.0000,   2.0000,   3.0000,   4.0000]]],\n",
      "\n",
      "\n",
      "        [[[ -2.8876,   4.9298,   6.6077,   7.0496]]],\n",
      "\n",
      "\n",
      "        [[[-11.0967,   7.7984,   2.6198,  10.1580]]]])\n"
     ]
    }
   ],
   "source": [
    "def _test_rotary():\n",
    "    x = torch.tensor([[1, 2, 3, 4], [4, 5, 6, 7], [\n",
    "                     7, 8, 9, 10]], dtype=torch.float)\n",
    "    x = x[:, None, None, :]\n",
    "    print(x)\n",
    "\n",
    "    rotary_pe = RotaryPositionalEmbeddings(4)\n",
    "    print(rotary_pe(x))\n",
    "\n",
    "\n",
    "if __name__ == '__main__':\n",
    "    _test_rotary()"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
