{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 4 实验\n",
    "\n",
    "我们在涉及超长输入序列的基准测试中评估了我们的 Infini-Transformer 模型：长上下文语言建模、100 万长度的密钥上下文块检索以及 50 万长度的书籍摘要任务。对于语言建模基准测试，我们从零开始训练模型，而对于密钥和书籍摘要任务，我们持续对现有的大型语言模型进行预训练，以突出我们方法的即插即用长上下文适应能力。\n",
    "\n",
    "## 4.1 实施细节\n",
    "\n",
    "分段处理。我们将整个输入文本前向传递给 Transformer 模型，然后在每个无限注意力层执行分段处理——通过这种方式，对现有的 Transformer 实现进行最小修改。无限注意力层对输入进行分段处理，逐段处理，然后将分段结果连接起来，以原始长度的分段作为输出传递给下一层。\n",
    "\n",
    "通过时间反向传播（BPTT）。每个无限注意力层都通过时间反向传播（Werbos，1988）进行训练，通过计算与压缩记忆状态的梯度来进行，类似于训练循环神经网络（RNN）的方式。为了节省内存，我们在逐段处理序列时执行梯度检查点。\n"
   ]
  },
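  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The segment-wise processing with a compressive memory can be sketched as follows. This is a minimal illustration rather than the actual implementation: the projections are identity, the delta-rule variant is omitted, and σ is taken as ELU + 1 as in linear attention. Each segment first reads from the memory accumulated over the previous segments and then writes its own keys and values into it:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "def memory_retrieve(M, z, Q):\n",
    "    # Read: A_mem = sigma(Q) M / (sigma(Q) z), with sigma = ELU + 1\n",
    "    sQ = torch.nn.functional.elu(Q) + 1\n",
    "    return (sQ @ M) / (sQ @ z.unsqueeze(-1) + 1e-6)\n",
    "\n",
    "def memory_update(M, z, K, V):\n",
    "    # Write: M <- M + sigma(K)^T V and z <- z + sum over sigma(K)\n",
    "    sK = torch.nn.functional.elu(K) + 1\n",
    "    return M + sK.transpose(-2, -1) @ V, z + sK.sum(dim=-2)\n",
    "\n",
    "d_key, d_val, N = 64, 64, 2048\n",
    "M = torch.zeros(d_key, d_val)   # compressive memory of fixed size\n",
    "z = torch.zeros(d_key)          # normalization term\n",
    "x = torch.randn(4 * N, d_key)   # a 4-segment input sequence\n",
    "outputs = []\n",
    "for seg in x.split(N, dim=0):   # process segment by segment\n",
    "    Q = K = V = seg             # identity projections, for illustration only\n",
    "    outputs.append(memory_retrieve(M, z, Q))  # read the previous segments\n",
    "    M, z = memory_update(M, z, K, V)          # then write this segment\n",
    "out = torch.cat(outputs, dim=0) # concatenated back to the original length\n",
    "```\n"
   ]
  },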
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 4.2 长上下文语言建模\n",
    "\n",
    "我们在 PG19（Rae 等人，2019 年）和 Arxiv-math（Wu 等人，2022 年）基准上对小型 Infini-Transformer 模型进行了训练和评估。我们的设置与记忆 Transformer（Wu 等人，2022 年）非常相似。也就是说，我们所有的模型都有 12 层，每个注意力头维度为 128，前馈神经网络隐藏层为 4096。\n",
    "\n",
    "我们将所有注意力层的 Infini-注意力分段长度 N 设置为 2048，训练时将输入序列长度设置为 32768。这使得 Infini-注意力能够在其压缩内存状态下展开超过 16 步。对于 RMT 基线，我们进行了几次实验，摘要提示长度分别为 50、100 和 150，序列长度分别为 4096、8196 和 32768。在 8196 长度的序列上进行训练时，具有 100 个摘要向量的 RMT 给出了最佳结果。\n",
    "\n",
    "语言建模实验的主要结果总结于表 2 中。我们的 Infini-Transformer 优于 Transformer-XL（戴等人，2019 年）和记忆 Transformer（吴等人，2022 年）基准模型，同时其内存参数比具有基于向量检索的键值内存且在第 9 层长度为 65K 的记忆 Transformer 模型减少了 114 倍。\n",
    "\n",
    "100K 长度的训练。我们将训练序列长度从 32K 进一步增加到 100K，并在 Arxiv-math 数据集上对模型进行训练。100K 的训练使线性模型和线性+增量模型的困惑度分数分别进一步降低\n",
    "到 2.21 和 2.20 。\n",
    "\n",
    "门控分数可视化。图 3 展示了每层中所有注意力头的压缩记忆的门控分数 sigmoid（β）。训练后，\n",
    "Infini-attention 中出现了两种类型的头：门控分数接近 0 或 1 的专门化头，以及分数接近 0.5 的混合头。专门化头通过局部注意力计算处理上下文信息，或者从压缩记忆中检索信息，而混合头则将当前上下文信息和长期记忆内容聚合在一起，形成单一的输出。有趣的是，每层至少有一个短距离头，允许输入信号向前传播到输出层。我们还观察到在整个前向计算过程中，长期和短期内容检索相互交织。\n"
   ]
  },
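  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The per-head gating described above can be sketched in a few lines. This is an illustrative reading of the sigmoid(β) mixing, assuming one learnable scalar gate per head; all shapes and tensor names here are hypothetical:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "n_heads, N, d_head = 8, 2048, 64\n",
    "beta = torch.zeros(n_heads)                # learnable, one scalar gate per head\n",
    "A_mem = torch.randn(n_heads, N, d_head)    # retrieved from the compressive memory\n",
    "A_local = torch.randn(n_heads, N, d_head)  # local dot-product attention output\n",
    "g = torch.sigmoid(beta).view(-1, 1, 1)     # gating score in (0, 1)\n",
    "A = g * A_mem + (1 - g) * A_local          # g near 0 or 1: specialized head; near 0.5: mixer head\n",
    "```\n"
   ]
  },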
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 加载数据集\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.optim as optim\n",
    "from torch.utils.data import Dataset, DataLoader\n",
    "import string"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 数据预处理\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "__main__.TextDataset"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 设置随机种子\n",
    "torch.manual_seed(42)\n",
    "np.random.seed(42)\n",
    "\n",
    "\n",
    "# 读取文件内容并返回文本\n",
    "def read_file(filename):\n",
    "    with open(filename, \"r\") as f:\n",
    "        return f.read()\n",
    "\n",
    "\n",
    "# 创建字符到索引的映射\n",
    "def create_char_to_idx_mapping(text):\n",
    "    unique_chars = sorted(set(text))\n",
    "    char_to_idx = {char: idx for idx, char in enumerate(unique_chars)}\n",
    "    idx_to_char = {idx: char for char, idx in char_to_idx.items()}\n",
    "    return char_to_idx, idx_to_char\n",
    "\n",
    "\n",
    "# 将文本转换为字符索引\n",
    "def text_to_indices(text, char_to_idx):\n",
    "    return [char_to_idx[char] for char in text]\n",
    "\n",
    "\n",
    "# 加载 enwik8 数据\n",
    "train_text = read_file(\"data/enwik8/train.txt\")\n",
    "valid_text = read_file(\"data/enwik8/valid.txt\")\n",
    "test_text = read_file(\"data/enwik8/test.txt\")\n",
    "\n",
    "# 创建字符到索引的映射\n",
    "char_to_idx, idx_to_char = create_char_to_idx_mapping(train_text)\n",
    "\n",
    "# 将文本转换为索引\n",
    "train_indices = text_to_indices(train_text, char_to_idx)\n",
    "valid_indices = text_to_indices(valid_text, char_to_idx)\n",
    "test_indices = text_to_indices(test_text, char_to_idx)\n",
    "\n",
    "\n",
    "# 定义数据集类\n",
    "class TextDataset(Dataset):\n",
    "    def __init__(self, data, seq_length):\n",
    "        self.data = data\n",
    "        self.seq_length = seq_length\n",
    "\n",
    "    def __len__(self):\n",
    "        return len(self.data) // self.seq_length\n",
    "\n",
    "    def __getitem__(self, idx):\n",
    "        start_idx = idx * self.seq_length\n",
    "        end_idx = (idx + 1) * self.seq_length\n",
    "        input_seq = torch.tensor(self.data[start_idx:end_idx])\n",
    "        target_seq = torch.tensor(self.data[start_idx + 1 : end_idx + 1])\n",
    "        return input_seq, target_seq\n",
    "\n",
    "\n",
    "TextDataset"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 模型的定义"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "__main__.InfiniTransformer"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 定义 Infini-Transformer 模型\n",
    "class InfiniTransformer(nn.Module):\n",
    "    def __init__(\n",
    "        self, vocab_size, d_model=128, n_heads=8, n_layers=6, d_ff=512, seq_length=2048\n",
    "    ):\n",
    "        super(InfiniTransformer, self).__init__()\n",
    "\n",
    "        self.d_model = d_model\n",
    "        self.seq_length = seq_length\n",
    "\n",
    "        # 嵌入层\n",
    "        self.embedding = nn.Embedding(vocab_size, d_model)\n",
    "\n",
    "        # Transformer 编码器层\n",
    "        self.transformer_layers = nn.ModuleList(\n",
    "            [\n",
    "                nn.TransformerEncoderLayer(d_model, n_heads, d_ff)\n",
    "                for _ in range(n_layers)\n",
    "            ]\n",
    "        )\n",
    "\n",
    "        # 输出层\n",
    "        self.fc_out = nn.Linear(d_model, vocab_size)\n",
    "\n",
    "    def forward(self, x):\n",
    "        # 嵌入输入\n",
    "        x = self.embedding(x) * np.sqrt(self.d_model)\n",
    "\n",
    "        # 传递给每一层 transformer\n",
    "        for layer in self.transformer_layers:\n",
    "            x = layer(x)\n",
    "\n",
    "        # 输出预测\n",
    "        x = self.fc_out(x)\n",
    "        return x\n",
    "\n",
    "\n",
    "InfiniTransformer"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 训练与评估"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 定义训练函数\n",
    "def train(model, train_loader, criterion, optimizer, device):\n",
    "    model.train()\n",
    "    total_loss = 0.0\n",
    "\n",
    "    for input_seq, target_seq in train_loader:\n",
    "        input_seq, target_seq = input_seq.to(device), target_seq.to(device)\n",
    "\n",
    "        optimizer.zero_grad()\n",
    "\n",
    "        output = model(input_seq)\n",
    "\n",
    "        loss = criterion(output.view(-1, output.size(-1)), target_seq.view(-1))\n",
    "        loss.backward()\n",
    "\n",
    "        optimizer.step()\n",
    "\n",
    "        total_loss += loss.item()\n",
    "\n",
    "    return total_loss / len(train_loader)\n",
    "\n",
    "\n",
    "# 定义评估函数\n",
    "def evaluate(model, valid_loader, criterion, device):\n",
    "    model.eval()\n",
    "    total_loss = 0.0\n",
    "\n",
    "    with torch.no_grad():\n",
    "        for input_seq, target_seq in valid_loader:\n",
    "            input_seq, target_seq = input_seq.to(device), target_seq.to(device)\n",
    "\n",
    "            output = model(input_seq)\n",
    "\n",
    "            loss = criterion(output.view(-1, output.size(-1)), target_seq.view(-1))\n",
    "\n",
    "            total_loss += loss.item()\n",
    "\n",
    "    return total_loss / len(valid_loader)\n",
    "\n",
    "\n",
    "# 定义超参数\n",
    "batch_size = 64\n",
    "seq_length = 2048\n",
    "epochs = 10\n",
    "learning_rate = 1e-4\n",
    "d_model = 128\n",
    "n_heads = 8\n",
    "n_layers = 6\n",
    "d_ff = 512"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "InfiniTransformer(\n",
       "  (embedding): Embedding(12, 128)\n",
       "  (transformer_layers): ModuleList(\n",
       "    (0-5): 6 x TransformerEncoderLayer(\n",
       "      (self_attn): MultiheadAttention(\n",
       "        (out_proj): NonDynamicallyQuantizableLinear(in_features=128, out_features=128, bias=True)\n",
       "      )\n",
       "      (linear1): Linear(in_features=128, out_features=512, bias=True)\n",
       "      (dropout): Dropout(p=0.1, inplace=False)\n",
       "      (linear2): Linear(in_features=512, out_features=128, bias=True)\n",
       "      (norm1): LayerNorm((128,), eps=1e-05, elementwise_affine=True)\n",
       "      (norm2): LayerNorm((128,), eps=1e-05, elementwise_affine=True)\n",
       "      (dropout1): Dropout(p=0.1, inplace=False)\n",
       "      (dropout2): Dropout(p=0.1, inplace=False)\n",
       "    )\n",
       "  )\n",
       "  (fc_out): Linear(in_features=128, out_features=12, bias=True)\n",
       ")"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 创建数据集和数据加载器\n",
    "train_dataset = TextDataset(train_indices, seq_length)\n",
    "valid_dataset = TextDataset(valid_indices, seq_length)\n",
    "test_dataset = TextDataset(test_indices, seq_length)\n",
    "\n",
    "train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)\n",
    "valid_loader = DataLoader(valid_dataset, batch_size=batch_size, shuffle=False)\n",
    "test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False)\n",
    "\n",
    "# 创建模型并将其移动到 GPU\n",
    "vocab_size = len(char_to_idx)\n",
    "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
    "model = InfiniTransformer(vocab_size, d_model, n_heads, n_layers, d_ff, seq_length).to(\n",
    "    device\n",
    ")\n",
    "\n",
    "model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "Adam (\n",
       "Parameter Group 0\n",
       "    amsgrad: False\n",
       "    betas: (0.9, 0.999)\n",
       "    capturable: False\n",
       "    differentiable: False\n",
       "    eps: 1e-08\n",
       "    foreach: None\n",
       "    fused: None\n",
       "    lr: 0.0001\n",
       "    maximize: False\n",
       "    weight_decay: 0\n",
       ")"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 定义损失函数和优化器\n",
    "criterion = nn.CrossEntropyLoss()\n",
    "optimizer = optim.Adam(model.parameters(), lr=learning_rate)\n",
    "optimizer"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "这里太慢了，加了个进度条在ex.py里面进行训练"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "ename": "KeyboardInterrupt",
     "evalue": "",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mKeyboardInterrupt\u001b[0m                         Traceback (most recent call last)",
      "Cell \u001b[0;32mIn[7], line 3\u001b[0m\n\u001b[1;32m      1\u001b[0m \u001b[38;5;66;03m# 训练模型\u001b[39;00m\n\u001b[1;32m      2\u001b[0m \u001b[38;5;28;01mfor\u001b[39;00m epoch \u001b[38;5;129;01min\u001b[39;00m \u001b[38;5;28mrange\u001b[39m(epochs):\n\u001b[0;32m----> 3\u001b[0m     train_loss \u001b[38;5;241m=\u001b[39m \u001b[43mtrain\u001b[49m\u001b[43m(\u001b[49m\u001b[43mmodel\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mtrain_loader\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mcriterion\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43moptimizer\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mdevice\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m      4\u001b[0m     valid_loss \u001b[38;5;241m=\u001b[39m evaluate(model, valid_loader, criterion, device)\n\u001b[1;32m      6\u001b[0m     \u001b[38;5;28mprint\u001b[39m(\n\u001b[1;32m      7\u001b[0m         \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mEpoch [\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mepoch\u001b[38;5;241m+\u001b[39m\u001b[38;5;241m1\u001b[39m\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m/\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mepochs\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m], Train Loss: \u001b[39m\u001b[38;5;132;01m{\u001b[39;00mtrain_loss\u001b[38;5;132;01m:\u001b[39;00m\u001b[38;5;124m.4f\u001b[39m\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m, Validation Loss: \u001b[39m\u001b[38;5;132;01m{\u001b[39;00mvalid_loss\u001b[38;5;132;01m:\u001b[39;00m\u001b[38;5;124m.4f\u001b[39m\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m      8\u001b[0m     )\n",
      "Cell \u001b[0;32mIn[4], line 14\u001b[0m, in \u001b[0;36mtrain\u001b[0;34m(model, train_loader, criterion, optimizer, device)\u001b[0m\n\u001b[1;32m     11\u001b[0m output \u001b[38;5;241m=\u001b[39m model(input_seq)\n\u001b[1;32m     13\u001b[0m loss \u001b[38;5;241m=\u001b[39m criterion(output\u001b[38;5;241m.\u001b[39mview(\u001b[38;5;241m-\u001b[39m\u001b[38;5;241m1\u001b[39m, output\u001b[38;5;241m.\u001b[39msize(\u001b[38;5;241m-\u001b[39m\u001b[38;5;241m1\u001b[39m)), target_seq\u001b[38;5;241m.\u001b[39mview(\u001b[38;5;241m-\u001b[39m\u001b[38;5;241m1\u001b[39m))\n\u001b[0;32m---> 14\u001b[0m \u001b[43mloss\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mbackward\u001b[49m\u001b[43m(\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m     16\u001b[0m optimizer\u001b[38;5;241m.\u001b[39mstep()\n\u001b[1;32m     18\u001b[0m total_loss \u001b[38;5;241m+\u001b[39m\u001b[38;5;241m=\u001b[39m loss\u001b[38;5;241m.\u001b[39mitem()\n",
      "File \u001b[0;32m~/Developer/python/SAM/reproduction/infini-transformer/venv/lib/python3.9/site-packages/torch/_tensor.py:581\u001b[0m, in \u001b[0;36mTensor.backward\u001b[0;34m(self, gradient, retain_graph, create_graph, inputs)\u001b[0m\n\u001b[1;32m    571\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m has_torch_function_unary(\u001b[38;5;28mself\u001b[39m):\n\u001b[1;32m    572\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m handle_torch_function(\n\u001b[1;32m    573\u001b[0m         Tensor\u001b[38;5;241m.\u001b[39mbackward,\n\u001b[1;32m    574\u001b[0m         (\u001b[38;5;28mself\u001b[39m,),\n\u001b[0;32m   (...)\u001b[0m\n\u001b[1;32m    579\u001b[0m         inputs\u001b[38;5;241m=\u001b[39minputs,\n\u001b[1;32m    580\u001b[0m     )\n\u001b[0;32m--> 581\u001b[0m \u001b[43mtorch\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mautograd\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mbackward\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m    582\u001b[0m \u001b[43m    \u001b[49m\u001b[38;5;28;43mself\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mgradient\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mretain_graph\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mcreate_graph\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43minputs\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43minputs\u001b[49m\n\u001b[1;32m    583\u001b[0m \u001b[43m\u001b[49m\u001b[43m)\u001b[49m\n",
      "File \u001b[0;32m~/Developer/python/SAM/reproduction/infini-transformer/venv/lib/python3.9/site-packages/torch/autograd/__init__.py:347\u001b[0m, in \u001b[0;36mbackward\u001b[0;34m(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)\u001b[0m\n\u001b[1;32m    342\u001b[0m     retain_graph \u001b[38;5;241m=\u001b[39m create_graph\n\u001b[1;32m    344\u001b[0m \u001b[38;5;66;03m# The reason we repeat the same comment below is that\u001b[39;00m\n\u001b[1;32m    345\u001b[0m \u001b[38;5;66;03m# some Python versions print out the first line of a multi-line function\u001b[39;00m\n\u001b[1;32m    346\u001b[0m \u001b[38;5;66;03m# calls in the traceback and some print out the last line\u001b[39;00m\n\u001b[0;32m--> 347\u001b[0m \u001b[43m_engine_run_backward\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m    348\u001b[0m \u001b[43m    \u001b[49m\u001b[43mtensors\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    349\u001b[0m \u001b[43m    \u001b[49m\u001b[43mgrad_tensors_\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    350\u001b[0m \u001b[43m    \u001b[49m\u001b[43mretain_graph\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    351\u001b[0m \u001b[43m    \u001b[49m\u001b[43mcreate_graph\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    352\u001b[0m \u001b[43m    \u001b[49m\u001b[43minputs\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m    353\u001b[0m \u001b[43m    \u001b[49m\u001b[43mallow_unreachable\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43;01mTrue\u001b[39;49;00m\u001b[43m,\u001b[49m\n\u001b[1;32m    354\u001b[0m \u001b[43m    \u001b[49m\u001b[43maccumulate_grad\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[38;5;28;43;01mTrue\u001b[39;49;00m\u001b[43m,\u001b[49m\n\u001b[1;32m    355\u001b[0m \u001b[43m\u001b[49m\u001b[43m)\u001b[49m\n",
      "File \u001b[0;32m~/Developer/python/SAM/reproduction/infini-transformer/venv/lib/python3.9/site-packages/torch/autograd/graph.py:825\u001b[0m, in \u001b[0;36m_engine_run_backward\u001b[0;34m(t_outputs, *args, **kwargs)\u001b[0m\n\u001b[1;32m    823\u001b[0m     unregister_hooks \u001b[38;5;241m=\u001b[39m _register_logging_hooks_on_whole_graph(t_outputs)\n\u001b[1;32m    824\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[0;32m--> 825\u001b[0m     \u001b[38;5;28;01mreturn\u001b[39;00m \u001b[43mVariable\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_execution_engine\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mrun_backward\u001b[49m\u001b[43m(\u001b[49m\u001b[43m  \u001b[49m\u001b[38;5;66;43;03m# Calls into the C++ engine to run the backward pass\u001b[39;49;00m\n\u001b[1;32m    826\u001b[0m \u001b[43m        \u001b[49m\u001b[43mt_outputs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43margs\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\n\u001b[1;32m    827\u001b[0m \u001b[43m    \u001b[49m\u001b[43m)\u001b[49m  \u001b[38;5;66;03m# Calls into the C++ engine to run the backward pass\u001b[39;00m\n\u001b[1;32m    828\u001b[0m \u001b[38;5;28;01mfinally\u001b[39;00m:\n\u001b[1;32m    829\u001b[0m     \u001b[38;5;28;01mif\u001b[39;00m attach_logging_hooks:\n",
      "\u001b[0;31mKeyboardInterrupt\u001b[0m: "
     ]
    }
   ],
   "source": [
    "# 训练模型\n",
    "for epoch in range(epochs):\n",
    "    train_loss = train(model, train_loader, criterion, optimizer, device)\n",
    "    valid_loss = evaluate(model, valid_loader, criterion, device)\n",
    "\n",
    "    print(\n",
    "        f\"Epoch [{epoch+1}/{epochs}], Train Loss: {train_loss:.4f}, Validation Loss: {valid_loss:.4f}\"\n",
    "    )\n",
    "print(\"train is done\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 测试模型\n",
    "test_loss = evaluate(model, test_loader, criterion, device)\n",
    "print(f\"Test Loss: {test_loss:.4f}\")"
   ]
  },
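  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The losses above are mean per-character cross-entropies in nats. Character-level enwik8 results are conventionally reported in bits per character, and benchmarks such as PG19 report perplexity. A small conversion sketch (the loss value below is a placeholder, not a measured result):\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "loss_nats = 1.10                  # placeholder mean cross-entropy in nats\n",
    "perplexity = math.exp(loss_nats)  # per-token perplexity\n",
    "bpc = loss_nats / math.log(2)     # bits per character, for enwik8\n",
    "```\n"
   ]
  },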
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 4.3 LLM 持续预训练\n",
    "\n",
    "我们对现有的语言模型进行了轻量级的持续预训练，以适应长上下文。预训练数据包括 PG19 和\n",
    "Arxiv-math 语料库以及长度超过 4K 个标记的 C4 文本（Raffel 等人，2020 年）。在我们的整个实验中，片段长度 N 被设置为 2K。\n",
    "\n",
    "1M 密码密钥检索基准。我们在一个 10 亿参数的大型语言模型中用 Infiniattention 替换了普通的多头注意力机制，并继续对长度为 4K 的输入进行预训练。该模型在通过大小为 64 的批次进行 30K 步训练后，在密码密钥检索任务上进行了微调（Mohtashami 和 Jaggi，2024 年）。\n",
    "\n",
    "密码任务将一个随机数隐藏在一段长文本中，并在模型输出时要求其返回。通过多次重复一个文本片段来改变干扰文本的长度。先前的研究（Chen 等人，2023a）表明，使用相同长度的 32K 输入进行位置插值的微调时，一个 8B 的 LLaMA 模型能够完成长达 32K 长度的任务。我们进一步接受这一挑战，仅使用 5K 长度的输入进行微调，并在 100 万长度的范围内进行测试。\n",
    "\n",
    "表 3 报告了输入长度从 32K 到 1M 的测试子集的标记级准确率。对于每个测试子集，我们控制了密钥的位置，使其位于输入序列的开头、中间或结尾附近。我们报告了零样本准确率和微调准确率。Infini-Transformers 在 5K 长度的输入上微调 400 步后，在高达 1M 的上下文长度下解决了该任务。\n",
    "\n",
    "50 万字长度的书籍摘要（书籍摘要）。我们进一步扩大了我们的方法规模，通过使用 8000 个输入长度的 80 亿个大型语言模型（LLM）进行连续预训练，训练步骤为 30,000 步。然后，我们在书籍摘要任务 BookSum （Kryściński 等人，2021 年）上进行了微调，该任务的目标是生成整本书文本的摘要。\n",
    "\n",
    "我们将输入长度设置为 32K 以进行微调，并将其增加到 500K 以进行评估。我们使用生成温度为 0.5 和 topp = 0.95，并将解码步骤数设置为 1024 来生成每本书的摘要。\n",
    "\n",
    "结果显示，通过处理来自书籍的整个文本，在 BookSum 上实现了新的最先进水平。我们还绘制了 BookSum 数据验证集上的总体 Rouge 分数图（如图 4 所示）。有一个明显的趋势表明，随着从书籍中提供更多的文本作为输入，我们的 Infini-Transformers 提高了其摘要性能指标。\n"
   ]
  }
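,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The passkey inputs described above can be constructed by repeating a filler chunk around the hidden key. A minimal sketch, in which the exact prompt wording and the helper name are hypothetical:\n",
    "\n",
    "```python\n",
    "import random\n",
    "\n",
    "def make_passkey_prompt(n_before, n_after, seed=0):\n",
    "    # Hide a random number inside repeated distractor text\n",
    "    rng = random.Random(seed)\n",
    "    passkey = rng.randint(10000, 99999)\n",
    "    filler = \"The grass is green. The sky is blue. The sun is yellow. \"\n",
    "    prompt = (\n",
    "        filler * n_before\n",
    "        + f\"The pass key is {passkey}. Remember it. \"\n",
    "        + filler * n_after\n",
    "        + \"What is the pass key?\"\n",
    "    )\n",
    "    return prompt, passkey\n",
    "\n",
    "prompt, key = make_passkey_prompt(200, 200)  # more repeats, longer context\n",
    "```\n"
   ]
  }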
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
