{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "8467cd89-3261-4eab-84f5-5a294f04cefc",
   "metadata": {},
   "source": [
    "# Transformer基础示例"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c0d26415-0fd5-4fd4-928b-428df9270cf6",
   "metadata": {},
   "source": [
    "## 基础构建块示例"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "50ed3b74-ca46-487f-9b0f-f7c1fae3b4b6",
   "metadata": {},
   "source": [
    "### 位置编码\n",
     "\n",
    "**位置编码公式说明**\n",
    "\n",
     "- 频率分量div_term是Transformer位置编码公式的核心，用于生成不同频率的正弦和余弦函数。它是形状为 (d_model//2,) 的张量，对应公式：\n",
    "$$\\text{div\\_term}_i = \\exp\\left(-\\frac{\\log(10000)}{d_{model}} \\cdot i\\right) = \\frac{1}{10000^{i/d_{model}}}$$\n",
    "  - 其中 $i$ 是偶数索引 (0, 2, 4, ...)。\n",
    "\n",
     "- 将position（形状 (max_len, 1)）与div_term（形状 (d_model//2,)）相乘，利用广播机制得到形状为 (max_len, d_model//2) 的张量，然后分别对其应用正弦和余弦函数，填充pe的偶数列与奇数列，从而形成完整的位置编码。\n",
    "\n",
    "- 偶数维度的正弦位置编码公式：$\\text{PE}(pos, 2i) = \\sin\\left(\\frac{pos}{10000^{2i/d_{model}}}\\right)$\n",
    "\n",
    "- 奇数维度的余弦位置编码公式：$\\text{PE}(pos, 2i+1) = \\cos\\left(\\frac{pos}{10000^{2i/d_{model}}}\\right)$\n"
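,
     "\n",
     "作为对上述公式的最小数值验证（假设使用 PyTorch），可以确认 div_term 的两种写法等价：\n",
     "\n",
     "```python\n",
     "import math\n",
     "import torch\n",
     "\n",
     "d_model = 8\n",
     "i = torch.arange(0, d_model, 2).float()            # 偶数索引 [0, 2, 4, 6]\n",
     "div_term = torch.exp(i * -(math.log(10000.0) / d_model))\n",
     "direct = 1.0 / (10000.0 ** (i / d_model))          # 直接按 1/10000^(i/d_model) 计算\n",
     "print(torch.allclose(div_term, direct))            # 两种写法在数值上一致\n",
     "```\n"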
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3d396026-bbd8-4898-ae64-170f8e8ab5de",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import math\n",
    "\n",
    "# 参数设置\n",
    "d_model = 8      # 词嵌入的维度，简化为8以方便观察\n",
    "max_len = 8      # 序列最大长度\n",
    "\n",
    "# --- 1. 计算位置编码矩阵 ---\n",
    "\n",
    "# 1.1. 初始化一个形状为 (8, 8) 的零矩阵\n",
    "# 创建一个预先分配好的“容器”，用于逐步存储计算出的位置编码值\n",
    "pe = torch.zeros(max_len, d_model)\n",
    "print(f\"Step 1: 初始化零矩阵pe的形状: {pe.shape}\\n{pe}\")\n",
    "print(\"-\" * 50)\n",
    "\n",
    "# 1.2. 创建位置索引张量\n",
    "# position的形状为 [8, 1]，包含了从0到7的位置索引\n",
    "position = torch.arange(0, max_len).unsqueeze(1)\n",
    "print(f\"Step 2: 位置索引张量 position 的形状: {position.shape}\\n{position}\")\n",
    "print(\"-\" * 50)\n",
    "\n",
    "# 1.3. 计算分母的倒数（div_term）\n",
    "# arange(0, 8, 2) 生成 [0, 2, 4, 6]\n",
    "# div_term 的形状为 [4]\n",
    "div_term = torch.exp(torch.arange(0, d_model, 2) * -(math.log(10000.0) / d_model))\n",
    "print(f\"Step 3: div_term 的形状: {div_term.shape}\\n{div_term}\")\n",
    "print(\"-\" * 50)\n",
    "\n",
    "# 1.4. 使用广播乘法计算正弦和余弦的输入\n",
    "# position的形状[8, 1]会广播到[8, 4]\n",
    "# div_term的形状[4]会广播到[8, 4]\n",
    "pe_input = position * div_term\n",
    "print(f\"Step 4: 正余弦输入张量的形状: {pe_input.shape}\\n{pe_input}\")\n",
    "print(\"-\" * 50)\n",
    "\n",
    "# 1.5. 填充pe矩阵\n",
    "# 填充偶数维度\n",
    "pe_even = torch.sin(pe_input)\n",
    "pe[:, 0::2] = pe_even\n",
    "print(f\"Step 5a: torch.sin()负责填充偶数维度（0::2），结果形状: {pe_even.shape}\\n{pe_even}\")\n",
    "print(\"-\" * 50)\n",
    "\n",
    "# 填充奇数维度\n",
    "pe_odd = torch.cos(pe_input)\n",
    "pe[:, 1::2] = pe_odd\n",
    "print(f\"Step 5b: torch.cos()负责填充奇数维度（1::2），结果形状: {pe_odd.shape}\\n{pe_odd}\")\n",
    "print(\"-\" * 50)\n",
    "\n",
    "print(f\"Step 5c: 将偶数维度与奇数维度合并后，最终的位置编码矩阵pe的形状: {pe.shape}\\n{pe}\")\n",
    "print(\"-\" * 50)\n",
    "\n",
    "# --- 2. 演示位置编码的应用 ---\n",
    "\n",
    "# 2.1. 模拟一个输入的词嵌入张量\n",
    "# 假设batch_size = 2，序列长度seq_len = 8\n",
    "batch_size = 2\n",
    "seq_len = 8\n",
    "x = torch.randn(batch_size, seq_len, d_model)\n",
    "\n",
    "print(f\"Step 6: 原始输入张量x的形状: {x.shape}\")\n",
    "print(\"-\" * 50)\n",
    "\n",
    "# 2.2. 将位置编码加到输入张量中\n",
    "# 我们只取pe的前seq_len行\n",
    "pe_to_add = pe[:seq_len, :]\n",
    "\n",
    "# PyTorch自动将[8, 8]的pe_to_add广播为 [2, 8, 8]\n",
    "x_with_pe = x + pe_to_add\n",
    "\n",
    "print(f\"Step 7: 添加位置编码后的张量 x_with_pe 的形状: {x_with_pe.shape}\\n\")\n",
    "\n",
    "# 检查结果\n",
    "print(\"添加位置编码前后的差异（第0个样本，第0个位置的向量）：\")\n",
    "print(f\"原始向量的前8个值: {x[0, 0, :8]}\")\n",
    "print(f\"位置编码pe的第0个向量: {pe_to_add[0, :8]}\")\n",
    "print(f\"相加后的向量的前8个值: {x_with_pe[0, 0, :8]}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e7c4a40c-1c77-4387-8e4a-bdc620ae3539",
   "metadata": {},
   "source": [
    "### 自注意力计算示例\n",
    "\n",
    "**单头自注意力计算过程**\n",
     "- 首先通过线性变换得到查询（Q）、键（K）、值（V）：$Q = X W^Q, K = X W^K, V = X W^V$\n",
    "- 相关性打分（缩放点积注意力）：$\\text{Scores} = \\frac{Q K^T}{\\sqrt{d_k}}$\n",
    "- Softmax归一化：$\\text{Attention Weights}= \\text{softmax}\\left(\\frac{Q K^T}{\\sqrt{d_k}}\\right)$\n",
    "- 加权求和得到输出：$\\text{Attention}(Q, K, V) = \\text{softmax}\\left(\\frac{Q K^T}{\\sqrt{d_k}}\\right) V$\n",
    "\n",
    "**多头自注意力的分头计算**\n",
    "\n",
    "标准自注意力通常以多头形式实现，将 $ Q $、$ K $、$ V $ 拆分为多个子空间并行计算，然后合并结果。\n",
    "- 拆分多头：$Q_h = (X W^Q_h) \\in \\mathbb{R}^{n \\times d_k}, \\quad K_h = (X W^K_h) \\in \\mathbb{R}^{n \\times d_k}, \\quad V_h = (X W^V_h) \\in \\mathbb{R}^{n \\times d_v}$\n",
     "  - 其中 $ W^Q_h, W^K_h \\in \\mathbb{R}^{d \\times d_k} $，$ W^V_h \\in \\mathbb{R}^{d \\times d_v} $，通常取 $ d_k = d_v = d / h $，$ h $ 是头数。\n",
     "  - 每头分别计算注意力：$\\text{head}_h = \\text{Attention}(Q_h, K_h, V_h) = \\text{softmax}\\left(\\frac{Q_h K_h^T}{\\sqrt{d_k}}\\right) V_h$\n",
    "- 合并多头：$\\text{MultiHead Output} = \\text{Concat}(\\text{head}_1, \\text{head}_2, \\dots, \\text{head}_h) W^O$"
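,
     "\n",
     "上面的缩放点积注意力公式可以用几行 PyTorch 直接写出（仅为示意，未包含掩码、dropout 与多头拆分）：\n",
     "\n",
     "```python\n",
     "import math\n",
     "import torch\n",
     "\n",
     "def scaled_dot_product_attention(Q, K, V):\n",
     "    d_k = Q.size(-1)\n",
     "    scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)  # 相关性打分\n",
     "    weights = torch.softmax(scores, dim=-1)            # 归一化为注意力权重\n",
     "    return weights @ V, weights                        # 加权求和\n",
     "\n",
     "Q = torch.randn(1, 5, 4)\n",
     "K = torch.randn(1, 5, 4)\n",
     "V = torch.randn(1, 5, 4)\n",
     "out, w = scaled_dot_product_attention(Q, K, V)\n",
     "print(out.shape)          # torch.Size([1, 5, 4])\n",
     "print(w.sum(dim=-1))      # 每行权重之和为 1\n",
     "```\n"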
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "59d74d16-6c37-4799-bbd7-ea52e687a091",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import math\n",
    "\n",
    "# 参数设置\n",
    "vocab_size = 100\n",
    "d_model = 8         # 词嵌入维度\n",
    "num_heads = 4       # 注意力头数\n",
    "seq_len = 5         # 序列长度 (\"I am a good student\")\n",
    "batch_size = 1      # 批量大小\n",
    "\n",
    "# 1. 模拟数据和模块初始化\n",
    "# ----------------------------------\n",
    "\n",
    "# 模拟输入：句子 \"I am a good student\" 的词汇表索引\n",
    "input_ids = torch.tensor([[5, 12, 34, 45, 67]]) \n",
    "\n",
    "# 模拟一个简单的词嵌入层\n",
    "embedding = nn.Embedding(vocab_size, d_model)\n",
    "\n",
    "# 模拟一个多头注意力模块\n",
    "W_q = nn.Linear(d_model, d_model)\n",
    "W_k = nn.Linear(d_model, d_model)\n",
    "W_v = nn.Linear(d_model, d_model)\n",
    "W_o = nn.Linear(d_model, d_model)\n",
    "d_k = d_model // num_heads # 每个头的维度\n",
    "\n",
    "print(\"--- 1. 模拟输入与参数 ---\")\n",
    "print(f\"输入句子 ('I am a good student') 索引: {input_ids}\")\n",
    "print(f\"输入张量形状: {input_ids.shape}\")\n",
    "print(f\"模型维度 (d_model): {d_model}\")\n",
    "print(f\"注意力头数 (num_heads): {num_heads}\")\n",
    "print(f\"每个头的维度 (d_k): {d_k}\\n\")\n",
    "\n",
    "# 2. 词嵌入\n",
    "# ----------------------------------\n",
    "# 将输入索引转换为词嵌入向量\n",
    "x = embedding(input_ids)\n",
    "print(\"--- 2. 词嵌入 ---\")\n",
    "print(f\"词嵌入向量形状: {x.shape}\\n\")\n",
    "\n",
    "\n",
    "# 3. 计算 Q, K, V\n",
    "# ----------------------------------\n",
    "# 线性变换，得到 Q, K, V\n",
    "Q = W_q(x)\n",
    "K = W_k(x)\n",
    "V = W_v(x)\n",
    "\n",
    "print(\"--- 3. Q, K, V 线性变换 ---\")\n",
    "print(f\"Q 张量形状: {Q.shape}\")\n",
    "print(f\"K 张量形状: {K.shape}\")\n",
    "print(f\"V 张量形状: {V.shape}\\n\")\n",
    "\n",
    "\n",
    "# 4. 分头 (Split Heads)\n",
    "# ----------------------------------\n",
    "# 将 Q, K, V 重塑并转置，以便进行多头计算\n",
    "Q = Q.view(batch_size, seq_len, num_heads, d_k).transpose(1, 2)\n",
    "K = K.view(batch_size, seq_len, num_heads, d_k).transpose(1, 2)\n",
    "V = V.view(batch_size, seq_len, num_heads, d_k).transpose(1, 2)\n",
    "\n",
    "print(\"--- 4. 分头 ---\")\n",
    "print(f\"分头后的 Q 张量形状: {Q.shape}\") # [batch_size, num_heads, seq_len, d_k]\n",
    "print(f\"分头后的 K 张量形状: {K.shape}\")\n",
    "print(f\"分头后的 V 张量形状: {V.shape}\\n\")\n",
    "\n",
    "\n",
    "# 5. 计算注意力分数\n",
    "# ----------------------------------\n",
    "# 缩放点积注意力公式：Q * K^T / sqrt(d_k)\n",
    "scores = torch.matmul(Q, K.transpose(-2, -1)) / math.sqrt(d_k)\n",
    "\n",
    "print(\"--- 5. 计算注意力分数 ---\")\n",
    "print(f\"分数矩阵 (Q * K^T) 形状: {scores.shape}\") # [batch, num_heads, seq_len, seq_len]\n",
    "print(f\"注意力分数（第一个头）:\\n{scores[0, 0]}\\n\")\n",
    "\n",
    "\n",
    "# 6. softmax\n",
    "# ----------------------------------\n",
    "# 对分数进行 softmax，得到注意力权重\n",
    "attention_weights = torch.softmax(scores, dim=-1)\n",
    "\n",
    "print(\"--- 6. Softmax 归一化 ---\")\n",
    "print(f\"注意力权重矩阵形状: {attention_weights.shape}\")\n",
    "print(f\"注意力权重（第一个头）:\\n{attention_weights[0, 0]}\\n\")\n",
    "print(\"注意：每一行的和都为 1\\n\")\n",
    "\n",
    "\n",
    "# 7. 加权求和\n",
    "# ----------------------------------\n",
    "# 将注意力权重与 V 相乘，得到加权后的值\n",
    "context = torch.matmul(attention_weights, V)\n",
    "\n",
    "print(\"--- 7. 加权求和 ---\")\n",
    "print(f\"上下文向量 (context) 形状: {context.shape}\\n\") # [batch, num_heads, seq_len, d_k]\n",
    "\n",
    "\n",
    "# 8. 合并多头\n",
    "# ----------------------------------\n",
    "# 将多头的结果拼接回原始维度\n",
    "context = context.transpose(1, 2).contiguous().view(batch_size, seq_len, d_model)\n",
    "\n",
    "print(\"--- 8. 合并多头 ---\")\n",
    "print(f\"合并后的上下文向量形状: {context.shape}\\n\")\n",
    "\n",
    "\n",
    "# 9. 最终线性变换\n",
    "# ----------------------------------\n",
    "# 最终的线性层\n",
    "output = W_o(context)\n",
    "\n",
    "print(\"--- 9. 最终输出 ---\")\n",
    "print(f\"最终输出张量形状: {output.shape}\")\n",
    "print(\"\\n✅ 整个多头自注意力过程完成。\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "be8f901d-4ef9-48f7-b2c3-e4ec5aca7195",
   "metadata": {},
   "source": [
    "### 单个token的自注意力计算示例\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5a6af410-db80-4273-b4cf-e2a22b1c4301",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import math\n",
    "\n",
    "# 参数设置 (与之前相同)\n",
    "vocab_size = 100\n",
    "d_model = 8         \n",
    "num_heads = 4       \n",
    "seq_len = 5         \n",
    "batch_size = 1      \n",
    "\n",
    "# 模拟数据和模块初始化\n",
    "input_ids = torch.tensor([[5, 12, 34, 45, 67]]) \n",
    "embedding = nn.Embedding(vocab_size, d_model)\n",
    "W_q = nn.Linear(d_model, d_model)\n",
    "W_k = nn.Linear(d_model, d_model)\n",
    "W_v = nn.Linear(d_model, d_model)\n",
    "W_o = nn.Linear(d_model, d_model)\n",
    "d_k = d_model // num_heads \n",
    "\n",
    "# 将输入索引转换为词嵌入向量\n",
    "x = embedding(input_ids)\n",
    "\n",
    "# 线性变换，得到 Q, K, V\n",
    "Q_all = W_q(x)\n",
    "K_all = W_k(x)\n",
    "V_all = W_v(x)\n",
    "\n",
    "# 提取“good” (位置3，索引为3) 的Query向量\n",
    "# 注意: Q_all的形状是[1, 5, 8]\n",
    "Q_good = Q_all[:, 3:4, :]  # 形状为[1, 1, 8]\n",
    "print(\"--- 1. 提取 'good' 的QKV向量 ---\")\n",
    "print(f\"原始'good'的Q向量形状: {Q_good.shape}\\n\")\n",
    "print(f\"原始'good'的Q向量: {Q_good}\\n\")\n",
    "\n",
    "# 以类似的方式，可提取出“good” (位置3，索引为3) 的key和value向量\n",
    "K_good = K_all[:, 3:4, :] \n",
    "V_good = V_all[:, 3:4, :]\n",
    "print(f\"原始'good'的K向量: {K_good}\\n\")\n",
    "print(f\"原始'good'的V向量: {V_good}\\n\")\n",
    "\n",
    "# 2. 分头 (Split Heads)\n",
    "# ----------------------------------\n",
    "# 将所有 Q, K, V 重塑并转置\n",
     "# view()是改变张量形状的方法，相当于NumPy的reshape()\n",
    "Q_all_split = Q_all.view(batch_size, seq_len, num_heads, d_k).transpose(1, 2)\n",
    "K_all_split = K_all.view(batch_size, seq_len, num_heads, d_k).transpose(1, 2)\n",
    "V_all_split = V_all.view(batch_size, seq_len, num_heads, d_k).transpose(1, 2)\n",
    "\n",
    "# 提取“good”的QKV向量（已分头）\n",
    "# 形状变为 [1, num_heads, 1, d_k]\n",
    "Q_good_split = Q_all_split[:, :, 3:4, :]\n",
    "K_good_split = K_all_split[:, :, 3:4, :]\n",
    "V_good_split = V_all_split[:, :, 3:4, :]\n",
    "print(\"--- 2. 'good'的Q向量分头 ---\")\n",
    "print(f\"'good'的Q向量分头后形状: {Q_good_split.shape}\\n\")\n",
    "print(f\"'good'的Q向量分头后的结果: {Q_good_split}\\n\")\n",
    "print(f\"'good'的K向量分头后的结果: {K_good_split}\\n\")\n",
    "print(f\"'good'的V向量分头后的结果: {V_good_split}\\n\")\n",
    "\n",
    "# 3. 计算注意力分数\n",
    "# ----------------------------------\n",
    "# 只有“good”的Query向量与所有K向量进行点积\n",
     "# Q_good_split 形状: [1, 4, 1, 2]，维度意义为[batch_size, num_heads, q_seq_len, dim_per_head]\n",
     "# K_all_split 形状: [1, 4, 5, 2]，维度意义为[batch_size, num_heads, k_seq_len, dim_per_head]\n",
     "# K_all_split.transpose(-2, -1) 形状: [1, 4, 2, 5]\n",
     "# 结果形状: [1, 4, 1, 5]，维度意义为[batch_size, num_heads, q_seq_len, k_seq_len]\n",
     "# 由于只以“good”作为查询，查询长度为1；又因为要计算“good”与所有key的关联性，所以key的长度为5\n",
    "scores = torch.matmul(Q_good_split, K_all_split.transpose(-2, -1)) / math.sqrt(d_k)\n",
    "print(f\"分数矩阵{scores}\")\n",
    "\n",
    "# scores张量[1, 4, 1, 5]代表着，对于每个批量中的每个头，都有一个1x5的分数矩阵，其每一行（这里仅有一行）都代表good\n",
    "# 与句子中5个词的关联性，例如scores[0, 0, 0, 0] 表示“good”与“I”的分数\n",
    "print(\"--- 3. 计算 'good' 的注意力分数 ---\")\n",
    "print(f\"'good'的分数矩阵形状: {scores.shape}\") \n",
    "print(f\"'good'的注意力分数（第一个头）:\\n{scores[0, 0]}\\n\")\n",
    "print(f\"注意力分数代表'good'与每个token (I, am, a, good, student) 的关联性\\n\")\n",
    "\n",
    "\n",
    "# 4. Softmax\n",
    "# ----------------------------------\n",
    "# 对分数进行 softmax，得到注意力权重\n",
    "attention_weights = torch.softmax(scores, dim=-1)\n",
    "\n",
    "print(\"--- 4. 'good' 的注意力权重 ---\")\n",
    "print(f\"注意力权重矩阵形状: {attention_weights.shape}\")\n",
    "print(f\"'good'的注意力权重（第一个头）:\\n{attention_weights[0, 0]}\")\n",
    "print(f\"这些权重展示了'good'在该头中对其他词的关注程度，总和为1\\n\")\n",
    "\n",
    "\n",
    "# 5. 加权求和\n",
    "# ----------------------------------\n",
    "# 将注意力权重与所有 V 向量相乘\n",
    "# attention_weights 形状: [1, 4, 1, 5]\n",
    "# V_all_split 形状: [1, 4, 5, 2]\n",
    "# 结果形状: [1, 4, 1, 2]\n",
    "context_good = torch.matmul(attention_weights, V_all_split)\n",
    "\n",
    "print(\"--- 5. 计算 'good' 的上下文向量 ---\")\n",
    "print(f\"'good' 的上下文向量形状: {context_good.shape}\\n\")\n",
    "print(f\"'good'的上下文向量：{context_good}\")\n",
    "\n",
    "\n",
    "# 6. 合并多头\n",
    "# ----------------------------------\n",
    "# 将多头的结果拼接回原始维度\n",
    "# context_good 形状: [1, 4, 1, 2] -> [1, 1, 4, 2] -> [1, 1, 8]\n",
    "context_good = context_good.transpose(1, 2).contiguous().view(batch_size, -1, d_model)\n",
    "\n",
    "print(\"--- 6. 合并 'good' 的多头结果 ---\")\n",
    "print(f\"合并后的上下文向量形状: {context_good.shape}\\n\")\n",
    "\n",
    "\n",
    "# 7. 最终线性变换\n",
    "# ----------------------------------\n",
    "# 最终的线性层\n",
    "output_good = W_o(context_good)\n",
    "\n",
    "print(\"--- 7. 'good' 的最终输出 ---\")\n",
    "print(f\"最终输出张量形状: {output_good.shape}\")\n",
    "print(\"\\n✅ 'good' 的注意力计算过程完成。\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "efe1456a-4468-4a45-9e84-2a72b4b1d174",
   "metadata": {},
   "source": [
    "### 掩码多头自注意力机制计算\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9d833f15-80d2-49b3-8fda-22be20492d99",
   "metadata": {},
   "source": [
    "#### 生成掩码矩阵"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0eb77a72-4c14-48e7-812c-857755620cdd",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "# 创建一个上三角掩码矩阵：torch.ones()创建大小为seq_len x seq_len的全1矩阵，用于代表序列中所有词彼此间的关联关系\n",
    "x = torch.ones(5, 5)\n",
    "print(f\"全1矩阵：\\n{x}\\n\")\n",
    "\n",
    "# triu是 \"triangle upper\"（上三角）的缩写，该方法会保留矩阵的上三角部分，并将其他部分（对角线和其下方）设置为0\n",
     "# diagonal=1是关键参数，指定从主对角线上方第一条对角线开始保留，这样主对角线为0，即每个词可以关注自身而不被掩盖\n",
    "x = torch.triu(x, diagonal=1)\n",
    "print(f\"上三角矩阵：\\n{x}\\n\")\n",
    "\n",
     "mask = x.bool()\n",
     "print(f\"布尔型上三角矩阵（掩码矩阵）：\\n{mask}\\n\")\n",
    "\n",
    "# 随机生成一个5x5矩阵来模拟注意力分数矩阵\n",
     "torch.manual_seed(42)  # 设置随机种子，保证每次运行程序时随机数生成器都从相同的起点开始\n",
    "scores = torch.randn(5, 5)\n",
    "print(f\"注意力分数矩阵：\\n{scores}\\n\")\n",
    "\n",
     "# 掩码与分数矩阵同为 5x5，可直接按元素应用；在带 batch 和多头的场景中才需要用unsqueeze扩展维度\n",
     "scores_masked = scores.masked_fill(mask, -1e9)\n",
    "print(f\"应用掩码后的注意力分数矩阵：\\n{scores_masked}\\n\")\n",
    "\n",
    "attention_weights = torch.softmax(scores_masked, dim=-1)\n",
    "print(f\"对注意力分数进行归一化，得到注意力权重：\\n{attention_weights}\\n\")\n",
     "# 注意：每生成一个新token，序列长度seq_len都会加1，因此注意力权重矩阵的维度也会从 seq_len x seq_len 变为\n",
     "# (seq_len + 1) x (seq_len + 1)，模型会为新生成的token计算一套完整的注意力权重。\n"
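,
     "\n",
     "# 示意：当序列从 5 个 token 增长到 6 个时（假设仍用 torch.triu 构造因果掩码），\n",
     "# 新掩码的前 5x5 子矩阵与旧掩码完全一致，已有 token 之间的可见性不变\n",
     "mask_old = torch.triu(torch.ones(5, 5), diagonal=1).bool()\n",
     "mask_new = torch.triu(torch.ones(6, 6), diagonal=1).bool()\n",
     "print(torch.equal(mask_new[:5, :5], mask_old))\n"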
   ]
  },
  {
   "cell_type": "markdown",
   "id": "96e55057-a293-4862-b258-15b1ae54b482",
   "metadata": {},
   "source": [
    "#### 掩码多头自注意力的完整示例"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4da72977-18e0-4c9e-8e74-126747f9caad",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import math\n",
    "\n",
    "# 参数设置\n",
    "vocab_size = 100\n",
    "d_model = 8         # 词嵌入维度\n",
    "num_heads = 4       # 注意力头数\n",
    "seq_len = 5         # 序列长度 (\"I am a good student\")\n",
    "batch_size = 1      # 批量大小\n",
    "\n",
    "# 1. 模拟数据和模块初始化\n",
    "# ----------------------------------\n",
    "# 模拟输入：句子 \"I am a good student\" 的词汇表索引\n",
    "input_ids = torch.tensor([[5, 12, 34, 45, 67]]) \n",
    "\n",
    "# 模拟一个简单的词嵌入层\n",
    "embedding = nn.Embedding(vocab_size, d_model)\n",
    "\n",
    "# 模拟一个多头注意力模块\n",
    "W_q = nn.Linear(d_model, d_model)\n",
    "W_k = nn.Linear(d_model, d_model)\n",
    "W_v = nn.Linear(d_model, d_model)\n",
    "W_o = nn.Linear(d_model, d_model)\n",
    "d_k = d_model // num_heads # 每个头的维度\n",
    "\n",
    "print(\"--- 1. 模拟输入与参数 ---\")\n",
    "print(f\"输入句子: 'I am a good student'\")\n",
    "print(f\"输入张量形状: {input_ids.shape}\")\n",
    "print(f\"模型维度 (d_model): {d_model}\")\n",
    "print(f\"注意力头数 (num_heads): {num_heads}\")\n",
    "print(f\"每个头的维度 (d_k): {d_k}\\n\")\n",
    "\n",
    "\n",
    "# 2. 词嵌入\n",
    "# ----------------------------------\n",
    "x = embedding(input_ids)\n",
    "print(\"--- 2. 词嵌入 ---\")\n",
    "print(f\"词嵌入向量形状: {x.shape}\\n\")\n",
    "\n",
    "\n",
    "# 3. 计算 Q, K, V 并分头\n",
    "# ----------------------------------\n",
    "# 线性变换，得到所有词的 Q, K, V\n",
    "Q_all = W_q(x)\n",
    "K_all = W_k(x)\n",
    "V_all = W_v(x)\n",
    "\n",
    "# 分头 (Split Heads)\n",
    "Q = Q_all.view(batch_size, seq_len, num_heads, d_k).transpose(1, 2)\n",
    "K = K_all.view(batch_size, seq_len, num_heads, d_k).transpose(1, 2)\n",
    "V = V_all.view(batch_size, seq_len, num_heads, d_k).transpose(1, 2)\n",
    "\n",
    "print(\"--- 3. Q, K, V 分头 ---\")\n",
    "print(f\"Q 张量形状: {Q.shape}\")\n",
    "print(f\"K 张量形状: {K.shape}\")\n",
    "print(f\"V 张量形状: {V.shape}\\n\")\n",
    "\n",
    "\n",
    "# 4. 生成掩码矩阵\n",
    "# ----------------------------------\n",
    "mask = torch.triu(torch.ones(seq_len, seq_len), diagonal=1).bool()\n",
    "\n",
    "print(\"--- 4. 生成掩码矩阵 ---\")\n",
    "print(f\"掩码矩阵形状: {mask.shape}\")\n",
    "print(f\"掩码矩阵 (True 表示掩盖):\\n{mask}\\n\")\n",
    "\n",
    "\n",
    "# 5. 计算注意力分数\n",
    "# ----------------------------------\n",
    "# 缩放点积注意力公式：Q * K^T / sqrt(d_k)\n",
    "scores = torch.matmul(Q, K.transpose(-2, -1)) / math.sqrt(d_k)\n",
    "\n",
    "print(\"--- 5. 原始注意力分数 ---\")\n",
    "print(f\"分数矩阵形状: {scores.shape}\")\n",
    "print(f\"分数矩阵（第一个头）:\\n{scores[0, 0]}\\n\")\n",
    "\n",
    "\n",
    "# 6. 应用掩码\n",
    "# ----------------------------------\n",
    "# 将掩码应用到分数矩阵上，把 True 对应的位置设置为一个非常小的负数\n",
    "scores = scores.masked_fill(mask.unsqueeze(0).unsqueeze(0), -1e9)\n",
    "\n",
    "print(\"--- 6. 应用掩码后的分数 ---\")\n",
    "print(f\"掩码后的分数矩阵（第一个头）:\\n{scores[0, 0]}\\n\")\n",
    "print(f\"注意：上三角位置的分数已被设置为极小值\\n\")\n",
    "\n",
    "\n",
    "# 7. Softmax 归一化\n",
    "# ----------------------------------\n",
    "# 对分数进行 softmax，得到注意力权重\n",
    "attention_weights = torch.softmax(scores, dim=-1)\n",
    "\n",
    "print(\"--- 7. 最终注意力权重 ---\")\n",
    "print(f\"注意力权重矩阵形状: {attention_weights.shape}\")\n",
    "print(f\"注意力权重（第一个头）:\\n{attention_weights[0, 0]}\\n\")\n",
    "print(\"注意：上三角位置的权重已趋近于 0，每一行的和都为 1\\n\")\n",
    "\n",
    "\n",
    "# 8. 逐行分析“student”的注意力权重（位置 4）\n",
    "# ----------------------------------\n",
    "print(\"--- 8. 分析 'student' 的注意力 ---\")\n",
    "print(f\"'student' (索引 4) 在第一个头的注意力权重: {attention_weights[0, 0, 4]}\")\n",
    "print(\"注意力权重分布:\")\n",
    "print(f\" - 'I' (索引 0): {attention_weights[0, 0, 4, 0]:.4f}\")\n",
    "print(f\" - 'am' (索引 1): {attention_weights[0, 0, 4, 1]:.4f}\")\n",
    "print(f\" - 'a' (索引 2): {attention_weights[0, 0, 4, 2]:.4f}\")\n",
    "print(f\" - 'good' (索引 3): {attention_weights[0, 0, 4, 3]:.4f}\")\n",
    "print(f\" - 'student' (索引 4): {attention_weights[0, 0, 4, 4]:.4f}\")\n",
     "print(\"\\n由于 'student' 是序列中的最后一个词，因果掩码不会遮挡它前面的任何位置，所以它可以看到之前所有的词，包括自身。\")\n",
     "print(\"若将注意力焦点放在 'a' (索引 2) 上，你会发现它只能看到 'I', 'am', 'a'。\\n\")\n",
    "\n",
    "\n",
    "# 9. 加权求和并合并多头\n",
    "# ----------------------------------\n",
    "# 将注意力权重与 V 相乘\n",
    "context = torch.matmul(attention_weights, V)\n",
    "# 将多头的结果拼接回原始维度\n",
    "context = context.transpose(1, 2).contiguous().view(batch_size, seq_len, d_model)\n",
    "\n",
     "# 最终线性变换（复用前面已初始化的 W_o）\n",
     "output = W_o(context)\n",
    "\n",
    "print(\"--- 9. 最终输出 ---\")\n",
    "print(f\"最终输出张量形状: {output.shape}\")\n",
    "print(\"\\n✅ 掩码多头自注意力过程完成。\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b96743a4-4ba8-42b5-9440-8b2a5a684a8d",
   "metadata": {},
   "source": [
    "## 残差连接和层归一化"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ba792e59-f525-4c29-a26c-8c4be334403a",
   "metadata": {},
   "source": [
    "### 残差连接的简单示例\n",
    "\n",
    "残差连接（Residual Connection），又称跳跃连接（Skip Connection），是深度学习中一个非常重要的概念。它的核心思想是将一个层的输入直接加到该层的输出上，即 output = F(x) + x。这种连接方式有以下几个主要优点：\n",
    "- 缓解梯度消失问题： 在非常深的神经网络中，梯度在反向传播时可能会变得非常小，导致网络训练困难。残差连接提供了一条梯度可以直接反向传播的“捷径”。\n",
    "- 帮助训练深层网络： 允许我们构建更深的神经网络，比如 ResNet (Residual Network)，而不会遇到性能下降或训练收敛问题。\n",
    "- 保留原始信息： 模型的输出是在原始输入的基础上进行微调，而不是完全从头学习，这有助于保留原始的输入信息。\n",
    "\n",
    "在注意力机制中，残差连接通常用于将注意力层的输出与原始的输入进行相加，以保留原始的序列信息。"
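,
     "\n",
     "“梯度捷径”这一点可以用 autograd 做一个最小验证：对 output = F(x) + x，有 ∂output/∂x = F'(x) + 1，即使 F 的梯度极小，整体梯度也不会消失（仅为示意）：\n",
     "\n",
     "```python\n",
     "import torch\n",
     "\n",
     "x = torch.tensor([2.0], requires_grad=True)\n",
     "\n",
     "def F(x):\n",
     "    return 0.001 * x   # 模拟一个梯度极小的层\n",
     "\n",
     "y = F(x) + x           # 残差连接\n",
     "y.backward()\n",
     "print(x.grad)          # F'(x) + 1 = 1.001，梯度经由捷径几乎原样传回\n",
     "```\n"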
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9593ae18-0ef9-4a41-b480-a298b57d9d96",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "# 假设的参数\n",
    "batch_size = 4\n",
    "sequence_length = 6\n",
    "embedding_dim = 6\n",
    "\n",
     "# 设置随机种子，保证每次运行程序时随机数生成器都会从相同的起点开始\n",
    "torch.manual_seed(42)  \n",
    "\n",
    "# 1. 模拟注意力计算的输入 (x) 和输出 (attention_output)\n",
    "\n",
    "# 模拟原始输入，通常是来自前一个层的输出，比如词嵌入层\n",
    "# 形状: (batch_size, sequence_length, embedding_dim)\n",
    "original_input = torch.randn(batch_size, sequence_length, embedding_dim)\n",
    "\n",
    "# 模拟注意力计算后的结果。\n",
    "# 形状与原始输入相同，以便进行元素级的加法。\n",
    "attention_output = torch.randn(batch_size, sequence_length, embedding_dim)\n",
    "\n",
    "# 2. 进行残差连接\n",
    "\n",
    "# 将注意力层的输出与原始输入相加。\n",
    "# 这一步是残差连接的核心。\n",
    "# 使用 F(x) + x 的形式\n",
    "output_with_residual = attention_output + original_input\n",
    "\n",
    "# 3. 打印结果的形状以验证\n",
    "print(\"原始输入的形状:\", original_input.shape)\n",
    "print(\"注意力输出的形状:\", attention_output.shape)\n",
    "print(\"残差连接后结果的形状:\", output_with_residual.shape)\n",
    "\n",
    "# 4. 可选：打印部分数据以更好地理解\n",
    "print(\"\\n--- 打印部分数据 (前两个batch、前三个token) ---\")\n",
    "print(\"原始输入 (部分): \\n\", original_input[0:2, 0:3, 0:5])\n",
    "print(\"注意力输出 (部分): \\n\", attention_output[0:2, 0:3, 0:5])\n",
    "print(\"残差连接结果 (部分): \\n\", output_with_residual[0:2, 0:3, 0:5])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "cf5270b3-f8a3-4102-af0b-a2f835a47937",
   "metadata": {},
   "source": [
    "### 残差连接和层归一化示例"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4c16c817-9bc9-4f12-baf0-9ab52930ed5d",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "# 假设的参数\n",
    "batch_size = 4\n",
    "sequence_length = 6\n",
    "embedding_dim = 6\n",
    "\n",
     "# 设置随机种子，保证每次运行程序时随机数生成器都会从相同的起点开始\n",
    "torch.manual_seed(42)  \n",
    "\n",
    "# 1. 模拟注意力计算的输入 (x) 和输出 (attention_output)\n",
    "\n",
    "# 模拟原始输入，通常是来自前一个层的输出\n",
    "original_input = torch.randn(batch_size, sequence_length, embedding_dim)\n",
    "\n",
    "# 模拟注意力计算后的结果\n",
    "attention_output = torch.randn(batch_size, sequence_length, embedding_dim)\n",
    "\n",
    "# 2. 进行残差连接\n",
    "\n",
    "# 将注意力层的输出与原始输入相加，得到残差连接后的结果\n",
    "output_with_residual = attention_output + original_input\n",
    "\n",
    "# 3. 添加层归一化\n",
    "\n",
    "# 初始化 LayerNorm 层\n",
     "# LayerNorm 的参数 normalized_shape 通常取特征维度，\n",
     "# 在这里我们对 embedding_dim 维度进行归一化\n",
    "layer_norm = nn.LayerNorm(embedding_dim)\n",
    "\n",
    "# 将残差连接的结果输入到 LayerNorm 层\n",
    "output_with_residual_and_norm = layer_norm(output_with_residual)\n",
    "\n",
    "# 4. 打印结果的形状以验证\n",
    "print(\"原始输入的形状:\", original_input.shape)\n",
    "print(\"注意力输出的形状:\", attention_output.shape)\n",
    "print(\"残差连接后结果的形状:\", output_with_residual.shape)\n",
    "print(\"添加层归一化后结果的形状:\", output_with_residual_and_norm.shape)\n",
    "\n",
    "# 5. 可选：打印部分数据以更好地理解\n",
    "print(\"\\n--- 打印部分数据 (前两个batch、前三个token) ---\")\n",
    "print(\"残差连接结果 (部分): \\n\", output_with_residual[0:2, 0:3, 0:6])\n",
    "print(\"层归一化后结果 (部分): \\n\", output_with_residual_and_norm[0:2, 0:3, 0:6])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "439ba10b-2035-4835-9f76-906b416685a4",
   "metadata": {},
   "source": [
    "## 三种架构各自的前向传播过程"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "11aa76a0-2f6c-4c27-af5c-2201bc633cc1",
   "metadata": {},
   "source": [
    "### Transformer的基本构建块和前向传播过程\n",
    "\n",
    "下面的代码演示了Transformer中一个Encoder块的完整前向传播过程：\n",
    "- 输入: 一个形状为 [batch_size, seq_len] 的整数张量 input_ids，代表一批文本序列。\n",
    "- 词嵌入: Embeddings(input_ids) 将 input_ids 转换为形状为 [batch_size, seq_len, d_model] 的浮点数张量。\n",
    "- 位置编码: PositionalEncoding(x) 将位置编码张量加到词嵌入张量上，保持形状不变。\n",
    "- Transformer 块: TransformerBlock(x) 依次执行以下步骤：\n",
    "  - 多头注意力: self.attention(x) 计算注意力权重并生成注意力上下文，输出形状为 [batch_size, seq_len, d_model]。\n",
    "  - 残差连接和层归一化: self.norm1(x + attn_output) 将注意力输出与原始输入相加（残差连接），然后进行层归一化。\n",
    "  - 前馈网络: self.ffn(x) 将数据通过一个两层的全连接网络，输出形状仍为 [batch_size, seq_len, d_model]。\n",
    "  - 残差连接和层归一化: self.norm2(x + ffn_output) 将前馈网络的输出与之前的输出相加，并再次进行层归一化。\n",
    "- 最终输出: 模型的最终输出是一个形状为 [batch_size, seq_len, d_model] 的张量，其中包含了丰富的语义和位置信息，可以用于后续的任务，例如文本分类或翻译。\n",
    "\n",
     "**MultiheadAttention**\n",
    "```python\n",
    "torch.nn.MultiheadAttention(\n",
    "    embed_dim,           # 输入特征维度 (embedding 维度)\n",
    "    num_heads,           # 注意力头数\n",
    "    dropout=0.0,         # 注意力权重的dropout\n",
    "    bias=True,           # 是否对投影层使用偏置\n",
    "    add_bias_kv=False,   # 是否为 key/value 增加一个可学习的 bias token\n",
    "    add_zero_attn=False, # 是否增加全零的 key/value token\n",
    "    kdim=None,           # 如果提供，key 的特征维度（默认 = embed_dim）\n",
    "    vdim=None,           # 如果提供，value 的特征维度（默认 = embed_dim）\n",
    "    batch_first=False,   # 输入输出是否采用 (batch, seq, embed) 格式\n",
    "    device=None,\n",
    "    dtype=None\n",
    ")\n",
    "```\n",
    "\n",
    "**Transformer Encoder Layer**\n",
    "\n",
    "```python\n",
    "torch.nn.TransformerEncoderLayer(\n",
    "    d_model,              # 输入维度 (embedding维度)\n",
    "    nhead,                # 注意力头数\n",
    "    dim_feedforward=2048, # FFN 隐藏层大小\n",
    "    dropout=0.1,\n",
    "    activation=\"relu\",    # 激活函数：relu 或 gelu\n",
    "    layer_norm_eps=1e-5,\n",
    "    batch_first=False,\n",
    "    norm_first=False,     # True => Pre-LN, False => Post-LN\n",
    "    device=None,\n",
    "    dtype=None\n",
    ")\n",
    "```\n",
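     "\n",
     "一个最小的调用示意（参数取值仅为示例）：\n",
     "\n",
     "```python\n",
     "import torch\n",
     "import torch.nn as nn\n",
     "\n",
     "layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, dim_feedforward=256, batch_first=True)\n",
     "x = torch.randn(2, 10, 64)   # [batch, seq, embed]\n",
     "out = layer(x)\n",
     "print(out.shape)             # torch.Size([2, 10, 64])，形状保持不变\n",
     "```\n",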
    "\n",
    "**Transformer Decoder Layer**\n",
    "\n",
    "```python\n",
    "torch.nn.TransformerDecoderLayer(\n",
    "    d_model,\n",
    "    nhead,\n",
    "    dim_feedforward=2048,\n",
    "    dropout=0.1,\n",
    "    activation=\"relu\",\n",
    "    layer_norm_eps=1e-5,\n",
    "    batch_first=False,\n",
    "    norm_first=False,\n",
    "    device=None,\n",
    "    dtype=None\n",
    ")\n",
    "```\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1c538ac5-690b-44ac-90bc-836122157765",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import math\n",
    "\n",
    "class Embeddings(nn.Module):\n",
    "    def __init__(self, vocab_size, d_model):\n",
    "        super().__init__()\n",
    "        self.lut = nn.Embedding(vocab_size, d_model)\n",
    "        self.d_model = d_model\n",
    "\n",
    "    def forward(self, x):\n",
    "        print(f\"Embedding 输入形状: {x.shape}\") \n",
    "        embeddings = self.lut(x) * math.sqrt(self.d_model)\n",
    "        print(f\"Embedding 输出形状: {embeddings.shape}\\n\")\n",
    "        return embeddings\n",
    "\n",
    "class PositionalEncoding(nn.Module):\n",
    "    def __init__(self, d_model, max_len=5000):\n",
    "        super().__init__()\n",
    "        # 创建一个形状为 (max_len, d_model) 的全零张量pe，用于存储位置编码\n",
     "        # 每行对应一个位置（序列中的token），每列对应编码向量的一个维度，后续会填充具体的编码值\n",
    "        pe = torch.zeros(max_len, d_model)\n",
    "        # 创建一个从0到max_len-1的整数序列张量position，形状为 (max_len,)，然后通过unsqueeze(1)将其扩展为形状(max_len, 1)\n",
    "        # position表示序列中每个token的位置索引（0, 1, 2, ..., max_len-1），扩展为二维张量是为了后续与div_term进行广播运算\n",
    "        position = torch.arange(0, max_len).unsqueeze(1)\n",
    "        # torch.arange(0, d_model, 2) 生成一个从0到d_model-1的偶数索引序列（步长为 2），形状为 (d_model//2,)\n",
    "        # div_term用于生成不同频率的正弦和余弦函数\n",
    "        div_term = torch.exp(torch.arange(0, d_model, 2) * -(math.log(10000.0) / d_model))\n",
     "        # position * div_term：将position（形状 (max_len, 1)）与div_term（形状 (d_model//2,)）相乘，\n",
     "        # 利用广播机制得到形状为 (max_len, d_model//2) 的张量，再由torch.sin()对乘积应用正弦函数，填充偶数列\n",
    "        pe[:, 0::2] = torch.sin(position * div_term)\n",
    "        # 计算余弦值并填充奇数列\n",
    "        pe[:, 1::2] = torch.cos(position * div_term)\n",
    "        # pe.unsqueeze(0)：将pe张量的形状从 (max_len, d_model) 扩展为 (1, max_len, d_model)，增加一个batch维度\n",
    "        self.register_buffer('pe', pe.unsqueeze(0))\n",
    "\n",
    "    def forward(self, x):\n",
    "        print(f\"位置编码 输入形状: {x.shape}\")\n",
    "        x = x + self.pe[:, :x.size(1)]\n",
    "        print(f\"位置编码 输出形状: {x.shape}\\n\")\n",
    "        return x\n",
    "\n",
    "class MultiHeadAttention(nn.Module):\n",
    "    def __init__(self, d_model, num_heads):\n",
    "        super().__init__()\n",
    "        self.d_model = d_model\n",
    "        self.d_k = d_model // num_heads\n",
    "        self.num_heads = num_heads\n",
    "        self.W_q = nn.Linear(d_model, d_model)\n",
    "        self.W_k = nn.Linear(d_model, d_model)\n",
    "        self.W_v = nn.Linear(d_model, d_model)\n",
    "        self.W_o = nn.Linear(d_model, d_model)\n",
    "\n",
    "    def forward(self, x):\n",
    "        print(f\"多头注意力 输入形状: {x.shape}\") \n",
    "        batch_size = x.size(0)\n",
    "        \n",
    "        Q = self.W_q(x).view(batch_size, -1, self.num_heads, self.d_k).transpose(1, 2)\n",
    "        K = self.W_k(x).view(batch_size, -1, self.num_heads, self.d_k).transpose(1, 2)\n",
    "        V = self.W_v(x).view(batch_size, -1, self.num_heads, self.d_k).transpose(1, 2)\n",
    "        print(f\"Q/K/V 分头后形状: {Q.shape}\")\n",
    "\n",
    "        scores = torch.matmul(Q, K.transpose(-2, -1)) / math.sqrt(self.d_k)\n",
    "        attn = torch.softmax(scores, dim=-1)\n",
    "        context = torch.matmul(attn, V)\n",
    "        print(f\"注意力上下文形状: {context.shape}\")\n",
    "\n",
    "        context = context.transpose(1, 2).contiguous().view(batch_size, -1, self.d_model)\n",
    "        output = self.W_o(context)\n",
    "        print(f\"多头注意力 输出形状: {output.shape}\\n\")\n",
    "        return output\n",
    "\n",
    "class FeedForward(nn.Module):\n",
    "    def __init__(self, d_model, d_ff):\n",
    "        super().__init__()\n",
    "        self.linear1 = nn.Linear(d_model, d_ff)\n",
    "        self.linear2 = nn.Linear(d_ff, d_model)\n",
    "        self.relu = nn.ReLU()\n",
    "\n",
    "    def forward(self, x):\n",
    "        print(f\"前馈网络 输入形状: {x.shape}\")\n",
    "        x = self.relu(self.linear1(x))\n",
    "        x = self.linear2(x)\n",
    "        print(f\"前馈网络 输出形状: {x.shape}\\n\")\n",
    "        return x\n",
    "\n",
    "class TransformerBlock(nn.Module):\n",
    "    def __init__(self, d_model, num_heads, d_ff):\n",
    "        super().__init__()\n",
    "        self.attention = MultiHeadAttention(d_model, num_heads)\n",
    "        self.norm1 = nn.LayerNorm(d_model)\n",
    "        self.ffn = FeedForward(d_model, d_ff)\n",
    "        self.norm2 = nn.LayerNorm(d_model)\n",
    "\n",
    "    def forward(self, x):\n",
    "        attn_output = self.attention(x)\n",
    "        x = self.norm1(x + attn_output)\n",
    "        ffn_output = self.ffn(x)\n",
    "        x = self.norm2(x + ffn_output)\n",
    "        return x\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "    vocab_size = 10000\n",
    "    d_model = 64\n",
    "    num_heads = 4\n",
    "    d_ff = 256\n",
    "    seq_len = 10\n",
    "    batch_size = 2\n",
    "\n",
    "    embedding = Embeddings(vocab_size, d_model)\n",
    "    pos_encoding = PositionalEncoding(d_model)\n",
    "    transformer_block = TransformerBlock(d_model, num_heads, d_ff)\n",
    "\n",
    "    input_ids = torch.randint(0, vocab_size, (batch_size, seq_len))\n",
    "    print(f\"原始输入形状: {input_ids.shape}\\n\")\n",
    "\n",
    "    x = embedding(input_ids)\n",
    "    x = pos_encoding(x)\n",
    "    x = transformer_block(x)\n",
    "\n",
    "    print(\"最终输出形状:\", x.shape)"
   ]
  },
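  {
   "cell_type": "markdown",
   "id": "a1f0c2d4-5e6b-4c7d-8e9f-0a1b2c3d4e5f",
   "metadata": {},
   "source": [
    "下面补充一个只看形状的小演示（假设性示例，参数与上文一致：batch=2、seq_len=10、d_model=64、num_heads=4），帮助理解 view 加 transpose 是如何把张量拆成多头、又如何还原回去的。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b2e1d3c5-6f7a-4d8e-9f0a-1b2c3d4e5f6a",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "# 演示：多头拆分的形状变化（纯形状演示，与具体权重无关）\n",
    "x = torch.randn(2, 10, 64)                 # (batch, seq_len, d_model)\n",
    "q = x.view(2, 10, 4, 16).transpose(1, 2)   # 拆成 4 个头，每个头的维度为 64 // 4 = 16\n",
    "print(q.shape)                             # torch.Size([2, 4, 10, 16])\n",
    "\n",
    "# 还原：transpose 回来并合并最后两维\n",
    "restored = q.transpose(1, 2).contiguous().view(2, 10, 64)\n",
    "print(restored.shape)                      # torch.Size([2, 10, 64])"
   ]
  },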
  {
   "cell_type": "markdown",
   "id": "23cc9c94-fc91-42da-8648-7355f191412a",
   "metadata": {},
   "source": [
    "### 标准Transformer架构的前向传播示例\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0f18356b-7222-4e52-96ff-c26213ad94d1",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import math\n",
    "\n",
    "# 注意力机制的辅助函数，用于掩码\n",
    "def get_attention_mask(seq_len):\n",
    "    \"\"\"\n",
    "    生成一个上三角掩码矩阵，用于自回归任务，防止关注未来信息。\n",
    "    \"\"\"\n",
    "    # triu 是 \"triangle upper\"（上三角）的缩写：保留输入张量的上三角部分，其余位置置 0。\n",
    "    # diagonal=1 表示从主对角线上方第一条对角线开始保留，即主对角线及其下方全部置 0。\n",
    "    # 转成 bool 后，True 的位置正是需要屏蔽的“未来”位置。\n",
    "    mask = torch.triu(torch.ones(seq_len, seq_len), diagonal=1).bool()\n",
    "    return mask\n",
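    "\n",
    "# 小演示：seq_len=4 时的前瞻掩码，True 表示需要屏蔽的“未来”位置\n",
    "print(get_attention_mask(4))\n",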
    "\n",
    "# --- 核心模块 ---\n",
    "\n",
    "class Embeddings(nn.Module):\n",
    "    def __init__(self, vocab_size, d_model):\n",
    "        super().__init__()\n",
    "        self.lut = nn.Embedding(vocab_size, d_model)\n",
    "        self.d_model = d_model\n",
    "\n",
    "    def forward(self, x):\n",
    "        print(f\"Embedding 输入形状: {x.shape}\")\n",
    "        embeddings = self.lut(x) * math.sqrt(self.d_model)\n",
    "        print(f\"Embedding 输出形状: {embeddings.shape}\\n\")\n",
    "        return embeddings\n",
    "\n",
    "class PositionalEncoding(nn.Module):\n",
    "    def __init__(self, d_model, max_len=5000):\n",
    "        super().__init__()\n",
    "        pe = torch.zeros(max_len, d_model)\n",
    "        position = torch.arange(0, max_len).unsqueeze(1)\n",
    "        div_term = torch.exp(torch.arange(0, d_model, 2) * -(math.log(10000.0) / d_model))\n",
    "\n",
    "        pe[:, 0::2] = torch.sin(position * div_term)\n",
    "        pe[:, 1::2] = torch.cos(position * div_term)\n",
    "\n",
    "        self.register_buffer('pe', pe.unsqueeze(0))\n",
    "\n",
    "    def forward(self, x):\n",
    "        print(f\"位置编码 输入形状: {x.shape}\")\n",
    "        x = x + self.pe[:, :x.size(1)]\n",
    "        print(f\"位置编码 输出形状: {x.shape}\\n\")\n",
    "        return x\n",
    "\n",
    "class MultiHeadAttention(nn.Module):\n",
    "    def __init__(self, d_model, num_heads):\n",
    "        super().__init__()\n",
    "        self.d_model = d_model\n",
    "        self.d_k = d_model // num_heads\n",
    "        self.num_heads = num_heads\n",
    "        self.W_q = nn.Linear(d_model, d_model)\n",
    "        self.W_k = nn.Linear(d_model, d_model)\n",
    "        self.W_v = nn.Linear(d_model, d_model)\n",
    "        self.W_o = nn.Linear(d_model, d_model)\n",
    "\n",
    "    def forward(self, query, key, value, mask=None):\n",
    "        batch_size = query.size(0)\n",
    "        \n",
    "        Q = self.W_q(query).view(batch_size, -1, self.num_heads, self.d_k).transpose(1, 2)\n",
    "        K = self.W_k(key).view(batch_size, -1, self.num_heads, self.d_k).transpose(1, 2)\n",
    "        V = self.W_v(value).view(batch_size, -1, self.num_heads, self.d_k).transpose(1, 2)\n",
    "        \n",
    "        scores = torch.matmul(Q, K.transpose(-2, -1)) / math.sqrt(self.d_k)\n",
    "        \n",
    "        if mask is not None:\n",
    "            # mask 形状为 (L, L)，扩展为 (1, 1, L, L) 后可广播到 (batch, heads, L, L)\n",
    "            scores = scores.masked_fill(mask.unsqueeze(0).unsqueeze(0), -1e9)\n",
    "            \n",
    "        attn = torch.softmax(scores, dim=-1)\n",
    "        context = torch.matmul(attn, V)\n",
    "        \n",
    "        context = context.transpose(1, 2).contiguous().view(batch_size, -1, self.d_model)\n",
    "        output = self.W_o(context)\n",
    "        return output\n",
    "\n",
    "class FeedForward(nn.Module):\n",
    "    def __init__(self, d_model, d_ff):\n",
    "        super().__init__()\n",
    "        self.linear1 = nn.Linear(d_model, d_ff)\n",
    "        self.linear2 = nn.Linear(d_ff, d_model)\n",
    "        self.relu = nn.ReLU()\n",
    "\n",
    "    def forward(self, x):\n",
    "        x = self.relu(self.linear1(x))\n",
    "        x = self.linear2(x)\n",
    "        return x\n",
    "\n",
    "# --- Encoder 和 Decoder 块 ---\n",
    "\n",
    "class EncoderBlock(nn.Module):\n",
    "    def __init__(self, d_model, num_heads, d_ff):\n",
    "        super().__init__()\n",
    "        self.attention = MultiHeadAttention(d_model, num_heads)\n",
    "        self.norm1 = nn.LayerNorm(d_model)\n",
    "        self.ffn = FeedForward(d_model, d_ff)\n",
    "        self.norm2 = nn.LayerNorm(d_model)\n",
    "\n",
    "    def forward(self, x):\n",
    "        print(\"--- 进入 EncoderBlock ---\")\n",
    "        print(f\"自注意力 输入形状: {x.shape}\")\n",
    "        attn_output = self.attention(x, x, x)\n",
    "        x = self.norm1(x + attn_output)\n",
    "        print(f\"自注意力+归一化后形状: {x.shape}\\n\")\n",
    "        \n",
    "        print(f\"前馈网络 输入形状: {x.shape}\")\n",
    "        ffn_output = self.ffn(x)\n",
    "        x = self.norm2(x + ffn_output)\n",
    "        print(f\"前馈网络+归一化后形状: {x.shape}\\n\")\n",
    "        return x\n",
    "\n",
    "class DecoderBlock(nn.Module):\n",
    "    def __init__(self, d_model, num_heads, d_ff):\n",
    "        super().__init__()\n",
    "        self.masked_self_attention = MultiHeadAttention(d_model, num_heads)\n",
    "        self.norm1 = nn.LayerNorm(d_model)\n",
    "        self.cross_attention = MultiHeadAttention(d_model, num_heads)\n",
    "        self.norm2 = nn.LayerNorm(d_model)\n",
    "        self.ffn = FeedForward(d_model, d_ff)\n",
    "        self.norm3 = nn.LayerNorm(d_model)\n",
    "        \n",
    "    def forward(self, x, encoder_output, mask):\n",
    "        print(\"--- 进入 DecoderBlock ---\")\n",
    "        \n",
    "        print(f\"带掩码自注意力 输入形状: {x.shape}\")\n",
    "        attn1_output = self.masked_self_attention(x, x, x, mask)\n",
    "        x = self.norm1(x + attn1_output)\n",
    "        print(f\"带掩码自注意力+归一化后形状: {x.shape}\\n\")\n",
    "        \n",
    "        print(f\"交叉注意力 输入形状: {x.shape} (Query) 和 {encoder_output.shape} (Key/Value)\")\n",
    "        attn2_output = self.cross_attention(x, encoder_output, encoder_output)\n",
    "        x = self.norm2(x + attn2_output)\n",
    "        print(f\"交叉注意力+归一化后形状: {x.shape}\\n\")\n",
    "        \n",
    "        print(f\"前馈网络 输入形状: {x.shape}\")\n",
    "        ffn_output = self.ffn(x)\n",
    "        x = self.norm3(x + ffn_output)\n",
    "        print(f\"前馈网络+归一化后形状: {x.shape}\\n\")\n",
    "        \n",
    "        return x\n",
    "\n",
    "# --- 完整的 Transformer 模型 ---\n",
    "\n",
    "class Transformer(nn.Module):\n",
    "    def __init__(self, src_vocab_size, tgt_vocab_size, d_model, num_heads, d_ff, num_layers=6):\n",
    "        super().__init__()\n",
    "        self.encoder_embedding = Embeddings(src_vocab_size, d_model)\n",
    "        self.decoder_embedding = Embeddings(tgt_vocab_size, d_model)\n",
    "        self.pos_encoding = PositionalEncoding(d_model)\n",
    "        \n",
    "        self.encoder_layers = nn.ModuleList([EncoderBlock(d_model, num_heads, d_ff) for _ in range(num_layers)])\n",
    "        self.decoder_layers = nn.ModuleList([DecoderBlock(d_model, num_heads, d_ff) for _ in range(num_layers)])\n",
    "        \n",
    "        self.linear_out = nn.Linear(d_model, tgt_vocab_size)\n",
    "    \n",
    "    def forward(self, src_input_ids, tgt_input_ids):\n",
    "        # 编码器部分\n",
    "        print(\"--- 编码器前向传播 ---\")\n",
    "        encoder_output = self.encoder_embedding(src_input_ids)\n",
    "        encoder_output = self.pos_encoding(encoder_output)\n",
    "        \n",
    "        for layer in self.encoder_layers:\n",
    "            encoder_output = layer(encoder_output)\n",
    "            \n",
    "        print(\"✅ 编码器最终输出形状:\", encoder_output.shape)\n",
    "        print(\"\\n\" + \"=\"*50 + \"\\n\")\n",
    "            \n",
    "        # 解码器部分\n",
    "        print(\"--- 解码器前向传播 ---\")\n",
    "        decoder_output = self.decoder_embedding(tgt_input_ids)\n",
    "        decoder_output = self.pos_encoding(decoder_output)\n",
    "        \n",
    "        # 生成掩码\n",
    "        tgt_seq_len = tgt_input_ids.size(1)\n",
    "        look_ahead_mask = get_attention_mask(tgt_seq_len)\n",
    "        \n",
    "        for layer in self.decoder_layers:\n",
    "            decoder_output = layer(decoder_output, encoder_output, look_ahead_mask)\n",
    "            \n",
    "        print(\"✅ 解码器最终输出形状:\", decoder_output.shape)\n",
    "        \n",
    "        # 线性层输出到词汇表\n",
    "        output = self.linear_out(decoder_output)\n",
    "        print(f\"线性层输出形状: {output.shape}\\n\")\n",
    "        \n",
    "        return output\n",
    "\n",
    "# --- 运行演示 ---\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "    # 参数配置\n",
    "    src_vocab_size = 10000\n",
    "    tgt_vocab_size = 8000\n",
    "    d_model = 64\n",
    "    num_heads = 4\n",
    "    d_ff = 256\n",
    "    num_layers = 1 # 为了简化演示，只用1层\n",
    "\n",
    "    src_seq_len = 10\n",
    "    tgt_seq_len = 8\n",
    "    batch_size = 2\n",
    "\n",
    "    # 初始化模型\n",
    "    transformer_model = Transformer(src_vocab_size, tgt_vocab_size, d_model, num_heads, d_ff, num_layers)\n",
    "\n",
    "    # 模拟输入：源语言和目标语言序列\n",
    "    src_input_ids = torch.randint(0, src_vocab_size, (batch_size, src_seq_len))\n",
    "    tgt_input_ids = torch.randint(0, tgt_vocab_size, (batch_size, tgt_seq_len))\n",
    "\n",
    "    print(f\"原始源语言输入形状: {src_input_ids.shape}\")\n",
    "    print(f\"原始目标语言输入形状: {tgt_input_ids.shape}\\n\")\n",
    "\n",
    "    # 执行前向传播\n",
    "    final_output = transformer_model(src_input_ids, tgt_input_ids)\n",
    "\n",
    "    print(\"最终模型输出形状:\", final_output.shape)"
   ]
  },
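  {
   "cell_type": "markdown",
   "id": "c3f2e4d6-7a8b-4e9f-a0b1-2c3d4e5f6a7b",
   "metadata": {},
   "source": [
    "补充一个自包含的小演示（逻辑与上文 get_attention_mask 一致），展示掩码经过 masked_fill 和 softmax 之后，如何把“未来”位置的注意力权重压为 0。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d4a3f5e7-8b9c-4f0a-b1c2-3d4e5f6a7b8c",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "# 演示：掩码如何把未来位置的注意力权重变为 0\n",
    "scores = torch.zeros(3, 3)                               # 假设所有原始分数相同\n",
    "mask = torch.triu(torch.ones(3, 3), diagonal=1).bool()   # 等价于 get_attention_mask(3)\n",
    "weights = torch.softmax(scores.masked_fill(mask, -1e9), dim=-1)\n",
    "print(weights)\n",
    "# 第 0 行只能看到位置 0；第 1 行权重平分给位置 0/1；第 2 行平分给 0/1/2"
   ]
  },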
  {
   "cell_type": "markdown",
   "id": "3a0c5635-e559-49f3-a598-c8409a587498",
   "metadata": {},
   "source": [
    "### Decoder-Only架构的前向传播"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2fa68dcc-0769-4903-9319-a947a8473773",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import math\n",
    "\n",
    "# 注意力机制的辅助函数，用于掩码\n",
    "def get_attention_mask(seq_len):\n",
    "    \"\"\"\n",
    "    生成一个上三角掩码矩阵，用于自回归任务，防止关注未来信息。\n",
    "    \"\"\"\n",
    "    mask = torch.triu(torch.ones(seq_len, seq_len), diagonal=1).bool()\n",
    "    return mask\n",
    "\n",
    "# --- 核心模块 ---\n",
    "\n",
    "class Embeddings(nn.Module):\n",
    "    def __init__(self, vocab_size, d_model):\n",
    "        super().__init__()\n",
    "        self.lut = nn.Embedding(vocab_size, d_model)\n",
    "        self.d_model = d_model\n",
    "\n",
    "    def forward(self, x):\n",
    "        print(f\"Embedding 输入形状: {x.shape}\")\n",
    "        embeddings = self.lut(x) * math.sqrt(self.d_model)\n",
    "        print(f\"Embedding 输出形状: {embeddings.shape}\\n\")\n",
    "        return embeddings\n",
    "\n",
    "class PositionalEncoding(nn.Module):\n",
    "    def __init__(self, d_model, max_len=5000):\n",
    "        super().__init__()\n",
    "        pe = torch.zeros(max_len, d_model)\n",
    "        position = torch.arange(0, max_len).unsqueeze(1)\n",
    "        div_term = torch.exp(torch.arange(0, d_model, 2) * -(math.log(10000.0) / d_model))\n",
    "        pe[:, 0::2] = torch.sin(position * div_term)\n",
    "        pe[:, 1::2] = torch.cos(position * div_term)\n",
    "        self.register_buffer('pe', pe.unsqueeze(0))\n",
    "\n",
    "    def forward(self, x):\n",
    "        print(f\"位置编码 输入形状: {x.shape}\")\n",
    "        x = x + self.pe[:, :x.size(1)]\n",
    "        print(f\"位置编码 输出形状: {x.shape}\\n\")\n",
    "        return x\n",
    "\n",
    "class MultiHeadAttention(nn.Module):\n",
    "    def __init__(self, d_model, num_heads):\n",
    "        super().__init__()\n",
    "        self.d_model = d_model\n",
    "        self.d_k = d_model // num_heads\n",
    "        self.num_heads = num_heads\n",
    "        self.W_q = nn.Linear(d_model, d_model)\n",
    "        self.W_k = nn.Linear(d_model, d_model)\n",
    "        self.W_v = nn.Linear(d_model, d_model)\n",
    "        self.W_o = nn.Linear(d_model, d_model)\n",
    "\n",
    "    def forward(self, query, key, value, mask=None):\n",
    "        batch_size = query.size(0)\n",
    "\n",
    "        Q = self.W_q(query).view(batch_size, -1, self.num_heads, self.d_k).transpose(1, 2)\n",
    "        K = self.W_k(key).view(batch_size, -1, self.num_heads, self.d_k).transpose(1, 2)\n",
    "        V = self.W_v(value).view(batch_size, -1, self.num_heads, self.d_k).transpose(1, 2)\n",
    "\n",
    "        scores = torch.matmul(Q, K.transpose(-2, -1)) / math.sqrt(self.d_k)\n",
    "\n",
    "        if mask is not None:\n",
    "            # mask 形状为 (L, L)，扩展为 (1, 1, L, L) 后可广播到 (batch, heads, L, L)\n",
    "            scores = scores.masked_fill(mask.unsqueeze(0).unsqueeze(0), -1e9)\n",
    "\n",
    "        attn = torch.softmax(scores, dim=-1)\n",
    "        context = torch.matmul(attn, V)\n",
    "\n",
    "        context = context.transpose(1, 2).contiguous().view(batch_size, -1, self.d_model)\n",
    "        output = self.W_o(context)\n",
    "        return output\n",
    "\n",
    "class FeedForward(nn.Module):\n",
    "    def __init__(self, d_model, d_ff):\n",
    "        super().__init__()\n",
    "        self.linear1 = nn.Linear(d_model, d_ff)\n",
    "        self.linear2 = nn.Linear(d_ff, d_model)\n",
    "        self.relu = nn.ReLU()\n",
    "\n",
    "    def forward(self, x):\n",
    "        x = self.relu(self.linear1(x))\n",
    "        x = self.linear2(x)\n",
    "        return x\n",
    "\n",
    "# --- Decoder-Only 块 ---\n",
    "\n",
    "class DecoderOnlyBlock(nn.Module):\n",
    "    def __init__(self, d_model, num_heads, d_ff):\n",
    "        super().__init__()\n",
    "        self.masked_self_attention = MultiHeadAttention(d_model, num_heads)\n",
    "        self.norm1 = nn.LayerNorm(d_model)\n",
    "        self.ffn = FeedForward(d_model, d_ff)\n",
    "        self.norm2 = nn.LayerNorm(d_model)\n",
    "\n",
    "    def forward(self, x, mask):\n",
    "        print(\"--- 进入 Decoder-Only Block ---\")\n",
    "        \n",
    "        print(f\"带掩码自注意力 输入形状: {x.shape}\")\n",
    "        # 在这里，query, key, value 都来自输入 x\n",
    "        attn_output = self.masked_self_attention(x, x, x, mask)\n",
    "        x = self.norm1(x + attn_output)\n",
    "        print(f\"带掩码自注意力+归一化后形状: {x.shape}\\n\")\n",
    "        \n",
    "        print(f\"前馈网络 输入形状: {x.shape}\")\n",
    "        ffn_output = self.ffn(x)\n",
    "        x = self.norm2(x + ffn_output)\n",
    "        print(f\"前馈网络+归一化后形状: {x.shape}\\n\")\n",
    "        \n",
    "        return x\n",
    "\n",
    "# --- 完整的 Decoder-Only Transformer 模型 ---\n",
    "\n",
    "class DecoderOnlyTransformer(nn.Module):\n",
    "    def __init__(self, vocab_size, d_model, num_heads, d_ff, num_layers=6):\n",
    "        super().__init__()\n",
    "        self.embedding = Embeddings(vocab_size, d_model)\n",
    "        self.pos_encoding = PositionalEncoding(d_model)\n",
    "        \n",
    "        # Decoder-Only 模型由一系列 Decoder-Only 块组成\n",
    "        self.decoder_layers = nn.ModuleList([DecoderOnlyBlock(d_model, num_heads, d_ff) for _ in range(num_layers)])\n",
    "        \n",
    "        # 最后的线性层，用于将输出向量映射回词汇表\n",
    "        self.linear_out = nn.Linear(d_model, vocab_size)\n",
    "\n",
    "    def forward(self, input_ids):\n",
    "        print(\"--- Decoder-Only 模型前向传播 ---\")\n",
    "        \n",
    "        # 1. 词嵌入和位置编码\n",
    "        x = self.embedding(input_ids)\n",
    "        x = self.pos_encoding(x)\n",
    "\n",
    "        # 2. 生成自注意力掩码\n",
    "        seq_len = input_ids.size(1)\n",
    "        look_ahead_mask = get_attention_mask(seq_len)\n",
    "        \n",
    "        # 3. 逐层通过 Decoder-Only 块\n",
    "        for layer in self.decoder_layers:\n",
    "            x = layer(x, look_ahead_mask)\n",
    "        \n",
    "        print(\"✅ 解码器最终输出形状:\", x.shape)\n",
    "        \n",
    "        # 4. 线性层输出到词汇表\n",
    "        output = self.linear_out(x)\n",
    "        print(f\"线性层输出形状: {output.shape}\\n\")\n",
    "        \n",
    "        return output\n",
    "\n",
    "# --- 运行演示 ---\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "    # 参数配置\n",
    "    vocab_size = 10000\n",
    "    d_model = 64\n",
    "    num_heads = 4\n",
    "    d_ff = 256\n",
    "    num_layers = 1 # 为了简化演示，只用1层\n",
    "    seq_len = 10\n",
    "    batch_size = 2\n",
    "\n",
    "    # 初始化模型\n",
    "    decoder_only_model = DecoderOnlyTransformer(vocab_size, d_model, num_heads, d_ff, num_layers)\n",
    "\n",
    "    # 模拟输入序列\n",
    "    input_ids = torch.randint(0, vocab_size, (batch_size, seq_len))\n",
    "\n",
    "    print(f\"原始输入形状: {input_ids.shape}\\n\")\n",
    "\n",
    "    # 执行前向传播\n",
    "    final_output = decoder_only_model(input_ids)\n",
    "\n",
    "    print(\"最终模型输出形状:\", final_output.shape)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7f82c81c-e387-4285-bf53-a413de88af28",
   "metadata": {},
   "source": [
    "## Transformer模型示例"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "24406fcd-079b-47a7-a0fe-8837005e1d16",
   "metadata": {},
   "source": [
    "### 最简版的Transformer模型\n",
    "\n",
    "下面是一个最简版的Transformer代码示例，用PyTorch自带的nn.Transformer，配合随机数据跑通前向传播。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b400fab9-eedb-4337-9ce8-32bfce4bbe42",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "# 参数设置\n",
    "d_model = 512      # 词向量维度\n",
    "nhead = 8          # 注意力头数\n",
    "num_encoder_layers = 3\n",
    "num_decoder_layers = 3\n",
    "dim_feedforward = 2048\n",
    "batch_size = 32\n",
    "src_seq_len = 20\n",
    "tgt_seq_len = 15\n",
    "\n",
    "# 构建 Transformer 模型\n",
    "transformer = nn.Transformer(\n",
    "    d_model=d_model,\n",
    "    nhead=nhead,\n",
    "    num_encoder_layers=num_encoder_layers,\n",
    "    num_decoder_layers=num_decoder_layers,\n",
    "    dim_feedforward=dim_feedforward,\n",
    "    batch_first=True  # 输入输出格式 (batch, seq, dim)\n",
    ")\n",
    "\n",
    "# 构造随机输入（模拟 token embedding）\n",
    "src = torch.rand(batch_size, src_seq_len, d_model)  # 源序列 (encoder 输入)\n",
    "tgt = torch.rand(batch_size, tgt_seq_len, d_model)  # 目标序列 (decoder 输入)\n",
    "\n",
    "# 构造目标 mask (避免解码器看到未来的信息)\n",
    "tgt_mask = nn.Transformer.generate_square_subsequent_mask(tgt_seq_len)\n",
    "\n",
    "# 前向传播\n",
    "output = transformer(src, tgt, tgt_mask=tgt_mask)\n",
    "\n",
    "print(\"输入 (src):\", src.shape)\n",
    "print(\"输入 (tgt):\", tgt.shape)\n",
    "print(\"输出 (output):\", output.shape)\n"
   ]
  },
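  {
   "cell_type": "markdown",
   "id": "e5b4a6f8-9c0d-4a1b-c2d3-4e5f6a7b8c9d",
   "metadata": {},
   "source": [
    "可以直接打印 generate_square_subsequent_mask 的返回值来确认它的含义（演示用）：它是一个浮点方阵，主对角线及其下方为 0.0，上三角为 -inf，加到注意力分数上即可屏蔽未来位置。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f6c5b7a9-0d1e-4b2c-d3e4-5f6a7b8c9d0e",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 演示：查看 seq_len=4 时 PyTorch 生成的前瞻掩码（-inf 表示屏蔽）\n",
    "print(nn.Transformer.generate_square_subsequent_mask(4))"
   ]
  },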
  {
   "cell_type": "markdown",
   "id": "ea9fa9d4-ba73-473f-bc33-19b750d87492",
   "metadata": {},
   "source": [
    "## BERT模型示例"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "12365f4d-cfb2-4efa-9a81-9dadd396b842",
   "metadata": {},
   "source": [
    "### MiniBERT示例\n",
    "\n",
    "下面是一个迷你版的BERT，主要用于帮助理解模型结构：\n",
    "- Embedding（词 + 位置）\n",
    "- Transformer Encoder 层（自注意力+FFN+残差+LayerNorm）\n",
    "- 池化层（取 [CLS] 向量）\n",
    "- 分类头（线性层）\n",
    "\n",
    "下面的示例代码可以跑通一个二分类任务（比如句子情感分类），用随机数据演示。\n",
    "\n",
    "**提示**\n",
    "\n",
    "BERT模型中的 [CLS] 标记（token）主要用于分类任务。\n",
    "\n",
    "[CLS] 是 \"classification\" 的缩写，意思是“分类”。它是一个特殊的标记，总是放在输入序列的第一个位置。BERT的设计者们规定，这个 [CLS] 标记的最终隐藏状态（也就是经过所有Transformer层后，对应于 [CLS] 位置的输出向量）会聚合整个输入序列的信息，从而作为整个序列的表示。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2b00f51b-a189-4341-a509-c056f65f7260",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "\n",
    "class BertSelfAttention(nn.Module):\n",
    "    \"\"\"多头自注意力机制模块\"\"\"\n",
    "    def __init__(self, hidden_size: int, num_heads: int, dropout: float = 0.1):\n",
    "        super().__init__()\n",
    "        # 确保隐藏维度可以被头数整除\n",
    "        if hidden_size % num_heads != 0:\n",
    "            raise ValueError(\"hidden_size must be divisible by num_heads\")\n",
    "        \n",
    "        self.num_heads = num_heads\n",
    "        self.head_dim = hidden_size // num_heads\n",
    "        self.hidden_size = hidden_size\n",
    "\n",
    "        # 查询、键、值的线性变换层\n",
    "        self.q_proj = nn.Linear(hidden_size, hidden_size)\n",
    "        self.k_proj = nn.Linear(hidden_size, hidden_size)\n",
    "        self.v_proj = nn.Linear(hidden_size, hidden_size)\n",
    "        self.out_proj = nn.Linear(hidden_size, hidden_size)\n",
    "        self.dropout = nn.Dropout(dropout)\n",
    "\n",
    "    def forward(self, x: torch.Tensor, mask: torch.Tensor = None) -> torch.Tensor:\n",
    "        \"\"\"\n",
    "        前向传播\n",
    "        Args:\n",
    "            x: 输入张量，形状为 (batch_size, seq_len, hidden_size)\n",
    "            mask: 可选的注意力掩码，形状为 (batch_size, seq_len)\n",
    "        Returns:\n",
    "            输出张量，形状为 (batch_size, seq_len, hidden_size)\n",
    "        \"\"\"\n",
    "        B, L, H = x.size()\n",
    "        \n",
    "        # 计算查询、键、值，并调整形状为多头形式\n",
    "        q = self.q_proj(x).view(B, L, self.num_heads, self.head_dim).transpose(1, 2)  # (B, heads, L, head_dim)\n",
    "        k = self.k_proj(x).view(B, L, self.num_heads, self.head_dim).transpose(1, 2)\n",
    "        v = self.v_proj(x).view(B, L, self.num_heads, self.head_dim).transpose(1, 2)\n",
    "\n",
    "        # 计算注意力分数并进行缩放\n",
    "        attn_scores = torch.matmul(q, k.transpose(-2, -1)) / (self.head_dim ** 0.5)  # (B, heads, L, L)\n",
    "        \n",
    "        # 如果提供了掩码，则应用掩码\n",
    "        if mask is not None:\n",
    "            # mask 形状为 (B, L)，需扩展为 (B, 1, 1, L) 才能广播到 (B, heads, L, L)\n",
    "            attn_scores = attn_scores.masked_fill(mask.unsqueeze(1).unsqueeze(2) == 0, float('-inf'))\n",
    "\n",
    "        # 计算注意力权重并应用 dropout\n",
    "        attn_weights = F.softmax(attn_scores, dim=-1)\n",
    "        attn_weights = self.dropout(attn_weights)\n",
    "        \n",
    "        # 计算注意力输出\n",
    "        attn_output = torch.matmul(attn_weights, v)  # (B, heads, L, head_dim)\n",
    "        \n",
    "        # 调整形状并通过输出投影层\n",
    "        attn_output = attn_output.transpose(1, 2).contiguous().view(B, L, H)  # (B, L, hidden_size)\n",
    "        return self.out_proj(attn_output)\n",
    "\n",
    "\n",
    "class BertFeedForward(nn.Module):\n",
    "    \"\"\"前馈神经网络模块\"\"\"\n",
    "    def __init__(self, hidden_size: int, ffn_size: int, dropout: float = 0.1):\n",
    "        super().__init__()\n",
    "        self.fc1 = nn.Linear(hidden_size, ffn_size)\n",
    "        self.fc2 = nn.Linear(ffn_size, hidden_size)\n",
    "        self.dropout = nn.Dropout(dropout)\n",
    "\n",
    "    def forward(self, x: torch.Tensor) -> torch.Tensor:\n",
    "        \"\"\"\n",
    "        前向传播\n",
    "        Args:\n",
    "            x: 输入张量，形状为 (batch_size, seq_len, hidden_size)\n",
    "        Returns:\n",
    "            输出张量，形状为 (batch_size, seq_len, hidden_size)\n",
    "        \"\"\"\n",
    "        x = F.gelu(self.fc1(x))\n",
    "        x = self.dropout(x)\n",
    "        return self.fc2(x)\n",
    "\n",
    "\n",
    "class BertEncoderLayer(nn.Module):\n",
    "    \"\"\"BERT 编码器层\"\"\"\n",
    "    def __init__(self, hidden_size: int, num_heads: int, ffn_size: int, dropout: float = 0.1):\n",
    "        super().__init__()\n",
    "        self.attn = BertSelfAttention(hidden_size, num_heads, dropout)\n",
    "        self.norm1 = nn.LayerNorm(hidden_size)\n",
    "        self.ffn = BertFeedForward(hidden_size, ffn_size, dropout)\n",
    "        self.norm2 = nn.LayerNorm(hidden_size)\n",
    "        self.dropout = nn.Dropout(dropout)\n",
    "\n",
    "    def forward(self, x: torch.Tensor, mask: torch.Tensor = None) -> torch.Tensor:\n",
    "        \"\"\"\n",
    "        前向传播\n",
    "        Args:\n",
    "            x: 输入张量，形状为 (batch_size, seq_len, hidden_size)\n",
    "            mask: 可选的注意力掩码，形状为 (batch_size, seq_len)\n",
    "        Returns:\n",
    "            输出张量，形状为 (batch_size, seq_len, hidden_size)\n",
    "        \"\"\"\n",
    "        # Pre-LN 结构：先层归一化，再自注意力，最后残差连接\n",
    "        x = x + self.dropout(self.attn(self.norm1(x), mask))\n",
    "        # Pre-LN 结构：先层归一化，再前馈网络，最后残差连接\n",
    "        x = x + self.dropout(self.ffn(self.norm2(x)))\n",
    "        return x\n",
    "\n",
    "\n",
    "class MiniBert(nn.Module):\n",
    "    \"\"\"MiniBERT 模型，用于文本分类任务\"\"\"\n",
    "    def __init__(self, vocab_size: int, hidden_size: int = 64, num_heads: int = 4, \n",
    "                 num_layers: int = 2, ffn_size: int = 128, max_len: int = 50, \n",
    "                 num_classes: int = 2, dropout: float = 0.1):\n",
    "        super().__init__()\n",
    "        # 参数检查\n",
    "        if vocab_size <= 0 or hidden_size <= 0 or num_heads <= 0 or num_layers <= 0 or ffn_size <= 0:\n",
    "            raise ValueError(\"All size parameters must be positive\")\n",
    "        \n",
    "        # 词嵌入和位置嵌入\n",
    "        self.token_emb = nn.Embedding(vocab_size, hidden_size)\n",
    "        self.pos_emb = nn.Embedding(max_len, hidden_size)\n",
    "        self.dropout = nn.Dropout(dropout)\n",
    "\n",
    "        # 编码器层\n",
    "        self.layers = nn.ModuleList([\n",
    "            BertEncoderLayer(hidden_size, num_heads, ffn_size, dropout)\n",
    "            for _ in range(num_layers)\n",
    "        ])\n",
    "        \n",
    "        # 层归一化和分类器\n",
    "        self.norm = nn.LayerNorm(hidden_size)\n",
    "        self.classifier = nn.Linear(hidden_size, num_classes)\n",
    "\n",
    "    def forward(self, x: torch.Tensor, mask: torch.Tensor = None) -> torch.Tensor:\n",
    "        \"\"\"\n",
    "        前向传播\n",
    "        Args:\n",
    "            x: 输入张量，形状为 (batch_size, seq_len)\n",
    "            mask: 可选的注意力掩码，形状为 (batch_size, seq_len)\n",
    "        Returns:\n",
    "            输出张量，形状为 (batch_size, num_classes)\n",
    "        \"\"\"\n",
    "        B, L = x.size()\n",
    "        \n",
    "        # 生成位置编码\n",
    "        pos = torch.arange(L, device=x.device).unsqueeze(0).expand(B, L)\n",
    "        \n",
    "        # 词嵌入 + 位置嵌入 + dropout\n",
    "        x = self.token_emb(x) + self.pos_emb(pos)\n",
    "        x = self.dropout(x)\n",
    "\n",
    "        # 通过编码器层\n",
    "        for layer in self.layers:\n",
    "            x = layer(x, mask)\n",
    "        \n",
    "        # 层归一化并提取 [CLS] 向量\n",
    "        x = self.norm(x)\n",
    "        cls_token = x[:, 0, :]  # 取 [CLS] 向量\n",
    "        return self.classifier(cls_token)\n",
    "\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "    # 测试模型\n",
    "    vocab_size = 100\n",
    "    model = MiniBert(vocab_size=vocab_size, hidden_size=64, num_heads=4, \n",
    "                     num_layers=2, ffn_size=128, max_len=50, num_classes=2)\n",
    "\n",
    "    # 模拟输入数据\n",
    "    input_ids = torch.randint(0, vocab_size, (4, 10))\n",
    "    logits = model(input_ids)\n",
    "\n",
    "    print(\"输入形状:\", input_ids.shape)  # [4, 10]\n",
    "    print(\"输出形状:\", logits.shape)     # [4, 2] (二分类)"
   ]
  },
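  {
   "cell_type": "markdown",
   "id": "a7d6c8b0-1e2f-4c3d-e4f5-6a7b8c9d0e1f",
   "metadata": {},
   "source": [
    "下面用一个与模型无关的小例子（仅演示张量切片，假设隐藏维度为 64）说明“取 [CLS] 向量”具体做了什么：从编码器输出 (batch, seq_len, hidden) 中取出第 0 个位置的向量。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b8e7d9c1-2f3a-4d4e-f5a6-7b8c9d0e1f2a",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "# 演示：x[:, 0, :] 从序列输出中取第 0 个位置（即 [CLS]）的向量\n",
    "hidden = torch.randn(4, 10, 64)   # (batch, seq_len, hidden_size)\n",
    "cls_vec = hidden[:, 0, :]\n",
    "print(cls_vec.shape)              # torch.Size([4, 64])"
   ]
  },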
  {
   "cell_type": "markdown",
   "id": "93eea25d-9409-4e6d-9e90-4bab5c7afe32",
   "metadata": {},
   "source": [
    "### 示例扩展\n",
    "\n",
    "为前一节中的MiniBERT接上一个训练循环，模拟一个二分类的情感分类任务（0 = 负面，1 = 正面），并用随机生成的数据来跑通整个流程。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f8e6d695-f41b-4cad-9895-7973d36b4282",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "import torch.optim as optim\n",
    "\n",
    "# ====== MiniBERT 组件（保持不变） ======\n",
    "class BertSelfAttention(nn.Module):\n",
    "    def __init__(self, hidden_size, num_heads):\n",
    "        super().__init__()\n",
    "        assert hidden_size % num_heads == 0\n",
    "        self.num_heads = num_heads\n",
    "        self.head_dim = hidden_size // num_heads\n",
    "        \n",
    "        self.q_proj = nn.Linear(hidden_size, hidden_size)\n",
    "        self.k_proj = nn.Linear(hidden_size, hidden_size)\n",
    "        self.v_proj = nn.Linear(hidden_size, hidden_size)\n",
    "        self.out_proj = nn.Linear(hidden_size, hidden_size)\n",
    "    \n",
    "    def forward(self, x):\n",
    "        B, L, H = x.size()\n",
    "        q = self.q_proj(x).view(B, L, self.num_heads, self.head_dim).transpose(1, 2)\n",
    "        k = self.k_proj(x).view(B, L, self.num_heads, self.head_dim).transpose(1, 2)\n",
    "        v = self.v_proj(x).view(B, L, self.num_heads, self.head_dim).transpose(1, 2)\n",
    "\n",
    "        attn_scores = torch.matmul(q, k.transpose(-2, -1)) / (self.head_dim ** 0.5)\n",
    "        attn_weights = F.softmax(attn_scores, dim=-1)\n",
    "        attn_output = torch.matmul(attn_weights, v)\n",
    "\n",
    "        attn_output = attn_output.transpose(1, 2).contiguous().view(B, L, H)\n",
    "        return self.out_proj(attn_output)\n",
    "\n",
    "\n",
    "class BertFeedForward(nn.Module):\n",
    "    def __init__(self, hidden_size, ffn_size):\n",
    "        super().__init__()\n",
    "        self.fc1 = nn.Linear(hidden_size, ffn_size)\n",
    "        self.fc2 = nn.Linear(ffn_size, hidden_size)\n",
    "    \n",
    "    def forward(self, x):\n",
    "        return self.fc2(F.gelu(self.fc1(x)))\n",
    "\n",
    "\n",
    "class BertEncoderLayer(nn.Module):\n",
    "    def __init__(self, hidden_size, num_heads, ffn_size):\n",
    "        super().__init__()\n",
    "        self.attn = BertSelfAttention(hidden_size, num_heads)\n",
    "        self.norm1 = nn.LayerNorm(hidden_size)\n",
    "        self.ffn = BertFeedForward(hidden_size, ffn_size)\n",
    "        self.norm2 = nn.LayerNorm(hidden_size)\n",
    "\n",
    "    def forward(self, x):\n",
    "        x = x + self.attn(self.norm1(x))   # Pre-LN：归一化 → 注意力 → 残差\n",
    "        x = x + self.ffn(self.norm2(x))    # Pre-LN：归一化 → 前馈 → 残差\n",
    "        return x\n",
    "\n",
    "\n",
    "class MiniBert(nn.Module):\n",
    "    def __init__(self, vocab_size, hidden_size=64, num_heads=4, num_layers=2, ffn_size=128, max_len=50, num_classes=2):\n",
    "        super().__init__()\n",
    "        self.token_emb = nn.Embedding(vocab_size, hidden_size)\n",
    "        self.pos_emb = nn.Embedding(max_len, hidden_size)\n",
    "        self.layers = nn.ModuleList([\n",
    "            BertEncoderLayer(hidden_size, num_heads, ffn_size)\n",
    "            for _ in range(num_layers)\n",
    "        ])\n",
    "        self.norm = nn.LayerNorm(hidden_size)\n",
    "        self.classifier = nn.Linear(hidden_size, num_classes)\n",
    "    \n",
    "    def forward(self, x):\n",
    "        B, L = x.size()\n",
    "        pos = torch.arange(L, device=x.device).unsqueeze(0).expand(B, L)\n",
    "        x = self.token_emb(x) + self.pos_emb(pos)\n",
    "\n",
    "        for layer in self.layers:\n",
    "            x = layer(x)\n",
    "        \n",
    "        x = self.norm(x)\n",
    "        cls_token = x[:, 0, :]  # 取第一个位置的向量作为句子表示（类比 BERT 的 [CLS]；此处输入为随机 token，无真实 [CLS]）\n",
    "        return self.classifier(cls_token)\n",
    "\n",
    "\n",
    "# ====== 训练循环 ======\n",
    "if __name__ == \"__main__\":\n",
    "    vocab_size = 200\n",
    "    num_classes = 2\n",
    "    model = MiniBert(vocab_size=vocab_size, hidden_size=64, num_heads=4, num_layers=2, ffn_size=128, max_len=50, num_classes=num_classes)\n",
    "\n",
    "    optimizer = optim.Adam(model.parameters(), lr=0.001)\n",
    "    criterion = nn.CrossEntropyLoss()\n",
    "\n",
    "    # 训练超参数（使用随机模拟数据）\n",
    "    num_epochs = 5\n",
    "    batch_size = 8\n",
    "    seq_len = 10\n",
    "\n",
    "    for epoch in range(num_epochs):\n",
    "        # 随机生成输入 token 和标签 (0=负面, 1=正面)\n",
    "        inputs = torch.randint(0, vocab_size, (batch_size, seq_len))\n",
    "        labels = torch.randint(0, num_classes, (batch_size,))\n",
    "\n",
    "        # 前向\n",
    "        outputs = model(inputs)  # (batch, num_classes)\n",
    "        loss = criterion(outputs, labels)\n",
    "\n",
    "        # 反向传播\n",
    "        optimizer.zero_grad()\n",
    "        loss.backward()\n",
    "        optimizer.step()\n",
    "\n",
    "        # 计算准确率\n",
    "        preds = outputs.argmax(dim=1)\n",
    "        acc = (preds == labels).float().mean().item()\n",
    "\n",
    "        print(f\"Epoch {epoch+1}/{num_epochs}, Loss: {loss.item():.4f}, Acc: {acc:.2f}\")"
   ]
  },
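  {
   "cell_type": "markdown",
   "id": "1f2e3d4c-5b6a-4978-8c0d-e1f2a3b4c5d6",
   "metadata": {},
   "source": [
    "上面 BertSelfAttention 中用 view + transpose 将隐藏维度拆分为多个注意力头，这一步的形状变化可以单独演示（示意片段，数值与上面的实现无关）：\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "x = torch.zeros(2, 5, 8)                # (B, L, hidden) = (2, 5, 8)\n",
    "h = x.view(2, 5, 4, 2).transpose(1, 2)  # 拆成 4 个头，每头维度 2\n",
    "print(h.shape)  # torch.Size([2, 4, 5, 2])，即 (B, heads, L, head_dim)\n",
    "```"
   ]
  },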
  {
   "cell_type": "markdown",
   "id": "9eddddd2-0662-4ab2-a007-508a9673f6c1",
   "metadata": {},
   "source": [
    "### 在IMDb数据上训练MiniBERT"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "58e44a94-3ed2-4431-a985-98df9cd76366",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "import torch.optim as optim\n",
    "from datasets import load_dataset\n",
    "from transformers import AutoTokenizer\n",
    "from torch.utils.data import DataLoader\n",
    "from tqdm import tqdm\n",
    "import logging\n",
    "\n",
    "# 设置日志\n",
    "logging.basicConfig(level=logging.INFO, format=\"%(asctime)s - %(levelname)s - %(message)s\")\n",
    "logger = logging.getLogger(__name__)\n",
    "\n",
    "# ====== MiniBERT 组件 ======\n",
    "class BertSelfAttention(nn.Module):\n",
    "    \"\"\"多头自注意力机制模块\"\"\"\n",
    "    def __init__(self, hidden_size: int, num_heads: int, dropout: float = 0.1):\n",
    "        super().__init__()\n",
    "        # 验证隐藏维度是否可被头数整除\n",
    "        if hidden_size % num_heads != 0:\n",
    "            raise ValueError(\"hidden_size must be divisible by num_heads\")\n",
    "        \n",
    "        self.num_heads = num_heads\n",
    "        self.head_dim = hidden_size // num_heads\n",
    "        self.hidden_size = hidden_size\n",
    "\n",
    "        # 查询、键、值的线性变换层\n",
    "        self.q_proj = nn.Linear(hidden_size, hidden_size)\n",
    "        self.k_proj = nn.Linear(hidden_size, hidden_size)\n",
    "        self.v_proj = nn.Linear(hidden_size, hidden_size)\n",
    "        self.out_proj = nn.Linear(hidden_size, hidden_size)\n",
    "        self.dropout = nn.Dropout(dropout)\n",
    "\n",
    "    def forward(self, x: torch.Tensor, mask: torch.Tensor = None) -> torch.Tensor:\n",
    "        \"\"\"\n",
    "        前向传播\n",
    "        Args:\n",
    "            x: 输入张量，形状为 (batch_size, seq_len, hidden_size)\n",
    "            mask: 可选的注意力掩码，形状为 (batch_size, seq_len)\n",
    "        Returns:\n",
    "            输出张量，形状为 (batch_size, seq_len, hidden_size)\n",
    "        \"\"\"\n",
    "        B, L, H = x.size()\n",
    "        \n",
    "        # 计算查询、键、值，并调整形状为多头形式\n",
    "        q = self.q_proj(x).view(B, L, self.num_heads, self.head_dim).transpose(1, 2)  # (B, heads, L, head_dim)\n",
    "        k = self.k_proj(x).view(B, L, self.num_heads, self.head_dim).transpose(1, 2)\n",
    "        v = self.v_proj(x).view(B, L, self.num_heads, self.head_dim).transpose(1, 2)\n",
    "\n",
    "        # 计算注意力分数并缩放\n",
    "        attn_scores = torch.matmul(q, k.transpose(-2, -1)) / (self.head_dim ** 0.5)  # (B, heads, L, L)\n",
    "        \n",
    "        # 应用注意力掩码（如果提供）\n",
    "        if mask is not None:\n",
    "            mask = mask.unsqueeze(1).unsqueeze(2)  # (B, 1, 1, L)\n",
    "            attn_scores = attn_scores.masked_fill(mask == 0, float('-inf'))\n",
    "\n",
    "        # 计算注意力权重并应用 dropout\n",
    "        attn_weights = F.softmax(attn_scores, dim=-1)\n",
    "        attn_weights = self.dropout(attn_weights)\n",
    "        \n",
    "        # 计算注意力输出\n",
    "        attn_output = torch.matmul(attn_weights, v)  # (B, heads, L, head_dim)\n",
    "        \n",
    "        # 调整形状并通过输出投影层\n",
    "        attn_output = attn_output.transpose(1, 2).contiguous().view(B, L, H)\n",
    "        return self.out_proj(attn_output)\n",
    "\n",
    "\n",
    "class BertFeedForward(nn.Module):\n",
    "    \"\"\"前馈神经网络模块\"\"\"\n",
    "    def __init__(self, hidden_size: int, ffn_size: int, dropout: float = 0.1):\n",
    "        super().__init__()\n",
    "        self.fc1 = nn.Linear(hidden_size, ffn_size)\n",
    "        self.fc2 = nn.Linear(ffn_size, hidden_size)\n",
    "        self.dropout = nn.Dropout(dropout)\n",
    "\n",
    "    def forward(self, x: torch.Tensor) -> torch.Tensor:\n",
    "        \"\"\"\n",
    "        前向传播\n",
    "        Args:\n",
    "            x: 输入张量，形状为 (batch_size, seq_len, hidden_size)\n",
    "        Returns:\n",
    "            输出张量，形状为 (batch_size, seq_len, hidden_size)\n",
    "        \"\"\"\n",
    "        x = F.gelu(self.fc1(x))\n",
    "        x = self.dropout(x)\n",
    "        return self.fc2(x)\n",
    "\n",
    "\n",
    "class BertEncoderLayer(nn.Module):\n",
    "    \"\"\"BERT 编码器层\"\"\"\n",
    "    def __init__(self, hidden_size: int, num_heads: int, ffn_size: int, dropout: float = 0.1):\n",
    "        super().__init__()\n",
    "        self.attn = BertSelfAttention(hidden_size, num_heads, dropout)\n",
    "        self.norm1 = nn.LayerNorm(hidden_size)\n",
    "        self.ffn = BertFeedForward(hidden_size, ffn_size, dropout)\n",
    "        self.norm2 = nn.LayerNorm(hidden_size)\n",
    "        self.dropout = nn.Dropout(dropout)\n",
    "\n",
    "    def forward(self, x: torch.Tensor, mask: torch.Tensor = None) -> torch.Tensor:\n",
    "        \"\"\"\n",
    "        前向传播\n",
    "        Args:\n",
    "            x: 输入张量，形状为 (batch_size, seq_len, hidden_size)\n",
    "            mask: 可选的注意力掩码，形状为 (batch_size, seq_len)\n",
    "        Returns:\n",
    "            输出张量，形状为 (batch_size, seq_len, hidden_size)\n",
    "        \"\"\"\n",
    "        # 自注意力 + 残差连接 + 层归一化\n",
    "        x = x + self.dropout(self.attn(self.norm1(x), mask))\n",
    "        # 前馈网络 + 残差连接 + 层归一化\n",
    "        x = x + self.dropout(self.ffn(self.norm2(x)))\n",
    "        return x\n",
    "\n",
    "\n",
    "class MiniBert(nn.Module):\n",
    "    \"\"\"MiniBERT 模型，用于文本分类任务\"\"\"\n",
    "    def __init__(self, vocab_size: int, hidden_size: int = 128, num_heads: int = 4, \n",
    "                 num_layers: int = 2, ffn_size: int = 256, max_len: int = 256, \n",
    "                 num_classes: int = 2, dropout: float = 0.1):\n",
    "        super().__init__()\n",
    "        # 参数验证\n",
    "        if vocab_size <= 0 or hidden_size <= 0 or num_heads <= 0 or num_layers <= 0 or ffn_size <= 0:\n",
    "            raise ValueError(\"All size parameters must be positive\")\n",
    "        \n",
    "        # 词嵌入和位置嵌入\n",
    "        self.token_emb = nn.Embedding(vocab_size, hidden_size)\n",
    "        self.pos_emb = nn.Embedding(max_len, hidden_size)\n",
    "        self.dropout = nn.Dropout(dropout)\n",
    "\n",
    "        # 编码器层\n",
    "        self.layers = nn.ModuleList([\n",
    "            BertEncoderLayer(hidden_size, num_heads, ffn_size, dropout)\n",
    "            for _ in range(num_layers)\n",
    "        ])\n",
    "        \n",
    "        # 层归一化和分类器\n",
    "        self.norm = nn.LayerNorm(hidden_size)\n",
    "        self.classifier = nn.Linear(hidden_size, num_classes)\n",
    "\n",
    "    def forward(self, x: torch.Tensor, mask: torch.Tensor = None) -> torch.Tensor:\n",
    "        \"\"\"\n",
    "        前向传播\n",
    "        Args:\n",
    "            x: 输入张量，形状为 (batch_size, seq_len)\n",
    "            mask: 可选的注意力掩码，形状为 (batch_size, seq_len)\n",
    "        Returns:\n",
    "            输出张量，形状为 (batch_size, num_classes)\n",
    "        \"\"\"\n",
    "        B, L = x.size()\n",
    "        \n",
    "        # 生成位置编码\n",
    "        pos = torch.arange(L, device=x.device).unsqueeze(0).expand(B, L)\n",
    "        \n",
    "        # 词嵌入 + 位置嵌入 + dropout\n",
    "        x = self.token_emb(x) + self.pos_emb(pos)\n",
    "        x = self.dropout(x)\n",
    "\n",
    "        # 通过编码器层\n",
    "        for layer in self.layers:\n",
    "            x = layer(x, mask)\n",
    "        \n",
    "        # 层归一化并提取 [CLS] 向量\n",
    "        x = self.norm(x)\n",
    "        cls_token = x[:, 0, :]  # 取 [CLS] 向量\n",
    "        return self.classifier(cls_token)\n",
    "\n",
    "\n",
    "# ====== 数据预处理 ======\n",
    "def collate_fn(batch, tokenizer, max_len=256):\n",
    "    \"\"\"\n",
    "    数据批处理函数，用于动态填充和生成注意力掩码\n",
    "    Args:\n",
    "        batch: 数据集中的一个批次\n",
    "        tokenizer: HuggingFace 分词器\n",
    "        max_len: 最大序列长度\n",
    "    Returns:\n",
    "        input_ids: 输入 token IDs，形状为 (batch_size, max_len)\n",
    "        attention_mask: 注意力掩码，形状为 (batch_size, max_len)\n",
    "        labels: 标签，形状为 (batch_size,)\n",
    "    \"\"\"\n",
    "    texts = [b[\"text\"] for b in batch]\n",
    "    labels = [b[\"label\"] for b in batch]\n",
    "    # 动态填充到批次中最长序列长度，但不超过 max_len\n",
    "    enc = tokenizer(texts, truncation=True, padding=True, max_length=max_len, return_tensors=\"pt\")\n",
    "    return enc[\"input_ids\"], enc[\"attention_mask\"], torch.tensor(labels, dtype=torch.long)\n",
    "\n",
    "\n",
    "def main():\n",
    "    \"\"\"主函数，负责数据加载、模型训练、测试和推理示例\"\"\"\n",
    "    try:\n",
    "        # 加载数据集（IMDb 用于二分类）\n",
    "        logger.info(\"加载 IMDb 数据集\")\n",
    "        dataset = load_dataset(\"imdb\")\n",
    "\n",
    "        # 加载预训练分词器\n",
    "        logger.info(\"加载 BERT 分词器\")\n",
    "        tokenizer = AutoTokenizer.from_pretrained(\"bert-base-uncased\")\n",
    "\n",
    "        # 创建数据加载器\n",
    "        train_loader = DataLoader(\n",
    "            dataset[\"train\"],\n",
    "            batch_size=16,\n",
    "            shuffle=True,\n",
    "            collate_fn=lambda x: collate_fn(x, tokenizer, max_len=256),\n",
    "            num_workers=2,\n",
    "            pin_memory=True\n",
    "        )\n",
    "        test_loader = DataLoader(\n",
    "            dataset[\"test\"],\n",
    "            batch_size=16,\n",
    "            collate_fn=lambda x: collate_fn(x, tokenizer, max_len=256),\n",
    "            num_workers=2,\n",
    "            pin_memory=True\n",
    "        )\n",
    "\n",
    "        # 初始化模型\n",
    "        vocab_size = tokenizer.vocab_size\n",
    "        num_classes = 2  # IMDb 为二分类\n",
    "        model = MiniBert(\n",
    "            vocab_size=vocab_size,\n",
    "            hidden_size=128,\n",
    "            num_heads=4,\n",
    "            num_layers=2,\n",
    "            ffn_size=256,\n",
    "            max_len=256,\n",
    "            num_classes=num_classes,\n",
    "            dropout=0.1\n",
    "        )\n",
    "\n",
    "        # 选择设备\n",
    "        device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
    "        model.to(device)\n",
    "        logger.info(f\"使用设备: {device}\")\n",
    "\n",
    "        # 定义优化器和损失函数\n",
    "        optimizer = optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)\n",
    "        criterion = nn.CrossEntropyLoss()\n",
    "\n",
    "        # 训练循环\n",
    "        num_epochs = 2\n",
    "        for epoch in range(num_epochs):\n",
    "            model.train()\n",
    "            total_loss, total_acc = 0, 0\n",
    "            train_bar = tqdm(train_loader, desc=f\"Epoch {epoch+1}/{num_epochs}\", leave=False)\n",
    "            \n",
    "            for input_ids, attention_mask, labels in train_bar:\n",
    "                input_ids, attention_mask, labels = input_ids.to(device), attention_mask.to(device), labels.to(device)\n",
    "\n",
    "                # 前向传播\n",
    "                outputs = model(input_ids, attention_mask)\n",
    "                loss = criterion(outputs, labels)\n",
    "\n",
    "                # 反向传播\n",
    "                optimizer.zero_grad()\n",
    "                loss.backward()\n",
    "                # 梯度裁剪，防止梯度爆炸\n",
    "                torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)\n",
    "                optimizer.step()\n",
    "\n",
    "                # 计算准确率和损失\n",
    "                preds = outputs.argmax(dim=1)\n",
    "                total_acc += (preds == labels).float().sum().item()\n",
    "                total_loss += loss.item() * labels.size(0)\n",
    "                train_bar.set_postfix({\"loss\": loss.item(), \"acc\": (preds == labels).float().mean().item()})\n",
    "\n",
    "            # 打印训练结果\n",
    "            avg_loss = total_loss / len(dataset[\"train\"])\n",
    "            avg_acc = total_acc / len(dataset[\"train\"])\n",
    "            logger.info(f\"Epoch {epoch+1}/{num_epochs}, Train Loss: {avg_loss:.4f}, Train Acc: {avg_acc:.4f}\")\n",
    "\n",
    "        # 测试循环\n",
    "        model.eval()\n",
    "        correct = 0\n",
    "        total = 0\n",
    "        test_bar = tqdm(test_loader, desc=\"Testing\", leave=False)\n",
    "        \n",
    "        with torch.no_grad():\n",
    "            for input_ids, attention_mask, labels in test_bar:\n",
    "                input_ids, attention_mask, labels = input_ids.to(device), attention_mask.to(device), labels.to(device)\n",
    "                outputs = model(input_ids, attention_mask)\n",
    "                preds = outputs.argmax(dim=1)\n",
    "                correct += (preds == labels).float().sum().item()\n",
    "                total += labels.size(0)\n",
    "                test_bar.set_postfix({\"acc\": (preds == labels).float().mean().item()})\n",
    "\n",
    "        # 打印测试结果\n",
    "        test_acc = correct / total\n",
    "        logger.info(f\"Test Acc: {test_acc:.4f}\")\n",
    "\n",
    "        # 推理示例\n",
    "        logger.info(\"进行模型推理示例\")\n",
    "        examples = [\n",
    "            \"This movie was absolutely fantastic! I loved every moment of it.\",\n",
    "            \"What a terrible film. It was a complete waste of time.\",\n",
    "            \"The acting was superb, but the storyline was predictable and boring.\",\n",
    "            \"An instant classic with brilliant direction and stunning visuals.\"\n",
    "        ]\n",
    "        \n",
    "        class_names = [\"Negative\", \"Positive\"]  # IMDb 标签: 0=负面, 1=正面\n",
    "        \n",
    "        for text in examples:\n",
    "            # 分词并准备输入\n",
    "            enc = tokenizer(text, truncation=True, padding=\"max_length\", max_length=256, return_tensors=\"pt\")\n",
    "            input_ids = enc[\"input_ids\"].to(device)\n",
    "            attention_mask = enc[\"attention_mask\"].to(device)\n",
    "            \n",
    "            with torch.no_grad():\n",
    "                outputs = model(input_ids, attention_mask)\n",
    "                pred = outputs.argmax(dim=1).item()\n",
    "            \n",
    "            logger.info(f\"文本: '{text}'\")\n",
    "            logger.info(f\"预测: {class_names[pred]} (类别 {pred})\\n\")\n",
    "\n",
    "    except Exception as e:\n",
    "        logger.error(f\"发生错误: {str(e)}\")\n",
    "        raise\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "    main()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "406169cb-da67-4c13-acaa-6151677254bb",
   "metadata": {},
   "source": [
    "## GPT模型示例"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "91b77e2a-b65d-4665-b165-5e9b02cd7db4",
   "metadata": {},
   "source": [
    "### 极简GPT模型示例"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4e93db91-34fc-4182-a03e-03a549267d11",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "\n",
    "# ======================\n",
    "# 自注意力机制（带因果 Mask）\n",
    "# ======================\n",
    "class SelfAttention(nn.Module):\n",
    "    def __init__(self, embed_dim, num_heads, max_len=128):\n",
    "        super().__init__()\n",
    "        assert embed_dim % num_heads == 0, \"embed_dim 必须能被 num_heads 整除\"\n",
    "        self.num_heads = num_heads\n",
    "        self.head_dim = embed_dim // num_heads\n",
    "\n",
    "        # 一次性映射出 Q, K, V 三个矩阵\n",
    "        self.qkv = nn.Linear(embed_dim, 3 * embed_dim)\n",
    "        self.out = nn.Linear(embed_dim, embed_dim)\n",
    "\n",
    "        # 注册一个 buffer：下三角 mask (避免看未来)\n",
    "        mask = torch.tril(torch.ones(max_len, max_len))\n",
    "        self.register_buffer(\"mask\", mask)\n",
    "\n",
    "    def forward(self, x):\n",
    "        B, L, D = x.size()   # B: batch, L: 序列长度, D: embedding 维度\n",
    "        qkv = self.qkv(x)    # (B, L, 3D)\n",
    "        q, k, v = qkv.chunk(3, dim=-1)  # 分成三块 (B, L, D)\n",
    "\n",
    "        # 变换成多头 (B, h, L, d)\n",
    "        q = q.view(B, L, self.num_heads, self.head_dim).transpose(1, 2)\n",
    "        k = k.view(B, L, self.num_heads, self.head_dim).transpose(1, 2)\n",
    "        v = v.view(B, L, self.num_heads, self.head_dim).transpose(1, 2)\n",
    "\n",
    "        # 计算注意力分数\n",
    "        scores = q @ k.transpose(-2, -1) / (self.head_dim ** 0.5)\n",
    "\n",
    "        # 应用因果 mask，保证预测位置不能看到未来 token\n",
    "        mask = self.mask[:L, :L]  # 取前 L×L 的部分\n",
    "        scores = scores.masked_fill(mask == 0, float('-inf'))\n",
    "\n",
    "        # Softmax 权重\n",
    "        attn = F.softmax(scores, dim=-1)\n",
    "\n",
    "        # 加权求和\n",
    "        out = attn @ v  # (B, h, L, d)\n",
    "\n",
    "        # 拼回原维度\n",
    "        out = out.transpose(1, 2).contiguous().view(B, L, D)\n",
    "        return self.out(out)\n",
    "\n",
    "\n",
    "# ======================\n",
    "# 前馈网络（FFN）\n",
    "# ======================\n",
    "class FeedForward(nn.Module):\n",
    "    def __init__(self, embed_dim, hidden_dim):\n",
    "        super().__init__()\n",
    "        self.fc1 = nn.Linear(embed_dim, hidden_dim)\n",
    "        self.fc2 = nn.Linear(hidden_dim, embed_dim)\n",
    "\n",
    "    def forward(self, x):\n",
    "        return self.fc2(F.gelu(self.fc1(x)))\n",
    "\n",
    "\n",
    "# ======================\n",
    "# GPT Block: 自注意力 + FFN + 残差 + LayerNorm\n",
    "# ======================\n",
    "class GPTBlock(nn.Module):\n",
    "    def __init__(self, embed_dim, num_heads, hidden_dim, max_len=128):\n",
    "        super().__init__()\n",
    "        self.ln1 = nn.LayerNorm(embed_dim)\n",
    "        self.ln2 = nn.LayerNorm(embed_dim)\n",
    "        self.attn = SelfAttention(embed_dim, num_heads, max_len=max_len)\n",
    "        self.ff = FeedForward(embed_dim, hidden_dim)\n",
    "\n",
    "    def forward(self, x):\n",
    "        # 残差连接 + 注意力\n",
    "        x = x + self.attn(self.ln1(x))\n",
    "        # 残差连接 + 前馈网络\n",
    "        x = x + self.ff(self.ln2(x))\n",
    "        return x\n",
    "\n",
    "\n",
    "# ======================\n",
    "# MiniGPT 主体\n",
    "# ======================\n",
    "class MiniGPT(nn.Module):\n",
    "    def __init__(self, vocab_size, max_len=128, embed_dim=128, num_heads=4, num_layers=2, hidden_dim=256):\n",
    "        super().__init__()\n",
    "        self.token_emb = nn.Embedding(vocab_size, embed_dim)  # token 嵌入\n",
    "        self.pos_emb = nn.Embedding(max_len, embed_dim)       # 位置嵌入\n",
    "        self.blocks = nn.ModuleList([\n",
    "            GPTBlock(embed_dim, num_heads, hidden_dim, max_len=max_len)\n",
    "            for _ in range(num_layers)\n",
    "        ])\n",
    "        self.ln = nn.LayerNorm(embed_dim)\n",
    "        self.fc_out = nn.Linear(embed_dim, vocab_size)\n",
    "\n",
    "    def forward(self, x):\n",
    "        B, L = x.shape\n",
    "        pos = torch.arange(L, device=x.device).unsqueeze(0)  # (1, L)\n",
    "        x = self.token_emb(x) + self.pos_emb(pos)\n",
    "        for block in self.blocks:\n",
    "            x = block(x)\n",
    "        x = self.ln(x)\n",
    "        return self.fc_out(x)\n",
    "\n",
    "\n",
    "# ======================\n",
    "# 测试：字符级小数据集\n",
    "# ======================\n",
    "if __name__ == \"__main__\":\n",
    "    text = \"hello world! this is a tiny gpt demo.\"\n",
    "    vocab = sorted(list(set(text)))\n",
    "    stoi = {ch: i for i, ch in enumerate(vocab)}\n",
    "    itos = {i: ch for ch, i in stoi.items()}\n",
    "\n",
    "    # 编码 & 解码函数\n",
    "    def encode(s): return [stoi[ch] for ch in s]\n",
    "    def decode(ids): return ''.join([itos[i] for i in ids])\n",
    "\n",
    "    data = torch.tensor(encode(text), dtype=torch.long)\n",
    "    vocab_size = len(vocab)\n",
    "\n",
    "    # 定义模型\n",
    "    model = MiniGPT(vocab_size, max_len=64, embed_dim=128, num_heads=4, num_layers=2)\n",
    "    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)\n",
    "\n",
    "    # 简单训练几个step\n",
    "    for step in range(200):\n",
    "        idx = torch.randint(0, len(data) - 10, (1,)).item()  # 随机采样起始位置\n",
    "        x = data[idx:idx+8].unsqueeze(0)   # 输入序列\n",
    "        y = data[idx+1:idx+9].unsqueeze(0) # 目标序列（右移一位）\n",
    "\n",
    "        logits = model(x)  # (B, L, vocab_size)\n",
    "        loss = F.cross_entropy(logits.view(-1, vocab_size), y.view(-1))\n",
    "\n",
    "        optimizer.zero_grad()\n",
    "        loss.backward()\n",
    "        optimizer.step()\n",
    "\n",
    "        if step % 50 == 0:\n",
    "            print(f\"Step {step}, Loss {loss.item():.4f}\")\n",
    "\n",
    "    # 生成文本 (随机采样，而不是贪心)\n",
    "    context = torch.tensor([stoi[\"h\"]], dtype=torch.long).unsqueeze(0)\n",
    "    model.eval()\n",
    "    with torch.no_grad():\n",
    "        for _ in range(50):\n",
    "            logits = model(context)\n",
    "            probs = F.softmax(logits[0, -1], dim=-1)\n",
    "            next_id = torch.multinomial(probs, num_samples=1).unsqueeze(0)  # 采样而非贪心\n",
    "            context = torch.cat([context, next_id], dim=1)\n",
    "\n",
    "    print(\"Generated:\", decode(context[0].tolist()))"
   ]
  },
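  {
   "cell_type": "markdown",
   "id": "7c2e9a14-3b5d-4f68-9a0c-d1e2f3a4b5c6",
   "metadata": {},
   "source": [
    "上面 SelfAttention 中的因果 mask 可以单独打印出来观察（示意片段，与上面实现中的 torch.tril 一致）：\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "mask = torch.tril(torch.ones(4, 4))  # 下三角矩阵：第 i 行给出位置 i 的可见范围\n",
    "print(mask)\n",
    "# tensor([[1., 0., 0., 0.],\n",
    "#         [1., 1., 0., 0.],\n",
    "#         [1., 1., 1., 0.],\n",
    "#         [1., 1., 1., 1.]])\n",
    "# 位置 i 只能看到 j <= i 的 token；mask == 0 处的注意力分数被置为 -inf\n",
    "```"
   ]
  },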
  {
   "cell_type": "markdown",
   "id": "63e8f49b-ad44-4b88-9fb8-20f59ed1d378",
   "metadata": {},
   "source": [
    "### 真实数据集上训练GPT模型\n",
    "\n",
    "**代码整体功能**\n",
    "\n",
    "下面这段代码基于 PyTorch 实现了一个简化版的 GPT（生成式预训练 Transformer）模型，代码中称其为 MiniGPT。它采用标准的 Transformer 解码器结构，能够在 WikiText 数据集上进行语言建模训练，并支持自回归文本生成。\n",
    "\n",
    "主要功能包括：\n",
    "- 模型训练：在 WikiText 数据集上训练 MiniGPT 模型，通过语言建模任务预测下一个 token。\n",
    "- 文本生成：基于给定的提示（prompt）生成后续文本，采用自回归方式。\n",
    "- 模块化设计：代码包含数据预处理、模型架构、训练逻辑和文本生成等模块，具有良好的可扩展性。\n",
    "\n",
    "代码的核心是一个模仿 GPT-2 结构的小型类 GPT 模型，但规模小得多，仅适合学习和实验。"
   ]
  },
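  {
   "cell_type": "markdown",
   "id": "f3a1c2d4-5e6b-4a7c-8d9e-0f1a2b3c4d5e",
   "metadata": {},
   "source": [
    "在阅读实现之前，可以用一个极小的示意例子说明“预测下一个 token”的训练目标：目标序列就是输入序列右移一位（仅为概念演示，非下方代码的一部分）：\n",
    "\n",
    "```python\n",
    "tokens = [10, 11, 12, 13]                  # 一段 token ID 序列\n",
    "inputs, targets = tokens[:-1], tokens[1:]  # 对应训练函数中的 batch[:, :-1] 与 batch[:, 1:]\n",
    "# inputs  = [10, 11, 12]：模型在每个位置读入当前 token\n",
    "# targets = [11, 12, 13]：并学习预测紧随其后的 token\n",
    "```"
   ]
  },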
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "515d3d65-6d0a-4114-a7e8-b46d546cf85d",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.optim as optim\n",
    "from torch.utils.data import DataLoader, Dataset\n",
    "from datasets import load_dataset\n",
    "from transformers import GPT2Tokenizer\n",
    "import math\n",
    "from tqdm import tqdm  # 用于显示训练进度条\n",
    "\n",
    "# ===============================\n",
    "# MiniGPT 配置类\n",
    "# ===============================\n",
    "class Config:\n",
    "    vocab_size = 50257    # 词汇表大小，与 GPT-2 分词器一致\n",
    "    n_embd = 256          # 嵌入向量的维度\n",
    "    n_head = 8            # 多头注意力机制中的注意力头数量\n",
    "    n_layer = 6           # Transformer 层数\n",
    "    max_seq_len = 128     # 输入序列的最大长度\n",
    "    dropout = 0.1         # Dropout 正则化比率，用于防止过拟合\n",
    "\n",
    "\n",
    "# ===============================\n",
    "# 多头自注意力机制\n",
    "# ===============================\n",
    "class MultiHeadSelfAttention(nn.Module):\n",
    "    def __init__(self, config):\n",
    "        super().__init__()\n",
    "        # 确保嵌入维度能被注意力头数整除\n",
    "        assert config.n_embd % config.n_head == 0\n",
    "        self.n_head = config.n_head              # 注意力头数\n",
    "        self.n_embd = config.n_embd              # 嵌入维度\n",
    "        self.head_size = config.n_embd // config.n_head  # 每个注意力头的维度\n",
    "\n",
    "        # 线性变换层，用于生成查询（query）、键（key）和值（value）\n",
    "        self.query = nn.Linear(config.n_embd, config.n_embd)\n",
    "        self.key = nn.Linear(config.n_embd, config.n_embd)\n",
    "        self.value = nn.Linear(config.n_embd, config.n_embd)\n",
    "        self.dropout = nn.Dropout(config.dropout)  # 注意力权重的 Dropout\n",
    "        self.out = nn.Linear(config.n_embd, config.n_embd)  # 输出线性层\n",
    "\n",
    "    def forward(self, x):\n",
    "        B, T, C = x.size()  # B: 批次大小, T: 序列长度, C: 嵌入维度\n",
    "        # 计算查询、键、值，并将其重塑为多头格式\n",
    "        q = self.query(x).reshape(B, T, self.n_head, self.head_size).transpose(1, 2)  # (B, nh, T, hs)\n",
    "        k = self.key(x).reshape(B, T, self.n_head, self.head_size).transpose(1, 2)    # (B, nh, T, hs)\n",
    "        v = self.value(x).reshape(B, T, self.n_head, self.head_size).transpose(1, 2)  # (B, nh, T, hs)\n",
    "\n",
    "        # 计算注意力分数（点积注意力机制）\n",
    "        scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(self.head_size)  # (B, nh, T, T)\n",
    "\n",
    "        # 应用因果掩码，确保模型只能看到当前和之前的 token（解码器特性）\n",
    "        mask = torch.triu(torch.ones(T, T, device=x.device), diagonal=1).bool()\n",
    "        scores = scores.masked_fill(mask.unsqueeze(0).unsqueeze(0), float('-inf'))\n",
    "\n",
    "        # 对注意力分数进行 softmax 归一化，得到注意力权重\n",
    "        attn = torch.softmax(scores, dim=-1)\n",
    "        attn = self.dropout(attn)  # 应用 Dropout 防止过拟合\n",
    "\n",
    "        # 使用注意力权重对值（value）进行加权求和\n",
    "        y = torch.matmul(attn, v)  # (B, nh, T, hs)\n",
    "        # 将多头结果重塑并拼接为原始维度\n",
    "        y = y.transpose(1, 2).contiguous().reshape(B, T, C)  # (B, T, C)\n",
    "        # 最后通过输出线性层\n",
    "        return self.out(y)\n",
    "\n",
    "\n",
    "# ===============================\n",
    "# Transformer 块\n",
    "# ===============================\n",
    "class TransformerBlock(nn.Module):\n",
    "    def __init__(self, config):\n",
    "        super().__init__()\n",
    "        self.attn = MultiHeadSelfAttention(config)  # 多头自注意力子层\n",
    "        self.ln1 = nn.LayerNorm(config.n_embd)     # 第一个层归一化\n",
    "        # 前馈神经网络（MLP），包括两层线性变换和 GELU 激活函数\n",
    "        self.mlp = nn.Sequential(\n",
    "            nn.Linear(config.n_embd, 4 * config.n_embd),  # 扩展维度\n",
    "            nn.GELU(),                                    # GELU 激活函数\n",
    "            nn.Linear(4 * config.n_embd, config.n_embd),  # 还原维度\n",
    "            nn.Dropout(config.dropout)                    # Dropout 正则化\n",
    "        )\n",
    "        self.ln2 = nn.LayerNorm(config.n_embd)        # 第二个层归一化\n",
    "\n",
    "    def forward(self, x):\n",
    "        # 残差连接 + 层归一化 + 自注意力\n",
    "        x = x + self.attn(self.ln1(x))\n",
    "        # 残差连接 + 层归一化 + 前馈网络\n",
    "        x = x + self.mlp(self.ln2(x))\n",
    "        return x\n",
    "\n",
    "\n",
    "# ===============================\n",
    "# MiniGPT 模型\n",
    "# ===============================\n",
    "class MiniGPT(nn.Module):\n",
    "    def __init__(self, config):\n",
    "        super().__init__()\n",
    "        self.config = config\n",
    "        # 词嵌入层，将 token ID 转换为嵌入向量\n",
    "        self.token_embedding = nn.Embedding(config.vocab_size, config.n_embd)\n",
    "        # 位置嵌入层，记录 token 在序列中的位置信息\n",
    "        self.position_embedding = nn.Embedding(config.max_seq_len, config.n_embd)\n",
    "        # 堆叠多个 Transformer 块\n",
    "        self.blocks = nn.ModuleList([TransformerBlock(config) for _ in range(config.n_layer)])\n",
    "        self.ln_f = nn.LayerNorm(config.n_embd)  # 最后的层归一化\n",
    "        self.head = nn.Linear(config.n_embd, config.vocab_size, bias=False)  # 输出层，预测词汇表中的 token\n",
    "        self.dropout = nn.Dropout(config.dropout)  # 输入嵌入的 Dropout\n",
    "\n",
    "    def forward(self, idx):\n",
    "        B, T = idx.size()  # B: 批次大小, T: 序列长度\n",
    "        # 获取 token 嵌入\n",
    "        tok_emb = self.token_embedding(idx)  # (B, T, C)\n",
    "        # 获取位置嵌入，生成 0 到 T-1 的位置索引\n",
    "        pos_emb = self.position_embedding(torch.arange(T, device=idx.device)).unsqueeze(0)  # (1, T, C)\n",
    "        # 合并 token 嵌入和位置嵌入，并应用 Dropout\n",
    "        x = self.dropout(tok_emb + pos_emb)\n",
    "        # 依次通过所有 Transformer 块\n",
    "        for block in self.blocks:\n",
    "            x = block(x)\n",
    "        # 最后的层归一化\n",
    "        x = self.ln_f(x)\n",
    "        # 输出预测 logits\n",
    "        logits = self.head(x)  # (B, T, vocab_size)\n",
    "        return logits\n",
    "\n",
    "    @torch.no_grad()\n",
    "    def generate(self, idx, max_new_tokens):\n",
    "        # 自回归生成文本\n",
    "        for _ in range(max_new_tokens):\n",
    "            # 截取最后 max_seq_len 个 token 以避免超出序列长度限制\n",
    "            idx_cond = idx[:, -self.config.max_seq_len:]\n",
    "            # 前向传播获取 logits\n",
    "            logits = self(idx_cond)\n",
    "            # 取最后一个时间步的 logits\n",
    "            logits = logits[:, -1, :]  # (B, vocab_size)\n",
    "            # 转换为概率分布\n",
    "            probs = torch.softmax(logits, dim=-1)\n",
    "            # 从概率分布中采样下一个 token\n",
    "            idx_next = torch.multinomial(probs, num_samples=1)\n",
    "            # 将新 token 拼接到序列中\n",
    "            idx = torch.cat((idx, idx_next), dim=1)\n",
    "        return idx\n",
    "\n",
    "\n",
    "# ===============================\n",
    "# 自定义数据集类（基于 WikiText）\n",
    "# ===============================\n",
    "class WikiTextDataset(Dataset):\n",
    "    def __init__(self, data, tokenizer, max_seq_len):\n",
    "        self.tokenizer = tokenizer\n",
    "        self.max_seq_len = max_seq_len\n",
    "        # 将数据集中的所有文本拼接成一个长字符串\n",
    "        all_text = \" \".join([x[\"text\"] for x in data if x[\"text\"].strip()])\n",
    "        # 使用分词器将文本编码为 token ID\n",
    "        tokens = tokenizer.encode(all_text, add_special_tokens=False)\n",
    "        # 将 token 序列切分为固定长度的片段\n",
    "        self.examples = []\n",
    "        for i in range(0, len(tokens) - max_seq_len, max_seq_len):\n",
    "            chunk = tokens[i:i+max_seq_len]\n",
    "            self.examples.append(chunk)\n",
    "\n",
    "    def __len__(self):\n",
    "        # 返回数据集中的样本数量\n",
    "        return len(self.examples)\n",
    "\n",
    "    def __getitem__(self, idx):\n",
    "        # 返回指定索引的样本，并转换为张量\n",
    "        return torch.tensor(self.examples[idx], dtype=torch.long)\n",
    "\n",
    "\n",
    "# ===============================\n",
    "# 训练函数\n",
    "# ===============================\n",
    "def train_model(model, train_loader, optimizer, device, tokenizer, epochs=1):\n",
    "    model.train()  # 设置模型为训练模式\n",
    "    # 使用交叉熵损失函数，忽略填充 token 的损失\n",
    "    criterion = nn.CrossEntropyLoss(ignore_index=tokenizer.pad_token_id)\n",
    "\n",
    "    for epoch in range(epochs):\n",
    "        total_loss = 0\n",
    "        # 使用 tqdm 显示训练进度条\n",
    "        progress_bar = tqdm(train_loader, desc=f\"Epoch {epoch+1}\")\n",
    "        for batch in progress_bar:\n",
    "            batch = batch.to(device)  # 将批次数据移动到指定设备（CPU/GPU）\n",
    "            optimizer.zero_grad()     # 清空梯度\n",
    "            # 输入序列去掉最后一个 token，预测下一个 token\n",
    "            logits = model(batch[:, :-1])  # (B, T-1, vocab_size)\n",
    "            # 计算损失，目标是右移一位的序列\n",
    "            loss = criterion(\n",
    "                logits.reshape(-1, model.config.vocab_size),\n",
    "                batch[:, 1:].reshape(-1)\n",
    "            )\n",
    "            loss.backward()  # 反向传播\n",
    "            optimizer.step()  # 更新参数\n",
    "            total_loss += loss.item()\n",
    "            # 更新进度条显示当前损失\n",
    "            progress_bar.set_postfix(loss=loss.item())\n",
    "        # 打印每个 epoch 的平均损失\n",
    "        print(f\"Epoch {epoch + 1}, Avg Loss: {total_loss / len(train_loader):.4f}\")\n",
    "\n",
    "\n",
    "# ===============================\n",
    "# 主程序\n",
    "# ===============================\n",
    "def main():\n",
    "    # 加载 WikiText 数据集（训练集）\n",
    "    dataset = load_dataset(\"wikitext\", \"wikitext-2-raw-v1\", split=\"train\")\n",
    "    # 加载 GPT-2 分词器\n",
    "    tokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")\n",
    "    tokenizer.pad_token = tokenizer.eos_token  # 设置填充 token 为结束 token\n",
    "\n",
    "    # 创建自定义数据集和数据加载器\n",
    "    train_dataset = WikiTextDataset(dataset, tokenizer, Config.max_seq_len)\n",
    "    train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)\n",
    "\n",
    "    # 初始化模型并移动到设备\n",
    "    device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
    "    model = MiniGPT(Config()).to(device)\n",
    "    # 使用 AdamW 优化器，设置学习率和权重衰减\n",
    "    optimizer = optim.AdamW(model.parameters(), lr=3e-4, weight_decay=0.01)\n",
    "\n",
    "    # 训练模型\n",
    "    train_model(model, train_loader, optimizer, device, tokenizer, epochs=3)\n",
    "\n",
    "    # 使用模型生成文本\n",
    "    model.eval()  # 设置模型为评估模式\n",
    "    prompt = \"The history of artificial intelligence\"  # 提示文本\n",
    "    # 将提示文本编码为 token ID\n",
    "    input_ids = torch.tensor([tokenizer.encode(prompt)], dtype=torch.long).to(device)\n",
    "    # 生成最多 50 个新 token\n",
    "    generated = model.generate(input_ids, max_new_tokens=50)\n",
    "    # 解码生成的 token 为文本并打印\n",
    "    print(\"\\n=== 生成结果 ===\")\n",
    "    print(tokenizer.decode(generated[0], skip_special_tokens=True))\n",
    "\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "    main()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2f102b9c-bc82-4995-8846-ec412e2aee7a",
   "metadata": {},
   "source": [
    "### GPT的中文模型\n",
    "\n",
    "基于中文字符集训练一个GPT风格的中文模型。\n",
    "\n",
    "**获取数据集**\n",
    "\n",
    "CLUECorpusSmall是CLUECorpus2020的子集，约14GB，包含50亿中文字符，分为以下部分：\n",
    "\n",
    "- 新闻语料（news2016zh_corpus，8GB，2000 个文件，密码：mzlk）\n",
    "- 社区互动语料（webText2019zh_corpus，3GB，900 多个文件，密码：qvlq）\n",
    "- 维基百科语料（wiki2019zh_corpus，1.1GB，300 个文件，密码：xv7e）\n",
    "- 评论语料（comments2019zh_corpus，2.3GB，784 个文件，密码：gc3m）\n",
    "\n",
    "下载地址为 https://github.com/CLUEbenchmark/CLUECorpus2020 ，使用前需要手动下载，并将下载后的文件解压至 ./cluecorpussmall/ 目录，也可以修改代码中的data_dir变量为实际解压的路径。\n",
    "\n",
    "数据格式：每个文件不超过4MB，每行一句，文档间以空行分隔，适合语言建模任务。\n",
    "\n",
    "**数据预处理：**\n",
    "\n",
    "CLUECorpusSmallDataset类使用 os.walk 遍历 data_dir 中的所有文件，逐个编码为token ID，并切分为max_seq_len=128的序列。\n",
    "\n",
    "另外，为避免内存溢出，编码时设置truncation=True和max_length=max_seq_len * 2，以确保不会因单文件过长导致问题。\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "163ca998-af37-441f-ae9f-bebdac4d9b11",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.optim as optim\n",
    "from torch.utils.data import DataLoader, Dataset\n",
    "from transformers import BertTokenizer\n",
    "import math\n",
    "from tqdm import tqdm\n",
    "import os\n",
    "\n",
    "# ===============================\n",
    "# MiniGPT 配置类\n",
    "# ===============================\n",
    "class Config:\n",
    "    vocab_size = 21128    # 词汇表大小，与 bert-base-chinese 分词器一致\n",
    "    n_embd = 256          # 嵌入向量的维度\n",
    "    n_head = 8            # 多头注意力机制中的注意力头数量\n",
    "    n_layer = 6           # Transformer 层数\n",
    "    max_seq_len = 128     # 输入序列的最大长度\n",
    "    dropout = 0.1         # Dropout 正则化比率，用于防止过拟合\n",
    "\n",
    "\n",
    "# ===============================\n",
    "# 多头自注意力机制\n",
    "# ===============================\n",
    "class MultiHeadSelfAttention(nn.Module):\n",
    "    def __init__(self, config):\n",
    "        super().__init__()\n",
    "        assert config.n_embd % config.n_head == 0\n",
    "        self.n_head = config.n_head\n",
    "        self.n_embd = config.n_embd\n",
    "        self.head_size = config.n_embd // config.n_head\n",
    "\n",
    "        self.query = nn.Linear(config.n_embd, config.n_embd)\n",
    "        self.key = nn.Linear(config.n_embd, config.n_embd)\n",
    "        self.value = nn.Linear(config.n_embd, config.n_embd)\n",
    "        self.dropout = nn.Dropout(config.dropout)\n",
    "        self.out = nn.Linear(config.n_embd, config.n_embd)\n",
    "\n",
    "    def forward(self, x):\n",
    "        B, T, C = x.size()\n",
    "        q = self.query(x).reshape(B, T, self.n_head, self.head_size).transpose(1, 2)\n",
    "        k = self.key(x).reshape(B, T, self.n_head, self.head_size).transpose(1, 2)\n",
    "        v = self.value(x).reshape(B, T, self.n_head, self.head_size).transpose(1, 2)\n",
    "\n",
    "        scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(self.head_size)\n",
    "        mask = torch.triu(torch.ones(T, T, device=x.device), diagonal=1).bool()\n",
    "        scores = scores.masked_fill(mask.unsqueeze(0).unsqueeze(0), float('-inf'))\n",
    "        attn = torch.softmax(scores, dim=-1)\n",
    "        attn = self.dropout(attn)\n",
    "        y = torch.matmul(attn, v)\n",
    "        y = y.transpose(1, 2).contiguous().reshape(B, T, C)\n",
    "        return self.out(y)\n",
    "\n",
    "\n",
    "# ===============================\n",
    "# Transformer 块\n",
    "# ===============================\n",
    "class TransformerBlock(nn.Module):\n",
    "    def __init__(self, config):\n",
    "        super().__init__()\n",
    "        self.attn = MultiHeadSelfAttention(config)\n",
    "        self.ln1 = nn.LayerNorm(config.n_embd)\n",
    "        self.mlp = nn.Sequential(\n",
    "            nn.Linear(config.n_embd, 4 * config.n_embd),\n",
    "            nn.GELU(),\n",
    "            nn.Linear(4 * config.n_embd, config.n_embd),\n",
    "            nn.Dropout(config.dropout)\n",
    "        )\n",
    "        self.ln2 = nn.LayerNorm(config.n_embd)\n",
    "\n",
    "    def forward(self, x):\n",
    "        x = x + self.attn(self.ln1(x))\n",
    "        x = x + self.mlp(self.ln2(x))\n",
    "        return x\n",
    "\n",
    "\n",
    "# ===============================\n",
    "# MiniGPT 模型\n",
    "# ===============================\n",
    "class MiniGPT(nn.Module):\n",
    "    def __init__(self, config):\n",
    "        super().__init__()\n",
    "        self.config = config\n",
    "        self.token_embedding = nn.Embedding(config.vocab_size, config.n_embd)\n",
    "        self.position_embedding = nn.Embedding(config.max_seq_len, config.n_embd)\n",
    "        self.blocks = nn.ModuleList([TransformerBlock(config) for _ in range(config.n_layer)])\n",
    "        self.ln_f = nn.LayerNorm(config.n_embd)\n",
    "        self.head = nn.Linear(config.n_embd, config.vocab_size, bias=False)\n",
    "        self.dropout = nn.Dropout(config.dropout)\n",
    "\n",
    "    def forward(self, idx):\n",
    "        B, T = idx.size()\n",
    "        tok_emb = self.token_embedding(idx)\n",
    "        pos_emb = self.position_embedding(torch.arange(T, device=idx.device)).unsqueeze(0)\n",
    "        x = self.dropout(tok_emb + pos_emb)\n",
    "        for block in self.blocks:\n",
    "            x = block(x)\n",
    "        x = self.ln_f(x)\n",
    "        logits = self.head(x)\n",
    "        return logits\n",
    "\n",
    "    @torch.no_grad()\n",
    "    def generate(self, idx, max_new_tokens):\n",
    "        if idx.size(1) == 0:\n",
    "            raise ValueError(\"输入的 input_ids 为空，无法生成文本\")\n",
    "        for _ in range(max_new_tokens):\n",
    "            idx_cond = idx[:, -self.config.max_seq_len:]\n",
    "            logits = self(idx_cond)\n",
    "            if logits.size(1) == 0:\n",
    "                raise ValueError(\"模型输出 logits 为空，可能未正确训练\")\n",
    "            logits = logits[:, -1, :]\n",
    "            probs = torch.softmax(logits, dim=-1)\n",
    "            if probs.size(1) != self.config.vocab_size:\n",
    "                raise ValueError(f\"概率分布形状错误，预期 ({self.config.vocab_size})，实际 ({probs.size(1)})\")\n",
    "            idx_next = torch.multinomial(probs, num_samples=1)\n",
    "            idx = torch.cat((idx, idx_next), dim=1)\n",
    "        return idx\n",
    "\n",
    "\n",
    "# ===============================\n",
    "# 自定义数据集类（基于 CLUECorpusSmall，支持无扩展名文件）\n",
    "# ===============================\n",
    "class CLUECorpusSmallDataset(Dataset):\n",
    "    def __init__(self, data_dir, tokenizer, max_seq_len):\n",
    "        self.tokenizer = tokenizer\n",
    "        self.max_seq_len = max_seq_len\n",
    "        self.examples = []\n",
    "        \n",
    "        # 使用绝对路径确保 JupyterLab 环境正确解析\n",
    "        data_dir = os.path.abspath(data_dir)\n",
    "        print(f\"正在加载CLUECorpusSmall数据集，路径：{data_dir}\")\n",
    "        \n",
    "        # 递归加载所有文件\n",
    "        file_count = 0\n",
    "        for root, dirs, files in os.walk(data_dir):\n",
    "            for filename in files:\n",
    "                # 接受所有文件\n",
    "                file_path = os.path.join(root, filename)\n",
    "                try:\n",
    "                    with open(file_path, 'r', encoding='utf-8', errors='ignore') as f:\n",
    "                        text = f.read()\n",
    "                        # 清理空行和多余空格\n",
    "                        text = ' '.join(line.strip() for line in text.splitlines() if line.strip())\n",
    "                        if not text:\n",
    "                            print(f\"警告：文件 {file_path} 为空，跳过\")\n",
    "                            continue\n",
    "                        # 使用分词器编码文本\n",
    "                        tokens = tokenizer.encode(text, add_special_tokens=False, truncation=True, max_length=max_seq_len * 2)\n",
    "                        if len(tokens) < max_seq_len:\n",
    "                            print(f\"警告：文件 {file_path} 的token数量 ({len(tokens)}) 小于max_seq_len ({max_seq_len})，跳过\")\n",
    "                            continue\n",
    "                        # 切分为固定长度的序列\n",
    "                        for i in range(0, len(tokens) - max_seq_len, max_seq_len):\n",
    "                            chunk = tokens[i:i + max_seq_len]\n",
    "                            self.examples.append(chunk)\n",
    "                        print(f\"处理文件 {file_path}，生成 {len(tokens)//max_seq_len} 个样本\")\n",
    "                        file_count += 1\n",
    "                except Exception as e:\n",
    "                    print(f\"警告：无法读取文件 {file_path}，错误：{e}\")\n",
    "        print(f\"共处理 {file_count} 个文件，生成 {len(self.examples)} 个样本\")\n",
    "        if not self.examples:\n",
    "            raise ValueError(f\"未生成任何样本，请检查数据目录 {data_dir} 是否包含有效的文件\")\n",
    "\n",
    "    def __len__(self):\n",
    "        return len(self.examples)\n",
    "\n",
    "    def __getitem__(self, idx):\n",
    "        return torch.tensor(self.examples[idx], dtype=torch.long)\n",
    "\n",
    "\n",
    "# ===============================\n",
    "# 训练函数\n",
    "# ===============================\n",
    "def train_model(model, train_loader, optimizer, device, tokenizer, epochs=1):\n",
    "    model.train()\n",
    "    criterion = nn.CrossEntropyLoss(ignore_index=tokenizer.pad_token_id)\n",
    "\n",
    "    for epoch in range(epochs):\n",
    "        total_loss = 0\n",
    "        progress_bar = tqdm(train_loader, desc=f\"Epoch {epoch+1}\")\n",
    "        for batch in progress_bar:\n",
    "            batch = batch.to(device)\n",
    "            optimizer.zero_grad()\n",
    "            logits = model(batch[:, :-1])\n",
    "            loss = criterion(\n",
    "                logits.reshape(-1, model.config.vocab_size),\n",
    "                batch[:, 1:].reshape(-1)\n",
    "            )\n",
    "            loss.backward()\n",
    "            optimizer.step()\n",
    "            total_loss += loss.item()\n",
    "            progress_bar.set_postfix(loss=loss.item())\n",
    "        print(f\"Epoch {epoch + 1}, Avg Loss: {total_loss / len(train_loader):.4f}\")\n",
    "\n",
    "\n",
    "# ===============================\n",
    "# 主程序\n",
    "# ===============================\n",
    "def main():\n",
    "    # 加载bert-base-chinese分词器\n",
    "    tokenizer = BertTokenizer.from_pretrained(\"bert-base-chinese\")\n",
    "    tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token\n",
    "    print(\"分词器加载完成\")\n",
    "\n",
    "    # 加载 CLUECorpusSmall 数据集\n",
    "    data_dir = \"./cluecorpussmall/\"\n",
    "    try:\n",
    "        train_dataset = CLUECorpusSmallDataset(data_dir, tokenizer, Config.max_seq_len)\n",
    "    except ValueError as e:\n",
    "        print(f\"错误：{e}\")\n",
    "        return\n",
    "    print(f\"数据集大小：{len(train_dataset)} 个样本\")\n",
    "    if len(train_dataset) == 0:\n",
    "        print(\"错误：数据集为空，无法进行训练，请检查./cluecorpussmall/下是否存在有效的文件\")\n",
    "        return\n",
    "    train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)\n",
    "    print(f\"数据加载器大小：{len(train_loader)} 个批次\")\n",
    "\n",
    "    # 初始化模型并移动到设备\n",
    "    device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
    "    print(f\"使用设备：{device}\")\n",
    "    model = MiniGPT(Config()).to(device)\n",
    "    optimizer = optim.AdamW(model.parameters(), lr=3e-4, weight_decay=0.01)\n",
    "\n",
    "    # 训练模型\n",
    "    try:\n",
    "        train_model(model, train_loader, optimizer, device, tokenizer, epochs=3)\n",
    "    except Exception as e:\n",
    "        print(f\"训练过程中出错：{e}\")\n",
    "        return\n",
    "\n",
    "    # 使用模型生成文本\n",
    "    model.eval()\n",
    "    prompt = \"人工智能的发展历史\"\n",
    "    try:\n",
    "        input_ids = torch.tensor([tokenizer.encode(prompt, add_special_tokens=False)], dtype=torch.long).to(device)\n",
    "        print(f\"输入 prompt 的 token 数量：{input_ids.size(1)}\")\n",
    "        if input_ids.size(1) == 0:\n",
    "            print(\"错误：输入prompt编码为空，请检查prompt或分词器\")\n",
    "            return\n",
    "        generated = model.generate(input_ids, max_new_tokens=50)\n",
    "        print(\"\\n=== 生成结果 ===\")\n",
    "        print(tokenizer.decode(generated[0], skip_special_tokens=True))\n",
    "    except Exception as e:\n",
    "        print(f\"生成文本时出错：{e}\")\n",
    "\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "    main()"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
