{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "# A Guide to Multi-Head Self-Attention\n",
    "\n",
    "## 1. Introduction\n",
    "Multi-head attention is a key technique in deep learning, widely used in natural language processing (NLP), computer vision, and other fields. As the core building block of the Transformer, it effectively captures relationships between different positions in an input sequence. Because the full Transformer is too large a topic for one article, this guide covers only the basic principles and applications of multi-head attention.\n",
    "## 2. Background\n",
    "\"A world in a flower, a universe in a tree.\" Genius sees a whole world through a single crack. This story takes place in 2020, at Tencent's 衍天 lab, where a researcher guided me through implementing sliding-window attention, combined with an LSTM, for NER on long documents.\n",
    "BERT was the mainstream model at the time. When my advisor told me to use an LSTM with sliding-window attention, I felt the LSTM was hopelessly outdated and assumed he only half understood the problem and probably could not write code well (and in fairness, his coding ability may indeed have been average).\n",
    "His actual reasoning was this: using BERT would have required a massive set of ablation experiments. Academic work needs ablations to demonstrate that the module you propose is effective; merely calling someone else's algorithm, however well BERT performs, is their contribution, not yours, and cannot support your own academic argument.\n",
    "It was an unpleasant experience, yet many people need until the second or third year of a PhD, or even their whole graduate studies, to grasp this simple idea. There is a second cautionary tale from that period, which I record here as a warning to myself about a failed stretch of graduate life. While implementing sliding-window attention over multi-turn dialogue, I wrote the cross-context attention with two nested for loops, which increased the training cost by dozens of times; I occupied a V100 GPU for three weeks without training ever finishing, while renting one cost around seven to eight thousand RMB a month. My lead asked me three times to audit the code, but I was young, overconfident, and promised everything. The episode wasted company resources, delayed a project racing toward a paper deadline (the group had already done most of the work around the core idea), and cost me the internship. Recalling it alongside the recent ByteDance model-poisoning case: you may not make many friends at work, but I keep coming back to a line from my onboarding training at Inspur, that colleagues very rarely criticize you out of malice. The lesson: the workplace is not school; do not casually promise things, and do not treat important work as a place for trial and error.\n",
    "Finally, this is a question asked in almost every big-tech interview, so it is worth understanding properly: what \"multi-head\", \"self\", and \"attention\" each mean, and why the inputs are called Q, K, and V.\n",
    "## 3. Basic Concepts\n",
    "The core idea of attention is to dynamically weight the elements of an input sequence according to their importance. Multi-head attention computes several attention heads in parallel, extracting information from different subspaces and thereby increasing the model's expressive power.\n",
    "\n",
    "\n",
    "\n",
    "## 4. How It Works\n",
    "The multi-head attention workflow is as follows:\n",
    "\n",
    "1. **Input representation**:\n",
    "   The input sequence is converted into vector representations by an embedding layer.\n",
    "\n",
    "2. **Linear projections**:\n",
    "   The input vectors pass through three different linear projections to produce the queries (Query), keys (Key), and values (Value):\n",
    "   \\begin{equation}  \n",
    "   Q = XW^Q  \\tag{1}  \n",
    "   \\end{equation} \n",
    "\n",
    "   \\begin{equation}  \n",
    "   K = XW^K  \\tag{2}  \n",
    "   \\end{equation}  \n",
    "   \n",
    "   \\begin{equation}  \n",
    "   V = XW^V  \\tag{3}  \n",
    "   \\end{equation}  \n",
    "\n",
    "3. **Compute the attention weights**:\n",
    "   For each attention head, compute the attention weights:\n",
    "   \\begin{equation}\n",
    "   \\text{Attention}(Q, K, V) = \\text{softmax}\\left(\\frac{QK^T}{\\sqrt{d_k}}\\right)V  \\tag{4}  \n",
    "   \\end{equation}\n",
    "   where \\( d_k \\) is the dimension of the keys, used for scaling.\n",
    "\n",
    "4. **Multi-head combination**:\n",
    "   Concatenate the outputs of all attention heads:\n",
    "   \\begin{equation}\n",
    "   \\text{MultiHead}(Q, K, V) = \\text{Concat}(\\text{head}_1, \\text{head}_2, \\ldots, \\text{head}_h)W^O  \\tag{5}\n",
    "   \\end{equation}\n",
    "\n",
    "5. **Output**:\n",
    "   The final output is obtained through a linear projection.\n",
    "\n",
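    "The five steps above can be sketched end to end in plain NumPy (a minimal toy sketch with assumed dimensions, separate from the PyTorch implementation later in this notebook):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def softmax(x, axis=-1):\n",
    "    e = np.exp(x - x.max(axis=axis, keepdims=True))\n",
    "    return e / e.sum(axis=axis, keepdims=True)\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "L, d_model, h = 3, 8, 2   # toy sizes: sequence length, model dim, number of heads\n",
    "d_k = d_model // h\n",
    "X = rng.normal(size=(L, d_model))                     # embedded input, step 1\n",
    "Wq, Wk, Wv, Wo = (rng.normal(size=(d_model, d_model)) for _ in range(4))\n",
    "\n",
    "Q, K, V = X @ Wq, X @ Wk, X @ Wv                      # step 2, Eqs. (1)-(3)\n",
    "heads = []\n",
    "for i in range(h):                                    # step 3, Eq. (4), per head\n",
    "    s = slice(i * d_k, (i + 1) * d_k)\n",
    "    A = softmax(Q[:, s] @ K[:, s].T / np.sqrt(d_k))\n",
    "    heads.append(A @ V[:, s])\n",
    "out = np.concatenate(heads, axis=-1) @ Wo             # steps 4-5, Eq. (5)\n",
    "print(out.shape)  # (3, 8)\n",
    "```\n",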
    "\n",
    "## 5. Strengths and Weaknesses\n",
    "### Strengths\n",
    "- **Parallel computation**: the attention heads can be computed in parallel, improving efficiency.\n",
    "- **Rich representations**: multiple heads let the model capture different features and relationships.\n",
    "\n",
    "### Weaknesses\n",
    "- **Computational cost**: compute and memory usage grow with the number of heads.\n",
    "- **Hyperparameter tuning**: the number of heads and their dimensions must be chosen carefully, adding tuning complexity.\n",
    "\n",
    "## 6. Applications\n",
    "- **Natural language processing**: machine translation, text generation, sentiment analysis.\n",
    "- **Computer vision**: image classification, object detection.\n",
    "- **Recommender systems**: user behavior modeling and recommendation generation.\n",
    "\n",
    "## 7. Conclusion\n",
    "Multi-head attention is an indispensable component of modern deep learning models; its parallelism and rich feature extraction greatly improve model performance. Because of its large computational cost, optimizing that cost for long inputs is itself a core line of research.\n",
    "\n",
    "## References\n",
    "- Vaswani et al. (2017). \"Attention is All You Need\". \n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "# Code Implementation\n",
    "\n",
    "## Multi-Head Attention in Practice\n",
    "Below, multi-head attention is applied directly to a concrete task.\n",
    "\n",
    "\n",
    "\n",
    "## Task Description\n",
    "This task uses the **STSbenchmark** dataset, which comprises the English datasets used in the SemEval STS tasks between 2012 and 2017. The texts are drawn from image captions, news headlines, and user forums.\n",
    "\n",
    "Each example contains two sentences and a human-annotated score indicating how similar they are.\n",
    "\n",
    "### Dataset Description\n",
    "The dataset contains many samples, each with the following fields:\n",
    "- **id**: unique identifier\n",
    "- **sentence_a**: the first sentence\n",
    "- **sentence_b**: the second sentence\n",
    "- **similarity**: a similarity score between the two sentences, typically ranging from 0 (completely dissimilar) to 5 (completely similar).\n",
    "\n",
    "For example, here are two samples from the dataset:\n",
    "| id | sentence_a                                              | sentence_b                                | similarity |\n",
    "|----|--------------------------------------------------------|-------------------------------------------|------------|\n",
    "| 0  | A kitten is playing with a blue rope toy.             | A kitten is playing with a toy.          | 4.4        |\n",
    "| 1  | A black, brown and white dog running through a field.  | A white and brown dog runs in a field.   | 2.83       |\n",
    "\n",
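    "For illustration, the two sample rows above can be reconstructed as a small pandas DataFrame (a hypothetical snippet mirroring the table, not the actual data file):\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "# Hypothetical sample mirroring the two rows shown above\n",
    "samples = pd.DataFrame({\n",
    "    \"id\": [0, 1],\n",
    "    \"sentence_a\": [\n",
    "        \"A kitten is playing with a blue rope toy.\",\n",
    "        \"A black, brown and white dog running through a field.\",\n",
    "    ],\n",
    "    \"sentence_b\": [\n",
    "        \"A kitten is playing with a toy.\",\n",
    "        \"A white and brown dog runs in a field.\",\n",
    "    ],\n",
    "    \"similarity\": [4.4, 2.83],\n",
    "})\n",
    "print(samples.shape)  # (2, 4)\n",
    "```\n",
    "\n",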
    "## Evaluation\n",
    "### Metric\n",
    "Model quality is measured with the Pearson correlation coefficient between the predicted similarity scores and the human-annotated scores:\n",
    "\n",
    "$$\n",
    "r = \\frac{\\sum_i (x_i - \\bar{x})(y_i - \\bar{y})}{\\sqrt{\\sum_i (x_i - \\bar{x})^2}\\,\\sqrt{\\sum_i (y_i - \\bar{y})^2}}\n",
    "$$\n",
    "\n",
    "The closer $r$ is to 1, the better the predictions track the gold scores.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 1. Import Required Libraries"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/root/miniconda3/lib/python3.12/site-packages/torchtext/vocab/__init__.py:4: UserWarning: \n",
      "/!\\ IMPORTANT WARNING ABOUT TORCHTEXT STATUS /!\\ \n",
      "Torchtext is deprecated and the last released version will be 0.18 (this one). You can silence this warning by calling the following at the beginnign of your scripts: `import torchtext; torchtext.disable_torchtext_deprecation_warning()`\n",
      "  warnings.warn(torchtext._TORCHTEXT_DEPRECATION_MSG)\n",
      "/root/miniconda3/lib/python3.12/site-packages/torchtext/utils.py:4: UserWarning: \n",
      "/!\\ IMPORTANT WARNING ABOUT TORCHTEXT STATUS /!\\ \n",
      "Torchtext is deprecated and the last released version will be 0.18 (this one). You can silence this warning by calling the following at the beginnign of your scripts: `import torchtext; torchtext.disable_torchtext_deprecation_warning()`\n",
      "  warnings.warn(torchtext._TORCHTEXT_DEPRECATION_MSG)\n",
      "/root/miniconda3/lib/python3.12/site-packages/torchtext/data/__init__.py:4: UserWarning: \n",
      "/!\\ IMPORTANT WARNING ABOUT TORCHTEXT STATUS /!\\ \n",
      "Torchtext is deprecated and the last released version will be 0.18 (this one). You can silence this warning by calling the following at the beginnign of your scripts: `import torchtext; torchtext.disable_torchtext_deprecation_warning()`\n",
      "  warnings.warn(torchtext._TORCHTEXT_DEPRECATION_MSG)\n"
     ]
    }
   ],
   "source": [
    "import os\n",
    "import math\n",
    "from typing import Union, List\n",
    "\n",
    "import pandas as pd\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "import torch.optim as optim\n",
    "from torch.utils.data import DataLoader, Dataset\n",
    "from torchtext.vocab import GloVe, build_vocab_from_iterator\n",
    "from torchtext.data.utils import get_tokenizer\n",
    "from tqdm import tqdm\n",
    "import logging"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 2. Define the Dataset Class"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "class MyDataset(Dataset):\n",
    "    \"\"\"Custom dataset class for loading and preprocessing the data\"\"\"\n",
    "    def __init__(self, path, tokenizer, vocab, max_length=200, test=False):\n",
    "        \"\"\"\n",
    "        Initialize the dataset.\n",
    "\n",
    "        Args:\n",
    "            path (str): path to the data file\n",
    "            tokenizer (callable): tokenization function\n",
    "            vocab (Vocab): vocabulary object\n",
    "            max_length (int): maximum sequence length\n",
    "            test (bool): whether this is the test set\n",
    "        \"\"\"\n",
    "        self.path = path\n",
    "        self.tokenizer = tokenizer\n",
    "        self.vocab = vocab\n",
    "        self.max_length = max_length\n",
    "        self.test = test\n",
    "\n",
    "        self.data = self.load_data()\n",
    "\n",
    "    def load_data(self):\n",
    "        \"\"\"Load and preprocess the data\"\"\"\n",
    "        data = pd.read_csv(self.path)\n",
    "        data = data.dropna(subset=['sentence_a', 'sentence_b', 'similarity'])\n",
    "        logging.info(f\"First rows of dataset {self.path}:\")\n",
    "        logging.info(data.head())\n",
    "        examples = []\n",
    "\n",
    "        if self.test:\n",
    "            for text_a, text_b in tqdm(zip(data[\"sentence_a\"], data[\"sentence_b\"]), total=len(data)):\n",
    "                tokens_a = self.tokenizer(str(text_a))\n",
    "                tokens_b = self.tokenizer(str(text_b))\n",
    "                tokens_a = tokens_a[:self.max_length]\n",
    "                tokens_b = tokens_b[:self.max_length]\n",
    "                examples.append((tokens_a, tokens_b, None))\n",
    "        else:\n",
    "            for text_a, text_b, label in tqdm(zip(data[\"sentence_a\"], data[\"sentence_b\"], data[\"similarity\"]), total=len(data)):\n",
    "                tokens_a = self.tokenizer(str(text_a))\n",
    "                tokens_b = self.tokenizer(str(text_b))\n",
    "                tokens_a = tokens_a[:self.max_length]\n",
    "                tokens_b = tokens_b[:self.max_length]\n",
    "                examples.append((tokens_a, tokens_b, label))\n",
    "\n",
    "        return examples\n",
    "\n",
    "    def __len__(self):\n",
    "        return len(self.data)\n",
    "\n",
    "    def numericalize(self, tokens):\n",
    "        \"\"\"Convert tokens to indices, padding or truncating to max_length\"\"\"\n",
    "        indices = [self.vocab[token] for token in tokens]\n",
    "        if len(indices) < self.max_length:\n",
    "            indices += [self.vocab['<pad>']] * (self.max_length - len(indices))\n",
    "        else:\n",
    "            indices = indices[:self.max_length]\n",
    "        return torch.tensor(indices, dtype=torch.long)\n",
    "\n",
    "    def __getitem__(self, idx):\n",
    "        tokens_a, tokens_b, label = self.data[idx]\n",
    "        numericalized_a = self.numericalize(tokens_a)\n",
    "        numericalized_b = self.numericalize(tokens_b)\n",
    "        if label is not None:\n",
    "            label = float(label)\n",
    "            label = torch.tensor(label, dtype=torch.float)\n",
    "        return numericalized_a, numericalized_b, label"
   ]
  },
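  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`numericalize` above pads or truncates every sequence to exactly `max_length`. The same pad/truncate logic can be sketched standalone (plain Python; pad index 0 is assumed, matching the vocab's `<pad>` token):\n",
    "\n",
    "```python\n",
    "def pad_or_truncate(indices, max_length, pad_idx=0):\n",
    "    # Pad short sequences with pad_idx; cut long ones down to max_length\n",
    "    if len(indices) < max_length:\n",
    "        return indices + [pad_idx] * (max_length - len(indices))\n",
    "    return indices[:max_length]\n",
    "\n",
    "print(pad_or_truncate([5, 7, 9], 5))           # [5, 7, 9, 0, 0]\n",
    "print(pad_or_truncate([1, 2, 3, 4, 5, 6], 5))  # [1, 2, 3, 4, 5]\n",
    "```\n"
   ]
  },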
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 3. Define the Model Class (an AI milestone, the seed of a trillion-dollar industry)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Multi-Head Attention in Detail\n",
    "\n",
    "In natural language processing and deep learning, the **attention mechanism** lets a model dynamically focus on different positions of a sequence as it processes it. **Multi-head attention** runs several attention heads in parallel to capture different features and relationships; it is the core component of the Transformer, used to model dependencies between positions in a sequence.\n",
    "\n",
    "---\n",
    "\n",
    "## **1. Background**\n",
    "\n",
    "### **1.1 Query, Key, and Value Matrices**\n",
    "\n",
    "Given the input:\n",
    "\n",
    "- **Query matrix**: $ Q $\n",
    "- **Key matrix**: $ K $\n",
    "- **Value matrix**: $ V $\n",
    "\n",
    "These matrices are usually obtained from the input embeddings via linear projections and represent different views of the input sequence.\n",
    "\n",
    "### **1.2 Scaled Dot-Product Attention**\n",
    "\n",
    "Single-head scaled dot-product attention is computed as:\n",
    "\n",
    "$$\n",
    "\\text{Attention}(Q, K, V) = \\text{Softmax}\\left( \\frac{Q K^\\top}{\\sqrt{d_k}} \\right) V\n",
    "$$\n",
    "\n",
    "- $ Q \\in \\mathbb{R}^{N \\times L_Q \\times d_k} $: query matrix, where $ N $ is the batch size, $ L_Q $ the query sequence length, and $ d_k $ the query/key dimension.\n",
    "- $ K \\in \\mathbb{R}^{N \\times L_K \\times d_k} $: key matrix, with key sequence length $ L_K $.\n",
    "- $ V \\in \\mathbb{R}^{N \\times L_V \\times d_v} $: value matrix, with value sequence length $ L_V $; usually $ L_K = L_V $.\n",
    "- $ \\sqrt{d_k} $: scaling factor that keeps the dot products from growing too large and degrading the Softmax gradients.\n",
    "\n",
    "---\n",
    "\n",
    "## **2. Multi-Head Attention**\n",
    "\n",
    "Multi-head attention projects the query, key, and value matrices into $ h $ subspaces, computes attention independently in each subspace, and finally concatenates the results.\n",
    "\n",
    "### **2.1 Projecting into Heads**\n",
    "\n",
    "For the $ i $-th head we have:\n",
    "\n",
    "$$\n",
    "Q_i = Q W_i^Q\n",
    "$$\n",
    "\n",
    "$$\n",
    "K_i = K W_i^K\n",
    "$$\n",
    "\n",
    "$$\n",
    "V_i = V W_i^V\n",
    "$$\n",
    "\n",
    "- **$ W_i^Q, W_i^K, W_i^V \\in \\mathbb{R}^{d_{\\text{model}} \\times d_k} $**: projection matrices, where $ d_{\\text{model}} $ is the model's embedding dimension.\n",
    "- **$ Q_i, K_i, V_i \\in \\mathbb{R}^{N \\times L \\times d_k} $**: the query, key, and value matrices of head $ i $.\n",
    "\n",
    "### **2.2 Computing Attention per Head**\n",
    "\n",
    "For each head $ i $, the attention output is:\n",
    "\n",
    "$$\n",
    "\\text{head}_i = \\text{Attention}(Q_i, K_i, V_i) = \\text{Softmax}\\left( \\frac{Q_i K_i^\\top}{\\sqrt{d_k}} \\right) V_i\n",
    "$$\n",
    "\n",
    "- **$ \\text{head}_i \\in \\mathbb{R}^{N \\times L \\times d_v} $**: the output of head $ i $.\n",
    "\n",
    "#### **The Role of the Scaling Factor $ \\sqrt{d_k} $**\n",
    "\n",
    "- **Problem**: when $ d_k $ is large, the entries of $ Q K^\\top $ can become very large, pushing `softmax` into a regime with vanishing gradients and making the model hard to train.\n",
    "- **Solution**: divide $ Q K^\\top $ by $ \\sqrt{d_k} $ to keep the values in a reasonable range and preserve useful gradients.\n",
    "\n",
    "**A concrete numeric example:**\n",
    "\n",
    "Assume each head has dimension $ d_k = 4 $ and take a fixed query and key vector:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "Q = torch.tensor([1.0, 2.0, 3.0, 4.0])  # query vector\n",
    "K = torch.tensor([0.5, 1.5, 2.5, 3.5])  # key vector\n",
    "```\n",
    "\n",
    "- **Unscaled dot product:**\n",
    "\n",
    "  ```python\n",
    "  dot_product = torch.dot(Q, K)  # 1*0.5 + 2*1.5 + 3*2.5 + 4*3.5 = 25.0\n",
    "  ```\n",
    "\n",
    "- **Scaled dot product:**\n",
    "\n",
    "  ```python\n",
    "  d_k = 4\n",
    "  scaling_factor = torch.sqrt(torch.tensor(d_k).float())  # sqrt(4) = 2.0\n",
    "  scaled_dot_product = dot_product / scaling_factor  # 25.0 / 2.0 = 12.5\n",
    "  ```\n",
    "\n",
    "- **Comparing the `softmax` output before and after scaling:**\n",
    "\n",
    "  ```python\n",
    "  # unscaled\n",
    "  energy = torch.tensor([25.0, 15.0, 10.0, 5.0])\n",
    "  attention = torch.softmax(energy, dim=0)\n",
    "  print(\"unscaled attention:\", attention)  # ≈ tensor([1.0000, 0.0000, 0.0000, 0.0000])\n",
    "\n",
    "  # scaled\n",
    "  scaled_energy = energy / scaling_factor\n",
    "  attention_scaled = torch.softmax(scaled_energy, dim=0)\n",
    "  print(\"scaled attention:\", attention_scaled)  # ≈ tensor([0.9927, 0.0067, 0.0005, 0.0000])\n",
    "  ```\n",
    "\n",
    "- **Takeaway**: after scaling, the `softmax` output is less extreme and the gradients are larger, which helps training.\n",
    "\n",
    "### **2.3 Concatenating the Head Outputs**\n",
    "\n",
    "The outputs of all heads are concatenated:\n",
    "\n",
    "$$\n",
    "\\text{MultiHead}(Q, K, V) = \\text{Concat}(\\text{head}_1, \\text{head}_2, \\ldots, \\text{head}_h)W^O\n",
    "$$\n",
    "\n",
    "- **$ \\text{Concat}(\\cdot) \\in \\mathbb{R}^{N \\times L \\times h \\cdot d_v} $**: the head outputs are concatenated along the last dimension.\n",
    "- **$ W^O \\in \\mathbb{R}^{h \\cdot d_v \\times d_{\\text{model}}} $**: output projection matrix that maps the concatenated result back to the model's embedding dimension.\n",
    "\n",
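    "In code, the split into heads and the concatenation back are just reshapes. A small NumPy sketch with toy shapes (assumed, for illustration):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "N, L, h, d_v = 1, 3, 2, 4\n",
    "heads = np.arange(N * h * L * d_v, dtype=float).reshape(N, h, L, d_v)  # (N, h, L, d_v)\n",
    "\n",
    "# Concat: move the head axis next to d_v, then merge the last two axes -> (N, L, h*d_v)\n",
    "concat = heads.transpose(0, 2, 1, 3).reshape(N, L, h * d_v)\n",
    "print(concat.shape)  # (1, 3, 8)\n",
    "```\n",
    "\n",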
    "---\n",
    "\n",
    "## **3. Mapping the Code to the Formulas**\n",
    "\n",
    "Below, each part of the code is matched to the corresponding formula, with every variable and dimension explained and concrete code examples provided.\n",
    "\n",
    "### **3.1 Initialization**\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "class BasicAttention(nn.Module):\n",
    "    \"\"\"Basic multi-head attention module\"\"\"\n",
    "\n",
    "    def __init__(self, emb_dim, heads, dropout=0.5):\n",
    "        super(BasicAttention, self).__init__()\n",
    "        self.emb_dim = emb_dim  # d_model, the model embedding dimension\n",
    "        self.heads = heads      # h, the number of attention heads\n",
    "        self.head_dim = emb_dim // heads  # d_k, the dimension of a single head\n",
    "\n",
    "        assert self.head_dim * heads == emb_dim, \"emb_dim must be divisible by heads\"\n",
    "\n",
    "        # Linear projections, corresponding to W_i^Q, W_i^K, W_i^V\n",
    "        self.q_linear = nn.Linear(emb_dim, emb_dim)\n",
    "        self.k_linear = nn.Linear(emb_dim, emb_dim)\n",
    "        self.v_linear = nn.Linear(emb_dim, emb_dim)\n",
    "        self.fc_out = nn.Linear(emb_dim, emb_dim)  # W^O\n",
    "        self.dropout = nn.Dropout(dropout)\n",
    "```\n",
    "\n",
    "- **`emb_dim` ($ d_{\\text{model}} $)**: the model embedding dimension, e.g. 8.\n",
    "- **`heads` ($ h $)**: the number of attention heads, e.g. 2.\n",
    "- **`self.head_dim` ($ d_k $)**: the per-head dimension, computed as $ d_{\\text{model}} / h $.\n",
    "- **Linear layers**:\n",
    "  - `self.q_linear`, `self.k_linear`, `self.v_linear` correspond to the projection matrices $ W^Q, W^K, W^V $.\n",
    "  - `self.fc_out` corresponds to the output projection matrix $ W^O $.\n",
    "\n",
    "### **3.2 Forward Pass**\n",
    "\n",
    "```python\n",
    "    def forward(self, values, keys, query, mask=None):\n",
    "        N = query.shape[0]  # batch size N\n",
    "\n",
    "        # Linearly project Q, K, V and split into heads\n",
    "        Q = self.q_linear(query).view(N, -1, self.heads, self.head_dim)\n",
    "        K = self.k_linear(keys).view(N, -1, self.heads, self.head_dim)\n",
    "        V = self.v_linear(values).view(N, -1, self.heads, self.head_dim)\n",
    "```\n",
    "\n",
    "- **Input tensor shapes**:\n",
    "  - **`query`, `keys`, `values`**: shape $ (N, L, d_{\\text{model}}) $, where $ L $ is the sequence length.\n",
    "- **Linear projections**:\n",
    "  - **`self.q_linear(query)`**: implements $ Q_i = Q W_i^Q $.\n",
    "  - **`self.k_linear(keys)`**: implements $ K_i = K W_i^K $.\n",
    "  - **`self.v_linear(values)`**: implements $ V_i = V W_i^V $.\n",
    "- **Reshaping**:\n",
    "  - **`.view(N, -1, self.heads, self.head_dim)`**: splits the embedding dimension into heads, giving shape $ (N, L, h, d_k) $.\n",
    "\n",
    "**A concrete code example:**\n",
    "\n",
    "```python\n",
    "emb_dim = 8\n",
    "heads = 2\n",
    "batch_size = 1\n",
    "seq_length = 3\n",
    "\n",
    "# Initialize the model\n",
    "attention = BasicAttention(emb_dim=emb_dim, heads=heads)\n",
    "\n",
    "# Generate random inputs\n",
    "query = torch.randn(batch_size, seq_length, emb_dim)\n",
    "keys = torch.randn(batch_size, seq_length, emb_dim)\n",
    "values = torch.randn(batch_size, seq_length, emb_dim)\n",
    "\n",
    "# Project and split into heads\n",
    "Q = attention.q_linear(query).view(batch_size, -1, heads, attention.head_dim)\n",
    "K = attention.k_linear(keys).view(batch_size, -1, heads, attention.head_dim)\n",
    "V = attention.v_linear(values).view(batch_size, -1, heads, attention.head_dim)\n",
    "\n",
    "print(\"Q shape:\", Q.shape)  # Q shape: torch.Size([1, 3, 2, 4])\n",
    "```\n",
    "\n",
    "### **3.3 Reshaping for the Attention Computation**\n",
    "\n",
    "```python\n",
    "        # Rearrange dimensions for the attention computation\n",
    "        Q = Q.permute(0, 2, 1, 3)  # [N, heads, L, head_dim]\n",
    "        K = K.permute(0, 2, 1, 3)\n",
    "        V = V.permute(0, 2, 1, 3)\n",
    "```\n",
    "\n",
    "- **`permute`**: reorders tensor dimensions, moving the head dimension forward so all heads can be processed in parallel.\n",
    "- **New shapes**:\n",
    "  - **$ Q \\in \\mathbb{R}^{N \\times h \\times L \\times d_k} $**\n",
    "  - **$ K \\in \\mathbb{R}^{N \\times h \\times L \\times d_k} $**\n",
    "  - **$ V \\in \\mathbb{R}^{N \\times h \\times L \\times d_v} $**\n",
    "\n",
    "**Example of the `permute` operation:**\n",
    "\n",
    "```python\n",
    "# Original shape\n",
    "print(\"Original Q shape:\", Q.shape)  # torch.Size([1, 3, 2, 4])\n",
    "\n",
    "# Apply permute\n",
    "Q = Q.permute(0, 2, 1, 3)\n",
    "print(\"Permuted Q shape:\", Q.shape)  # torch.Size([1, 2, 3, 4])\n",
    "```\n",
    "\n",
    "**Explanation:**\n",
    "\n",
    "- **`permute(0, 2, 1, 3)`**: changes the shape from `(batch_size, seq_length, heads, head_dim)` to `(batch_size, heads, seq_length, head_dim)`.\n",
    "\n",
    "### **3.4 Computing the Scaled Dot-Product Scores**\n",
    "\n",
    "```python\n",
    "        # Compute the scaled dot-product attention scores\n",
    "        energy = torch.matmul(Q, K.permute(0, 1, 3, 2)) / math.sqrt(self.head_dim)\n",
    "        if mask is not None:\n",
    "            energy = energy.masked_fill(mask == 0, float(\"-1e20\"))\n",
    "\n",
    "        attention = torch.softmax(energy, dim=-1)\n",
    "        attention = self.dropout(attention)\n",
    "```\n",
    "\n",
    "- **Energy matrix**:\n",
    "  - **`torch.matmul(Q, K.permute(0, 1, 3, 2))`**: implements $ Q_i K_i^\\top $.\n",
    "  - **Shapes**:\n",
    "    - $ Q \\in \\mathbb{R}^{N \\times h \\times L_Q \\times d_k} $\n",
    "    - $ K^\\top \\in \\mathbb{R}^{N \\times h \\times d_k \\times L_K} $\n",
    "    - result: $ \\text{energy} \\in \\mathbb{R}^{N \\times h \\times L_Q \\times L_K} $\n",
    "- **Scaling factor**:\n",
    "  - **`/ math.sqrt(self.head_dim)`**: corresponds to $ \\sqrt{d_k} $ in the formula.\n",
    "- **Masking**:\n",
    "  - **`energy.masked_fill(mask == 0, float(\"-1e20\"))`**: fills masked positions with a very small value so their `softmax` output is close to 0.\n",
    "- **Attention weights**:\n",
    "  - **`torch.softmax(energy, dim=-1)`**: applies `softmax` over the last dimension (the key sequence length), yielding the attention weights $ \\alpha $.\n",
    "- **Dropout**:\n",
    "  - **`self.dropout(attention)`**: applies dropout to the attention weights to reduce overfitting.\n",
    "\n",
    "**A concrete code example:**\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "# Assume no mask\n",
    "mask = None\n",
    "\n",
    "# Compute the energy matrix\n",
    "K_transposed = K.permute(0, 1, 3, 2)\n",
    "print(\"K_transposed shape:\", K_transposed.shape)  # torch.Size([1, 2, 4, 3])\n",
    "\n",
    "energy = torch.matmul(Q, K_transposed) / math.sqrt(attention.head_dim)\n",
    "print(\"Energy shape:\", energy.shape)  # torch.Size([1, 2, 3, 3])\n",
    "\n",
    "# Compute the attention weights\n",
    "attention = torch.softmax(energy, dim=-1)\n",
    "print(\"Attention shape:\", attention.shape)  # torch.Size([1, 2, 3, 3])\n",
    "```\n",
    "\n",
    "**Explanation:**\n",
    "\n",
    "- **`torch.matmul(Q, K_transposed)`**: computes the query-key dot products for each head.\n",
    "- **`torch.softmax(energy, dim=-1)`**: computes softmax over the last dimension (the key sequence length).\n",
    "\n",
    "### **3.5 Computing the Attention Output**\n",
    "\n",
    "```python\n",
    "        # Compute the attention output\n",
    "        out = torch.matmul(attention, V)  # [N, heads, L_Q, head_dim]\n",
    "        out = out.permute(0, 2, 1, 3).contiguous()\n",
    "        out = out.view(N, -1, self.emb_dim)\n",
    "        out = self.fc_out(out)\n",
    "        return out\n",
    "```\n",
    "\n",
    "- **Matrix multiplication**:\n",
    "  - **`torch.matmul(attention, V)`**: implements $ \\alpha V_i $, i.e. the $ \\text{head}_i $ computation.\n",
    "- **Reshaping**:\n",
    "  - **`out.permute(0, 2, 1, 3)`**: swaps the head and sequence-length dimensions, giving shape $ (N, L_Q, h, d_v) $.\n",
    "  - **`.contiguous()`**: makes the tensor contiguous in memory for the subsequent operations.\n",
    "- **Concatenating the heads**:\n",
    "  - **`out.view(N, -1, self.emb_dim)`**: concatenates the head outputs, restoring the embedding dimension $ d_{\\text{model}} $.\n",
    "- **Output projection**:\n",
    "  - **`self.fc_out(out)`**: corresponds to $ W^O $ in the MultiHead formula, mapping the concatenated result back to the model's embedding dimension.\n",
    "\n",
    "**A concrete code example:**\n",
    "\n",
    "```python\n",
    "# Compute the attention output\n",
    "out = torch.matmul(attention, V)\n",
    "print(\"Out shape before permute:\", out.shape)  # torch.Size([1, 2, 3, 4])\n",
    "\n",
    "# Rearrange dimensions\n",
    "out = out.permute(0, 2, 1, 3).contiguous()\n",
    "print(\"Out shape after permute:\", out.shape)  # torch.Size([1, 3, 2, 4])\n",
    "\n",
    "# Concatenate the head outputs\n",
    "out = out.view(batch_size, -1, attention.emb_dim)\n",
    "print(\"Out shape after view:\", out.shape)  # torch.Size([1, 3, 8])\n",
    "\n",
    "# Output projection\n",
    "out = attention.fc_out(out)\n",
    "print(\"Final output shape:\", out.shape)  # torch.Size([1, 3, 8])\n",
    "```\n",
    "\n",
    "**Explanation:**\n",
    "\n",
    "- **`out.permute(0, 2, 1, 3)`**: reorders dimensions so the sequence length is in the second position, convenient for later processing.\n",
    "- **`.contiguous()`**: ensures the tensor is contiguous in memory so `view` can be used.\n",
    "- **`view(batch_size, -1, emb_dim)`**: concatenates the head outputs back into the original embedding dimension.\n",
    "- **`attention.fc_out(out)`**: applies $ W^O $, mapping the concatenated result back to the model's embedding dimension.\n",
    "\n",
    "---\n",
    "\n",
    "## **4. Notation and Rationale**\n",
    "\n",
    "### **4.1 Notation**\n",
    "\n",
    "- **$ N $**: batch size.\n",
    "- **$ L_Q, L_K, L_V $**: query, key, and value sequence lengths.\n",
    "- **$ d_{\\text{model}} $**: the model embedding dimension.\n",
    "- **$ h $**: the number of attention heads.\n",
    "- **$ d_k $**: the query/key dimension, $ d_k = d_{\\text{model}} / h $.\n",
    "- **$ d_v $**: the value dimension, usually equal to $ d_k $.\n",
    "- **$ Q, K, V $**: the query, key, and value matrices.\n",
    "- **$ W^Q, W^K, W^V $**: the query, key, and value projection matrices.\n",
    "- **$ W^O $**: the output projection matrix.\n",
    "\n",
    "### **4.2 Why $ Q, K, V $?**\n",
    "\n",
    "- **Query ($ Q $)**: represents the position that is requesting information.\n",
    "- **Key ($ K $)**: represents the positions available to be matched against.\n",
    "- **Value ($ V $)**: represents the actual information content.\n",
    "\n",
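    "An analogy can make this concrete: attention behaves like a soft dictionary lookup that, instead of returning the single best-matching value, returns a similarity-weighted average of all values. A toy sketch (illustrative only, not part of the model code):\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "def soft_lookup(query, keys, values):\n",
    "    # Score each key by its dot product with the query, then softmax-weight the values\n",
    "    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]\n",
    "    exps = [math.exp(s - max(scores)) for s in scores]\n",
    "    weights = [e / sum(exps) for e in exps]\n",
    "    return sum(w * v for w, v in zip(weights, values))\n",
    "\n",
    "# The query matches the first key most strongly, so the result is pulled toward 10.0\n",
    "print(soft_lookup([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [10.0, 20.0]))\n",
    "```\n",
    "\n",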
    "### **4.3 Why the Scaling Factor $ \\sqrt{d_k} $?**\n",
    "\n",
    "- **Problem**: when $ d_k $ is large, the dot products can become large, making the `softmax` output extremely peaked.\n",
    "- **Solution**: the scaling factor brings the dot products back into a suitable range.\n",
    "\n",
    "### **4.4 Why Multiple Heads?**\n",
    "\n",
    "- **Advantages**:\n",
    "  - **Capture diverse information**: different heads can attend to different positions and features.\n",
    "  - **Increase model capacity**: improves the model's expressive and learning ability.\n",
    "\n",
    "### **4.5 Why Concatenate and Project with $ W^O $?**\n",
    "\n",
    "- **Concatenation**: merges the outputs of all heads.\n",
    "- **Linear projection**: maps back to the embedding dimension for downstream layers.\n",
    "\n",
    "---\n",
    "\n",
    "## **5. Key Operations in Detail**\n",
    "\n",
    "### **5.1 `permute`**\n",
    "\n",
    "- **Purpose**: reorder the dimensions of a tensor.\n",
    "- **Usage**: `tensor.permute(*dims)`\n",
    "- **Example**:\n",
    "\n",
    "  ```python\n",
    "  x = torch.randn(2, 3, 4)\n",
    "  x_permuted = x.permute(1, 0, 2)  # shape changes from (2, 3, 4) to (3, 2, 4)\n",
    "  print(\"Original shape:\", x.shape)       # torch.Size([2, 3, 4])\n",
    "  print(\"Permuted shape:\", x_permuted.shape)  # torch.Size([3, 2, 4])\n",
    "  ```\n",
    "\n",
    "  **Explanation**:\n",
    "\n",
    "  - **Shape of `x`**: `(2, 3, 4)`, i.e. `(batch_size, seq_length, feature_dim)`\n",
    "  - **Shape of `x_permuted`**: `(3, 2, 4)`, with the `seq_length` and `batch_size` dimensions swapped\n",
    "\n",
    "### **5.2 `view`**\n",
    "\n",
    "- **Purpose**: change the shape of a tensor without changing its data.\n",
    "- **Note**: requires the tensor to be contiguous in memory.\n",
    "- **Usage**: `tensor.view(*shape)`\n",
    "- **Example**:\n",
    "\n",
    "  ```python\n",
    "  x = torch.randn(2, 3, 4)\n",
    "  x_viewed = x.view(2, -1)  # merge the last two dimensions\n",
    "  print(\"Original shape:\", x.shape)  # torch.Size([2, 3, 4])\n",
    "  print(\"Viewed shape:\", x_viewed.shape)  # torch.Size([2, 12])\n",
    "  ```\n",
    "\n",
    "  **Explanation**:\n",
    "\n",
    "  - Reshapes a `(2, 3, 4)` tensor into `(2, 12)`, merging `seq_length` and `feature_dim`\n",
    "\n",
    "### **5.3 `contiguous`**\n",
    "\n",
    "- **Purpose**: return a tensor that is contiguous in memory.\n",
    "- **Why**: some operations (such as `view`) require contiguous memory.\n",
    "- **Usage**: `tensor.contiguous()`\n",
    "- **Example**:\n",
    "\n",
    "  ```python\n",
    "  x = torch.randn(2, 3, 4)\n",
    "  x_permuted = x.permute(1, 0, 2)\n",
    "  x_contiguous = x_permuted.contiguous()\n",
    "  print(\"Is contiguous before:\", x_permuted.is_contiguous())  # False\n",
    "  print(\"Is contiguous after:\", x_contiguous.is_contiguous())  # True\n",
    "  ```\n",
    "\n",
    "  **Explanation**:\n",
    "\n",
    "  - After `permute` the tensor is not contiguous in memory; `contiguous` makes it so, enabling the subsequent `view`\n",
    "\n",
    "### **5.4 `torch.matmul`**\n",
    "\n",
    "- **Purpose**: matrix multiplication, with support for batched multiplication of higher-dimensional tensors.\n",
    "- **Usage**: `torch.matmul(tensor_a, tensor_b)`\n",
    "- **Example**:\n",
    "\n",
    "  ```python\n",
    "  a = torch.randn(2, 3, 4)\n",
    "  b = torch.randn(2, 4, 5)\n",
    "  result = torch.matmul(a, b)  # result shape is (2, 3, 5)\n",
    "  print(\"Result shape:\", result.shape)  # torch.Size([2, 3, 5])\n",
    "  ```\n",
    "\n",
    "  **Explanation**:\n",
    "\n",
    "  - For each batch, multiplies a `(3, 4)` matrix by a `(4, 5)` matrix\n",
    "\n",
    "### **5.5 `torch.softmax`**\n",
    "\n",
    "- **Purpose**: compute `softmax` along a given dimension, producing a probability distribution.\n",
    "- **Usage**: `torch.softmax(tensor, dim)`\n",
    "- **Example**:\n",
    "\n",
    "  ```python\n",
    "  x = torch.tensor([2.0, 1.0, 0.1])\n",
    "  softmax_x = torch.softmax(x, dim=0)\n",
    "  print(\"Softmax result:\", softmax_x)  # tensor([0.6590, 0.2424, 0.0986])\n",
    "  ```\n",
    "\n",
    "  **Explanation**:\n",
    "\n",
    "  - Converts the input into a probability distribution whose elements sum to 1\n",
    "\n",
    "### **5.6 `masked_fill`**\n",
    "\n",
    "- **Purpose**: fill the positions selected by a mask with a given value.\n",
    "- **Usage**: `tensor.masked_fill(mask, value)`\n",
    "- **Example**:\n",
    "\n",
    "  ```python\n",
    "  x = torch.tensor([1.0, 2.0, 3.0])\n",
    "  mask = torch.tensor([False, True, False])\n",
    "  x_masked = x.masked_fill(mask, -1e20)\n",
    "  print(\"Masked x:\", x_masked)  # tensor([1.0000e+00, -1.0000e+20, 3.0000e+00])\n",
    "  ```\n",
    "\n",
    "  **Explanation**:\n",
    "\n",
    "  - Positions where the mask is `True` are filled with the given value (e.g. `-1e20`)\n",
    "\n",
    "---\n",
    "\n",
    "## **6. Summary**\n",
    "\n",
    "By working through multi-head attention in detail, mapping the code to the formulas, and checking concrete examples, we have covered both the implementation and the math behind it. Key operations such as `permute`, `view`, `torch.matmul`, `torch.softmax`, and `masked_fill` each play an essential role in the implementation.\n",
    "\n",
    "This foundation should make it easier to study and apply larger attention-based models."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "class BasicAttention(nn.Module):\n",
    "    \"\"\"Basic multi-head attention module\"\"\"\n",
    "\n",
    "    def __init__(self, emb_dim, heads, dropout=0.5):\n",
    "        super(BasicAttention, self).__init__()\n",
    "        self.emb_dim = emb_dim\n",
    "        self.heads = heads\n",
    "        self.head_dim = emb_dim // heads\n",
    "\n",
    "        assert self.head_dim * heads == emb_dim, \"emb_dim must be divisible by heads\"\n",
    "\n",
    "        self.q_linear = nn.Linear(emb_dim, emb_dim)\n",
    "        self.k_linear = nn.Linear(emb_dim, emb_dim)\n",
    "        self.v_linear = nn.Linear(emb_dim, emb_dim)\n",
    "        self.fc_out = nn.Linear(emb_dim, emb_dim)\n",
    "        self.dropout = nn.Dropout(dropout)\n",
    "\n",
    "    def forward(self, values, keys, query, mask=None):\n",
    "        N = query.shape[0]\n",
    "\n",
    "        # Linearly project Q, K, V and split into heads\n",
    "        Q = self.q_linear(query).view(N, -1, self.heads, self.head_dim)\n",
    "        K = self.k_linear(keys).view(N, -1, self.heads, self.head_dim)\n",
    "        V = self.v_linear(values).view(N, -1, self.heads, self.head_dim)\n",
    "\n",
    "        # Rearrange dimensions for the attention computation\n",
    "        Q = Q.permute(0, 2, 1, 3)  # [N, heads, seq_len, head_dim]\n",
    "        K = K.permute(0, 2, 1, 3)\n",
    "        V = V.permute(0, 2, 1, 3)\n",
    "\n",
    "        # Compute the scaled dot-product attention scores\n",
    "        energy = torch.matmul(Q, K.permute(0, 1, 3, 2)) / math.sqrt(self.head_dim)\n",
    "        if mask is not None:\n",
    "            energy = energy.masked_fill(mask == 0, float(\"-1e20\"))\n",
    "\n",
    "        attention = torch.softmax(energy, dim=-1)\n",
    "        attention = self.dropout(attention)\n",
    "\n",
    "        # Compute the attention output\n",
    "        out = torch.matmul(attention, V)  # [N, heads, seq_len, head_dim]\n",
    "        out = out.permute(0, 2, 1, 3).contiguous()\n",
    "        out = out.view(N, -1, self.emb_dim)\n",
    "        out = self.fc_out(out)\n",
    "        return out\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "class TextSimilarityModel(nn.Module):\n",
    "    \"\"\"Text similarity model: embedding layer, attention mechanism, and a fully connected layer\"\"\"\n",
    "\n",
    "    def __init__(self, weight_matrix, emb_dim, heads):\n",
    "        super(TextSimilarityModel, self).__init__()\n",
    "        self.embedding = nn.Embedding.from_pretrained(weight_matrix, freeze=False, padding_idx=0)\n",
    "        self.attention = BasicAttention(emb_dim, heads)\n",
    "        self.fc = nn.Linear(emb_dim, emb_dim)\n",
    "        self.dropout = nn.Dropout(0.5)\n",
    "\n",
    "    def forward(self, input):\n",
    "        \"\"\"\n",
    "        Forward pass.\n",
    "\n",
    "        Args:\n",
    "            input (Tensor): [batch_size, seq_len]\n",
    "\n",
    "        Returns:\n",
    "            Tensor: [batch_size, emb_dim]\n",
    "        \"\"\"\n",
    "        embedded = self.embedding(input)  # [batch_size, seq_len, emb_dim]\n",
    "        embedded = self.dropout(embedded)\n",
    "\n",
    "        # Attention mechanism\n",
    "        attention_output = self.attention(embedded, embedded, embedded)  # [batch_size, seq_len, emb_dim]\n",
    "        attention_output = self.dropout(attention_output)\n",
    "\n",
    "        # Mean pooling\n",
    "        pooled_output = attention_output.mean(dim=1)  # [batch_size, emb_dim]\n",
    "\n",
    "        outputs = self.fc(pooled_output)\n",
    "        return outputs"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 4. Define the Training Functions"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "def train_epoch(model, iterator, optimizer, criterion, clip, device):\n",
    "    \"\"\"Train the model for one epoch\"\"\"\n",
    "    model.train()\n",
    "    epoch_loss = 0\n",
    "\n",
    "    for i, (src_a, src_b, labels) in enumerate(iterator):\n",
    "        src_a = src_a.to(device)\n",
    "        src_b = src_b.to(device)\n",
    "        labels = labels.to(device)\n",
    "\n",
    "        optimizer.zero_grad()\n",
    "\n",
    "        output_a = model(src_a)  # [batch_size, emb_dim]\n",
    "        output_b = model(src_b)  # [batch_size, emb_dim]\n",
    "\n",
    "        # Compute cosine similarity and rescale\n",
    "        similarity = torch.cosine_similarity(output_a, output_b, dim=1)  # [batch_size]\n",
    "        similarity = (similarity + 1) * 2.5  # map [-1, 1] to [0, 5]\n",
    "\n",
    "        # Compute the loss\n",
    "        loss = criterion(similarity, labels)  # [batch_size] vs [batch_size]\n",
    "\n",
    "        loss.backward()\n",
    "\n",
    "        # Gradient clipping\n",
    "        torch.nn.utils.clip_grad_norm_(model.parameters(), clip)\n",
    "\n",
    "        optimizer.step()\n",
    "\n",
    "        epoch_loss += loss.item()\n",
    "\n",
    "    return epoch_loss / len(iterator)\n",
    "\n",
    "def evaluate(model, iterator, criterion, device):\n",
    "    \"\"\"评估模型性能\"\"\"\n",
    "    model.eval()\n",
    "    epoch_loss = 0\n",
    "    all_preds = []\n",
    "    all_labels = []\n",
    "\n",
    "    with torch.no_grad():\n",
    "        for i, (src_a, src_b, labels) in enumerate(iterator):\n",
    "            src_a = src_a.to(device)\n",
    "            src_b = src_b.to(device)\n",
    "            labels = labels.to(device)\n",
    "\n",
    "            output_a = model(src_a)  # [batch_size, emb_dim]\n",
    "            output_b = model(src_b)  # [batch_size, emb_dim]\n",
    "\n",
    "            # 计算余弦相似度并调整范围\n",
    "            similarity = torch.cosine_similarity(output_a, output_b, dim=1)  # [batch_size]\n",
    "            similarity = (similarity + 1) * 2.5  # 将 [-1,1] 映射到 [0,5]\n",
    "\n",
    "            # 计算损失\n",
    "            loss = criterion(similarity, labels)  # [batch_size] vs [batch_size]\n",
    "\n",
    "            epoch_loss += loss.item()\n",
    "\n",
    "            all_preds.extend(similarity.cpu().numpy())\n",
    "            all_labels.extend(labels.cpu().numpy())\n",
    "\n",
    "    # 计算 Pearson 相关系数\n",
    "    from scipy.stats import pearsonr\n",
    "    pearson_corr, _ = pearsonr(all_preds, all_labels)\n",
    "\n",
    "    return epoch_loss / len(iterator), pearson_corr"
   ]
  },
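  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "上面的训练/评估函数中，余弦相似度先落在 [-1, 1] 区间，再通过 `(similarity + 1) * 2.5` 线性映射到标签区间 [0, 5]。一个最小示意：\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "a = torch.tensor([[1.0, 0.0], [1.0, 0.0]])\n",
    "b = torch.tensor([[1.0, 0.0], [-1.0, 0.0]])  # 第二对向量方向完全相反\n",
    "sim = torch.cosine_similarity(a, b, dim=1)\n",
    "score = (sim + 1) * 2.5  # [-1, 1] -> [0, 5]\n",
    "print(score)  # tensor([5., 0.])\n",
    "```\n"
   ]
  },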
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 5. 主程序"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|██████████| 5632/5632 [00:00<00:00, 71712.31it/s]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "第 1/30 轮训练\n",
      "训练损失: 3.4693 | 验证损失: 4.1828 | 验证 Pearson 相关系数: 0.3210\n",
      "第 2/30 轮训练\n",
      "训练损失: 2.6747 | 验证损失: 3.8793 | 验证 Pearson 相关系数: 0.3230\n",
      "第 3/30 轮训练\n",
      "训练损失: 2.5129 | 验证损失: 3.7033 | 验证 Pearson 相关系数: 0.3806\n",
      "第 4/30 轮训练\n",
      "训练损失: 2.2686 | 验证损失: 3.4435 | 验证 Pearson 相关系数: 0.4416\n",
      "第 5/30 轮训练\n",
      "训练损失: 2.1431 | 验证损失: 3.4166 | 验证 Pearson 相关系数: 0.4298\n",
      "第 6/30 轮训练\n",
      "训练损失: 1.9357 | 验证损失: 3.2021 | 验证 Pearson 相关系数: 0.4951\n",
      "第 7/30 轮训练\n",
      "训练损失: 1.7855 | 验证损失: 3.0361 | 验证 Pearson 相关系数: 0.5312\n",
      "第 8/30 轮训练\n",
      "训练损失: 1.5893 | 验证损失: 2.9354 | 验证 Pearson 相关系数: 0.5648\n",
      "第 9/30 轮训练\n",
      "训练损失: 1.4157 | 验证损失: 2.8108 | 验证 Pearson 相关系数: 0.5679\n",
      "第 10/30 轮训练\n",
      "训练损失: 1.3449 | 验证损失: 2.8245 | 验证 Pearson 相关系数: 0.5749\n",
      "第 11/30 轮训练\n",
      "训练损失: 1.1668 | 验证损失: 2.7225 | 验证 Pearson 相关系数: 0.5918\n",
      "第 12/30 轮训练\n",
      "训练损失: 1.0700 | 验证损失: 2.7465 | 验证 Pearson 相关系数: 0.5897\n",
      "第 13/30 轮训练\n",
      "训练损失: 0.9835 | 验证损失: 2.6290 | 验证 Pearson 相关系数: 0.5894\n",
      "第 14/30 轮训练\n",
      "训练损失: 0.8871 | 验证损失: 2.7045 | 验证 Pearson 相关系数: 0.5849\n",
      "第 15/30 轮训练\n",
      "训练损失: 0.8249 | 验证损失: 2.6498 | 验证 Pearson 相关系数: 0.5946\n",
      "第 16/30 轮训练\n",
      "训练损失: 0.7701 | 验证损失: 2.6348 | 验证 Pearson 相关系数: 0.5945\n",
      "第 17/30 轮训练\n",
      "训练损失: 0.7368 | 验证损失: 2.6352 | 验证 Pearson 相关系数: 0.5787\n",
      "第 18/30 轮训练\n",
      "训练损失: 0.6870 | 验证损失: 2.6310 | 验证 Pearson 相关系数: 0.5878\n",
      "第 19/30 轮训练\n",
      "训练损失: 0.6693 | 验证损失: 2.5844 | 验证 Pearson 相关系数: 0.6017\n",
      "第 20/30 轮训练\n",
      "训练损失: 0.5915 | 验证损失: 2.5757 | 验证 Pearson 相关系数: 0.5934\n",
      "第 21/30 轮训练\n",
      "训练损失: 0.5660 | 验证损失: 2.6115 | 验证 Pearson 相关系数: 0.5765\n",
      "第 22/30 轮训练\n",
      "训练损失: 0.5479 | 验证损失: 2.5720 | 验证 Pearson 相关系数: 0.5857\n",
      "第 23/30 轮训练\n",
      "训练损失: 0.5145 | 验证损失: 2.5546 | 验证 Pearson 相关系数: 0.5950\n",
      "第 24/30 轮训练\n",
      "训练损失: 0.5116 | 验证损失: 2.5066 | 验证 Pearson 相关系数: 0.5946\n",
      "第 25/30 轮训练\n",
      "训练损失: 0.4736 | 验证损失: 2.5444 | 验证 Pearson 相关系数: 0.5870\n",
      "第 26/30 轮训练\n",
      "训练损失: 0.4618 | 验证损失: 2.6487 | 验证 Pearson 相关系数: 0.5832\n",
      "第 27/30 轮训练\n",
      "训练损失: 0.4326 | 验证损失: 2.5720 | 验证 Pearson 相关系数: 0.5849\n",
      "第 28/30 轮训练\n",
      "训练损失: 0.4265 | 验证损失: 2.4541 | 验证 Pearson 相关系数: 0.6008\n",
      "第 29/30 轮训练\n",
      "训练损失: 0.4212 | 验证损失: 2.5064 | 验证 Pearson 相关系数: 0.5936\n",
      "第 30/30 轮训练\n",
      "训练损失: 0.3878 | 验证损失: 2.5036 | 验证 Pearson 相关系数: 0.5934\n"
     ]
    }
   ],
   "source": [
    "def main():\n",
    "    # 设置随机种子\n",
    "    import random\n",
    "    import numpy as np\n",
    "\n",
    "    SEED = 42\n",
    "    random.seed(SEED)\n",
    "    np.random.seed(SEED)\n",
    "    torch.manual_seed(SEED)\n",
    "    if torch.cuda.is_available():\n",
    "        torch.cuda.manual_seed(SEED)\n",
    "\n",
    "    # 配置日志记录\n",
    "    logging.basicConfig(\n",
    "        filename='training.log',\n",
    "        filemode='w',  # 覆盖之前的日志文件\n",
    "        level=logging.INFO,\n",
    "        format='%(asctime)s - %(levelname)s - %(message)s'\n",
    "    )\n",
    "\n",
    "    # 数据路径\n",
    "    train_path = 'data/sts-kaggle-train.csv'\n",
    "\n",
    "    # 定义tokenizer\n",
    "    tokenizer = get_tokenizer('basic_english')\n",
    "\n",
    "    # 构建词汇表\n",
    "    def yield_tokens(data_path):\n",
    "        data = pd.read_csv(data_path)\n",
    "        data = data.dropna(subset=['sentence_a', 'sentence_b'])\n",
    "        for sentence in data['sentence_a'].tolist() + data['sentence_b'].tolist():\n",
    "            yield tokenizer(str(sentence))\n",
    "\n",
    "    vocab = build_vocab_from_iterator(yield_tokens(train_path), specials=[\"<pad>\", \"<unk>\"])\n",
    "    vocab.set_default_index(vocab[\"<unk>\"])\n",
    "\n",
    "    # 加载预训练词向量\n",
    "    glove = GloVe(name='6B', dim=300, cache='glove.6B')\n",
    "\n",
    "    # 将词向量赋值给词汇表\n",
    "    vocab.vectors = glove.get_vecs_by_tokens(vocab.get_itos())\n",
    "\n",
    "    # 加载数据并分割\n",
    "    full_dataset = MyDataset(train_path, tokenizer, vocab, test=False)\n",
    "    test_size = int(len(full_dataset) * 0.1)\n",
    "    train_size = len(full_dataset) - test_size\n",
    "    train_valid_dataset, test_dataset = torch.utils.data.random_split(\n",
    "        full_dataset, [train_size, test_size], generator=torch.Generator().manual_seed(SEED))\n",
    "\n",
    "    # 分割训练和验证数据集\n",
    "    valid_size = int(len(train_valid_dataset) * 0.1)\n",
    "    train_size = len(train_valid_dataset) - valid_size\n",
    "    train_dataset, valid_dataset = torch.utils.data.random_split(\n",
    "        train_valid_dataset, [train_size, valid_size], generator=torch.Generator().manual_seed(SEED))\n",
    "\n",
    "    # 创建数据加载器\n",
    "    BATCH_SIZE = 16\n",
    "    train_loader = DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=4)\n",
    "    valid_loader = DataLoader(valid_dataset, batch_size=BATCH_SIZE, shuffle=False, num_workers=4)\n",
    "    test_loader = DataLoader(test_dataset, batch_size=BATCH_SIZE, shuffle=False, num_workers=4)\n",
    "\n",
    "    # 获取词向量矩阵\n",
    "    weight_matrix = vocab.vectors\n",
    "\n",
    "    # 初始化模型\n",
    "    emb_dim = 300\n",
    "    heads = 4\n",
    "    model = TextSimilarityModel(weight_matrix, emb_dim=emb_dim, heads=heads)\n",
    "    device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
    "    model = model.to(device)\n",
    "\n",
    "    # 训练参数\n",
    "    N_EPOCHS = 30\n",
    "    CLIP = 1\n",
    "    best_valid_loss = float('inf')\n",
    "    criterion = nn.MSELoss()\n",
    "    optimizer = optim.Adam(model.parameters(), lr=0.001)\n",
    "\n",
    "    # 训练循环\n",
    "    for epoch in range(N_EPOCHS):\n",
    "        train_loss = train_epoch(model, train_loader, optimizer, criterion, CLIP, device)\n",
    "        valid_loss, valid_corr = evaluate(model, valid_loader, criterion, device)\n",
    "\n",
    "        # 保存最优模型\n",
    "        if valid_loss < best_valid_loss:\n",
    "            best_valid_loss = valid_loss\n",
    "            torch.save(model.state_dict(), 'best-model.pt')\n",
    "\n",
    "        # 使用日志记录训练信息\n",
    "        logging.info(f'第 {epoch+1}/{N_EPOCHS} 轮训练')\n",
    "        logging.info(f'训练损失: {train_loss:.4f} | 验证损失: {valid_loss:.4f} | 验证 Pearson 相关系数: {valid_corr:.4f}')\n",
    "\n",
    "        # 如果需要，也可以打印到控制台\n",
    "        print(f'第 {epoch+1}/{N_EPOCHS} 轮训练')\n",
    "        print(f'训练损失: {train_loss:.4f} | 验证损失: {valid_loss:.4f} | 验证 Pearson 相关系数: {valid_corr:.4f}')\n",
    "\n",
    "if __name__ == '__main__':\n",
    "    main()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Columns in data/sts-kaggle-train.csv:\n",
      "['id', 'sentence_a', 'sentence_b', 'similarity']\n",
      "Columns in data/sts-kaggle-train.csv:\n",
      "['id', 'sentence_a', 'sentence_b', 'similarity']\n",
      "Columns in data/sts-kaggle-train.csv:\n",
      "['id', 'sentence_a', 'sentence_b', 'similarity']\n"
     ]
    }
   ],
   "source": [
    "import pandas as pd\n",
    "\n",
    "def check_csv_columns(file_path):\n",
    "    try:\n",
    "        data = pd.read_csv(file_path)\n",
    "        print(f\"Columns in {file_path}:\")\n",
    "        print(data.columns.tolist())\n",
    "    except Exception as e:\n",
    "        print(f\"Error reading {file_path}: {e}\")\n",
    "\n",
    "# 检查训练集、验证集和测试集的列名\n",
    "check_csv_columns('data/sts-kaggle-train.csv')\n",
    "check_csv_columns('data/sts-kaggle-train.csv')\n",
    "check_csv_columns('data/sts-kaggle-train.csv')\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## **需要注意的优化点**\n",
    "### **1. 数据预处理**\n",
    "\n",
    "- **空值处理**：确保在加载数据时处理缺失值，防止在训练过程中出现错误。\n",
    "- **类型转换**：在处理文本和标签时，确保将其转换为正确的类型（如字符串、浮点数）。\n",
    "\n",
    "### **2. 模型设计**\n",
    "\n",
    "- **自定义注意力机制**：由于您希望练习代码实现，所以保留了自定义的注意力机制。在实现过程中，要注意维度的匹配和矩阵运算的正确性。\n",
    "- **模型可扩展性**：当前的模型结构相对简单，未来可以考虑增加层数或引入其他机制（如卷积层、循环神经网络）以提升模型性能。\n",
    "\n",
    "### **3. 训练和评估**\n",
    "\n",
    "- **损失函数选择**：使用 `nn.MSELoss()` 适用于回归任务，但也可以尝试其他损失函数（如 `nn.SmoothL1Loss()`）以观察对模型的影响。\n",
    "- **评估指标**：除了 Pearson 相关系数，可以考虑引入 Spearman 相关系数或均方根误差（RMSE）作为补充评估指标。\n",
    "- **超参数调整**：调整学习率、批量大小、训练轮数等超参数，观察模型在验证集上的表现，以找到最佳配置。\n",
    "\n",
    "### **4. 代码规范和可维护性**\n",
    "\n",
    "- **代码格式**：遵循 PEP 8 代码规范，使用一致的缩进和命名风格，增加代码的可读性。\n",
    "- **注释和文档**：为类和函数添加文档字符串（docstrings），解释其功能、参数和返回值，便于他人理解和维护代码。\n",
    "- **变量命名**：使用有意义的变量名，避免使用过于简短或模糊的名称，如 `src_a` 可以改为 `sentence_a_tensor`。\n",
    "\n",
    "### **5. 性能优化**\n",
    "\n",
    "- **数据加载优化**：对于大型数据集，可以考虑使用更高效的数据加载方式，或在 `DataLoader` 中启用多进程（设置 `num_workers` 参数）。\n",
    "- **模型效率**：自定义的注意力机制可能在性能上不如内置的高效。尽管为了练习手动实现了注意力机制，但在实际项目中可以考虑优化实现，或使用经过高度优化的库。\n",
    "- **显存管理**：在使用大型嵌入矩阵和批量大小时，要注意显存的使用，防止出现 OOM（Out of Memory）错误。\n",
    "\n",
    "### **6. 安全和健壮性**\n",
    "\n",
    "- **输入验证**：确保在处理输入数据时，验证其格式和内容，防止异常数据导致程序崩溃。\n",
    "- **异常处理**：在关键的代码段添加异常捕获机制，提升程序的健壮性。\n",
    "\n",
    "### **7. 未来扩展**\n",
    "\n",
    "- **模型保存和加载**：当前仅在验证损失降低时保存模型，可以添加模型加载功能，方便在训练中断后继续训练或进行预测。\n",
    "- **可视化训练过程**：使用 TensorBoard 或 Matplotlib 可视化损失和评估指标的变化趋势，便于分析模型的训练过程。\n",
    "\n",
    "### **8. 代码复用和模块化**\n",
    "\n",
    "- **模块化设计**：将代码按照功能模块进行划分，例如将数据处理、模型定义、训练流程等分别放入不同的文件或类中，提升代码的复用性和可维护性。\n",
    "- **函数封装**：对于重复的代码段，可以封装成函数，减少冗余代码。\n",
    "\n",
    "### **9. 实验记录和复现**\n",
    "\n",
    "- **随机种子**：在代码中设置随机种子，以确保实验结果的可复现性。\n",
    "- **实验日志**：记录每次实验的参数配置和结果，方便日后分析和比较不同的模型和参数设置。\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "将逐条解释在代码中如何实现了您列出的优化点，以便于您学习和理解。\n",
    "\n",
    "---\n",
    "\n",
    "### **1. 数据预处理**\n",
    "\n",
    "**（1）空值处理：**\n",
    "\n",
    "**实现方式：**\n",
    "\n",
    "在 `MyDataset` 类的 `load_data` 方法中，我添加了对缺失值的处理，使用了 `dropna` 函数：\n",
    "\n",
    "```python\n",
    "data = pd.read_csv(self.path)\n",
    "data = data.dropna(subset=['sentence_a', 'sentence_b', 'similarity'])\n",
    "```\n",
    "\n",
    "这样可以确保在加载数据时，任何包含空值的行都会被删除，防止在训练过程中出现错误。\n",
    "\n",
    "**（2）类型转换：**\n",
    "\n",
    "**实现方式：**\n",
    "\n",
    "在数据加载和处理过程中，确保将文本和标签转换为正确的类型：\n",
    "\n",
    "```python\n",
    "tokens_a = self.tokenizer(str(text_a))\n",
    "tokens_b = self.tokenizer(str(text_b))\n",
    "```\n",
    "\n",
    "将文本转换为字符串，以防止出现非字符串类型的数据。\n",
    "\n",
    "对于标签，确保其被转换为浮点数：\n",
    "\n",
    "```python\n",
    "if label is not None:\n",
    "    label = float(label)\n",
    "    label = torch.tensor(label, dtype=torch.float)\n",
    "```\n",
    "\n",
    "这样可以确保标签在训练过程中被正确处理。\n",
    "\n",
    "---\n",
    "\n",
    "### **2. 模型设计**\n",
    "\n",
    "**（1）自定义注意力机制：**\n",
    "\n",
    "**实现方式：**\n",
    "\n",
    "我保留了自定义的注意力机制 `BasicAttention`，并在实现过程中仔细匹配了各个张量的维度，确保矩阵运算的正确性。\n",
    "\n",
    "在 `BasicAttention` 类中：\n",
    "\n",
    "- 确保嵌入维度可以被头数整除：\n",
    "\n",
    "  ```python\n",
    "  assert self.head_dim * heads == emb_dim, \"嵌入维度必须能被头数整除\"\n",
    "  ```\n",
    "\n",
    "- 在前向传播过程中，注意张量的形状变化和维度排列，使用 `permute` 和 `view` 等方法。\n",
    "\n",
    "**（2）模型可扩展性：**\n",
    "\n",
    "**实现方式：**\n",
    "\n",
    "当前模型结构包括嵌入层、自定义注意力机制、全连接层和平均池化层。为了保持代码的简洁性，暂未增加额外的层数或机制。\n",
    "\n",
    "您可以在此基础上，尝试增加模型的深度，或者引入循环神经网络（如 LSTM）或卷积层，以提升模型的性能。\n",
    "\n",
    "---\n",
    "\n",
    "### **3. 训练和评估**\n",
    "\n",
    "**（1）损失函数选择：**\n",
    "\n",
    "**实现方式：**\n",
    "\n",
    "在代码中，我使用了 `nn.MSELoss()` 作为损失函数：\n",
    "\n",
    "```python\n",
    "criterion = nn.MSELoss()\n",
    "```\n",
    "\n",
    "此外，您可以尝试使用 `nn.SmoothL1Loss()` 或其他损失函数，方法是替换上述代码，并观察对模型训练的影响。\n",
    "\n",
    "**（2）评估指标：**\n",
    "\n",
    "**实现方式：**\n",
    "\n",
    "在评估函数中，除了计算损失外，还计算了 Pearson 相关系数：\n",
    "\n",
    "```python\n",
    "from scipy.stats import pearsonr\n",
    "pearson_corr, _ = pearsonr(all_preds, all_labels)\n",
    "```\n",
    "\n",
    "您可以进一步添加 Spearman 相关系数或均方根误差（RMSE）作为补充评估指标。例如，计算 RMSE：\n",
    "\n",
    "```python\n",
    "from sklearn.metrics import mean_squared_error\n",
    "rmse = mean_squared_error(all_labels, all_preds, squared=False)\n",
    "```\n",
    "\n",
    "**（3）超参数调整：**\n",
    "\n",
    "**实现方式：**\n",
    "\n",
    "在代码中设置了可调节的超参数，如学习率、批量大小、训练轮数等：\n",
    "\n",
    "```python\n",
    "BATCH_SIZE = 16\n",
    "N_EPOCHS = 30\n",
    "optimizer = optim.Adam(model.parameters(), lr=0.001)\n",
    "```\n",
    "\n",
    "您可以通过修改这些参数，观察模型在验证集上的表现，以找到最佳配置。\n",
    "\n",
    "---\n",
    "\n",
    "### **4. 代码规范和可维护性**\n",
    "\n",
    "**（1）代码格式：**\n",
    "\n",
    "**实现方式：**\n",
    "\n",
    "- 使用了4个空格的缩进，符合PEP 8规范。\n",
    "- 保持了一致的命名风格，变量和函数名使用小写加下划线的形式。\n",
    "\n",
    "**（2）注释和文档：**\n",
    "\n",
    "**实现方式：**\n",
    "\n",
    "为类和函数添加了文档字符串（docstrings），解释其功能、参数和返回值。例如：\n",
    "\n",
    "```python\n",
    "class MyDataset(Dataset):\n",
    "    \"\"\"自定义数据集类，用于加载和处理数据\"\"\"\n",
    "    def __init__(self, path, tokenizer, vocab, max_length=200, test=False):\n",
    "        \"\"\"\n",
    "        初始化数据集\n",
    "\n",
    "        参数：\n",
    "            path (str): 数据文件的路径\n",
    "            tokenizer (callable): 分词函数\n",
    "            vocab (Vocab): 词汇表对象\n",
    "            max_length (int): 序列的最大长度\n",
    "            test (bool): 是否为测试集\n",
    "        \"\"\"\n",
    "        # 初始化代码...\n",
    "```\n",
    "\n",
    "**（3）变量命名：**\n",
    "\n",
    "**实现方式：**\n",
    "\n",
    "- 使用有意义的变量名，避免过于简短或模糊。\n",
    "- 例如，将 `src_a` 和 `src_b` 修改为 `sentence_a_tensor` 和 `sentence_b_tensor`，但为了代码的简洁，在训练循环中保持了 `src_a` 的命名。您可以根据需要进一步优化变量名。\n",
    "\n",
    "---\n",
    "\n",
    "### **5. 性能优化**\n",
    "\n",
    "**（1）数据加载优化：**\n",
    "\n",
    "**实现方式：**\n",
    "\n",
    "在 `DataLoader` 中启用了多进程数据加载，通过设置 `num_workers` 参数：\n",
    "\n",
    "```python\n",
    "train_loader = DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=4)\n",
    "```\n",
    "\n",
    "这样可以加速数据的加载，提升训练效率。\n",
    "\n",
    "**（2）模型效率：**\n",
    "\n",
    "**实现方式：**\n",
    "\n",
    "虽然保留了自定义的注意力机制，但在实现过程中，尽量优化了代码，减少不必要的计算和数据拷贝。\n",
    "\n",
    "**（3）显存管理：**\n",
    "\n",
    "**实现方式：**\n",
    "\n",
    "- 注意了批量大小的设置，防止显存溢出。\n",
    "- 在模型和数据转移到GPU时，使用了 `.to(device)`，并在训练和评估过程中，使用 `with torch.no_grad()` 来减少显存占用。\n",
    "\n",
    "---\n",
    "\n",
    "### **6. 安全和健壮性**\n",
    "\n",
    "**（1）输入验证：**\n",
    "\n",
    "**实现方式：**\n",
    "\n",
    "在数据加载时，使用 `dropna` 来确保输入数据的完整性。\n",
    "\n",
    "**（2）异常处理：**\n",
    "\n",
    "**实现方式：**\n",
    "\n",
    "在关键代码段，可以添加异常捕获机制。例如，在加载词向量时，添加异常处理：\n",
    "\n",
    "```python\n",
    "try:\n",
    "    glove = GloVe(name='6B', dim=300, cache='glove.6B')\n",
    "except Exception as e:\n",
    "    print(\"加载词向量时出错：\", e)\n",
    "```\n",
    "\n",
    "---\n",
    "\n",
    "### **7. 未来扩展**\n",
    "\n",
    "**（1）模型保存和加载：**\n",
    "\n",
    "**实现方式：**\n",
    "\n",
    "在训练过程中，保存了验证损失最小的模型：\n",
    "\n",
    "```python\n",
    "torch.save(model.state_dict(), 'best-model.pt')\n",
    "```\n",
    "\n",
    "您可以添加模型加载的功能，以便在训练中断后继续训练或进行预测：\n",
    "\n",
    "```python\n",
    "model.load_state_dict(torch.load('best-model.pt'))\n",
    "```\n",
    "\n",
    "**（2）可视化训练过程：**\n",
    "\n",
    "**实现方式：**\n",
    "\n",
    "可以使用 Matplotlib 或 TensorBoard 来可视化训练和验证的损失、评估指标等。由于代码中未包含此部分，您可以根据需要添加。例如，使用 Matplotlib 绘制损失曲线。\n",
    "\n",
    "---\n",
    "\n",
    "### **8. 代码复用和模块化**\n",
    "\n",
    "**（1）模块化设计：**\n",
    "\n",
    "**实现方式：**\n",
    "\n",
    "将代码按照功能划分为不同的部分，如数据集类、模型类、训练和评估函数、主函数等。\n",
    "\n",
    "**（2）函数封装：**\n",
    "\n",
    "**实现方式：**\n",
    "\n",
    "将重复的代码封装成函数，例如数据加载、模型训练和评估等。\n",
    "\n",
    "---\n",
    "\n",
    "### **9. 实验记录和复现**\n",
    "\n",
    "**（1）随机种子：**\n",
    "\n",
    "**实现方式：**\n",
    "\n",
    "在代码中设置了随机种子，以确保实验结果的可复现性：\n",
    "\n",
    "```python\n",
    "SEED = 42\n",
    "random.seed(SEED)\n",
    "np.random.seed(SEED)\n",
    "torch.manual_seed(SEED)\n",
    "if torch.cuda.is_available():\n",
    "    torch.cuda.manual_seed(SEED)\n",
    "```\n",
    "\n",
    "**（2）实验日志：**\n",
    "\n",
    "**实现方式：**\n",
    "\n",
    "在训练过程中，打印了训练和验证的损失，以及验证集的 Pearson 相关系数。您可以进一步将这些结果保存到日志文件中，或者使用日志库如 `logging` 进行记录。\n",
    "\n",
    "---\n",
    "\n",
    "希望以上解释能够帮助您理解代码中如何实现了这些优化点，并为您的学习提供帮助。如有任何疑问，欢迎继续提问！"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "base",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
