{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "9653a331-bda6-4e98-a2b5-1763e35b99f1",
   "metadata": {},
   "source": [
    "<font size=\"5\">Fine-Tuning for QA on the SQuAD Dataset</font>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1c5b2e98-4caa-4385-a7ad-67085e8157aa",
   "metadata": {},
   "source": [
    "Stanford Question Answering Dataset  \n",
    "SQuAD 1.1: roughly 100,000 question-answer pairs built from 500+ Wikipedia articles.  \n",
    "Every question has a definite answer, given as a contiguous span of the passage, making this an extractive question-answering task.  \n",
    "\n",
    "SQuAD 2.0: adds 50,111 unanswerable questions on top of the original ~100,000 answerable ones, for 150,111 questions in total.  \n",
    "These added questions look plausible but cannot be answered from the passage, so a model must not only extract answers but also judge whether a question is answerable at all.  "
   ]
  },
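  {
   "cell_type": "markdown",
   "id": "f0e1d2c3-1111-4111-8111-000000000001",
   "metadata": {},
   "source": [
    "The span-extraction format can be illustrated with a toy example (hypothetical text, not drawn from SQuAD): `answer_start` is a character offset into `context`, so slicing the context at that offset recovers the answer text exactly."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f0e1d2c3-1111-4111-8111-000000000002",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy illustration of the SQuAD answer format: the answer is a contiguous span\n",
    "# of the context, located by its starting character offset.\n",
    "context = \"Notre Dame is a Catholic research university located in Indiana.\"\n",
    "answers = {\"text\": [\"Indiana\"], \"answer_start\": [context.index(\"Indiana\")]}\n",
    "start = answers[\"answer_start\"][0]\n",
    "end = start + len(answers[\"text\"][0])\n",
    "print(start, end, context[start:end])\n",
    "assert context[start:end] == answers[\"text\"][0]  # the slice equals the answer text"
   ]
  },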
  {
   "cell_type": "markdown",
   "id": "899a63bd-e8a7-40e2-b5b4-9857470c9369",
   "metadata": {},
   "source": [
    "<font size=\"4\">1 Loading the Data</font>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "6d561166-622a-48f5-bafb-9a695ad296b4",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "DatasetDict({\n",
       "    train: Dataset({\n",
       "        features: ['id', 'title', 'context', 'question', 'answers'],\n",
       "        num_rows: 87599\n",
       "    })\n",
       "    validation: Dataset({\n",
       "        features: ['id', 'title', 'context', 'question', 'answers'],\n",
       "        num_rows: 10570\n",
       "    })\n",
       "})"
      ]
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import os\n",
    "\n",
    "os.environ['http_proxy'] = 'http://127.0.0.1:1087'\n",
    "os.environ['https_proxy'] = 'http://127.0.0.1:1087'\n",
    "\n",
    "squad_v2 = False  # use SQuAD 1.1\n",
    "batch_size = 16\n",
    "max_length = 384 \n",
    "doc_stride = 128 \n",
    "\n",
    "from datasets import load_dataset\n",
    "datasets = load_dataset(\"squad_v2\" if squad_v2 else \"squad\")\n",
    "datasets"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "b7cc24c2-e96b-4a87-b014-c80a7e30e841",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'id': '5733be284776f41900661182',\n",
       " 'title': 'University_of_Notre_Dame',\n",
       " 'context': 'Architecturally, the school has a Catholic character. Atop the Main Building\\'s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend \"Venite Ad Me Omnes\". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.',\n",
       " 'question': 'To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?',\n",
       " 'answers': {'text': ['Saint Bernadette Soubirous'], 'answer_start': [515]}}"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "datasets[\"train\"][0]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "id": "146b6110-b20b-4c73-a63b-62a4261606ad",
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "from IPython.display import display, HTML\n",
    "\n",
    "def dp(d):\n",
    "    # render a dict of columns as an HTML table\n",
    "    display(HTML(pd.DataFrame(d).to_html()))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "e5571820-4e20-46ba-ac41-df42483095ba",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>id</th>\n",
       "      <th>title</th>\n",
       "      <th>context</th>\n",
       "      <th>question</th>\n",
       "      <th>answers</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>5733be284776f41900661182</td>\n",
       "      <td>University_of_Notre_Dame</td>\n",
       "      <td>Architecturally, the school has a Catholic character. Atop the Main Building's gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend \"Venite Ad Me Omnes\". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.</td>\n",
       "      <td>To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?</td>\n",
       "      <td>{'text': ['Saint Bernadette Soubirous'], 'answer_start': [515]}</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "import pandas as pd\n",
    "from IPython.display import display, HTML\n",
    "dp(datasets[\"train\"][[0]])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0a33c7cc-56fb-4192-9e9e-0c625e52cbd8",
   "metadata": {},
   "source": [
    "<font size=\"4\">2 Model Overview and Data Preprocessing</font>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f1debb33-1268-4b12-9fb9-4e9546efdd03",
   "metadata": {},
   "source": [
    "RoBERTa is an improved variant of BERT, pretrained on a much larger corpus (160 GB of text, including BookCorpus and English Wikipedia), and is well suited to fine-tuning on downstream tasks such as question answering."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "3cab0342-c07a-4814-b7bd-1ecc3b182763",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "True\n"
     ]
    }
   ],
   "source": [
    "from transformers import AutoTokenizer\n",
    "model_checkpoint = \"/home/cc/.cache/huggingface/hub/roberta-base\"  # local roberta-base checkpoint\n",
    "tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)\n",
    "\n",
    "import transformers\n",
    "print(isinstance(tokenizer, transformers.PreTrainedTokenizerFast))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "2923598b-858c-4ecf-b733-d30c9b7756ad",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'input_ids': [0, 2264, 16, 110, 766, 116, 2, 2, 2387, 766, 16, 28856, 1851, 4, 2], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}\n",
      "<s>What is your name?</s></s>My name is Sylvain.</s>\n"
     ]
    }
   ],
   "source": [
    "eg = tokenizer(\"What is your name?\", \"My name is Sylvain.\")  # text -> numeric token-id sequence the model understands\n",
    "print(eg)  # input_ids: the token ids; attention_mask: 1 marks real tokens, 0 would mark padding\n",
    "print(tokenizer.decode(eg['input_ids']))  # decode the input_ids back into text (special tokens included)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "33bd59a1-49c9-492e-896c-dbcb3c5e9afb",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>id</th>\n",
       "      <th>title</th>\n",
       "      <th>context</th>\n",
       "      <th>question</th>\n",
       "      <th>answers</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>5733caf74776f4190066124c</td>\n",
       "      <td>University_of_Notre_Dame</td>\n",
       "      <td>The men's basketball team has over 1,600 wins, one of only 12 schools who have reached that mark, and have appeared in 28 NCAA tournaments. Former player Austin Carr holds the record for most points scored in a single game of the tournament with 61. Although the team has never won the NCAA Tournament, they were named by the Helms Athletic Foundation as national champions twice. The team has orchestrated a number of upsets of number one ranked teams, the most notable of which was ending UCLA's record 88-game winning streak in 1974. The team has beaten an additional eight number-one teams, and those nine wins rank second, to UCLA's 10, all-time in wins against the top team. The team plays in newly renovated Purcell Pavilion (within the Edmund P. Joyce Center), which reopened for the beginning of the 2009–2010 season. The team is coached by Mike Brey, who, as of the 2014–15 season, his fifteenth at Notre Dame, has achieved a 332-165 record. In 2009 they were invited to the NIT, where they advanced to the semifinals but were beaten by Penn State who went on and beat Baylor in the championship. The 2010–11 team concluded its regular season ranked number seven in the country, with a record of 25–5, Brey's fifth straight 20-win season, and a second-place finish in the Big East. During the 2014-15 season, the team went 32-6 and won the ACC conference tournament, later advancing to the Elite 8, where the Fighting Irish lost on a missed buzzer-beater against then undefeated Kentucky. Led by NBA draft picks Jerian Grant and Pat Connaughton, the Fighting Irish beat the eventual national champion Duke Blue Devils twice during the season. The 32 wins were the most by the Fighting Irish team since 1908-09.</td>\n",
       "      <td>How many wins does the Notre Dame men's basketball team have?</td>\n",
       "      <td>{'text': ['over 1,600'], 'answer_start': [30]}</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/html": [
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>input_ids</th>\n",
       "      <th>attention_mask</th>\n",
       "      <th>offset_mapping</th>\n",
       "      <th>overflow_to_sample_mapping</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>[0, 6179, 171, 2693, 473, 5, 10579, 9038, 604, 18, 2613, 165, 33, 116, 2, 2, 133, 604, 18, 2613, 165, 34, 81, 112, 6, 4697, 2693, 6, 65, 9, 129, 316, 1304, 54, 33, 1348, 14, 2458, 6, 8, 33, 1382, 11, 971, 5248, 11544, 4, 3531, 869, 4224, 8902, 3106, 5, 638, 13, 144, 332, 1008, 11, 10, 881, 177, 9, 5, 1967, 19, 5659, 4, 2223, 5, 165, 34, 393, 351, 5, 5248, 7647, 6, 51, 58, 1440, 30, 5, 6851, 4339, 8899, 2475, 25, 632, 4739, 2330, 4, 20, 165, 34, 24830, 10, 346, 9, 12744, ...]</td>\n",
       "      <td>[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...]</td>\n",
       "      <td>[(0, 0), (0, 3), (4, 8), (9, 13), (14, 18), (19, 22), (23, 28), (29, 33), (34, 37), (37, 39), (40, 50), (51, 55), (56, 60), (60, 61), (0, 0), (0, 0), (0, 3), (4, 7), (7, 9), (10, 20), (21, 25), (26, 29), (30, 34), (35, 36), (36, 37), (37, 40), (41, 45), (45, 46), (47, 50), (51, 53), (54, 58), (59, 61), (62, 69), (70, 73), (74, 78), (79, 86), (87, 91), (92, 96), (96, 97), (98, 101), (102, 106), (107, 115), (116, 118), (119, 121), (122, 126), (127, 138), (138, 139), (140, 146), (147, 153), (154, 160), (161, 165), (166, 171), (172, 175), (176, 182), (183, 186), (187, 191), (192, 198), (199, 205), (206, 208), (209, 210), (211, 217), (218, 222), (223, 225), (226, 229), (230, 240), (241, 245), (246, 248), (248, 249), (250, 258), (259, 262), (263, 267), (268, 271), (272, 277), (278, 281), (282, 285), (286, 290), (291, 301), (301, 302), (303, 307), (308, 312), (313, 318), (319, 321), (322, 325), (326, 329), (329, 331), (332, 340), (341, 351), (352, 354), (355, 363), (364, 373), (374, 379), (379, 380), (381, 384), (385, 389), (390, 393), (394, 406), (407, 408), (409, 415), (416, 418), (419, 422), ...]</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>[0, 6179, 171, 2693, 473, 5, 10579, 9038, 604, 18, 2613, 165, 33, 116, 2, 2, 20, 1824, 2383, 1225, 165, 4633, 63, 1675, 191, 4173, 346, 707, 11, 5, 247, 6, 19, 10, 638, 9, 564, 2383, 245, 6, 5811, 219, 18, 1998, 1359, 291, 12, 5640, 191, 6, 8, 10, 200, 12, 6406, 2073, 11, 5, 1776, 953, 4, 1590, 5, 777, 12, 996, 191, 6, 5, 165, 439, 2107, 12, 401, 8, 351, 5, 10018, 1019, 1967, 6, 423, 11511, 7, 5, 15834, 290, 6, 147, 5, 18563, 3445, 685, 15, 10, 2039, 8775, 254, 12, 1610, ...]</td>\n",
       "      <td>[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...]</td>\n",
       "      <td>[(0, 0), (0, 3), (4, 8), (9, 13), (14, 18), (19, 22), (23, 28), (29, 33), (34, 37), (37, 39), (40, 50), (51, 55), (56, 60), (60, 61), (0, 0), (0, 0), (1107, 1110), (1111, 1115), (1115, 1116), (1116, 1118), (1119, 1123), (1124, 1133), (1134, 1137), (1138, 1145), (1146, 1152), (1153, 1159), (1160, 1166), (1167, 1172), (1173, 1175), (1176, 1179), (1180, 1187), (1187, 1188), (1189, 1193), (1194, 1195), (1196, 1202), (1203, 1205), (1206, 1208), (1208, 1209), (1209, 1210), (1210, 1211), (1212, 1215), (1215, 1216), (1216, 1218), (1219, 1224), (1225, 1233), (1234, 1236), (1236, 1237), (1237, 1240), (1241, 1247), (1247, 1248), (1249, 1252), (1253, 1254), (1255, 1261), (1261, 1262), (1262, 1267), (1268, 1274), (1275, 1277), (1278, 1281), (1282, 1285), (1286, 1290), (1290, 1291), (1292, 1298), (1299, 1302), (1303, 1307), (1307, 1308), (1308, 1310), (1311, 1317), (1317, 1318), (1319, 1322), (1323, 1327), (1328, 1332), (1333, 1335), (1335, 1336), (1336, 1337), (1338, 1341), (1342, 1345), (1346, 1349), (1350, 1353), (1354, 1364), (1365, 1375), (1375, 1376), (1377, 1382), (1383, 1392), (1393, 1395), (1396, 1399), (1400, 1405), (1406, 1407), (1407, 1408), (1409, 1414), (1415, 1418), (1419, 1427), (1428, 1433), (1434, 1438), (1439, 1441), (1442, 1443), (1444, 1450), (1451, 1455), (1455, 1457), (1457, 1458), (1458, 1460), ...]</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "for i, example in enumerate(datasets[\"train\"]):\n",
    "    if len(tokenizer(example[\"question\"], example[\"context\"])[\"input_ids\"]) > 384:\n",
    "        break\n",
    "\n",
    "dp(datasets[\"train\"][[i]])\n",
    "example = datasets[\"train\"][i]\n",
    "\n",
    "tokenized_example = tokenizer(\n",
    "    example[\"question\"],\n",
    "    example[\"context\"],\n",
    "    max_length=384,  # truncate once question + context exceed 384 tokens\n",
    "    truncation=\"only_second\",  # truncate only the second input (the context), never the question\n",
    "    return_overflowing_tokens=True,  # split an over-long context into several features (each <= max_length) instead of discarding the overflow\n",
    "    return_offsets_mapping=True,  # return each token's (start, end) character offsets in the original question/context\n",
    "    stride=128  # adjacent features overlap by 128 tokens\n",
    ")\n",
    "dp(dict(tokenized_example))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 67,
   "id": "07590cb6-33b2-43ef-96ce-5043b4f98f45",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[(0, 0), (0, 3), (4, 8), (9, 13), (14, 18), (19, 22), (23, 28), (29, 33), (34, 37), (37, 39), (40, 50), (51, 55), (56, 60), (60, 61), (0, 0), (0, 0), (0, 3), (4, 7), (7, 9), (10, 20), (21, 25), (26, 29), (30, 34), (35, 36), (36, 37), (37, 40), (41, 45), (45, 46), (47, 50), (51, 53), (54, 58), (59, 61), (62, 69), (70, 73), (74, 78), (79, 86), (87, 91), (92, 96), (96, 97), (98, 101), (102, 106), (107, 115), (116, 118), (119, 121), (122, 126), (127, 138), (138, 139), (140, 146), (147, 153), (154, 160), (161, 165), (166, 171), (172, 175), (176, 182), (183, 186), (187, 191), (192, 198), (199, 205), (206, 208), (209, 210), (211, 217), (218, 222), (223, 225), (226, 229), (230, 240), (241, 245), (246, 248), (248, 249), (250, 258), (259, 262), (263, 267), (268, 271), (272, 277), (278, 281), (282, 285), (286, 290), (291, 301), (301, 302), (303, 307), (308, 312), (313, 318), (319, 321), (322, 325), (326, 329), (329, 331), (332, 340), (341, 351), (352, 354), (355, 363), (364, 373), (374, 379), (379, 380), (381, 384), (385, 389), (390, 393), (394, 406), (407, 408), (409, 415), (416, 418), (419, 422)]\n"
     ]
    }
   ],
   "source": [
    "print(tokenized_example[\"offset_mapping\"][0][:100])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 66,
   "id": "fcd06123-aeb1-41eb-8916-6bb95526c378",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "6179 (0, 3)\n",
      "How How\n"
     ]
    }
   ],
   "source": [
    "first_token_id = tokenized_example[\"input_ids\"][0][1]  \n",
    "offsets = tokenized_example[\"offset_mapping\"][0][1]\n",
    "print(first_token_id, offsets)\n",
    "print(tokenizer.convert_ids_to_tokens([first_token_id])[0], example[\"question\"][offsets[0]:offsets[1]])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1a43bff0-be5b-4cce-83ba-e58aa1f0f502",
   "metadata": {},
   "source": [
    "Next we convert the answer's character positions in the raw text (human-readable) into token indices (model-readable), the labels the model will learn to predict."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "37bc5c01-9ac0-4f0a-86bc-1b92a26b2d7d",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "22 25\n"
     ]
    }
   ],
   "source": [
    "answers = example[\"answers\"]  # {'text': ['over 1,600'], 'answer_start': [30]}\n",
    "start_char = answers[\"answer_start\"][0]  # 30\n",
    "end_char = start_char + len(answers[\"text\"][0])  # 40\n",
    "\n",
    "# segment ids: which input each token belongs to (None = special token, 0 = question, 1 = context)\n",
    "sequence_ids = tokenized_example.sequence_ids()  # defaults to the first feature; pass an index, e.g. sequence_ids(1), for the others\n",
    "# [None, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, None, None, 1, 1, ..... 1, 1, None]\n",
    "\n",
    "token_start_index = 0\n",
    "while sequence_ids[token_start_index] != 1:\n",
    "    token_start_index += 1\n",
    "# 16\n",
    "token_end_index = len(tokenized_example[\"input_ids\"][0]) - 1\n",
    "while sequence_ids[token_end_index] != 1:\n",
    "    token_end_index -= 1\n",
    "# 382\n",
    "\n",
    "# move token_start_index and token_end_index to the two ends of the answer span\n",
    "offsets = tokenized_example[\"offset_mapping\"][0]  \n",
    "# print(offsets[token_start_index], offsets[token_end_index])\n",
    "# (0, 3) (1682, 1685)\n",
    "if (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char):\n",
    "    while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char:\n",
    "        token_start_index += 1\n",
    "    start_position = token_start_index - 1\n",
    "    while offsets[token_end_index][1] >= end_char:\n",
    "        token_end_index -= 1\n",
    "    end_position = token_end_index + 1\n",
    "    print(start_position, end_position)  # 22, 25\n",
    "else:\n",
    "    print(\"The answer is not in this feature.\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "id": "a9811ec3-38ce-47aa-a24c-f95f58361818",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      " over 1,600\n",
      "over 1,600\n"
     ]
    }
   ],
   "source": [
    "# decode the answer from the context using the token positions found via the offset mapping\n",
    "print(tokenizer.decode(tokenized_example[\"input_ids\"][0][start_position: end_position+1]))\n",
    "# the gold answer from the dataset (answers[\"text\"])\n",
    "print(answers[\"text\"][0])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "37224c52-3214-47f1-bb96-5e9b35601358",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "True\n"
     ]
    }
   ],
   "source": [
    "pad_on_right = tokenizer.padding_side == \"right\"\n",
    "print(pad_on_right)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "113d9f8f-84e6-470b-8f9a-245e4b2c1500",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "83648beafb90426180702f93d86e264f",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Map:   0%|          | 0/87599 [00:00<?, ? examples/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "1c8139d9720c4108bc0cdde085cb97ec",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Map:   0%|          | 0/10570 [00:00<?, ? examples/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "def prepare_train_features(examples):\n",
    "    examples[\"question\"] = [q.lstrip() for q in examples[\"question\"]]  # strip leading whitespace so stray spaces don't skew the character offsets\n",
    "\n",
    "    tokenized_examples = tokenizer(\n",
    "        examples[\"question\" if pad_on_right else \"context\"],\n",
    "        examples[\"context\" if pad_on_right else \"question\"],\n",
    "        truncation=\"only_second\" if pad_on_right else \"only_first\",\n",
    "        max_length=max_length,  # maximum token length; longer inputs are truncated\n",
    "        stride=doc_stride,  # sliding-window overlap between adjacent features when a long context is split\n",
    "        return_overflowing_tokens=True,  # split over-long contexts into multiple features\n",
    "        return_offsets_mapping=True,  # character offsets of each token in the original text\n",
    "        padding=\"max_length\",\n",
    "    )\n",
    "\n",
    "    sample_mapping = tokenized_examples.pop(\"overflow_to_sample_mapping\")  # index of the original example each feature came from; e.g. if example 0 was split into 3 features, sample_mapping holds 0 three times\n",
    "    offset_mapping = tokenized_examples.pop(\"offset_mapping\")  # character span of each token, e.g. (5, 8) means the token covers characters 5-8 of the original text\n",
    "\n",
    "    tokenized_examples[\"start_positions\"] = []\n",
    "    tokenized_examples[\"end_positions\"] = []\n",
    "\n",
    "    for i, offsets in enumerate(offset_mapping):\n",
    "        input_ids = tokenized_examples[\"input_ids\"][i]\n",
    "        cls_index = input_ids.index(tokenizer.cls_token_id)  # index of the special CLS/<s> token\n",
    "        sequence_ids = tokenized_examples.sequence_ids(i)\n",
    "        sample_index = sample_mapping[i]\n",
    "        answers = examples[\"answers\"][sample_index]\n",
    "        if len(answers[\"answer_start\"]) == 0:\n",
    "            # if the example has no answer (answer_start is empty), point both start_positions and end_positions at the CLS token\n",
    "            tokenized_examples[\"start_positions\"].append(cls_index)\n",
    "            tokenized_examples[\"end_positions\"].append(cls_index)\n",
    "        else:\n",
    "            start_char = answers[\"answer_start\"][0]\n",
    "            end_char = start_char + len(answers[\"text\"][0])\n",
    "            \n",
    "            # find the first token of the context\n",
    "            token_start_index = 0\n",
    "            while sequence_ids[token_start_index] != (1 if pad_on_right else 0):\n",
    "                token_start_index += 1\n",
    "            # find the last token of the context\n",
    "            token_end_index = len(input_ids) - 1\n",
    "            while sequence_ids[token_end_index] != (1 if pad_on_right else 0):\n",
    "                token_end_index -= 1\n",
    "\n",
    "            if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char):\n",
    "                # the answer is not in this feature: label both positions with the CLS index\n",
    "                tokenized_examples[\"start_positions\"].append(cls_index)\n",
    "                tokenized_examples[\"end_positions\"].append(cls_index)\n",
    "            else:\n",
    "                # map the answer's character span (start_char/end_char) to token indices\n",
    "                while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char:\n",
    "                    token_start_index += 1\n",
    "                tokenized_examples[\"start_positions\"].append(token_start_index - 1)\n",
    "                \n",
    "                while offsets[token_end_index][1] >= end_char:\n",
    "                    token_end_index -= 1\n",
    "                tokenized_examples[\"end_positions\"].append(token_end_index + 1)\n",
    "\n",
    "    return tokenized_examples\n",
    "\n",
    "# apply prepare_train_features to every example in the dataset\n",
    "tokenized_datasets = datasets.map(prepare_train_features,\n",
    "                                  batched=True,  # process examples in batches\n",
    "                                  remove_columns=datasets[\"train\"].column_names  # drop the raw columns (question, context, ...), keeping only the tokenized features (input_ids, attention_mask, start_positions, ...)\n",
    "                                 )"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "b0f00f0b-afb4-4554-8a41-e5c8af97c539",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>input_ids</th>\n",
       "      <th>attention_mask</th>\n",
       "      <th>start_positions</th>\n",
       "      <th>end_positions</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>[0, 3972, 2661, 222, 5, 9880, 2708, 2346, 2082, 11, 504, 4432, 11, 226, 2126, 10067, 1470, 116, 2, 2, 37848, 37471, 28108, 6, 5, 334, 34, 10, 4019, 2048, 4, 497, 1517, 5, 4326, 6919, 18, 1637, 31346, 16, 10, 9030, 9577, 9, 5, 9880, 2708, 4, 29261, 11, 760, 9, 5, 4326, 6919, 8, 2114, 24, 6, 16, 10, 7621, 9577, 9, 4845, 19, 3701, 62, 33161, 19, 5, 7875, 22, 39043, 1459, 1614, 1464, 13292, 4977, 845, 4130, 7, 5, 4326, 6919, 16, 5, 26429, 2426, 9, 5, 25095, 6924, 4, 29261, 639, 5, 32394, 2426, 16, ...]</td>\n",
       "      <td>[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...]</td>\n",
       "      <td>135</td>\n",
       "      <td>142</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "dp(dict(tokenized_datasets['train'][[0]]))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "147166c3-a488-4065-806f-0652c39770e4",
   "metadata": {},
   "source": [
    "<font size=\"4\">3 Fine-Tuning the Model</font>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "abfc6f87-e5a1-4fad-9e04-31a1bf34edaa",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Some weights of RobertaForQuestionAnswering were not initialized from the model checkpoint at /home/cc/.cache/huggingface/hub/roberta-base and are newly initialized: ['qa_outputs.bias', 'qa_outputs.weight']\n",
      "You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\n"
     ]
    }
   ],
   "source": [
    "from transformers import AutoModelForQuestionAnswering, TrainingArguments, Trainer\n",
    "\n",
    "model = AutoModelForQuestionAnswering.from_pretrained(model_checkpoint)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "425fdeb4-b150-4569-b8c8-74e518eaf8b1",
   "metadata": {},
   "outputs": [],
   "source": [
    "model_dir = f\"/home/cc/models/finetuned-models/roberta-base-finetuned-1\"\n",
    "\n",
    "args = TrainingArguments(\n",
    "    output_dir=model_dir,\n",
    "    per_device_train_batch_size=24,  # per-GPU training batch size\n",
    "    per_device_eval_batch_size=32,  # per-GPU evaluation batch size\n",
    "    gradient_accumulation_steps=2,  # effective batch size = 24 * 2 accumulation steps * 2 GPUs = 96\n",
    "    save_total_limit=3,  # keep at most 3 checkpoints\n",
    "    fp16=True,  # mixed-precision training (uses the RTX 4090's Tensor Cores)\n",
    "    remove_unused_columns=False,  # keep all feature columns instead of auto-dropping unused ones\n",
    "    gradient_checkpointing=False,  # gradient checkpointing (trades compute for memory) is off\n",
    "    \n",
    "    greater_is_better=False,  # best model = lowest eval loss (metric_for_best_model defaults to \"loss\")\n",
    "    evaluation_strategy=\"epoch\",\n",
    "    save_strategy=\"epoch\",\n",
    "    load_best_model_at_end=True,  # reload the best checkpoint when training ends\n",
    "    learning_rate=2e-5,\n",
    "    num_train_epochs=5,\n",
    "    weight_decay=0.01,\n",
    ")  # training arguments"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "2106e810-0e4a-454e-98c5-0e5d66ab10ff",
   "metadata": {},
   "outputs": [],
   "source": [
    "from transformers import default_data_collator\n",
    "\n",
    "data_collator = default_data_collator  # collates a list of samples into a model-ready batch of tensors"
   ]
  },
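  {
   "cell_type": "markdown",
   "id": "f0e1d2c3-1111-4111-8111-000000000003",
   "metadata": {},
   "source": [
    "What the collator does can be seen on a tiny hand-made batch (hypothetical numbers): it turns a list of per-sample feature dicts into a single dict of stacked tensors, one per feature name."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f0e1d2c3-1111-4111-8111-000000000004",
   "metadata": {},
   "outputs": [],
   "source": [
    "# A minimal sketch of default_data_collator on two fake features of equal length.\n",
    "features = [\n",
    "    {\"input_ids\": [0, 5, 2], \"attention_mask\": [1, 1, 1], \"start_positions\": 1, \"end_positions\": 2},\n",
    "    {\"input_ids\": [0, 7, 2], \"attention_mask\": [1, 1, 1], \"start_positions\": 0, \"end_positions\": 0},\n",
    "]\n",
    "batch = default_data_collator(features)\n",
    "print({k: tuple(v.shape) for k, v in batch.items()})\n",
    "# input_ids and attention_mask stack to shape (2, 3); the position labels to shape (2,)"
   ]
  },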
  {
   "cell_type": "code",
   "execution_count": 26,
   "id": "dbe7664a-0221-4cb9-bedb-69028a8d2018",
   "metadata": {},
   "outputs": [],
   "source": [
    "trainer = Trainer(\n",
    "    model,\n",
    "    args,\n",
    "    train_dataset=tokenized_datasets[\"train\"],  # training set\n",
    "    eval_dataset=tokenized_datasets[\"validation\"],  # validation set\n",
    "    data_collator=data_collator,  # batch collation\n",
    "    tokenizer=tokenizer\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "id": "94396175-be1a-4fa5-b10c-cd36d0cc345b",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "\n",
       "    <div>\n",
       "      \n",
       "      <progress value='6920' max='6920' style='width:300px; height:20px; vertical-align: middle;'></progress>\n",
       "      [6920/6920 1:13:45, Epoch 5/5]\n",
       "    </div>\n",
       "    <table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       " <tr style=\"text-align: left;\">\n",
       "      <th>Epoch</th>\n",
       "      <th>Training Loss</th>\n",
       "      <th>Validation Loss</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <td>1</td>\n",
       "      <td>0.998700</td>\n",
       "      <td>0.887529</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>2</td>\n",
       "      <td>0.770100</td>\n",
       "      <td>0.852657</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>3</td>\n",
       "      <td>0.637600</td>\n",
       "      <td>0.851553</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>4</td>\n",
       "      <td>0.548800</td>\n",
       "      <td>0.885810</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>5</td>\n",
       "      <td>0.498700</td>\n",
       "      <td>0.914835</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table><p>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/home/cc/.virtualenvs/peft/lib/python3.9/site-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.\n",
       "  warnings.warn('Was asked to gather along dimension 0, but all '\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "TrainOutput(global_step=6920, training_loss=0.7303404416652084, metrics={'train_runtime': 4426.0717, 'train_samples_per_second': 100.053, 'train_steps_per_second': 1.563, 'total_flos': 8.678449181472768e+16, 'train_loss': 0.7303404416652084, 'epoch': 5.0})"
      ]
     },
     "execution_count": 27,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "trainer.train()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f370f70f-e113-4f8b-b006-d39fa02806bd",
   "metadata": {},
   "source": [
    "The model keeps improving on the training set (loss decreases), but the best validation loss occurs at epoch 3; epochs 4 and 5 show overfitting."
   ]
  },
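  {
   "cell_type": "markdown",
   "id": "best-model-note-1a2b",
   "metadata": {},
   "source": [
    "Since the best validation loss occurs at epoch 3, a common remedy is to let the `Trainer` restore the best checkpoint automatically. Below is a hedged sketch of the relevant `TrainingArguments` options, not the configuration actually used above; the output directory name is hypothetical:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "best-model-sketch-1a2b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: evaluate and checkpoint every epoch, then reload the checkpoint\n",
    "# with the lowest validation loss when training finishes.\n",
    "from transformers import TrainingArguments\n",
    "\n",
    "args_best = TrainingArguments(\n",
    "    \"test-squad-best\",                  # output dir (hypothetical)\n",
    "    evaluation_strategy=\"epoch\",       # run evaluation at the end of each epoch\n",
    "    save_strategy=\"epoch\",             # save a checkpoint at the end of each epoch\n",
    "    load_best_model_at_end=True,        # restore the best checkpoint after training\n",
    "    metric_for_best_model=\"eval_loss\",  # \"best\" means lowest validation loss\n",
    "    greater_is_better=False,\n",
    ")"
   ]
  },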
  {
   "cell_type": "markdown",
   "id": "a334dba3-c654-4d53-b6ed-9f1d65397f81",
   "metadata": {},
   "source": [
    "TrainOutput(  \n",
    "    global_step=6920,                  # total number of training steps  \n",
    "    training_loss=0.7303404416652084,  # average training loss  \n",
    "    metrics={   \n",
    "        'train_runtime': 4426.0717,    # total training time (seconds)  \n",
    "        'train_samples_per_second': 100.053,  # training samples processed per second  \n",
    "        'train_steps_per_second': 1.563,      # training steps completed per second  \n",
    "        'total_flos': 8.678449181472768e+16,  # total floating-point operations  \n",
    "        'train_loss': 0.7303404416652084,     # average training loss (same as training_loss)  \n",
    "        'epoch': 5.0                         # total number of epochs  \n",
    "    }  \n",
    ")  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "id": "fb929519-62b1-4e8c-9303-d9a6b09a45f3",
   "metadata": {},
   "outputs": [],
   "source": [
    "trainer.save_model(model_dir)  # save the fine-tuned model to model_dir (save_model returns None, so there is nothing to assign)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ca2d7612-0554-4c95-8bf9-8916b046d611",
   "metadata": {},
   "source": [
    "<font size=\"4\">4 Inspecting model predictions</font>"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e7b7b45a-7b00-489d-aee0-60534b9aeb06",
   "metadata": {},
   "source": [
    "First, fetch one batch of validation data via the trainer, run a single forward pass through the model, and then inspect the keys of the model output."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "id": "c007a2a2-00b4-49d0-a650-801a6f5f144b",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "odict_keys(['loss', 'start_logits', 'end_logits'])"
      ]
     },
     "execution_count": 29,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import torch\n",
    "\n",
    "for batch in trainer.get_eval_dataloader():  # grab the first batch from the eval dataloader\n",
    "    break\n",
    "batch = {k: v.to(trainer.args.device) for k, v in batch.items()}  # move inputs to the model's device to avoid device-mismatch errors\n",
    "with torch.no_grad():  # disable gradient computation and storage during this forward pass\n",
    "    output = trainer.model(**batch)  # forward pass; returns the model output\n",
    "output.keys()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f3a446dc-9e09-4ed5-80c3-6bee1744ad14",
   "metadata": {},
   "source": [
    "where  \n",
    "start_logits: for each token, the unnormalized score of it being the start of the answer span.  \n",
    "end_logits: for each token, the unnormalized score of it being the end of the answer span.  "
   ]
  },
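  {
   "cell_type": "markdown",
   "id": "softmax-note-3c4d",
   "metadata": {},
   "source": [
    "Because the logits are unnormalized, a softmax over the sequence dimension turns each row into a probability distribution over token positions. A minimal standalone sketch with a toy array (not the trainer output above):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "softmax-sketch-3c4d",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Toy start logits for one sequence of 4 token positions.\n",
    "toy_logits = np.array([2.0, 0.5, -1.0, 0.0])\n",
    "\n",
    "# Numerically stable softmax: subtract the max before exponentiating.\n",
    "exp = np.exp(toy_logits - toy_logits.max())\n",
    "probs = exp / exp.sum()\n",
    "\n",
    "print(probs.sum())     # 1.0 -- a valid probability distribution\n",
    "print(probs.argmax())  # 0   -- same argmax as the raw logits"
   ]
  },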
  {
   "cell_type": "code",
   "execution_count": 30,
   "id": "c5d0210e-62b2-4c97-8913-82a4fcf5c730",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(torch.Size([64, 384]), torch.Size([64, 384]))"
      ]
     },
     "execution_count": 30,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "output.start_logits.shape, output.end_logits.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "id": "1e5a36db-03f1-4e25-9f00-d84ddbdf7736",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[-7.4791, -9.0262, -9.0625,  ..., -9.6502, -9.6502, -9.6502],\n",
       "        [-7.5004, -9.0389, -9.0948,  ..., -9.6503, -9.6503, -9.6503],\n",
       "        [-7.4040, -9.3254, -9.2946,  ..., -9.6689, -9.6689, -9.6689],\n",
       "        ...,\n",
       "        [-7.6094, -8.6967, -9.1115,  ..., -9.6348, -9.6348, -9.6348],\n",
       "        [-7.7614, -9.0377, -8.9848,  ..., -9.6395, -9.6395, -9.6395],\n",
       "        [-7.5818, -8.5148, -8.6430,  ..., -9.6020, -9.6020, -9.6020]],\n",
       "       device='cuda:0')"
      ]
     },
     "execution_count": 31,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "output.start_logits"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "id": "548b822f-c5f8-4dbb-a6ab-6686331c4a67",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([ 48,  60,  81,  45, 123, 111,  75,  36, 111,  35,  76,  43,  83,  93,\n",
       "         157,  36,  86,  93,  83,  62,  81,  77,  44,  56,  43,  36,  44,  80,\n",
       "          12,  46,  29, 135,  68,  42,  89,  46,  87,  85, 129,  26,  29,  34,\n",
       "          88, 129,  97,  26,  45,  61,  86,  31,  88,  48,  25,  47,  67,  57,\n",
       "          80,  15,  58,  71,  25,  36,  56,  42], device='cuda:0'),\n",
       " tensor([ 49,  61,  94,  46, 123, 113,  78,  38, 113,  37,  79,  44,  85,  96,\n",
       "         159,  36,  86,  96,  85,  64,  84,  77,  45,  57,  44,  36,  45,  93,\n",
       "          14,  47,  30, 135,  68,  43,  91,  47,  89,  87, 129,  27,  31,  35,\n",
       "          90, 129,  99,  27,  46, 134,  88,  32,  90,  49,  26,  48,  67,  58,\n",
       "          80,  15,  59,  71,  25,  36,  57,  42], device='cuda:0'))"
      ]
     },
     "execution_count": 32,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "output.start_logits.argmax(dim=-1), output.end_logits.argmax(dim=-1)  # position of the highest start/end logit for each example in the batch"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "41cc123e-d9a5-4bd5-b380-d22f128fc27c",
   "metadata": {},
   "source": [
    "Next, let's see how to turn the logits above into candidate answers."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "id": "33a0bca8-e9bf-411d-b4b5-b2861246b304",
   "metadata": {},
   "outputs": [],
   "source": [
    "n_best_size = 20  # keep the top 20 candidate start/end positions"
   ]
  },
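  {
   "cell_type": "markdown",
   "id": "argsort-note-5e6f",
   "metadata": {},
   "source": [
    "The slicing pattern `[-1 : -n_best_size - 1 : -1]` used below is a compact way to take the top-k indexes in descending order from `np.argsort`, which sorts ascending. A standalone toy sketch:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "argsort-sketch-5e6f",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "scores = np.array([0.1, 2.3, -1.0, 0.7])\n",
    "k = 2\n",
    "# np.argsort sorts ascending; walking the last k entries backwards\n",
    "# yields the indexes of the k largest values, largest first.\n",
    "top_k = np.argsort(scores)[-1 : -k - 1 : -1].tolist()\n",
    "print(top_k)  # [1, 3]: scores[1]=2.3 is largest, scores[3]=0.7 is second"
   ]
  },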
  {
   "cell_type": "code",
   "execution_count": 37,
   "id": "630888d4-03d2-48d4-9f4c-010d09300c93",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[{'score': 19.589907, 'text': ''}, {'score': 12.265194, 'text': ''}, {'score': 11.992574, 'text': ''}, {'score': 9.303947, 'text': ''}]\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "\n",
    "# take the first row of the batch\n",
    "start_logits = output.start_logits[0].cpu().numpy()\n",
    "end_logits = output.end_logits[0].cpu().numpy()\n",
    "\n",
    "# indexes of the best start and end positions:\n",
    "start_indexes = np.argsort(start_logits)[-1 : -n_best_size - 1 : -1].tolist()  # sort descending, keep the top n_best_size indexes\n",
    "end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist()\n",
    "\n",
    "valid_answers = []\n",
    "\n",
    "# discard combinations where start_index > end_index, and compute a score for the rest\n",
    "for start_index in start_indexes:\n",
    "    for end_index in end_indexes:\n",
    "        if start_index <= end_index:  # still needs a further check that the span lies inside the context\n",
    "            valid_answers.append(\n",
    "                {\n",
    "                    \"score\": start_logits[start_index] + end_logits[end_index],  # sum of the two logits is the answer's score\n",
    "                    \"text\": \"\"  # we still need a way to recover the original substring of the context for this span\n",
    "                }\n",
    "            )\n",
    "print(valid_answers[:4])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 75,
   "id": "db2db032-d9cf-40b2-bcf2-5bb4c900f1ea",
   "metadata": {},
   "outputs": [],
   "source": [
    "def prepare_validation_features(examples):\n",
    "    examples[\"question\"] = [q.lstrip() for q in examples[\"question\"]]\n",
    "\n",
    "    tokenized_examples = tokenizer(\n",
    "        examples[\"question\" if pad_on_right else \"context\"],\n",
    "        examples[\"context\" if pad_on_right else \"question\"],\n",
    "        truncation=\"only_second\" if pad_on_right else \"only_first\",\n",
    "        max_length=max_length,\n",
    "        stride=doc_stride,\n",
    "        return_overflowing_tokens=True,\n",
    "        return_offsets_mapping=True,\n",
    "        padding=\"max_length\",\n",
    "    )\n",
    "\n",
    "    sample_mapping = tokenized_examples.pop(\"overflow_to_sample_mapping\")\n",
    "\n",
    "    tokenized_examples[\"example_id\"] = []\n",
    "\n",
    "    for i in range(len(tokenized_examples[\"input_ids\"])):\n",
    "        sequence_ids = tokenized_examples.sequence_ids(i)\n",
    "        context_index = 1 if pad_on_right else 0\n",
    "\n",
    "        # one example can produce several features; example_id records which example each feature came from\n",
    "        sample_index = sample_mapping[i]\n",
    "        tokenized_examples[\"example_id\"].append(examples[\"id\"][sample_index])\n",
    "\n",
    "        # set the offsets of non-context tokens to None\n",
    "        tokenized_examples[\"offset_mapping\"][i] = [\n",
    "            (o if sequence_ids[k] == context_index else None)\n",
    "            for k, o in enumerate(tokenized_examples[\"offset_mapping\"][i])\n",
    "        ]\n",
    "\n",
    "    return tokenized_examples"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 76,
   "id": "258e58fa-19ba-4e2c-b067-3cb92125f490",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>id</th>\n",
       "      <th>title</th>\n",
       "      <th>context</th>\n",
       "      <th>question</th>\n",
       "      <th>answers</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>56be4db0acb8001400a502ec</td>\n",
       "      <td>Super_Bowl_50</td>\n",
       "      <td>Super Bowl 50 was an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. The game was played on February 7, 2016, at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the \"golden anniversary\" with various gold-themed initiatives, as well as temporarily suspending the tradition of naming each Super Bowl game with Roman numerals (under which the game would have been known as \"Super Bowl L\"), so that the logo could prominently feature the Arabic numerals 50.</td>\n",
       "      <td>Which NFL team represented the AFC at Super Bowl 50?</td>\n",
       "      <td>{'text': ['Denver Broncos', 'Denver Broncos', 'Denver Broncos'], 'answer_start': [177, 177, 177]}</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/html": [
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>input_ids</th>\n",
       "      <th>attention_mask</th>\n",
       "      <th>offset_mapping</th>\n",
       "      <th>example_id</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>[0, 32251, 1485, 165, 4625, 5, 9601, 23, 1582, 2616, 654, 116, 2, 2, 16713, 2616, 654, 21, 41, 470, 1037, 177, 7, 3094, 5, 2234, 9, 5, 496, 3910, 815, 36, 12048, 43, 13, 5, 570, 191, 4, 20, 470, 3910, 2815, 36, 250, 5268, 43, 2234, 4465, 7609, 5125, 5, 496, 3910, 2815, 36, 487, 5268, 43, 2234, 1961, 6495, 706, 2383, 698, 7, 4073, 49, 371, 1582, 2616, 1270, 4, 20, 177, 21, 702, 15, 902, 262, 6, 336, 6, 23, 20050, 18, 2689, 11, 5, 764, 2659, 1501, 4121, 23, 2005, 13606, 6, 886, 4, 287, ...]</td>\n",
       "      <td>[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...]</td>\n",
       "      <td>[None, None, None, None, None, None, None, None, None, None, None, None, None, None, (0, 5), (6, 10), (11, 13), (14, 17), (18, 20), (21, 29), (30, 38), (39, 43), (44, 46), (47, 56), (57, 60), (61, 69), (70, 72), (73, 76), (77, 85), (86, 94), (95, 101), (102, 103), (103, 106), (106, 107), (108, 111), (112, 115), (116, 120), (121, 127), (127, 128), (129, 132), (133, 141), (142, 150), (151, 161), (162, 163), (163, 164), (164, 166), (166, 167), (168, 176), (177, 183), (184, 191), (192, 200), (201, 204), (205, 213), (214, 222), (223, 233), (234, 235), (235, 236), (236, 238), (238, 239), (240, 248), (249, 257), (258, 266), (267, 269), (269, 270), (270, 272), (273, 275), (276, 280), (281, 286), (287, 292), (293, 298), (299, 303), (304, 309), (309, 310), (311, 314), (315, 319), (320, 323), (324, 330), (331, 333), (334, 342), (343, 344), (344, 345), (346, 350), (350, 351), (352, 354), (355, 359), (359, 361), (362, 369), (370, 372), (373, 376), (377, 380), (381, 390), (391, 394), (395, 399), (400, 402), (403, 408), (409, 414), (414, 415), (416, 426), (426, 427), (428, 430), ...]</td>\n",
       "      <td>56be4db0acb8001400a502ec</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "dp(datasets[\"validation\"][[0]])\n",
    "example = prepare_validation_features(datasets[\"validation\"][[0]])\n",
    "dp(dict(example))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 101,
   "id": "eeb39f89-9be6-4a2f-8eec-3edf41667e60",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Preprocessing produces both features with offset_mapping (for evaluation) and without it (for model prediction)\n",
    "# 1. Features that keep offset_mapping (it maps token positions back to character positions in the\n",
    "#    original text, so the real answer string can later be cut out of the context)\n",
    "validation_features_with_offset = datasets[\"validation\"].map(\n",
    "    prepare_validation_features,\n",
    "    batched=True,\n",
    "    remove_columns=datasets[\"validation\"].column_names\n",
    ")\n",
    "# 2. Features for model prediction (drop offset_mapping); the model predicts answer positions\n",
    "#    within the token sequence (start_logits and end_logits)\n",
    "validation_features = validation_features_with_offset.remove_columns([\"offset_mapping\"])\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8517372a-8a04-46b8-94cc-a9c251964271",
   "metadata": {},
   "source": [
    "Next we obtain the raw predictions. Combined with the offset_mapping kept earlier, they will be used to extract the concrete answer text and compare it with the ground truth to evaluate the model (e.g. EM and F1 scores)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 102,
   "id": "1a07effd-276a-4dbe-a461-55c6238401f3",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "raw_predictions = trainer.predict(validation_features)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 103,
   "id": "bfce32f5-a565-4bdb-ae13-9142420c6d12",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "None ['input_ids', 'attention_mask', 'offset_mapping', 'example_id']\n"
     ]
    }
   ],
   "source": [
    "print(validation_features_with_offset.format[\"type\"], list(validation_features_with_offset.features.keys()))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 104,
   "id": "c1ed1321-4707-4635-b1c6-9ca3109d2bc9",
   "metadata": {},
   "outputs": [],
   "source": [
    "validation_features_with_offset.set_format(type=validation_features_with_offset.format[\"type\"], columns=list(validation_features_with_offset.features.keys()))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 105,
   "id": "e1b09461-353c-4c49-aeb5-368d50ba8e68",
   "metadata": {},
   "outputs": [],
   "source": [
    "max_answer_length = 30"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 106,
   "id": "191d9b0f-78be-4ebf-a2fa-925fa9af6341",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'score': 19.589907, 'text': 'Denver Broncos'},\n",
       " {'score': 15.610378, 'text': 'Broncos'},\n",
       " {'score': 15.31563,\n",
       "  'text': 'The American Football Conference (AFC) champion Denver Broncos'},\n",
       " {'score': 13.710703,\n",
       "  'text': 'American Football Conference (AFC) champion Denver Broncos'},\n",
       " {'score': 12.650904, 'text': 'AFC) champion Denver Broncos'},\n",
       " {'score': 12.265194, 'text': 'Denver'},\n",
       " {'score': 11.992574,\n",
       "  'text': 'Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers'},\n",
       " {'score': 11.460337, 'text': 'champion Denver Broncos'},\n",
       " {'score': 9.303947,\n",
       "  'text': 'Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10'},\n",
       " {'score': 8.013045,\n",
       "  'text': 'Broncos defeated the National Football Conference (NFC) champion Carolina Panthers'},\n",
       " {'score': 7.9909163,\n",
       "  'text': 'The American Football Conference (AFC) champion Denver'},\n",
       " {'score': 7.9181066,\n",
       "  'text': 'Denver Broncos defeated the National Football Conference'},\n",
       " {'score': 7.895266,\n",
       "  'text': 'Denver Broncos defeated the National Football Conference (NFC) champion Carolina'},\n",
       " {'score': 7.87642, 'text': 'The American Football Conference'},\n",
       " {'score': 7.7182965,\n",
       "  'text': 'The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers'},\n",
       " {'score': 7.4381847,\n",
       "  'text': 'Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title.'},\n",
       " {'score': 6.5379543, 'text': 'Conference (AFC) champion Denver Broncos'},\n",
       " {'score': 6.5021763, 'text': 'FC) champion Denver Broncos'},\n",
       " {'score': 6.416417,\n",
       "  'text': 'Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24'},\n",
       " {'score': 6.38599,\n",
       "  'text': 'American Football Conference (AFC) champion Denver'}]"
      ]
     },
     "execution_count": 106,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "start_logits = output.start_logits[0].cpu().numpy()\n",
    "end_logits = output.end_logits[0].cpu().numpy()\n",
    "offset_mapping = validation_features_with_offset[0][\"offset_mapping\"]\n",
    "\n",
    "# The first feature comes from the first example. In the general case we would need to map each\n",
    "# feature's example_id back to an example index.\n",
    "context = datasets[\"validation\"][0][\"context\"]\n",
    "\n",
    "# indexes of the best start/end logits:\n",
    "start_indexes = np.argsort(start_logits)[-1 : -n_best_size - 1 : -1].tolist()\n",
    "end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist()\n",
    "valid_answers = []\n",
    "for start_index in start_indexes:\n",
    "    for end_index in end_indexes:\n",
    "        # skip out-of-scope answers: the index is out of range or points at tokens outside the context\n",
    "        if (\n",
    "            start_index >= len(offset_mapping)\n",
    "            or end_index >= len(offset_mapping)\n",
    "            or offset_mapping[start_index] is None\n",
    "            or offset_mapping[end_index] is None\n",
    "        ):\n",
    "            continue\n",
    "        # skip answers with negative length or longer than max_answer_length\n",
    "        if end_index < start_index or end_index - start_index + 1 > max_answer_length:\n",
    "            continue\n",
    "        # map token indexes back to character positions and slice the answer out of the context\n",
    "        start_char = offset_mapping[start_index][0]\n",
    "        end_char = offset_mapping[end_index][1]\n",
    "        valid_answers.append(\n",
    "            {\n",
    "                \"score\": start_logits[start_index] + end_logits[end_index],\n",
    "                \"text\": context[start_char: end_char]\n",
    "            }\n",
    "        )\n",
    "valid_answers = sorted(valid_answers, key=lambda x: x[\"score\"], reverse=True)[:n_best_size]\n",
    "valid_answers"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 107,
   "id": "30f2c5f1-77d4-4bad-b8c6-3478f4df6cf0",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'text': ['Denver Broncos', 'Denver Broncos', 'Denver Broncos'],\n",
       " 'answer_start': [177, 177, 177]}"
      ]
     },
     "execution_count": 107,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "datasets[\"validation\"][0][\"answers\"]  # the ground-truth answers match our highest-scoring prediction"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7a55a645-f08d-4e21-a9fc-369e5ef20cf5",
   "metadata": {},
   "source": [
    "Next, we build a mapping from each original example to all of its truncated features, for example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9d5c85de-fb22-4693-8d6f-a28f7682aef3",
   "metadata": {},
   "outputs": [],
   "source": [
    "# hypothetical output structure\n",
    "features_per_example = {\n",
    "    0: [2, 3],   # original example 0 maps to feature indexes 2 and 3 (split into 2 features)\n",
    "    1: [5],      # original example 1 maps to feature index 5 (not split)\n",
    "    2: [7, 8, 9] # original example 2 maps to feature indexes 7, 8 and 9 (split into 3 features)\n",
    "}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 108,
   "id": "0e750ac7-814d-4a04-a402-f4b0bbd450b5",
   "metadata": {},
   "outputs": [],
   "source": [
    "import collections\n",
    "\n",
    "examples = datasets[\"validation\"]\n",
    "features = validation_features\n",
    "\n",
    "example_id_to_index = {k: i for i, k in enumerate(examples[\"id\"])}\n",
    "features_per_example = collections.defaultdict(list)\n",
    "for i, feature in enumerate(features):\n",
    "    features_per_example[example_id_to_index[feature[\"example_id\"]]].append(i)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5d847d4e-13bd-47fd-8005-7a38e39fa71a",
   "metadata": {},
   "source": [
    "In question answering, a long context may be truncated into several features, and the model predicts an answer for each feature independently. Using features_per_example we can gather the predictions from all features of the same original example, pick the best answer among them (e.g. the highest-scoring one), and finally align the prediction with the example's ground-truth answer for evaluation."
   ]
  },
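  {
   "cell_type": "markdown",
   "id": "aggregate-note-7g8h",
   "metadata": {},
   "source": [
    "The aggregation step can be sketched on toy data (hypothetical candidates, not real predictions): all candidate answers from an example's features are pooled, and the highest-scoring one wins."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "aggregate-sketch-7g8h",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy candidates gathered from the two features of one example.\n",
    "candidates = [\n",
    "    {\"score\": 12.3, \"text\": \"Denver Broncos\"},    # from feature 2\n",
    "    {\"score\": 9.8, \"text\": \"Carolina Panthers\"},  # from feature 3\n",
    "]\n",
    "best = max(candidates, key=lambda c: c[\"score\"])\n",
    "print(best[\"text\"])  # Denver Broncos"
   ]
  },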
  {
   "cell_type": "markdown",
   "id": "df68e9f6-f29d-4d90-92cc-3dabc70f1515",
   "metadata": {},
   "source": [
    "Next, we convert the model's raw predictions (logits) over the individual features into final answer texts, handling the aggregation over the multiple features produced by truncating long contexts."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 109,
   "id": "afbf25af-d24a-4713-87db-c5c896fe1608",
   "metadata": {},
   "outputs": [],
   "source": [
    "from tqdm.auto import tqdm\n",
    "\n",
    "def postprocess_qa_predictions(examples, features, raw_predictions, n_best_size = 20, max_answer_length = 30):\n",
    "    all_start_logits, all_end_logits = raw_predictions  # split the predicted start/end logits\n",
    "    example_id_to_index = {k: i for i, k in enumerate(examples[\"id\"])}  # map example id -> index\n",
    "    features_per_example = collections.defaultdict(list)  # all feature indexes for each example\n",
    "    for i, feature in enumerate(features):\n",
    "        features_per_example[example_id_to_index[feature[\"example_id\"]]].append(i)\n",
    "\n",
    "    predictions = collections.OrderedDict()\n",
    "\n",
    "    # Logging.\n",
    "    print(f\"Post-processing predictions for {len(examples)} examples split into {len(features)} features.\")\n",
    "\n",
    "    # iterate over the original examples, handling all of each example's features\n",
    "    for example_index, example in enumerate(tqdm(examples)):\n",
    "        feature_indices = features_per_example[example_index]  # all feature indexes of this example\n",
    "\n",
    "        min_null_score = None # only used when squad_v2 is True\n",
    "        valid_answers = []\n",
    "        \n",
    "        context = example[\"context\"]\n",
    "        # process the predictions of each feature\n",
    "        for feature_index in feature_indices:\n",
    "            start_logits = all_start_logits[feature_index]  # start logits of this feature\n",
    "            end_logits = all_end_logits[feature_index]      # end logits of this feature\n",
    "            offset_mapping = features[feature_index][\"offset_mapping\"]  # token -> character offsets in the original text\n",
    "\n",
    "            # score of the empty answer (for SQuAD v2)\n",
    "            cls_index = features[feature_index][\"input_ids\"].index(tokenizer.cls_token_id)\n",
    "            feature_null_score = start_logits[cls_index] + end_logits[cls_index]\n",
    "            if min_null_score is None or min_null_score < feature_null_score:\n",
    "                min_null_score = feature_null_score\n",
    "\n",
    "            # collect valid candidate answers\n",
    "            # take the n_best_size start and end positions with the highest logits\n",
    "            start_indexes = np.argsort(start_logits)[-1 : -n_best_size - 1 : -1].tolist()\n",
    "            end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist()\n",
    "\n",
    "            for start_index in start_indexes:\n",
    "                for end_index in end_indexes:\n",
    "                    # skip invalid positions (out of range or outside the context)\n",
    "                    if (start_index >= len(offset_mapping) or end_index >= len(offset_mapping) or\n",
    "                        offset_mapping[start_index] is None or offset_mapping[end_index] is None):\n",
    "                        continue\n",
    "                    # skip answers with invalid length (reversed or longer than max_answer_length)\n",
    "                    if end_index < start_index or end_index - start_index + 1 > max_answer_length:\n",
    "                        continue\n",
    "                    # convert to character positions in the original text and slice out the answer\n",
    "                    start_char = offset_mapping[start_index][0]\n",
    "                    end_char = offset_mapping[end_index][1]\n",
    "                    valid_answers.append({\n",
    "                        \"score\": start_logits[start_index] + end_logits[end_index],\n",
    "                        \"text\": context[start_char: end_char]\n",
    "                    })\n",
    "        # pick the best answer\n",
    "        if len(valid_answers) > 0:\n",
    "            best_answer = sorted(valid_answers, key=lambda x: x[\"score\"], reverse=True)[0]\n",
    "        else:\n",
    "            # in the rare case where there is no non-null prediction, create a dummy one to avoid failing\n",
    "            best_answer = {\"text\": \"\", \"score\": 0.0}\n",
    "        \n",
    "        # decide the final answer (SQuAD v1 vs v2)\n",
    "        if not squad_v2:\n",
    "            predictions[example[\"id\"]] = best_answer[\"text\"]  # v1: always use the best span\n",
    "        else:\n",
    "            # v2: compare the best span's score with the null score and keep the higher one\n",
    "            answer = best_answer[\"text\"] if best_answer[\"score\"] > min_null_score else \"\"\n",
    "            predictions[example[\"id\"]] = answer\n",
    "\n",
    "    return predictions\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 110,
   "id": "a5349d74-2c19-409a-99fd-5ce16f740fcb",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Post-processing predictions for 10570 examples split into 10790 features.\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "39823587706244a89d7f6d0e08b8426b",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "  0%|          | 0/10570 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "final_predictions = postprocess_qa_predictions(datasets[\"validation\"], validation_features_with_offset, raw_predictions.predictions)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 111,
   "id": "b5bbe219-2b20-4968-8a1f-f3bf0fc1713a",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/home/cc/.virtualenvs/peft/lib/python3.9/site-packages/datasets/load.py:752: FutureWarning: The repository for squad contains custom code which must be executed to correctly load the metric. You can inspect the repository content at https://raw.githubusercontent.com/huggingface/datasets/2.16.1/metrics/squad/squad.py\n",
      "You can avoid this message in future by passing the argument `trust_remote_code=True`.\n",
      "Passing `trust_remote_code=True` will be mandatory to load this metric from the next major release of `datasets`.\n",
      "  warnings.warn(\n"
     ]
    }
   ],
   "source": [
    "from datasets import load_metric\n",
    "\n",
    "metric = load_metric(\"squad_v2\" if squad_v2 else \"squad\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d733efc0-a614-4b67-ad37-dd5e392f3737",
   "metadata": {},
   "source": [
    "SQuAD v2: supports the \"no answer\" case, so each prediction must also include no_answer_probability (the probability that there is no answer; simplified to 0.0 here, though in practice it can be derived from the model output).  \n",
    "SQuAD v1: an answer always exists, so only the example id and the predicted text are needed."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 112,
   "id": "039c2228-2abe-45b7-8e01-75e0d09fd161",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'exact_match': 85.38315988647115, 'f1': 91.86383353880744}"
      ]
     },
     "execution_count": 112,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "if squad_v2:\n",
    "    formatted_predictions = [{\"id\": k, \"prediction_text\": v, \"no_answer_probability\": 0.0} for k, v in final_predictions.items()]\n",
    "else:\n",
    "    formatted_predictions = [{\"id\": k, \"prediction_text\": v} for k, v in final_predictions.items()]\n",
    "references = [{\"id\": ex[\"id\"], \"answers\": ex[\"answers\"]} for ex in datasets[\"validation\"]]\n",
    "metric.compute(predictions=formatted_predictions, references=references)"
   ]
  },
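  {
   "cell_type": "markdown",
   "id": "emf1-note-9i0j",
   "metadata": {},
   "source": [
    "As a standalone illustration of what these metrics measure, EM and a simplified token-level F1 can be computed by hand (toy helper functions on toy strings; the real SQuAD metric also lowercases and strips punctuation and articles before comparing):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "emf1-sketch-9i0j",
   "metadata": {},
   "outputs": [],
   "source": [
    "from collections import Counter\n",
    "\n",
    "def toy_em(prediction, truth):\n",
    "    # Exact match: 1 if the strings are identical, else 0.\n",
    "    return int(prediction == truth)\n",
    "\n",
    "def toy_f1(prediction, truth):\n",
    "    # F1 over whitespace tokens: harmonic mean of precision and recall.\n",
    "    pred_tokens, truth_tokens = prediction.split(), truth.split()\n",
    "    common = Counter(pred_tokens) & Counter(truth_tokens)\n",
    "    num_same = sum(common.values())\n",
    "    if num_same == 0:\n",
    "        return 0.0\n",
    "    precision = num_same / len(pred_tokens)\n",
    "    recall = num_same / len(truth_tokens)\n",
    "    return 2 * precision * recall / (precision + recall)\n",
    "\n",
    "print(toy_em(\"Denver Broncos\", \"Denver Broncos\"))          # 1\n",
    "print(toy_f1(\"champion Denver Broncos\", \"Denver Broncos\")) # 0.8"
   ]
  },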
  {
   "cell_type": "markdown",
   "id": "6c1895fc-f532-4687-b256-0cadf2ac3d7d",
   "metadata": {},
   "source": [
    "EM (Exact Match): the proportion of predictions that exactly match a ground-truth answer.  \n",
    "F1 score: the degree of overlap between prediction and ground truth (rewarding partial matches); the harmonic mean of precision and recall.    "
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
