{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "3ccc7a4a-de6b-41a1-9a4d-6ca1352d4fd1",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>id</th>\n",
       "      <th>title</th>\n",
       "      <th>context</th>\n",
       "      <th>question</th>\n",
       "      <th>answers</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>56f71a5e3d8e2e1400e3735e</td>\n",
       "      <td>Josip_Broz_Tito</td>\n",
       "      <td>In 1934 the Zagreb Provincial Committee sent Tito to Vienna where all the Central Committee of the Communist Party of Yugoslavia had sought refuge. He was appointed to the Committee and started to appoint allies to him, among them Edvard Kardelj, Milovan Đilas, Aleksandar Ranković and Boris Kidrič. In 1935, Tito travelled to the Soviet Union, working for a year in the Balkans section of Comintern. He was a member of the Soviet Communist Party and the Soviet secret police (NKVD). Tito was also involved in recruiting for the Dimitrov Battalion, a group of volunteers serving in the Spanish Civil War. In 1936, the Comintern sent \"Comrade Walter\" (i.e. Tito) back to Yugoslavia to purge the Communist Party there. In 1937, Stalin had the Secretary-General of the CPY, Milan Gorkić, murdered in Moscow. Subsequently Tito was appointed Secretary-General of the still-outlawed CPY.</td>\n",
       "      <td>Who is known as \"Comrade Walter\"?</td>\n",
       "      <td>{'text': ['Tito'], 'answer_start': [656]}</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>5730b8338ab72b1400f9c6fd</td>\n",
       "      <td>Sumer</td>\n",
       "      <td>In the early Sumerian Uruk period, the primitive pictograms suggest that sheep, goats, cattle, and pigs were domesticated. They used oxen as their primary beasts of burden and donkeys or equids as their primary transport animal and \"woollen clothing as well as rugs were made from the wool or hair of the animals. ... By the side of the house was an enclosed garden planted with trees and other plants; wheat and probably other cereals were sown in the fields, and the shaduf was already employed for the purpose of irrigation. Plants were also grown in pots or vases.\"</td>\n",
       "      <td>What might be found by the side of a Sumerian house?</td>\n",
       "      <td>{'text': ['enclosed garden planted with trees and other plants'], 'answer_start': [350]}</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>56e12827cd28a01900c67665</td>\n",
       "      <td>Boston</td>\n",
       "      <td>In the 1820s, Boston's population grew rapidly, and the city's ethnic composition changed dramatically with the first wave of European immigrants. Irish immigrants dominated the first wave of newcomers during this period, especially following the Irish Potato Famine; by 1850, about 35,000 Irish lived in Boston. In the latter half of the 19th century, the city saw increasing numbers of Irish, Germans, Lebanese, Syrians, French Canadians, and Russian and Polish Jews settled in the city. By the end of the 19th century, Boston's core neighborhoods had become enclaves of ethnically distinct immigrants—Italians inhabited the North End, Irish dominated South Boston and Charlestown, and Russian Jews lived in the West End. Irish and Italian immigrants brought with them Roman Catholicism. Currently, Catholics make up Boston's largest religious community, and since the early 20th century, the Irish have played a major role in Boston politics—prominent figures include the Kennedys, Tip O'Neill, and John F. Fitzgerald.</td>\n",
       "      <td>How did Boston's population change in the 1820's?</td>\n",
       "      <td>{'text': ['Boston's population grew rapidly'], 'answer_start': [14]}</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>5725c99938643c19005accff</td>\n",
       "      <td>Israel</td>\n",
       "      <td>The Gaza Strip was occupied by Egypt from 1948 to 1967 and then by Israel after 1967. In 2005, as part of Israel's unilateral disengagement plan, Israel removed all of its settlers and forces from the territory. Israel does not consider the Gaza Strip to be occupied territory and declared it a \"foreign territory\". That view has been disputed by numerous international humanitarian organizations and various bodies of the United Nations. Following June 2007, when Hamas assumed power in the Gaza Strip, Israel tightened its control of the Gaza crossings along its border, as well as by sea and air, and prevented persons from entering and exiting the area except for isolated cases it deemed humanitarian. Gaza has a border with Egypt and an agreement between Israel, the European Union and the PA governed how border crossing would take place (it was monitored by European observers). Egypt adhered to this agreement under Mubarak and prevented access to Gaza until April 2011 when it announced it was opening its border with Gaza.</td>\n",
       "      <td>When did Hamas assume it's power in the Gaza Strip?</td>\n",
       "      <td>{'text': ['June 2007'], 'answer_start': [449]}</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>573402084776f419006616bd</td>\n",
       "      <td>Punjab,_Pakistan</td>\n",
       "      <td>Despite the lack of a coastline, Punjab is the most industrialised province of Pakistan; its manufacturing industries produce textiles, sports goods, heavy machinery, electrical appliances, surgical instruments, vehicles, auto parts, metals, sugar mill plants, aircraft, cement, agricultural machinery, bicycles and rickshaws, floor coverings, and processed foods. In 2003, the province manufactured 90% of the paper and paper boards, 71% of the fertilizers, 69% of the sugar and 40% of the cement of Pakistan.</td>\n",
       "      <td>How much of Pakistan's sugar does Punjab manufacture?</td>\n",
       "      <td>{'text': ['69%'], 'answer_start': [459]}</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>5</th>\n",
       "      <td>5726f8b75951b619008f83b3</td>\n",
       "      <td>Nigeria</td>\n",
       "      <td>Nigeria has also been pervaded by political corruption. It was ranked 143 out of 182 countries in Transparency International's 2011 Corruption Perceptions Index; however, it improved to 136th position in 2014. More than $400 billion were stolen from the treasury by Nigeria's leaders between 1960 and 1999. In late 2013, Nigeria's then central bank governor Lamido Sanusi informed President Goodluck Jonathan that the state oil company, NNPC had failed to remit US$20 billion of oil revenues, which it owed the state. Jonathan however dismissed the claim and replaced Sanusi for his mismanagement of the central bank's budget. A Senate committee also found Sanusi’s account to be lacking substance. After the conclusion of the NNPC's account Audit, it was announced in January 2015 that NNPC's non-remitted revenue is actually US$1.48billion, which it needs to refund back to the Government.</td>\n",
       "      <td>In 2011 rankings, how bad was Nigeria's corruption ranking?</td>\n",
       "      <td>{'text': ['143 out of 182 countries'], 'answer_start': [70]}</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>6</th>\n",
       "      <td>5727ce204b864d1900163d8b</td>\n",
       "      <td>On_the_Origin_of_Species</td>\n",
       "      <td>Evolution had less obvious applications to anatomy and morphology, and at first had little impact on the research of the anatomist Thomas Henry Huxley. Despite this, Huxley strongly supported Darwin on evolution; though he called for experiments to show whether natural selection could form new species, and questioned if Darwin's gradualism was sufficient without sudden leaps to cause speciation. Huxley wanted science to be secular, without religious interference, and his article in the April 1860 Westminster Review promoted scientific naturalism over natural theology, praising Darwin for \"extending the domination of Science over regions of thought into which she has, as yet, hardly penetrated\" and coining the term \"Darwinism\" as part of his efforts to secularise and professionalise science. Huxley gained influence, and initiated the X Club, which used the journal Nature to promote evolution and naturalism, shaping much of late Victorian science. Later, the German morphologist Ernst Haeckel would convince Huxley that comparative anatomy and palaeontology could be used to reconstruct evolutionary genealogies.</td>\n",
       "      <td>What did the morphologist Ernst Haeckel convince Huxley of about comparative anatomy and paleontology?</td>\n",
       "      <td>{'text': ['that comparative anatomy and palaeontology could be used to reconstruct evolutionary genealogies'], 'answer_start': [1027]}</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>7</th>\n",
       "      <td>5725f343ec44d21400f3d779</td>\n",
       "      <td>Buckingham_Palace</td>\n",
       "      <td>In 1901 the accession of Edward VII saw new life breathed into the palace. The new King and his wife Queen Alexandra had always been at the forefront of London high society, and their friends, known as \"the Marlborough House Set\", were considered to be the most eminent and fashionable of the age. Buckingham Palace—the Ballroom, Grand Entrance, Marble Hall, Grand Staircase, vestibules and galleries redecorated in the Belle époque cream and gold colour scheme they retain today—once again became a setting for entertaining on a majestic scale but leaving some to feel King Edward's heavy redecorations were at odds with Nash's original work.</td>\n",
       "      <td>In what year did Edward VII ascend to the throne?</td>\n",
       "      <td>{'text': ['1901'], 'answer_start': [3]}</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>8</th>\n",
       "      <td>56e7839137bdd419002c4074</td>\n",
       "      <td>Nanjing</td>\n",
       "      <td>Archaeological discovery shows that \"Nanjing Man\" lived in more than 500 thousand years ago. Zun, a kind of wine vessel, was found to exist in Beiyinyangying culture of Nanjing in about 5000 years ago. In the late period of Shang dynasty, Taibo of Zhou came to Jiangnan and established Wu state, and the first stop is in Nanjing area according to some historians based on discoveries in Taowu and Hushu culture. According to legend,[which?] Fuchai, King of the State of Wu, founded a fort named Yecheng (冶城) in today's Nanjing area in 495 BC. Later in 473 BC, the State of Yue conquered Wu and constructed the fort of Yuecheng (越城) on the outskirts of the present-day Zhonghua Gate. In 333 BC, after eliminating the State of Yue, the State of Chu built Jinling Yi (金陵邑) in the western part of present-day Nanjing. It was renamed Moling (秣陵) during reign of Qin Shi Huang. Since then, the city experienced destruction and renewal many times.[citation needed] The area was successively part of Kuaiji, Zhang and Danyang prefectures in Qin and Han dynasty, and part of Yangzhou region which was established as the nation's 13 supervisory and administrative regions in the 5th year of Yuanfeng in Han dynasty (106 BC). Nanjing was later the capital city of Danyang Prefecture, and had been the capital city of Yangzhou for about 400 years from late Han to early Tang.</td>\n",
       "      <td>What vessel was found 5000 years ago?</td>\n",
       "      <td>{'text': ['Zun, a kind of wine vessel'], 'answer_start': [93]}</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>9</th>\n",
       "      <td>5727fe9a2ca10214002d9ade</td>\n",
       "      <td>St._John%27s,_Newfoundland_and_Labrador</td>\n",
       "      <td>Sebastian Cabot declares in a handwritten Latin text in his original 1545 map, that the St. John's earned its name when he and his father, the Venetian explorer John Cabot became the first Europeans to sail into the harbour, in the morning of 24 June 1494 (against British and French historians stating 1497), the feast day of Saint John the Baptist. However, the exact locations of Cabot's landfalls are disputed. A series of expeditions to St. John's by Portuguese from the Azores took place in the early 16th century, and by 1540 French, Spanish and Portuguese ships crossed the Atlantic annually to fish the waters off the Avalon Peninsula. In the Basque Country, it is a common belief that the name of St. John's was given by Basque fishermen because the bay of St. John's is very similar to the Bay of Pasaia in the Basque Country, where one of the fishing towns is also called St. John (in Spanish, San Juan, and in Basque, Donibane).</td>\n",
       "      <td>In what language did Sebastian Cabot write his map from 1545?</td>\n",
       "      <td>{'text': ['Latin'], 'answer_start': [42]}</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[CLS] how many wins does the notre dame men ' s basketball team have? [SEP] the men ' s basketball team has over 1, 600 wins, one of only 12 schools who have reached that mark, and have appeared in 28 ncaa tournaments. former player austin carr holds the record for most points scored in a single game of the tournament with 61. although the team has never won the ncaa tournament, they were named by the helms athletic foundation as national champions twice. the team has orchestrated a number of upsets of number one ranked teams, the most notable of which was ending ucla ' s record 88 - game winning streak in 1974. the team has beaten an additional eight number - one teams, and those nine wins rank second, to ucla ' s 10, all - time in wins against the top team. the team plays in newly renovated purcell pavilion ( within the edmund p. joyce center ), which reopened for the beginning of the 2009 – 2010 season. the team is coached by mike brey, who, as of the 2014 – 15 season, his fifteenth at notre dame, has achieved a 332 - 165 record. in 2009 they were invited to the nit, where they advanced to the semifinals but were beaten by penn state who went on and beat baylor in the championship. the 2010 – 11 team concluded its regular season ranked number seven in the country, with a record of 25 – 5, brey ' s fifth straight 20 - win season, and a second - place finish in the big east. during the 2014 - 15 season, the team went 32 - 6 and won the acc conference tournament, later advancing to the elite 8, where the fighting irish lost on a missed buzzer - beater against then undefeated kentucky. led by nba draft picks jerian grant and pat connaughton, the fighting irish beat the eventual national champion duke blue devils twice during the season. the 32 wins were [SEP]\n",
      "[CLS] how many wins does the notre dame men ' s basketball team have? [SEP] championship. the 2010 – 11 team concluded its regular season ranked number seven in the country, with a record of 25 – 5, brey ' s fifth straight 20 - win season, and a second - place finish in the big east. during the 2014 - 15 season, the team went 32 - 6 and won the acc conference tournament, later advancing to the elite 8, where the fighting irish lost on a missed buzzer - beater against then undefeated kentucky. led by nba draft picks jerian grant and pat connaughton, the fighting irish beat the eventual national champion duke blue devils twice during the season. the 32 wins were the most by the fighting irish team since 1908 - 09. [SEP]\n",
       "Training set preprocessing done!\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Some weights of DistilBertForQuestionAnswering were not initialized from the model checkpoint at distilbert-base-uncased and are newly initialized: ['qa_outputs.bias', 'qa_outputs.weight']\n",
      "You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Starting training!\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "\n",
       "    <div>\n",
       "      \n",
       "      <progress value='39' max='39' style='width:300px; height:20px; vertical-align: middle;'></progress>\n",
       "      [39/39 00:14, Epoch 3/3]\n",
       "    </div>\n",
       "    <table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
        "    <tr style=\"text-align: left;\">\n",
       "      <th>Epoch</th>\n",
       "      <th>Training Loss</th>\n",
       "      <th>Validation Loss</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <td>1</td>\n",
       "      <td>No log</td>\n",
       "      <td>5.727952</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>2</td>\n",
       "      <td>No log</td>\n",
       "      <td>5.520641</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>3</td>\n",
       "      <td>No log</td>\n",
       "      <td>5.427135</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table><p>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "model_to_save:  None\n",
      "odict_keys(['loss', 'start_logits', 'end_logits'])\n",
      "torch.Size([8, 384]) torch.Size([8, 384])\n",
      "tensor([84,  7, 56, 74, 28, 14, 89, 73], device='cuda:0') tensor([126, 156,  96,  38,  11,  15, 123,  65], device='cuda:0')\n",
       "valid_answers:  [{'score': np.float32(3.1342316), 'text': ''}, {'score': np.float32(2.8419073), 'text': ''}, {'score': np.float32(2.7281783), 'text': ''}, {'score': np.float32(2.7182245), 'text': ''}, {'score': np.float32(2.69587), 'text': ''}, {'score': np.float32(2.6946898), 'text': ''}, {'score': np.float32(2.677681), 'text': ''}, {'score': np.float32(2.675403), 'text': ''}, {'score': np.float32(2.6415372), 'text': ''}, {'score': np.float32(2.632486), 'text': ''}, {'score': np.float32(3.0198717), 'text': ''}, {'score': np.float32(2.6038647), 'text': ''}, {'score': np.float32(2.9661405), 'text': ''}, {'score': np.float32(2.6738162), 'text': ''}, {'score': np.float32(2.5501337), 'text': ''}, {'score': np.float32(2.527779), 'text': ''}, {'score': np.float32(2.5095901), 'text': ''}, {'score': np.float32(2.464395), 'text': ''}, {'score': np.float32(2.9583802), 'text': ''}, {'score': np.float32(2.5423732), 'text': ''}, {'score': np.float32(2.5200186), 'text': ''}, {'score': np.float32(2.947894), 'text': ''}, {'score': np.float32(2.531887), 'text': ''}, {'score': np.float32(2.5095327), 'text': ''}, {'score': np.float32(2.9470682), 'text': ''}, {'score': np.float32(2.6547441), 'text': ''}, {'score': np.float32(2.584158), 'text': ''}, {'score': np.float32(2.5410151), 'text': ''}, {'score': np.float32(2.5310612), 'text': ''}, {'score': np.float32(2.5087068), 'text': ''}, {'score': np.float32(2.5075266), 'text': ''}, {'score': np.float32(2.4905179), 'text': ''}, {'score': np.float32(2.4882398), 'text': ''}, {'score': np.float32(2.4543738), 'text': ''}, {'score': np.float32(2.445323), 'text': ''}, {'score': np.float32(2.9073308), 'text': ''}, {'score': np.float32(2.491324), 'text': ''}, {'score': np.float32(2.4689693), 'text': ''}, {'score': np.float32(2.8908923), 'text': ''}, {'score': np.float32(2.7134018), 'text': ''}, {'score': np.float32(2.6447487), 'text': ''}, {'score': np.float32(2.598568), 'text': ''}, {'score': np.float32(2.527982), 'text': ''}, {'score': np.float32(2.505121), 'text': ''}, {'score': np.float32(2.484839), 'text': ''}, {'score': np.float32(2.4770412), 'text': ''}, {'score': np.float32(2.4748855), 'text': ''}, {'score': np.float32(2.4525309), 'text': ''}, {'score': np.float32(2.4513507), 'text': ''}, {'score': np.float32(2.4389188), 'text': ''}, {'score': np.float32(2.434342), 'text': ''}, {'score': np.float32(2.432064), 'text': ''}, {'score': np.float32(2.4211078), 'text': ''}, {'score': np.float32(2.3981981), 'text': ''}, {'score': np.float32(2.3891468), 'text': ''}, {'score': np.float32(2.8907204), 'text': ''}, {'score': np.float32(2.7132297), 'text': ''}, {'score': np.float32(2.6445768), 'text': ''}, {'score': np.float32(2.598396), 'text': ''}, {'score': np.float32(2.52781), 'text': ''}, {'score': np.float32(2.504949), 'text': ''}, {'score': np.float32(2.484667), 'text': ''}, {'score': np.float32(2.476869), 'text': ''}, {'score': np.float32(2.4747133), 'text': ''}, {'score': np.float32(2.4523587), 'text': ''}, {'score': np.float32(2.4511786), 'text': ''}, {'score': np.float32(2.438747), 'text': ''}, {'score': np.float32(2.4341698), 'text': ''}, {'score': np.float32(2.431892), 'text': ''}, {'score': np.float32(2.4209359), 'text': ''}, {'score': np.float32(2.402389), 'text': ''}, {'score': np.float32(2.398026), 'text': ''}, {'score': np.float32(2.388975), 'text': ''}, {'score': np.float32(2.886723), 'text': ''}, {'score': np.float32(2.7092323), 'text': ''}, {'score': np.float32(2.6405795), 'text': ''}, {'score': np.float32(2.5943987), 'text': ''}, {'score': np.float32(2.5238128), 'text': ''}, {'score': np.float32(2.5009518), 'text': ''}, {'score': np.float32(2.4806697), 'text': ''}, {'score': np.float32(2.4728718), 'text': ''}, {'score': np.float32(2.470716), 'text': ''}, {'score': np.float32(2.4483614), 'text': ''}, {'score': np.float32(2.4471812), 'text': ''}, {'score': np.float32(2.4347496), 'text': ''}, {'score': np.float32(2.4301724), 'text': ''}, {'score': np.float32(2.4278946), 'text': ''}, {'score': np.float32(2.4169385), 'text': ''}, {'score': np.float32(2.3983917), 'text': ''}, {'score': np.float32(2.3940287), 'text': ''}, {'score': np.float32(2.3849776), 'text': ''}, {'score': np.float32(2.867637), 'text': ''}, {'score': np.float32(2.6901464), 'text': ''}, {'score': np.float32(2.6214933), 'text': ''}, {'score': np.float32(2.5753126), 'text': ''}, {'score': np.float32(2.5047266), 'text': ''}, {'score': np.float32(2.4818656), 'text': ''}, {'score': np.float32(2.4615836), 'text': ''}, {'score': np.float32(2.453786), 'text': ''}, {'score': np.float32(2.45163), 'text': ''}, {'score': np.float32(2.4292755), 'text': ''}, {'score': np.float32(2.4280953), 'text': ''}, {'score': np.float32(2.4156635), 'text': ''}, {'score': np.float32(2.4110866), 'text': ''}, {'score': np.float32(2.4088087), 'text': ''}, {'score': np.float32(2.3978524), 'text': ''}, {'score': np.float32(2.3749428), 'text': ''}, {'score': np.float32(2.3658915), 'text': ''}, {'score': np.float32(2.8525734), 'text': ''}, {'score': np.float32(2.675083), 'text': ''}, {'score': np.float32(2.60643), 'text': ''}, {'score': np.float32(2.5602493), 'text': ''}, {'score': np.float32(2.4896631), 'text': ''}, {'score': np.float32(2.4668021), 'text': ''}, {'score': np.float32(2.4465203), 'text': ''}, {'score': np.float32(2.4387224), 'text': ''}, {'score': np.float32(2.4365664), 'text': ''}, {'score': np.float32(2.414212), 'text': ''}, {'score': np.float32(2.4130318), 'text': ''}, {'score': np.float32(2.4006), 'text': ''}, {'score': np.float32(2.396023), 'text': ''}, {'score': np.float32(2.393745), 'text': ''}, {'score': np.float32(2.3827891), 'text': ''}, {'score': np.float32(2.359879), 'text': ''}, {'score': np.float32(2.3508282), 'text': ''}, {'score': np.float32(2.852527), 'text': ''}, {'score': np.float32(2.5602026), 'text': ''}, {'score': np.float32(2.4896166), 'text': ''}, {'score': np.float32(2.4464736), 'text': ''}, {'score': np.float32(2.43652), 'text': ''}, {'score': np.float32(2.4141655), 'text': ''}, {'score': np.float32(2.4129853), 'text': ''}, {'score': np.float32(2.3959765), 'text': ''}, {'score': np.float32(2.3936987), 'text': ''}, {'score': np.float32(2.3598328), 'text': ''}, {'score': np.float32(2.3507814), 'text': ''}, {'score': np.float32(2.8481393), 'text': ''}, {'score': np.float32(2.670649), 'text': ''}, {'score': np.float32(2.601996), 'text': ''}, {'score': np.float32(2.5558152), 'text': ''}, {'score': np.float32(2.485229), 'text': ''}, {'score': np.float32(2.462368), 'text': ''}, {'score': np.float32(2.4420862), 'text': ''}, {'score': np.float32(2.4354331), 'text': ''}, {'score': np.float32(2.4342885), 'text': ''}, {'score': np.float32(2.4321325), 'text': ''}, {'score': np.float32(2.409778), 'text': ''}, {'score': np.float32(2.408598), 'text': ''}, {'score': np.float32(2.3961658), 'text': ''}, {'score': np.float32(2.3915892), 'text': ''}, {'score': np.float32(2.389311), 'text': ''}, {'score': np.float32(2.378355), 'text': ''}, {'score': np.float32(2.3598082), 'text': ''}, {'score': np.float32(2.3554451), 'text': ''}, {'score': np.float32(2.3499923), 'text': ''}, {'score': np.float32(2.346394), 'text': ''}, {'score': np.float32(2.8410912), 'text': ''}, {'score': np.float32(2.663601), 'text': ''}, {'score': np.float32(2.548767), 'text': ''}, {'score': np.float32(2.478181), 'text': ''}, {'score': np.float32(2.45532), 'text': ''}, {'score': np.float32(2.435038), 'text': ''}, {'score': np.float32(2.4272404), 'text': ''}, {'score': np.float32(2.4250844), 'text': ''}, {'score': np.float32(2.40273), 'text': ''}, {'score': np.float32(2.4015498), 'text': ''}, {'score': np.float32(2.3891177), 'text': ''}, {'score': np.float32(2.384541), 'text': ''}, {'score': np.float32(2.382263), 'text': ''}, {'score': np.float32(2.371307), 'text': ''}, {'score': np.float32(2.348397), 'text': ''}, {'score': np.float32(2.339346), 'text': ''}, {'score': np.float32(2.8231335), 'text': ''}, {'score': np.float32(2.5308094), 'text': ''}, {'score': np.float32(2.4170804), 'text': ''}, {'score': np.float32(2.4071264), 'text': ''}, {'score': np.float32(2.384772), 'text': ''}, {'score': np.float32(2.366583), 'text': ''}, {'score': np.float32(2.330439), 'text': ''}, {'score': np.float32(2.3213882), 'text': ''}, {'score': np.float32(2.8051448), 'text': ''}, {'score': np.float32(2.5128205), 'text': ''}, {'score': np.float32(2.4422345), 'text': ''}, {'score': np.float32(2.4193735), 'text': ''}, {'score': np.float32(2.3990915), 'text': ''}, {'score': np.float32(2.3912935), 'text': ''}, {'score': np.float32(2.3891377), 'text': ''}, {'score': np.float32(2.3667831), 'text': ''}, {'score': np.float32(2.365603), 'text': ''}, {'score': np.float32(2.3485942), 'text': ''}, {'score': np.float32(2.3463163), 'text': ''}, {'score': np.float32(2.3353603), 'text': ''}, {'score': np.float32(2.3124504), 'text': ''}, {'score': np.float32(2.3033993), 'text': ''}, {'score': np.float32(2.7928882), 'text': ''}, {'score': np.float32(2.615398), 'text': ''}, {'score': np.float32(2.500564), 'text': ''}, {'score': np.float32(2.429978), 'text': ''}, {'score': np.float32(2.407117), 'text': ''}, {'score': np.float32(2.386835), 'text': ''}, {'score': np.float32(2.3790374), 'text': ''}, {'score': np.float32(2.3768814), 'text': ''}, {'score': np.float32(2.354527), 'text': ''}, {'score': np.float32(2.3533468), 'text': ''}, {'score': np.float32(2.336338), 'text': ''}, {'score': np.float32(2.33406), 'text': ''}, {'score': np.float32(2.323104), 'text': ''}, {'score': np.float32(2.300194), 'text': ''}, {'score': np.float32(2.291143), 'text': ''}, {'score': np.float32(2.7696795), 'text': ''}, {'score': np.float32(2.4773552), 'text': ''}, {'score': np.float32(2.3636262), 'text': ''}, {'score': np.float32(2.3536725), 'text': ''}, {'score': np.float32(2.3313181), 'text': ''}, {'score': np.float32(2.3131292), 'text': ''}, {'score': np.float32(2.267934), 'text': ''}]\n",
       "Validation set preprocessing done!\n"
     ]
    },
    {
     "data": {
      "text/html": [],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "raw_predictions:  PredictionOutput(predictions=(array([[ 0.50836056,  0.5516551 ,  0.82946527, ..., -0.12773295,\n",
      "        -0.09512722,  0.0537492 ],\n",
      "       [ 0.50277793,  0.5665999 ,  0.8190626 , ..., -0.13218738,\n",
      "        -0.10500239,  0.05613519],\n",
      "       [ 0.5081225 ,  0.72592586,  0.6050804 , ..., -0.18415953,\n",
      "        -0.19143014, -0.04229952],\n",
      "       ...,\n",
      "       [ 0.44645432,  0.6443466 ,  0.70505726, ..., -0.32978857,\n",
      "        -0.12036984, -0.31272304],\n",
      "       [ 0.42990816,  0.5122904 ,  0.48294824, ..., -0.42703483,\n",
      "         0.06456371, -0.15189569],\n",
      "       [ 0.4441358 ,  0.5779391 ,  0.68882775, ..., -0.31859955,\n",
      "        -0.11137663, -0.31442022]], shape=(10784, 384), dtype=float32), array([[ 0.05449938,  0.2658404 ,  1.2274301 , ..., -0.12790726,\n",
      "        -0.02991499, -0.0237491 ],\n",
      "       [ 0.0585878 ,  0.2675909 ,  1.2458016 , ..., -0.12358122,\n",
      "        -0.01199026, -0.0216753 ],\n",
      "       [ 0.07227961,  0.2739603 ,  0.64146376, ..., -0.22831652,\n",
      "        -0.18559772, -0.20388503],\n",
      "       ...,\n",
      "       [-0.20727022,  0.17556304,  0.54559994, ..., -0.27441388,\n",
      "        -0.22785388, -0.2652471 ],\n",
      "       [-0.2264016 , -0.00446633,  0.4917392 , ..., -0.41112995,\n",
      "        -0.08572497, -0.10020575],\n",
      "       [-0.21554996,  0.15028797,  0.54269034, ..., -0.25591385,\n",
      "        -0.21754387, -0.25459814]], shape=(10784, 384), dtype=float32)), label_ids=None, metrics={'test_runtime': 113.75, 'test_samples_per_second': 94.804, 'test_steps_per_second': 11.851})\n",
       "valid_answers:  [{'score': np.float32(3.0198717), 'text': 'the'}, {'score': np.float32(2.9661405), 'text': 'emphasized the \"golden anniversary\" with various gold-themed initiatives, as well as temporarily suspending the'}, {'score': np.float32(2.9583802), 'text': ', as well as temporarily suspending the'}, {'score': np.float32(2.947894), 'text': 'temporarily suspending the'}, {'score': np.float32(2.9073308), 'text': 'ing the'}, {'score': np.float32(2.8419073), 'text': 'Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the \"golden anniversary\"'}, {'score': np.float32(2.7696795), 'text': 'As this was the 50th Super Bowl, the league emphasized the \"golden anniversary\" with various gold-themed initiatives, as well as temporarily suspending the'}, {'score': np.float32(2.7281783), 'text': 'Stadium in the San Francisco Bay Area at Santa Clara, California. As this was'}, {'score': np.float32(2.6946898), 'text': 'Stadium in the San'}, {'score': np.float32(2.6901464), 'text': '(AFC) champion Denver Broncos defeated the National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title'}, {'score': np.float32(2.677681), 'text': 'Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the \"golden anniversary\" with'}, {'score': np.float32(2.675403), 'text': 'Stadium'}, {'score': np.float32(2.6738162), 'text': 'emphasized the \"golden anniversary\"'}, {'score': np.float32(2.663601), 'text': 'National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title'}, {'score': np.float32(2.6547441), 'text': 's Stadium in the San Francisco Bay Area at Santa Clara, California. As this was the 50th Super Bowl, the league emphasized the \"golden anniversary\"'}, {'score': np.float32(2.6447487), 'text': 'season. The American Football Conference (AFC) champion Denver'}, {'score': np.float32(2.6445768), 'text': 'an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver'}, {'score': np.float32(2.6415372), 'text': 'Stadium in the San Francisco Bay Area at Santa Clara'}, {'score': np.float32(2.6405795), 'text': 'game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion Denver'}, {'score': np.float32(2.6214933), 'text': '(AFC) champion Denver'}]\n",
      "{'text': ['Denver Broncos', 'Denver Broncos', 'Denver Broncos'], 'answer_start': [177, 177, 177]}\n",
      "Post-processing predictions for 10570 examples split into 10784 features.\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "5d135367216f4dae97a44f7cf25146d1",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "  0%|          | 0/10570 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Filtering invalid samples\n",
      "Running evaluation\n",
      "Evaluation results: {'exact_match': 0.9271523178807947, 'f1': 8.51672079747674}\n"
     ]
    }
   ],
   "source": [
    "from datasets import ClassLabel, Sequence, load_dataset, Dataset\n",
    "from evaluate import load\n",
    "import random\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "from IPython.display import display, HTML\n",
    "import transformers\n",
    "from transformers import AutoTokenizer, AutoModelForQuestionAnswering, TrainingArguments, Trainer, default_data_collator\n",
    "import torch\n",
    "torch.cuda.empty_cache()  # free cached GPU memory\n",
    "import collections\n",
    "from tqdm.auto import tqdm\n",
    "import evaluate\n",
    "\n",
    "def main():\n",
    "    def show_random_elements(dataset, num_examples=10):\n",
    "        assert num_examples <= len(dataset), \"Can't pick more elements than there are in the dataset.\"\n",
    "        picks = []\n",
    "        for _ in range(num_examples):\n",
    "            pick = random.randint(0, len(dataset)-1)\n",
    "            while pick in picks:\n",
    "                pick = random.randint(0, len(dataset)-1)\n",
    "            picks.append(pick)\n",
    "\n",
    "        df = pd.DataFrame(dataset[picks])\n",
    "        for column, typ in dataset.features.items():\n",
    "            if isinstance(typ, ClassLabel):\n",
    "                df[column] = df[column].transform(lambda i: typ.names[i])\n",
    "            elif isinstance(typ, Sequence) and isinstance(typ.feature, ClassLabel):\n",
    "                df[column] = df[column].transform(lambda x: [typ.feature.names[i] for i in x])\n",
    "        display(HTML(df.to_html()))\n",
    "\n",
    "    # The maximum length of a feature (question and context).\n",
    "    max_length = 384\n",
    "    # The authorized overlap between two parts of the context when splitting is needed.\n",
    "    doc_stride = 128\n",
    "    squad_v2 = False\n",
    "    batch_size = 8\n",
    "    # Load the distilbert tokenizer/model (distilbert-base-uncased, a lightweight BERT: fast, low memory footprint).\n",
    "    model_checkpoint = \"distilbert-base-uncased\"\n",
    "    # Load the dataset (\"squad_v2\" if squad_v2=True); it contains train and validation splits.\n",
    "    datasets = load_dataset(\"squad_v2\" if squad_v2 else \"squad\")\n",
    "    # Show 10 random samples, with labels rendered as human-readable text, to inspect the data structure.\n",
    "    show_random_elements(datasets[\"train\"])\n",
    "\n",
    "    tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)\n",
    "    assert isinstance(tokenizer, transformers.PreTrainedTokenizerFast)\n",
    "    pad_on_right = tokenizer.padding_side == \"right\"\n",
    "\n",
    "    for i, example in enumerate(datasets[\"train\"]):\n",
    "        if len(tokenizer(example[\"question\"], example[\"context\"])[\"input_ids\"]) > 384:\n",
    "            break\n",
    "    # Pick out an example whose tokenized length exceeds 384 (the maximum length).\n",
    "    example = datasets[\"train\"][i]\n",
    "    tokenized_example = tokenizer(\n",
    "        example[\"question\"],\n",
    "        example[\"context\"],\n",
    "        max_length=max_length,\n",
    "        # truncation=\"only_second\" truncates only the context and keeps the question intact.\n",
    "        truncation=\"only_second\",\n",
    "        # return_overflowing_tokens=True splits a long context (e.g. > 384 tokens) into several overlapping chunks.\n",
    "        return_overflowing_tokens=True,\n",
    "        stride=doc_stride\n",
    "    )\n",
    "\n",
    "    for x in tokenized_example[\"input_ids\"][:2]:\n",
    "        print(tokenizer.decode(x))\n",
    "    # The goal is to convert \"question-context-answers\" into the format the model expects.\n",
    "    def prepare_train_features(examples):\n",
    "        # Some questions have lots of leading whitespace, which is useless and can make context\n",
    "        # truncation fail (the tokenized question would take up a lot of space), so we strip it.\n",
    "        examples[\"question\"] = [q.lstrip() for q in examples[\"question\"]]\n",
    "\n",
    "        # Tokenize our examples with truncation and padding, but keep the overflow using a stride.\n",
    "        # When the context is long, one example can yield several features, each with a context that\n",
    "        # overlaps the previous feature's context a bit.\n",
    "        tokenized_examples = tokenizer(\n",
    "            examples[\"question\" if pad_on_right else \"context\"],\n",
    "            examples[\"context\" if pad_on_right else \"question\"],\n",
    "            truncation=\"only_second\" if pad_on_right else \"only_first\",\n",
    "            max_length=max_length,\n",
    "            stride=doc_stride,\n",
    "            return_overflowing_tokens=True,\n",
    "            return_offsets_mapping=True,\n",
    "            padding=\"max_length\",\n",
    "        )\n",
    "\n",
    "        # Since one example can give us several features (if it has a long context), we need a map from feature to example; this key provides it.\n",
    "        sample_mapping = tokenized_examples.pop(\"overflow_to_sample_mapping\")\n",
    "        # The offset mapping gives us the map from token to character position in the original context, which helps compute the start and end positions.\n",
    "        offset_mapping = tokenized_examples.pop(\"offset_mapping\")\n",
    "\n",
    "        # Now label these examples!\n",
    "        tokenized_examples[\"start_positions\"] = []\n",
    "        tokenized_examples[\"end_positions\"] = []\n",
    "\n",
    "        for i, offsets in enumerate(offset_mapping):\n",
    "            # We will use the index of the CLS special token to mark impossible answers.\n",
    "            input_ids = tokenized_examples[\"input_ids\"][i]\n",
    "            cls_index = input_ids.index(tokenizer.cls_token_id)\n",
    "\n",
    "            # Grab the sequence ids for this feature (to tell context tokens from question tokens).\n",
    "            sequence_ids = tokenized_examples.sequence_ids(i)\n",
    "\n",
    "            # One example can give several spans; this is the index of the example containing this span of text.\n",
    "            sample_index = sample_mapping[i]\n",
    "            answers = examples[\"answers\"][sample_index]\n",
    "            # If no answer is given, set the cls_index as the answer.\n",
    "            if len(answers[\"answer_start\"]) == 0:\n",
    "                tokenized_examples[\"start_positions\"].append(cls_index)\n",
    "                tokenized_examples[\"end_positions\"].append(cls_index)\n",
    "            else:\n",
    "                # Start and end character index of the answer in the text.\n",
    "                start_char = answers[\"answer_start\"][0]\n",
    "                end_char = start_char + len(answers[\"text\"][0])\n",
    "\n",
    "                # Start token index of the current span in the text.\n",
    "                token_start_index = 0\n",
    "                while sequence_ids[token_start_index] != (1 if pad_on_right else 0):\n",
    "                    token_start_index += 1\n",
    "\n",
    "                # End token index of the current span in the text.\n",
    "                token_end_index = len(input_ids) - 1\n",
    "                while sequence_ids[token_end_index] != (1 if pad_on_right else 0):\n",
    "                    token_end_index -= 1\n",
    "\n",
    "                # Detect if the answer is out of the span (in which case this feature is labeled with the CLS index).\n",
    "                if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char):\n",
    "                    tokenized_examples[\"start_positions\"].append(cls_index)\n",
    "                    tokenized_examples[\"end_positions\"].append(cls_index)\n",
    "                else:\n",
    "                    # Otherwise, move token_start_index and token_end_index to the two ends of the answer.\n",
    "                    # Note: we could go after the last offset if the answer is the last word (edge case).\n",
    "                    while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char:\n",
    "                        token_start_index += 1\n",
    "                    tokenized_examples[\"start_positions\"].append(token_start_index - 1)\n",
    "                    while offsets[token_end_index][1] >= end_char:\n",
    "                        token_end_index -= 1\n",
    "                    tokenized_examples[\"end_positions\"].append(token_end_index + 1)\n",
    "\n",
    "        return tokenized_examples\n",
    "\n",
    "    print(\"Preprocessing the training set!\")\n",
    "    tokenized_datasets = datasets.map(prepare_train_features,\n",
    "                                      batched=True,\n",
    "                                      # drop the original columns (question, context, ...) that are not model inputs\n",
    "                                      remove_columns=datasets[\"train\"].column_names)\n",
    "\n",
    "    \n",
    "    # Model configuration and training: load the pretrained QA model.\n",
    "    model = AutoModelForQuestionAnswering.from_pretrained(model_checkpoint)\n",
    "    model_dir = \"models/distilbert-base-uncased-finetuned-squad\"\n",
    "    # Training arguments\n",
    "    args = TrainingArguments(\n",
    "        output_dir=model_dir,\n",
    "        # Evaluate on the validation set at the end of each epoch, avoiding the overhead of more frequent evaluation.\n",
    "        eval_strategy=\"epoch\",\n",
    "        # A common learning rate for QA fine-tuning (keep it small to avoid overwriting pretrained knowledge).\n",
    "        learning_rate=2e-5,\n",
    "        per_device_train_batch_size=batch_size,\n",
    "        per_device_eval_batch_size=batch_size,\n",
    "        num_train_epochs=3,\n",
    "        # L2 regularization to reduce overfitting.\n",
    "        weight_decay=0.01,\n",
    "        dataloader_pin_memory=True,  # can be disabled if there is no GPU or you don't need faster host-to-GPU data loading\n",
    "    )\n",
    "    data_collator = default_data_collator\n",
    "    # Use small train/eval subsets for a quick demonstration run:\n",
    "    tokenized_small_train_dataset = tokenized_datasets[\"train\"].shuffle(seed=42).select(range(100))\n",
    "    tokenized_small_eval_dataset = tokenized_datasets[\"validation\"].shuffle(seed=42).select(range(100))\n",
    "    trainer = Trainer(\n",
    "        model,\n",
    "        args,\n",
    "        train_dataset=tokenized_small_train_dataset,\n",
    "        eval_dataset=tokenized_small_eval_dataset,\n",
    "        data_collator=data_collator,\n",
    "        processing_class=tokenizer,\n",
    "    )\n",
    "    print(\"Starting training!\")\n",
    "    trainer.train()\n",
    "    # Save the model (Trainer.save_model returns None, so there is nothing to capture)\n",
    "    trainer.save_model(model_dir)\n",
    "    print(\"model saved to: \", model_dir)\n",
    "\n",
    "    # The model outputs the start- and end-position logits of the predicted answer; evaluation needs extra post-processing to map predictions back to spans of the context.\n",
    "    for batch in trainer.get_eval_dataloader():\n",
    "        break\n",
    "    batch = {k: v.to(trainer.args.device) for k, v in batch.items()}\n",
    "    with torch.no_grad():\n",
    "        output = trainer.model(**batch)\n",
    "    print(output.keys())\n",
    "    print(output.start_logits.shape, output.end_logits.shape)\n",
    "    print(output.start_logits.argmax(dim=-1), output.end_logits.argmax(dim=-1))\n",
    "\n",
    "    \"\"\"To classify answers, we use the score obtained by adding the start and end logits.\n",
    "        A hyperparameter named n_best_size keeps us from ranking every possible answer:\n",
    "        we pick the best indices in the start and end logits, collect all the answers they\n",
    "        predict, check each one for validity, sort them by score, and keep the best.\"\"\"\n",
    "    n_best_size = 20\n",
    "    start_logits = output.start_logits[0].cpu().numpy()\n",
    "    end_logits = output.end_logits[0].cpu().numpy()\n",
    "    # Get the indices of the best start and end positions:\n",
    "    start_indexes = np.argsort(start_logits)[-1 : -n_best_size - 1 : -1].tolist()\n",
    "    end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist()\n",
    "    valid_answers = []\n",
    "    # Iterate over the combinations of start and end indices\n",
    "    for start_index in start_indexes:\n",
    "        for end_index in end_indexes:\n",
    "            if start_index <= end_index:  # we still need a further test to check that the answer is inside the context\n",
    "                valid_answers.append(\n",
    "                    {\n",
    "                        \"score\": start_logits[start_index] + end_logits[end_index],\n",
    "                        \"text\": \"\"  # we still need a way to recover the original substring of the context for this answer\n",
    "                    }\n",
    "                )\n",
    "    print(\"valid_answers: \", valid_answers)\n",
    "    # Validation-set preprocessing\n",
    "    def prepare_validation_features(examples):\n",
    "        # Some questions have lots of leading whitespace, which is useless and can make context\n",
    "        # truncation fail (the tokenized question takes up a lot of space), so we strip it.\n",
    "        examples[\"question\"] = [q.lstrip() for q in examples[\"question\"]]\n",
    "\n",
    "        # Tokenize with truncation and maybe padding, but keep the overflowing tokens using a stride.\n",
    "        # A long context can therefore yield several features, each with a context that slightly overlaps the previous one's.\n",
    "        tokenized_examples = tokenizer(\n",
    "            examples[\"question\" if pad_on_right else \"context\"],\n",
    "            examples[\"context\" if pad_on_right else \"question\"],\n",
    "            truncation=\"only_second\" if pad_on_right else \"only_first\",\n",
    "            max_length=max_length,\n",
    "            stride=doc_stride,\n",
    "            return_overflowing_tokens=True,\n",
    "            return_offsets_mapping=True,\n",
    "            padding=\"max_length\",\n",
    "        )\n",
    "\n",
    "        # Since one example can yield several features when its context is long, we need a map from feature to example; this key serves that purpose.\n",
    "        sample_mapping = tokenized_examples.pop(\"overflow_to_sample_mapping\")\n",
    "\n",
    "        # Keep the id of the example that produced each feature, and store the offset mapping.\n",
    "        tokenized_examples[\"example_id\"] = []\n",
    "\n",
    "        for i in range(len(tokenized_examples[\"input_ids\"])):\n",
    "            # Grab the sequence ids for this feature (to tell which tokens are context and which are question).\n",
    "            sequence_ids = tokenized_examples.sequence_ids(i)\n",
    "            context_index = 1 if pad_on_right else 0\n",
    "\n",
    "            # One example can yield several spans of text; this is the index of the example containing this span.\n",
    "            sample_index = sample_mapping[i]\n",
    "            tokenized_examples[\"example_id\"].append(examples[\"id\"][sample_index])\n",
    "\n",
    "            # Set offset mappings that are not part of the context to None, making it easy to tell whether a token position is inside the context.\n",
    "            tokenized_examples[\"offset_mapping\"][i] = [\n",
    "                (o if sequence_ids[k] == context_index else None)\n",
    "                for k, o in enumerate(tokenized_examples[\"offset_mapping\"][i])\n",
    "            ]\n",
    "\n",
    "        return tokenized_examples\n",
    "\n",
    "    print(\"Preprocessing the validation set!\")\n",
    "    validation_features = datasets[\"validation\"].map(\n",
    "        prepare_validation_features,\n",
    "        batched=True,\n",
    "        remove_columns=datasets[\"validation\"].column_names\n",
    "    )\n",
    "\n",
    "    raw_predictions = trainer.predict(validation_features)\n",
    "    print(\"raw_predictions: \", raw_predictions)\n",
    "    # The Trainer hides columns the model does not use (here example_id and offset_mapping, which we need for post-processing), so restore them:\n",
    "    validation_features.set_format(type=validation_features.format[\"type\"], columns=list(validation_features.features.keys()))\n",
    "\n",
    "    max_answer_length = 30\n",
    "    start_logits = output.start_logits[0].cpu().numpy()\n",
    "    end_logits = output.end_logits[0].cpu().numpy()\n",
    "    offset_mapping = validation_features[0][\"offset_mapping\"]\n",
    "\n",
    "    # The first feature comes from the first example. In the general case, we would need to match each example_id to an example index.\n",
    "    context = datasets[\"validation\"][0][\"context\"]\n",
    "\n",
    "    # Gather the indices of the best start/end logits:\n",
    "    start_indexes = np.argsort(start_logits)[-1 : -n_best_size - 1 : -1].tolist()\n",
    "    end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist()\n",
    "    valid_answers = []\n",
    "    for start_index in start_indexes:\n",
    "        for end_index in end_indexes:\n",
    "            # Skip answers that are out of scope: either the index is out of bounds or it points at input ids outside the context.\n",
    "            if (\n",
    "                    start_index >= len(offset_mapping)\n",
    "                    or end_index >= len(offset_mapping)\n",
    "                    or offset_mapping[start_index] is None\n",
    "                    or offset_mapping[end_index] is None\n",
    "            ):\n",
    "                continue\n",
    "            # Skip answers with a negative length or longer than max_answer_length.\n",
    "            if end_index < start_index or end_index - start_index + 1 > max_answer_length:\n",
    "                continue\n",
    "            if start_index <= end_index: # we need to refine this test to check the answer is inside the context\n",
    "                start_char = offset_mapping[start_index][0]\n",
    "                end_char = offset_mapping[end_index][1]\n",
    "                valid_answers.append(\n",
    "                    {\n",
    "                        \"score\": start_logits[start_index] + end_logits[end_index],\n",
    "                        \"text\": context[start_char: end_char]\n",
    "                    }\n",
    "                )\n",
    "\n",
    "    valid_answers = sorted(valid_answers, key=lambda x: x[\"score\"], reverse=True)[:n_best_size]\n",
    "    print(\"valid_answers: \", valid_answers)\n",
    "    print(datasets[\"validation\"][0][\"answers\"])\n",
    "\n",
    "    examples = datasets[\"validation\"]\n",
    "    features = validation_features\n",
    "\n",
    "    example_id_to_index = {k: i for i, k in enumerate(examples[\"id\"])}\n",
    "    features_per_example = collections.defaultdict(list)\n",
    "    for i, feature in enumerate(features):\n",
    "        features_per_example[example_id_to_index[feature[\"example_id\"]]].append(i)\n",
    "\n",
    "    # Prediction post-processing\n",
    "    def postprocess_qa_predictions(examples, features, raw_predictions, n_best_size = 20, max_answer_length = 30):\n",
    "        all_start_logits, all_end_logits = raw_predictions\n",
    "        # Build a map from each example to its corresponding features.\n",
    "        example_id_to_index = {k: i for i, k in enumerate(examples[\"id\"])}\n",
    "        features_per_example = collections.defaultdict(list)\n",
    "        for i, feature in enumerate(features):\n",
    "            features_per_example[example_id_to_index[feature[\"example_id\"]]].append(i)\n",
    "\n",
    "        # The dictionary we have to fill.\n",
    "        predictions = collections.OrderedDict()\n",
    "\n",
    "        # Logging.\n",
    "        print(f\"Post-processing predictions for {len(examples)} examples split into {len(features)} features.\")\n",
    "\n",
    "        # Loop over all the examples!\n",
    "        for example_index, example in enumerate(tqdm(examples)):\n",
    "            # These are the indices of the features associated with the current example.\n",
    "            feature_indices = features_per_example[example_index]\n",
    "\n",
    "            min_null_score = None # Only used if squad_v2 is True.\n",
    "            valid_answers = []\n",
    "\n",
    "            context = example[\"context\"]\n",
    "            # Loop over all the features associated with the current example.\n",
    "            for feature_index in feature_indices:\n",
    "                # Grab the model's predictions for this feature.\n",
    "                start_logits = all_start_logits[feature_index]\n",
    "                end_logits = all_end_logits[feature_index]\n",
    "                # This lets us map positions in the logits to spans of text in the original context.\n",
    "                offset_mapping = features[feature_index][\"offset_mapping\"]\n",
    "\n",
    "                # Update the minimum null prediction.\n",
    "                cls_index = features[feature_index][\"input_ids\"].index(tokenizer.cls_token_id)\n",
    "                feature_null_score = start_logits[cls_index] + end_logits[cls_index]\n",
    "                if min_null_score is None or min_null_score < feature_null_score:\n",
    "                    min_null_score = feature_null_score\n",
    "\n",
    "                # Go through all the best start and end logits to pick the `n_best_size` best choices.\n",
    "                start_indexes = np.argsort(start_logits)[-1 : -n_best_size - 1 : -1].tolist()\n",
    "                end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist()\n",
    "                for start_index in start_indexes:\n",
    "                    for end_index in end_indexes:\n",
    "                        # Skip answers that are out of scope: either the index is out of bounds or it points at input ids outside the context.\n",
    "                        if (\n",
    "                                start_index >= len(offset_mapping)\n",
    "                                or end_index >= len(offset_mapping)\n",
    "                                or offset_mapping[start_index] is None\n",
    "                                or offset_mapping[end_index] is None\n",
    "                        ):\n",
    "                            continue\n",
    "                        # Skip answers with a negative length or longer than max_answer_length.\n",
    "                        if end_index < start_index or end_index - start_index + 1 > max_answer_length:\n",
    "                            continue\n",
    "\n",
    "                        start_char = offset_mapping[start_index][0]\n",
    "                        end_char = offset_mapping[end_index][1]\n",
    "                        valid_answers.append(\n",
    "                            {\n",
    "                                \"score\": start_logits[start_index] + end_logits[end_index],\n",
    "                                \"text\": context[start_char: end_char]\n",
    "                            }\n",
    "                        )\n",
    "\n",
    "            if len(valid_answers) > 0:\n",
    "                best_answer = sorted(valid_answers, key=lambda x: x[\"score\"], reverse=True)[0]\n",
    "            else:\n",
    "                # In the very rare case where we have no non-null prediction, create a fake one to avoid failure.\n",
    "                best_answer = {\"text\": \"\", \"score\": 0.0}\n",
    "\n",
    "            # Pick our final answer: the best one, or the null answer (squad_v2 only).\n",
    "            if not squad_v2:\n",
    "                predictions[example[\"id\"]] = best_answer[\"text\"]\n",
    "            else:\n",
    "                answer = best_answer[\"text\"] if best_answer[\"score\"] > min_null_score else \"\"\n",
    "                predictions[example[\"id\"]] = answer\n",
    "\n",
    "        return predictions\n",
    "\n",
    "    final_predictions = postprocess_qa_predictions(datasets[\"validation\"], validation_features, raw_predictions.predictions)\n",
    "    metric = load(\"squad_v2\" if squad_v2 else \"squad\")\n",
    "\n",
    "    # Format predictions for the squad metric\n",
    "    if squad_v2:\n",
    "        formatted_predictions = [{\"id\": k, \"prediction_text\": v, \"no_answer_probability\": 0.0}\n",
    "                                 for k, v in final_predictions.items()]\n",
    "    else:\n",
    "        formatted_predictions = [{\"id\": k, \"prediction_text\": v}\n",
    "                                 for k, v in final_predictions.items()]\n",
    "\n",
    "    references = [{\"id\": ex[\"id\"], \"answers\": ex[\"answers\"]} for ex in datasets[\"validation\"]]\n",
    "\n",
    "    print(\"Filtering invalid samples\")\n",
    "    valid_references = [ref for ref in references if ref.get(\"answers\") and len(ref[\"answers\"][\"text\"]) > 0]\n",
    "    valid_pred_ids = [ref[\"id\"] for ref in valid_references]\n",
    "    valid_predictions = [pred for pred in formatted_predictions if pred[\"id\"] in valid_pred_ids]\n",
    "\n",
    "    print(\"Running evaluation\")\n",
    "    if len(valid_references) > 0:\n",
    "        metrics = metric.compute(predictions=valid_predictions, references=valid_references)\n",
    "        print(\"Evaluation results:\", metrics)\n",
    "    else:\n",
    "        print(\"Warning: no valid reference answers for evaluation\")\n",
    "\n",
    "if __name__ == '__main__':\n",
    "    main()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e0f0be66-4943-451e-9c61-1c3304fcb396",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
