{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "8b88a474-55ad-4b26-aabf-f16c4b6616ed",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>id</th>\n",
       "      <th>title</th>\n",
       "      <th>context</th>\n",
       "      <th>question</th>\n",
       "      <th>answers</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>5731ab21b9d445190005e44f</td>\n",
       "      <td>Religion_in_ancient_Rome</td>\n",
       "      <td>The meaning and origin of many archaic festivals baffled even Rome's intellectual elite, but the more obscure they were, the greater the opportunity for reinvention and reinterpretation — a fact lost neither on Augustus in his program of religious reform, which often cloaked autocratic innovation, nor on his only rival as mythmaker of the era, Ovid. In his Fasti, a long-form poem covering Roman holidays from January to June, Ovid presents a unique look at Roman antiquarian lore, popular customs, and religious practice that is by turns imaginative, entertaining, high-minded, and scurrilous; not a priestly account, despite the speaker's pose as a vates or inspired poet-prophet, but a work of description, imagination and poetic etymology that reflects the broad humor and burlesque spirit of such venerable festivals as the Saturnalia, Consualia, and feast of Anna Perenna on the Ides of March, where Ovid treats the assassination of the newly deified Julius Caesar as utterly incidental to the festivities among the Roman people. But official calendars preserved from different times and places also show a flexibility in omitting or expanding events, indicating that there was no single static and authoritative calendar of required observances. In the later Empire under Christian rule, the new Christian festivals were incorporated into the existing framework of the Roman calendar, alongside at least some of the traditional festivals.</td>\n",
       "      <td>What poet wrote a long poem describing Roman religious holidays?</td>\n",
       "      <td>{'text': ['Ovid'], 'answer_start': [346]}</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>56e08b457aa994140058e5e3</td>\n",
       "      <td>Hydrogen</td>\n",
       "      <td>Hydrogen forms a vast array of compounds with carbon called the hydrocarbons, and an even vaster array with heteroatoms that, because of their general association with living things, are called organic compounds. The study of their properties is known as organic chemistry and their study in the context of living organisms is known as biochemistry. By some definitions, \"organic\" compounds are only required to contain carbon. However, most of them also contain hydrogen, and because it is the carbon-hydrogen bond which gives this class of compounds most of its particular chemical characteristics, carbon-hydrogen bonds are required in some definitions of the word \"organic\" in chemistry. Millions of hydrocarbons are known, and they are usually formed by complicated synthetic pathways, which seldom involve elementary hydrogen.</td>\n",
       "      <td>What is the form of hydrogen and carbon called?</td>\n",
       "      <td>{'text': ['hydrocarbons'], 'answer_start': [64]}</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>56cef65baab44d1400b88d36</td>\n",
       "      <td>Spectre_(2015_film)</td>\n",
       "      <td>Christopher Orr, writing in The Atlantic, also criticised the film, saying that Spectre \"backslides on virtually every [aspect]\". Lawrence Toppman of The Charlotte Observer called Craig's performance \"Bored, James Bored.\" Alyssa Rosenberg, writing for The Washington Post, stated that the film turned into \"a disappointingly conventional Bond film.\"</td>\n",
       "      <td>What adjective did Lawrence Toppman use to describe Craig's portrayal of James Bond?</td>\n",
       "      <td>{'text': ['Bored'], 'answer_start': [201]}</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>571a30bb10f8ca1400304f53</td>\n",
       "      <td>Seattle</td>\n",
       "      <td>King County Metro provides frequent stop bus service within the city and surrounding county, as well as a South Lake Union Streetcar line between the South Lake Union neighborhood and Westlake Center in downtown. Seattle is one of the few cities in North America whose bus fleet includes electric trolleybuses. Sound Transit currently provides an express bus service within the metropolitan area; two Sounder commuter rail lines between the suburbs and downtown; its Central Link light rail line, which opened in 2009, between downtown and Sea-Tac Airport gives the city its first rapid transit line that has intermediate stops within the city limits. Washington State Ferries, which manages the largest network of ferries in the United States and third largest in the world, connects Seattle to Bainbridge and Vashon Islands in Puget Sound and to Bremerton and Southworth on the Kitsap Peninsula.</td>\n",
       "      <td>To what two islands does the ferry service connect?</td>\n",
       "      <td>{'text': ['Bainbridge and Vashon'], 'answer_start': [796]}</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>570d2cb4fed7b91900d45cb5</td>\n",
       "      <td>Macintosh</td>\n",
       "      <td>In 1998, after the return of Steve Jobs, Apple consolidated its multiple consumer-level desktop models into the all-in-one iMac G3, which became a commercial success and revitalized the brand. Since their transition to Intel processors in 2006, the complete lineup is entirely based on said processors and associated systems. Its current lineup comprises three desktops (the all-in-one iMac, entry-level Mac mini, and the Mac Pro tower graphics workstation), and four laptops (the MacBook, MacBook Air, MacBook Pro, and MacBook Pro with Retina display). Its Xserve server was discontinued in 2011 in favor of the Mac Mini and Mac Pro.</td>\n",
       "      <td>What took the place of Mac's Xserve server?</td>\n",
       "      <td>{'text': ['Mac Mini and Mac Pro'], 'answer_start': [613]}</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>5</th>\n",
       "      <td>570af6876b8089140040f646</td>\n",
       "      <td>Videoconferencing</td>\n",
       "      <td>Technological developments by videoconferencing developers in the 2010s have extended the capabilities of video conferencing systems beyond the boardroom for use with hand-held mobile devices that combine the use of video, audio and on-screen drawing capabilities broadcasting in real-time over secure networks, independent of location. Mobile collaboration systems now allow multiple people in previously unreachable locations, such as workers on an off-shore oil rig, the ability to view and discuss issues with colleagues thousands of miles away. Traditional videoconferencing system manufacturers have begun providing mobile applications as well, such as those that allow for live and still image streaming.</td>\n",
       "      <td>What is one example of an application that videoconferencing manufacturers have begun to offer?</td>\n",
       "      <td>{'text': ['still image streaming'], 'answer_start': [689]}</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>6</th>\n",
       "      <td>56e82d0100c9c71400d775eb</td>\n",
       "      <td>Dialect</td>\n",
       "      <td>Italy is home to a vast array of native regional minority languages, most of which are Romance-based and have their own local variants. These regional languages are often referred to colloquially or in non-linguistic circles as Italian \"dialects,\" or dialetti (standard Italian for \"dialects\"). However, the majority of the regional languages in Italy are in fact not actually \"dialects\" of standard Italian in the strict linguistic sense, as they are not derived from modern standard Italian but instead evolved locally from Vulgar Latin independent of standard Italian, with little to no influence from what is now known as \"standard Italian.\" They are therefore better classified as individual languages rather than \"dialects.\"</td>\n",
       "      <td>What are Italian dialects termed in the Italian language?</td>\n",
       "      <td>{'text': ['dialetti'], 'answer_start': [251]}</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>7</th>\n",
       "      <td>56e147e6cd28a01900c6772b</td>\n",
       "      <td>Universal_Studios</td>\n",
       "      <td>The Universal Film Manufacturing Company was incorporated in New York on April 30, 1912. Laemmle, who emerged as president in July 1912, was the primary figure in the partnership with Dintenfass, Baumann, Kessel, Powers, Swanson, Horsley, and Brulatour. Eventually all would be bought out by Laemmle. The new Universal studio was a vertically integrated company, with movie production, distribution and exhibition venues all linked in the same corporate entity, the central element of the Studio system era.</td>\n",
       "      <td>Along with exhibition and distribution, what business did the Universal Film Manufacturing Company engage in?</td>\n",
       "      <td>{'text': ['movie production'], 'answer_start': [368]}</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>8</th>\n",
       "      <td>5731933a05b4da19006bd2d0</td>\n",
       "      <td>Steven_Spielberg</td>\n",
       "      <td>Spielberg's next film, Schindler's List, was based on the true story of Oskar Schindler, a man who risked his life to save 1,100 Jews from the Holocaust. Schindler's List earned Spielberg his first Academy Award for Best Director (it also won Best Picture). With the film a huge success at the box office, Spielberg used the profits to set up the Shoah Foundation, a non-profit organization that archives filmed testimony of Holocaust survivors. In 1997, the American Film Institute listed it among the 10 Greatest American Films ever Made (#9) which moved up to (#8) when the list was remade in 2007.</td>\n",
       "      <td>Whose life was 'Schindler's List' based on?</td>\n",
       "      <td>{'text': ['Oskar Schindler'], 'answer_start': [72]}</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>9</th>\n",
       "      <td>56de93f94396321400ee2a36</td>\n",
       "      <td>Arnold_Schwarzenegger</td>\n",
       "      <td>In 1985, Schwarzenegger appeared in \"Stop the Madness\", an anti-drug music video sponsored by the Reagan administration. He first came to wide public notice as a Republican during the 1988 presidential election, accompanying then-Vice President George H.W. Bush at a campaign rally.</td>\n",
       "      <td>In what presidential election year did Schwarzenegger make a name for himself as a prominent Republican?</td>\n",
       "      <td>{'text': ['1988'], 'answer_start': [184]}</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[CLS] how many wins does the notre dame men ' s basketball team have? [SEP] the men ' s basketball team has over 1, 600 wins, one of only 12 schools who have reached that mark, and have appeared in 28 ncaa tournaments. former player austin carr holds the record for most points scored in a single game of the tournament with 61. although the team has never won the ncaa tournament, they were named by the helms athletic foundation as national champions twice. the team has orchestrated a number of upsets of number one ranked teams, the most notable of which was ending ucla ' s record 88 - game winning streak in 1974. the team has beaten an additional eight number - one teams, and those nine wins rank second, to ucla ' s 10, all - time in wins against the top team. the team plays in newly renovated purcell pavilion ( within the edmund p. joyce center ), which reopened for the beginning of the 2009 – 2010 season. the team is coached by mike brey, who, as of the 2014 – 15 season, his fifteenth at notre dame, has achieved a 332 - 165 record. in 2009 they were invited to the nit, where they advanced to the semifinals but were beaten by penn state who went on and beat baylor in the championship. the 2010 – 11 team concluded its regular season ranked number seven in the country, with a record of 25 – 5, brey ' s fifth straight 20 - win season, and a second - place finish in the big east. during the 2014 - 15 season, the team went 32 - 6 and won the acc conference tournament, later advancing to the elite 8, where the fighting irish lost on a missed buzzer - beater against then undefeated kentucky. led by nba draft picks jerian grant and pat connaughton, the fighting irish beat the eventual national champion duke blue devils twice during the season. the 32 wins were [SEP]\n",
      "[CLS] how many wins does the notre dame men ' s basketball team have? [SEP] championship. the 2010 – 11 team concluded its regular season ranked number seven in the country, with a record of 25 – 5, brey ' s fifth straight 20 - win season, and a second - place finish in the big east. during the 2014 - 15 season, the team went 32 - 6 and won the acc conference tournament, later advancing to the elite 8, where the fighting irish lost on a missed buzzer - beater against then undefeated kentucky. led by nba draft picks jerian grant and pat connaughton, the fighting irish beat the eventual national champion duke blue devils twice during the season. the 32 wins were the most by the fighting irish team since 1908 - 09. [SEP]\n",
       "Training set preprocessing done!\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Some weights of DistilBertForQuestionAnswering were not initialized from the model checkpoint at distilbert-base-uncased and are newly initialized: ['qa_outputs.bias', 'qa_outputs.weight']\n",
      "You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Starting training!\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "\n",
       "    <div>\n",
       "      \n",
       "      <progress value='21' max='21' style='width:300px; height:20px; vertical-align: middle;'></progress>\n",
       "      [21/21 00:13, Epoch 3/3]\n",
       "    </div>\n",
       "    <table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       " <tr style=\"text-align: left;\">\n",
       "      <th>Epoch</th>\n",
       "      <th>Training Loss</th>\n",
       "      <th>Validation Loss</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <td>1</td>\n",
       "      <td>No log</td>\n",
       "      <td>5.752842</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>2</td>\n",
       "      <td>No log</td>\n",
       "      <td>5.620673</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>3</td>\n",
       "      <td>No log</td>\n",
       "      <td>5.560856</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table><p>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "model_to_save:  None\n",
      "odict_keys(['loss', 'start_logits', 'end_logits'])\n",
      "torch.Size([16, 384]) torch.Size([16, 384])\n",
      "tensor([ 36, 155,  83,  45, 167,  67,   9, 108,   4,  24,  90,   9,  64,  17,\n",
      "         24, 152], device='cuda:0') tensor([ 45,  28,  57,  38,  98,  15,  93,  40,  38,  95,  85, 123,  18,  76,\n",
      "        123, 196], device='cuda:0')\n",
       "valid_answers:  [{'score': np.float32(2.5102084), 'text': ''}, {'score': np.float32(2.2008557), 'text': ''}, {'score': np.float32(2.187687), 'text': ''}, {'score': np.float32(2.1317492), 'text': ''}, {'score': np.float32(2.113037), 'text': ''}, {'score': np.float32(2.1039257), 'text': ''}, {'score': np.float32(2.098096), 'text': ''}, {'score': np.float32(2.0966508), 'text': ''}, {'score': np.float32(2.0718334), 'text': ''}, {'score': np.float32(2.0531664), 'text': ''}, {'score': np.float32(2.04404), 'text': ''}, {'score': np.float32(2.02927), 'text': ''}, {'score': np.float32(2.0268118), 'text': ''}, {'score': np.float32(2.0018184), 'text': ''}, {'score': np.float32(1.9809387), 'text': ''}, {'score': np.float32(1.9800663), 'text': ''}, {'score': np.float32(2.4268723), 'text': ''}, {'score': np.float32(2.1175194), 'text': ''}, {'score': np.float32(2.1043508), 'text': ''}, {'score': np.float32(2.0831914), 'text': ''}, {'score': np.float32(2.0484128), 'text': ''}, {'score': np.float32(2.029701), 'text': ''}, {'score': np.float32(2.0205896), 'text': ''}, {'score': np.float32(2.0147595), 'text': ''}, {'score': np.float32(2.0133147), 'text': ''}, {'score': np.float32(1.9884973), 'text': ''}, {'score': np.float32(1.9698304), 'text': ''}, {'score': np.float32(1.9607038), 'text': ''}, {'score': np.float32(1.9588996), 'text': ''}, {'score': np.float32(1.9459338), 'text': ''}, {'score': np.float32(1.9434757), 'text': ''}, {'score': np.float32(1.9184823), 'text': ''}, {'score': np.float32(1.8976026), 'text': ''}, {'score': np.float32(1.8967302), 'text': ''}, {'score': np.float32(2.419382), 'text': ''}, {'score': np.float32(2.1100292), 'text': ''}, {'score': np.float32(2.0409226), 'text': ''}, {'score': np.float32(2.0222108), 'text': ''}, {'score': np.float32(2.0130994), 'text': ''}, {'score': np.float32(2.0058246), 'text': ''}, {'score': np.float32(1.9810071), 'text': ''}, {'score': np.float32(1.9623402), 'text': ''}, {'score': np.float32(1.9532137), 'text': ''}, {'score': np.float32(1.9384437), 'text': ''}, {'score': np.float32(1.9359856), 'text': ''}, {'score': np.float32(1.9109921), 'text': ''}, {'score': np.float32(1.8901124), 'text': ''}, {'score': np.float32(1.88924), 'text': ''}, {'score': np.float32(2.347289), 'text': ''}, {'score': np.float32(2.0379364), 'text': ''}, {'score': np.float32(1.9688299), 'text': ''}, {'score': np.float32(1.950118), 'text': ''}, {'score': np.float32(1.9410065), 'text': ''}, {'score': np.float32(1.9089142), 'text': ''}, {'score': np.float32(1.8902473), 'text': ''}, {'score': np.float32(1.8811209), 'text': ''}, {'score': np.float32(1.8663507), 'text': ''}, {'score': np.float32(1.8638928), 'text': ''}, {'score': np.float32(1.8388993), 'text': ''}, {'score': np.float32(1.8180195), 'text': ''}, {'score': np.float32(1.817147), 'text': ''}, {'score': np.float32(2.329432), 'text': ''}, {'score': np.float32(2.0200791), 'text': ''}, {'score': np.float32(2.0069106), 'text': ''}, {'score': np.float32(1.9509727), 'text': ''}, {'score': np.float32(1.9322608), 'text': ''}, {'score': np.float32(1.9231493), 'text': ''}, {'score': np.float32(1.9173194), 'text': ''}, {'score': np.float32(1.9158745), 'text': ''}, {'score': np.float32(1.891057), 'text': ''}, {'score': np.float32(1.8723902), 'text': ''}, {'score': np.float32(1.8632636), 'text': ''}, {'score': np.float32(1.8484936), 'text': ''}, {'score': np.float32(1.8460355), 'text': ''}, {'score': np.float32(1.8210421), 'text': ''}, {'score': np.float32(1.8001623), 'text': ''}, {'score': np.float32(1.79929), 'text': ''}, {'score': np.float32(2.3290462), 'text': ''}, {'score': np.float32(2.0196934), 'text': ''}, {'score': np.float32(2.0065248), 'text': ''}, {'score': np.float32(1.9853654), 'text': ''}, {'score': np.float32(1.9505869), 'text': ''}, {'score': np.float32(1.931875), 'text': ''}, {'score': np.float32(1.9227636), 'text': ''}, {'score': np.float32(1.9169337), 'text': ''}, {'score': np.float32(1.9154887), 'text': ''}, {'score': np.float32(1.8906713), 'text': ''}, {'score': np.float32(1.8720044), 'text': ''}, {'score': np.float32(1.8628778), 'text': ''}, {'score': np.float32(1.8610736), 'text': ''}, {'score': np.float32(1.8481078), 'text': ''}, {'score': np.float32(1.8456497), 'text': ''}, {'score': np.float32(1.8391358), 'text': ''}, {'score': np.float32(1.8206563), 'text': ''}, {'score': np.float32(1.7997766), 'text': ''}, {'score': np.float32(1.7989042), 'text': ''}, {'score': np.float32(2.0084863), 'text': ''}, {'score': np.float32(1.9393797), 'text': ''}, {'score': np.float32(1.9206678), 'text': ''}, {'score': np.float32(1.879464), 'text': ''}, {'score': np.float32(1.8607972), 'text': ''}, {'score': np.float32(1.8516707), 'text': ''}, {'score': np.float32(1.8369005), 'text': ''}, {'score': np.float32(1.8344426), 'text': ''}, {'score': np.float32(1.8094491), 'text': ''}, {'score': np.float32(1.7885693), 'text': ''}, {'score': np.float32(1.7876968), 'text': ''}, {'score': np.float32(1.9362592), 'text': ''}, {'score': np.float32(1.8063285), 'text': ''}, {'score': np.float32(2.2534325), 'text': ''}, {'score': np.float32(1.9440798), 'text': ''}, {'score': np.float32(1.9309111), 'text': ''}, {'score': np.float32(1.9097517), 'text': ''}, {'score': np.float32(1.8749732), 'text': ''}, {'score': np.float32(1.8562613), 'text': ''}, {'score': np.float32(1.8471498), 'text': ''}, {'score': np.float32(1.8413199), 'text': ''}, {'score': np.float32(1.839875), 'text': ''}, {'score': np.float32(1.8150575), 'text': ''}, {'score': np.float32(1.7963907), 'text': ''}, {'score': np.float32(1.7872641), 'text': ''}, {'score': np.float32(1.7854599), 'text': ''}, {'score': np.float32(1.7724941), 'text': ''}, {'score': np.float32(1.770036), 'text': ''}, {'score': np.float32(1.763522), 'text': ''}, {'score': np.float32(1.7450426), 'text': ''}, {'score': np.float32(1.7241628), 'text': ''}, {'score': np.float32(1.7232904), 'text': ''}, {'score': np.float32(1.8514925), 'text': ''}, {'score': np.float32(1.782386), 'text': ''}, {'score': np.float32(1.763674), 'text': ''}, {'score': np.float32(1.7545626), 'text': ''}, {'score': np.float32(1.7224703), 'text': ''}, {'score': np.float32(1.7038034), 'text': ''}, {'score': np.float32(1.6946769), 'text': ''}, {'score': np.float32(1.6799068), 'text': ''}, {'score': np.float32(1.6774487), 'text': ''}, {'score': np.float32(1.6524553), 'text': ''}, {'score': np.float32(1.6315756), 'text': ''}, {'score': np.float32(1.6307032), 'text': ''}, {'score': np.float32(2.1384852), 'text': ''}, {'score': np.float32(1.8291324), 'text': ''}, {'score': np.float32(1.8159637), 'text': ''}, {'score': np.float32(1.7600259), 'text': ''}, {'score': np.float32(1.7413139), 'text': ''}, {'score': np.float32(1.7322025), 'text': ''}, {'score': np.float32(1.7249277), 'text': ''}, {'score': np.float32(1.7001102), 'text': ''}, {'score': np.float32(1.6814433), 'text': ''}, {'score': np.float32(1.6723168), 'text': ''}, {'score': np.float32(1.6575468), 'text': ''}, {'score': np.float32(1.6550887), 'text': ''}, {'score': np.float32(1.6300952), 'text': ''}, {'score': np.float32(1.6092155), 'text': ''}, {'score': np.float32(1.6083431), 'text': ''}, {'score': np.float32(1.7576617), 'text': ''}, {'score': np.float32(1.7389498), 'text': ''}, {'score': np.float32(1.697746), 'text': ''}, {'score': np.float32(1.6527245), 'text': ''}, {'score': np.float32(1.6277311), 'text': ''}, {'score': np.float32(1.6068513), 'text': ''}, {'score': np.float32(1.605979), 'text': ''}, {'score': np.float32(1.7126738), 'text': ''}, {'score': np.float32(1.6939619), 'text': ''}, {'score': np.float32(1.6527581), 'text': ''}, {'score': np.float32(1.6077366), 'text': ''}, {'score': np.float32(1.5827432), 'text': ''}, {'score': np.float32(1.5618634), 'text': ''}, {'score': np.float32(1.560991), 'text': ''}, {'score': np.float32(1.5755212), 'text': ''}, {'score': np.float32(1.7715083), 'text': ''}, {'score': np.float32(1.7024018), 'text': ''}, {'score': np.float32(1.6836898), 'text': ''}, {'score': np.float32(1.6424861), 'text': ''}, {'score': np.float32(1.5974646), 'text': ''}, {'score': np.float32(1.5724711), 'text': ''}, {'score': np.float32(1.5515914), 'text': ''}, {'score': np.float32(1.550719), 'text': ''}, {'score': np.float32(1.7682741), 'text': ''}, {'score': np.float32(1.6991675), 'text': ''}, {'score': np.float32(1.6804557), 'text': ''}, {'score': np.float32(1.639252), 'text': ''}, {'score': np.float32(1.5966884), 'text': ''}, {'score': np.float32(1.5942304), 'text': ''}, {'score': np.float32(1.569237), 'text': ''}, {'score': np.float32(1.5483572), 'text': ''}, {'score': np.float32(1.5474848), 'text': ''}]\n",
       "Validation set preprocessing done!\n"
     ]
    },
    {
     "data": {
      "text/html": [],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "raw_predictions:  PredictionOutput(predictions=(array([[ 0.4689319 , -0.24642754,  0.60204256, ..., -0.34912953,\n",
      "        -0.32343256,  0.06867407],\n",
      "       [ 0.46880892, -0.24776143,  0.62380624, ..., -0.3495983 ,\n",
      "        -0.32374918,  0.07039768],\n",
      "       [ 0.46969995,  0.05325471,  0.31301585, ..., -0.19213602,\n",
      "        -0.36468986, -0.44582927],\n",
      "       ...,\n",
      "       [ 0.5175707 ,  0.27730292,  0.3768006 , ...,  0.05047834,\n",
      "        -0.20127401, -0.16754065],\n",
      "       [ 0.49882066,  0.25371718,  0.7902193 , ..., -0.17615159,\n",
      "        -0.08139908, -0.02226727],\n",
      "       [ 0.50787926,  0.2546298 ,  0.30600184, ...,  0.05973658,\n",
      "        -0.17743918, -0.16140932]], shape=(10784, 384), dtype=float32), array([[ 0.06970772,  0.34900415,  0.07491194, ..., -0.1846932 ,\n",
      "        -0.2263232 , -0.25005674],\n",
      "       [ 0.07389306,  0.35054427,  0.12863247, ..., -0.17776643,\n",
      "        -0.21948072, -0.2495252 ],\n",
      "       [ 0.07198945,  0.38968647, -0.03279894, ..., -0.16480169,\n",
      "        -0.14806904, -0.11492264],\n",
      "       ...,\n",
      "       [ 0.22689986,  0.5684823 ,  0.37391093, ...,  0.09964543,\n",
      "        -0.10136092, -0.27775615],\n",
      "       [ 0.23142785,  0.37975264,  0.29255122, ..., -0.10504606,\n",
      "         0.0239583 ,  0.17678803],\n",
      "       [ 0.23033756,  0.51545686,  0.39523387, ...,  0.11884646,\n",
      "        -0.09937494, -0.27340633]], shape=(10784, 384), dtype=float32)), label_ids=None, metrics={'test_runtime': 109.9214, 'test_samples_per_second': 98.106, 'test_steps_per_second': 6.132})\n",
       "valid_answers:  [{'score': np.float32(2.5102084), 'text': 'season. The American Football Conference (AFC) champion'}, {'score': np.float32(2.4268723), 'text': 'NFL) for the 2015 season. The American Football Conference (AFC) champion'}, {'score': np.float32(2.419382), 'text': '(AFC) champion'}, {'score': np.float32(2.347289), 'text': 'champion'}, {'score': np.float32(2.329432), 'text': '2015 season. The American Football Conference (AFC) champion'}, {'score': np.float32(2.3290462), 'text': 'an American football game to determine the champion of the National Football League (NFL) for the 2015 season. The American Football Conference (AFC) champion'}, {'score': np.float32(2.187687), 'text': 'season. The'}, {'score': np.float32(2.1384852), 'text': 'The American Football Conference (AFC) champion'}, {'score': np.float32(2.1043508), 'text': 'NFL) for the 2015 season. The'}, {'score': np.float32(2.1039257), 'text': 'season. The American Football Conference (AFC) champion Denver Broncos defeated the'}, {'score': np.float32(2.098096), 'text': 'season'}, {'score': np.float32(2.0966508), 'text': 'season. The American Football Conference ('}, {'score': np.float32(2.0831914), 'text': 'NFL'}, {'score': np.float32(2.0531664), 'text': 'season. The American Football Conference (AFC) champion Denver Broncos defeated the National Football Conference (NFC)'}, {'score': np.float32(2.04404), 'text': 'season. The American Football Conference (AFC) champion Denver Broncos defeated the National'}, {'score': np.float32(2.0205896), 'text': 'NFL) for the 2015 season. The American Football Conference (AFC) champion Denver Broncos defeated the'}, {'score': np.float32(2.0147595), 'text': 'NFL) for the 2015 season'}, {'score': np.float32(2.0133147), 'text': 'NFL) for the 2015 season. The American Football Conference ('}, {'score': np.float32(2.0130994), 'text': '(AFC) champion Denver Broncos defeated the'}, {'score': np.float32(2.0084863), 'text': 'National Football Conference (NFC) champion Carolina Panthers 24–10 to earn their third Super Bowl title. The game was played on February 7,'}]\n",
      "{'text': ['Denver Broncos', 'Denver Broncos', 'Denver Broncos'], 'answer_start': [177, 177, 177]}\n",
       "Post-processing predictions for 10570 examples, spread across 10784 features.\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████| 10570/10570 [00:28<00:00, 373.05it/s]\n",
      "Downloading builder script: 4.53kB [00:00, 8.73MB/s]\n",
      "Downloading extra modules: 3.32kB [00:00, 7.54MB/s]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Filtering invalid samples\n",
       "Running evaluation\n",
       "Evaluation results: {'exact_match': 0.7000946073793756, 'f1': 7.745261307827987}\n"
     ]
    }
   ],
   "source": [
    "from datasets import ClassLabel, Sequence, load_dataset, Dataset\n",
    "# Old API (deprecated):\n",
    "# from datasets import load_metric\n",
    "# New API (recommended):\n",
    "from evaluate import load\n",
    "import random\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "from IPython.display import display, HTML\n",
    "import transformers\n",
    "from transformers import AutoTokenizer, AutoModelForQuestionAnswering, TrainingArguments, Trainer, default_data_collator\n",
    "import torch\n",
    "torch.cuda.empty_cache()  # free cached GPU memory\n",
    "import collections\n",
    "from tqdm.auto import tqdm\n",
    "import evaluate\n",
    "\n",
    "def main():\n",
    "    def show_random_elements(dataset, num_examples=10):\n",
    "        assert num_examples <= len(dataset), \"Can't pick more elements than there are in the dataset.\"\n",
    "        picks = []\n",
    "        for _ in range(num_examples):\n",
    "            pick = random.randint(0, len(dataset)-1)\n",
    "            while pick in picks:\n",
    "                pick = random.randint(0, len(dataset)-1)\n",
    "            picks.append(pick)\n",
    "\n",
    "        df = pd.DataFrame(dataset[picks])\n",
    "        for column, typ in dataset.features.items():\n",
    "            if isinstance(typ, ClassLabel):\n",
    "                df[column] = df[column].transform(lambda i: typ.names[i])\n",
    "            elif isinstance(typ, Sequence) and isinstance(typ.feature, ClassLabel):\n",
    "                df[column] = df[column].transform(lambda x: [typ.feature.names[i] for i in x])\n",
    "        display(HTML(df.to_html()))\n",
    "\n",
    "    # The maximum length of a feature (question and context)\n",
    "    max_length = 384\n",
    "    # The allowed overlap between two parts of the context when a split is needed\n",
    "    doc_stride = 128\n",
    "    squad_v2 = False\n",
    "    batch_size = 16\n",
    "    model_checkpoint = \"distilbert-base-uncased\"\n",
    "\n",
    "    # datasets = load_dataset(\"squad_v2\" if squad_v2 else \"squad\")\n",
    "    # Alternatively, load the SQuAD v2 dataset from a local path:\n",
    "    # datasets = load_dataset(\"D:/ideaSpace/MyPython/data/datasets/squad_v2\", trust_remote_code=True)\n",
    "    datasets = load_dataset(\"squad_v2\" if squad_v2 else \"squad\")\n",
    "    show_random_elements(datasets[\"train\"])\n",
    "\n",
    "    tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)\n",
    "    # Make sure our tokenizer is a FastTokenizer (Rust implementation, faster and with extra features such as offset mapping)\n",
    "    assert isinstance(tokenizer, transformers.PreTrainedTokenizerFast)\n",
    "    pad_on_right = tokenizer.padding_side == \"right\"\n",
    "\n",
    "    for i, example in enumerate(datasets[\"train\"]):\n",
    "        if len(tokenizer(example[\"question\"], example[\"context\"])[\"input_ids\"]) > max_length:\n",
    "            break\n",
    "    # Pick out a sample that exceeds 384 tokens (the maximum length)\n",
    "    example = datasets[\"train\"][i]\n",
    "    \"\"\"Truncation strategies:\n",
    "        Simply cut off the overflow: truncation=only_second\n",
    "        Truncate only the context and keep the question, preserving the overflow as extra features: return_overflowing_tokens=True with a stride\"\"\"\n",
    "    tokenized_example = tokenizer(\n",
    "        example[\"question\"],\n",
    "        example[\"context\"],\n",
    "        max_length=max_length,\n",
    "        truncation=\"only_second\",\n",
    "        return_overflowing_tokens=True,\n",
    "        stride=doc_stride\n",
    "    )\n",
    "\n",
    "    for x in tokenized_example[\"input_ids\"][:2]:\n",
    "        print(tokenizer.decode(x))\n",
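    "    # Sanity check (illustrative): since this example's context is longer than max_length,\n",
    "    # the tokenizer should have produced several overlapping features for it.\n",
    "    print(\"Number of features produced for this long example:\", len(tokenized_example[\"input_ids\"]))\n",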
    "\n",
    "    def prepare_train_features(examples):\n",
    "        # Some questions have lots of leading whitespace, which is not useful and can make\n",
    "        # truncation of the context fail (the tokenized question takes up a lot of space), so we strip it.\n",
    "        examples[\"question\"] = [q.lstrip() for q in examples[\"question\"]]\n",
    "\n",
    "        # Tokenize our examples with truncation and padding, but keep the overflow using a stride.\n",
    "        # A long context then yields several features, each overlapping a bit with the previous one's context.\n",
    "        tokenized_examples = tokenizer(\n",
    "            examples[\"question\" if pad_on_right else \"context\"],\n",
    "            examples[\"context\" if pad_on_right else \"question\"],\n",
    "            truncation=\"only_second\" if pad_on_right else \"only_first\",\n",
    "            max_length=max_length,\n",
    "            stride=doc_stride,\n",
    "            return_overflowing_tokens=True,\n",
    "            return_offsets_mapping=True,\n",
    "            padding=\"max_length\",\n",
    "        )\n",
    "\n",
    "        # Since one example may give us several features (when it has a long context), we need a map from feature to its originating example. This key gives us that mapping.\n",
    "        sample_mapping = tokenized_examples.pop(\"overflow_to_sample_mapping\")\n",
    "        # The offset mapping gives us a map from token to character position in the original context. This helps compute the start and end positions.\n",
    "        offset_mapping = tokenized_examples.pop(\"offset_mapping\")\n",
    "\n",
    "        # Now let's label these examples!\n",
    "        tokenized_examples[\"start_positions\"] = []\n",
    "        tokenized_examples[\"end_positions\"] = []\n",
    "\n",
    "        for i, offsets in enumerate(offset_mapping):\n",
    "            # We will use the index of the CLS token to label impossible answers.\n",
    "            input_ids = tokenized_examples[\"input_ids\"][i]\n",
    "            cls_index = input_ids.index(tokenizer.cls_token_id)\n",
    "\n",
    "            # Grab the sequence corresponding to this example (to tell the context and the question apart).\n",
    "            sequence_ids = tokenized_examples.sequence_ids(i)\n",
    "\n",
    "            # One example can give several spans; this is the index of the example containing this span.\n",
    "            sample_index = sample_mapping[i]\n",
    "            answers = examples[\"answers\"][sample_index]\n",
    "            # If no answer is given, set cls_index as the answer.\n",
    "            if len(answers[\"answer_start\"]) == 0:\n",
    "                tokenized_examples[\"start_positions\"].append(cls_index)\n",
    "                tokenized_examples[\"end_positions\"].append(cls_index)\n",
    "            else:\n",
    "                # Start/end character index of the answer in the text.\n",
    "                start_char = answers[\"answer_start\"][0]\n",
    "                end_char = start_char + len(answers[\"text\"][0])\n",
    "\n",
    "                # Start token index of the current span in the text.\n",
    "                token_start_index = 0\n",
    "                while sequence_ids[token_start_index] != (1 if pad_on_right else 0):\n",
    "                    token_start_index += 1\n",
    "\n",
    "                # End token index of the current span in the text.\n",
    "                token_end_index = len(input_ids) - 1\n",
    "                while sequence_ids[token_end_index] != (1 if pad_on_right else 0):\n",
    "                    token_end_index -= 1\n",
    "\n",
    "                # Detect whether the answer is out of this span (in which case this feature gets the CLS index as label).\n",
    "                if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char):\n",
    "                    tokenized_examples[\"start_positions\"].append(cls_index)\n",
    "                    tokenized_examples[\"end_positions\"].append(cls_index)\n",
    "                else:\n",
    "                    # Otherwise move token_start_index and token_end_index to the two ends of the answer.\n",
    "                    # Note: we could go after the last offset if the answer is the last word (edge case).\n",
    "                    while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char:\n",
    "                        token_start_index += 1\n",
    "                    tokenized_examples[\"start_positions\"].append(token_start_index - 1)\n",
    "                    while offsets[token_end_index][1] >= end_char:\n",
    "                        token_end_index -= 1\n",
    "                    tokenized_examples[\"end_positions\"].append(token_end_index + 1)\n",
    "\n",
    "        return tokenized_examples\n",
    "\n",
    "    print(\"Preprocessing the training set!\")\n",
    "    tokenized_datasets = datasets.map(prepare_train_features,\n",
    "                                      batched=True,\n",
    "                                      remove_columns=datasets[\"train\"].column_names)\n",
    "\n",
    "    model = AutoModelForQuestionAnswering.from_pretrained(model_checkpoint)\n",
    "    model_dir = \"models/distilbert-base-uncased-finetuned-squad\"\n",
    "\n",
    "    args = TrainingArguments(\n",
    "        output_dir=model_dir,\n",
    "        eval_strategy=\"epoch\",\n",
    "        learning_rate=2e-5,\n",
    "        per_device_train_batch_size=batch_size,\n",
    "        per_device_eval_batch_size=batch_size,\n",
    "        num_train_epochs=3,\n",
    "        weight_decay=0.01,\n",
    "        dataloader_pin_memory=False,  # disable pin_memory if there is no GPU or you do not need it to speed up data loading\n",
    "    )\n",
    "    data_collator = default_data_collator\n",
    "    tokenized_small_train_dataset = tokenized_datasets[\"train\"].shuffle(seed=42).select(range(100))\n",
    "    tokenized_small_eval_dataset = tokenized_datasets[\"validation\"].shuffle(seed=42).select(range(100))\n",
    "    trainer = Trainer(\n",
    "        model,\n",
    "        args,\n",
    "        train_dataset=tokenized_small_train_dataset,\n",
    "        eval_dataset=tokenized_small_eval_dataset,\n",
    "        data_collator=data_collator,\n",
    "        processing_class=tokenizer,\n",
    "    )\n",
    "    print(\"Starting training!\")\n",
    "    trainer.train()\n",
    "    trainer.save_model(model_dir)  # save_model returns None, so there is nothing meaningful to print from it\n",
    "    print(\"Model saved to:\", model_dir)\n",
    "\n",
    "    # The model outputs need some extra processing to map predictions back to spans of the context: the model directly outputs logits for the start and end positions of the predicted answer\n",
    "    for batch in trainer.get_eval_dataloader():\n",
    "        break\n",
    "    batch = {k: v.to(trainer.args.device) for k, v in batch.items()}\n",
    "    with torch.no_grad():\n",
    "        output = trainer.model(**batch)\n",
    "    print(output.keys())\n",
    "    print(output.start_logits.shape, output.end_logits.shape)\n",
    "    print(output.start_logits.argmax(dim=-1), output.end_logits.argmax(dim=-1))\n",
    "\n",
    "    \"\"\"To classify the answers,\n",
    "        we score each candidate by adding its start and end logits.\n",
    "        A hyperparameter called n_best_size keeps us from having to sort every possible answer:\n",
    "        we pick the best indices in the start and end logits and gather all candidate answers from them.\n",
    "        After checking that each one is valid, we sort them by score and keep the best.\"\"\"\n",
    "    n_best_size = 20\n",
    "    start_logits = output.start_logits[0].cpu().numpy()\n",
    "    end_logits = output.end_logits[0].cpu().numpy()\n",
    "    # Get the indices of the best start and end positions:\n",
    "    start_indexes = np.argsort(start_logits)[-1 : -n_best_size - 1 : -1].tolist()\n",
    "    end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist()\n",
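    "    # Note on the slice: argsort sorts ascending, so [-1 : -n_best_size - 1 : -1] walks the\n",
    "    # result backwards and yields the indices of the n_best_size largest logits in descending order,\n",
    "    # e.g. np.argsort([0.1, 0.9, 0.5])[-1:-3:-1].tolist() == [1, 2].\n",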
    "    valid_answers = []\n",
    "    # Loop over combinations of start and end indices\n",
    "    for start_index in start_indexes:\n",
    "        for end_index in end_indexes:\n",
    "            if start_index <= end_index:  # we would still need to check that the answer lies inside the context\n",
    "                valid_answers.append(\n",
    "                    {\n",
    "                        \"score\": start_logits[start_index] + end_logits[end_index],\n",
    "                        \"text\": \"\"  # we still need a way to recover the original substring of the context for this answer\n",
    "                    }\n",
    "                )\n",
    "    print(\"valid_answers: \", valid_answers)\n",
    "\n",
    "    def prepare_validation_features(examples):\n",
    "        # Some questions have lots of leading whitespace, which is not useful and can make truncation of\n",
    "        # the context fail (the tokenized question takes up a lot of space), so we strip it.\n",
    "        examples[\"question\"] = [q.lstrip() for q in examples[\"question\"]]\n",
    "\n",
    "        # Tokenize our examples with truncation and maybe padding, but keep the overflow using a stride.\n",
    "        # A long context then yields several features, each overlapping a bit with the previous one's context.\n",
    "        tokenized_examples = tokenizer(\n",
    "            examples[\"question\" if pad_on_right else \"context\"],\n",
    "            examples[\"context\" if pad_on_right else \"question\"],\n",
    "            truncation=\"only_second\" if pad_on_right else \"only_first\",\n",
    "            max_length=max_length,\n",
    "            stride=doc_stride,\n",
    "            return_overflowing_tokens=True,\n",
    "            return_offsets_mapping=True,\n",
    "            padding=\"max_length\",\n",
    "        )\n",
    "\n",
    "        # Since one example can yield several features when its context is long, we need a map from feature to its originating example. This key serves that purpose.\n",
    "        sample_mapping = tokenized_examples.pop(\"overflow_to_sample_mapping\")\n",
    "\n",
    "        # We keep the example ID that produced each feature, and we will store the offset mappings.\n",
    "        tokenized_examples[\"example_id\"] = []\n",
    "\n",
    "        for i in range(len(tokenized_examples[\"input_ids\"])):\n",
    "            # Grab the sequence corresponding to this example (to tell the context and the question apart).\n",
    "            sequence_ids = tokenized_examples.sequence_ids(i)\n",
    "            context_index = 1 if pad_on_right else 0\n",
    "\n",
    "            # One example can give several spans; this is the index of the example containing this span.\n",
    "            sample_index = sample_mapping[i]\n",
    "            tokenized_examples[\"example_id\"].append(examples[\"id\"][sample_index])\n",
    "\n",
    "            # Set offset mappings that are not part of the context to None, so it is easy to tell whether a token position belongs to the context.\n",
    "            tokenized_examples[\"offset_mapping\"][i] = [\n",
    "                (o if sequence_ids[k] == context_index else None)\n",
    "                for k, o in enumerate(tokenized_examples[\"offset_mapping\"][i])\n",
    "            ]\n",
    "\n",
    "        return tokenized_examples\n",
    "\n",
    "    print(\"Preprocessing the validation set!\")\n",
    "    validation_features = datasets[\"validation\"].map(\n",
    "        prepare_validation_features,\n",
    "        batched=True,\n",
    "        remove_columns=datasets[\"validation\"].column_names\n",
    "    )\n",
    "\n",
    "    raw_predictions = trainer.predict(validation_features)\n",
    "    print(\"raw_predictions: \", raw_predictions)\n",
    "    # The Trainer hides columns the model does not use (here example_id and offset_mapping, which we need for post-processing), so we set them back:\n",
    "    validation_features.set_format(type=validation_features.format[\"type\"], columns=list(validation_features.features.keys()))\n",
    "\n",
    "    max_answer_length = 30\n",
    "    start_logits = output.start_logits[0].cpu().numpy()\n",
    "    end_logits = output.end_logits[0].cpu().numpy()\n",
    "    offset_mapping = validation_features[0][\"offset_mapping\"]\n",
    "\n",
    "    # The first feature comes from the first example. In the general case, we would need to match each example_id to an example index\n",
    "    context = datasets[\"validation\"][0][\"context\"]\n",
    "\n",
    "    # Gather the indices of the best start/end logits:\n",
    "    start_indexes = np.argsort(start_logits)[-1 : -n_best_size - 1 : -1].tolist()\n",
    "    end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist()\n",
    "    valid_answers = []\n",
    "    for start_index in start_indexes:\n",
    "        for end_index in end_indexes:\n",
    "            # Skip answers that are out of scope, either because the indices are out of bounds or because they point to parts of the input_ids that are not in the context.\n",
    "            if (\n",
    "                    start_index >= len(offset_mapping)\n",
    "                    or end_index >= len(offset_mapping)\n",
    "                    or offset_mapping[start_index] is None\n",
    "                    or offset_mapping[end_index] is None\n",
    "            ):\n",
    "                continue\n",
    "            # Skip answers whose length is negative or greater than max_answer_length.\n",
    "            if end_index < start_index or end_index - start_index + 1 > max_answer_length:\n",
    "                continue\n",
    "            start_char = offset_mapping[start_index][0]\n",
    "            end_char = offset_mapping[end_index][1]\n",
    "            valid_answers.append(\n",
    "                {\n",
    "                    \"score\": start_logits[start_index] + end_logits[end_index],\n",
    "                    \"text\": context[start_char: end_char]\n",
    "                }\n",
    "            )\n",
    "\n",
    "    valid_answers = sorted(valid_answers, key=lambda x: x[\"score\"], reverse=True)[:n_best_size]\n",
    "    print(\"valid_answers: \", valid_answers)\n",
    "    print(datasets[\"validation\"][0][\"answers\"])\n",
    "\n",
    "    examples = datasets[\"validation\"]\n",
    "    features = validation_features\n",
    "\n",
    "    example_id_to_index = {k: i for i, k in enumerate(examples[\"id\"])}\n",
    "    features_per_example = collections.defaultdict(list)\n",
    "    for i, feature in enumerate(features):\n",
    "        features_per_example[example_id_to_index[feature[\"example_id\"]]].append(i)\n",
    "\n",
    "\n",
    "    def postprocess_qa_predictions(examples, features, raw_predictions, n_best_size = 20, max_answer_length = 30):\n",
    "        all_start_logits, all_end_logits = raw_predictions\n",
    "        # Build a map from each example to its corresponding features.\n",
    "        example_id_to_index = {k: i for i, k in enumerate(examples[\"id\"])}\n",
    "        features_per_example = collections.defaultdict(list)\n",
    "        for i, feature in enumerate(features):\n",
    "            features_per_example[example_id_to_index[feature[\"example_id\"]]].append(i)\n",
    "\n",
    "        # The dictionary we have to fill.\n",
    "        predictions = collections.OrderedDict()\n",
    "\n",
    "        # Logging.\n",
    "        print(f\"Post-processing {len(examples)} example predictions split into {len(features)} features.\")\n",
    "\n",
    "        # Loop over all the examples!\n",
    "        for example_index, example in enumerate(tqdm(examples)):\n",
    "            # These are the indices of the features associated with the current example.\n",
    "            feature_indices = features_per_example[example_index]\n",
    "\n",
    "            min_null_score = None  # Only used if squad_v2 is True.\n",
    "            valid_answers = []\n",
    "\n",
    "            context = example[\"context\"]\n",
    "            # Loop over all the features associated with the current example.\n",
    "            for feature_index in feature_indices:\n",
    "                # Grab the model predictions for this feature.\n",
    "                start_logits = all_start_logits[feature_index]\n",
    "                end_logits = all_end_logits[feature_index]\n",
    "                # This lets us map positions in the logits back to spans of text in the original context.\n",
    "                offset_mapping = features[feature_index][\"offset_mapping\"]\n",
    "\n",
    "                # Update the minimum null prediction (note the comparison: we keep the smallest null score).\n",
    "                cls_index = features[feature_index][\"input_ids\"].index(tokenizer.cls_token_id)\n",
    "                feature_null_score = start_logits[cls_index] + end_logits[cls_index]\n",
    "                if min_null_score is None or min_null_score > feature_null_score:\n",
    "                    min_null_score = feature_null_score\n",
    "\n",
    "                # Go through all combinations of the n_best_size best start and end logits.\n",
    "                start_indexes = np.argsort(start_logits)[-1 : -n_best_size - 1 : -1].tolist()\n",
    "                end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist()\n",
    "                for start_index in start_indexes:\n",
    "                    for end_index in end_indexes:\n",
    "                        # Skip answers that are out of scope, either because the indices are out of bounds or because they point to parts of the input_ids that are not in the context.\n",
    "                        if (\n",
    "                                start_index >= len(offset_mapping)\n",
    "                                or end_index >= len(offset_mapping)\n",
    "                                or offset_mapping[start_index] is None\n",
    "                                or offset_mapping[end_index] is None\n",
    "                        ):\n",
    "                            continue\n",
    "                        # Skip answers whose length is negative or greater than max_answer_length.\n",
    "                        if end_index < start_index or end_index - start_index + 1 > max_answer_length:\n",
    "                            continue\n",
    "\n",
    "                        start_char = offset_mapping[start_index][0]\n",
    "                        end_char = offset_mapping[end_index][1]\n",
    "                        valid_answers.append(\n",
    "                            {\n",
    "                                \"score\": start_logits[start_index] + end_logits[end_index],\n",
    "                                \"text\": context[start_char: end_char]\n",
    "                            }\n",
    "                        )\n",
    "\n",
    "            if len(valid_answers) > 0:\n",
    "                best_answer = sorted(valid_answers, key=lambda x: x[\"score\"], reverse=True)[0]\n",
    "            else:\n",
    "                # In the very rare case where we have no non-null prediction, create a fake one to avoid failure.\n",
    "                best_answer = {\"text\": \"\", \"score\": 0.0}\n",
    "\n",
    "            # Pick the final answer: the best one, or the null answer (squad_v2 only)\n",
    "            if not squad_v2:\n",
    "                predictions[example[\"id\"]] = best_answer[\"text\"]\n",
    "            else:\n",
    "                answer = best_answer[\"text\"] if best_answer[\"score\"] > min_null_score else \"\"\n",
    "                predictions[example[\"id\"]] = answer\n",
    "\n",
    "        return predictions\n",
    "\n",
    "    final_predictions = postprocess_qa_predictions(datasets[\"validation\"], validation_features, raw_predictions.predictions)\n",
    "    metric = load(\"squad_v2\" if squad_v2 else \"squad\")\n",
    "    # metric = evaluate.load(\"D:/ideaSpace/MyPython/data/metrics/squad_v2\") if squad_v2 else evaluate.load(\"D:/ideaSpace/MyPython/data/metrics/squad\")\n",
    "\n",
    "    # Format the predictions for the metric\n",
    "    if squad_v2:\n",
    "        formatted_predictions = [{\"id\": k, \"prediction_text\": v, \"no_answer_probability\": 0.0}\n",
    "                                 for k, v in final_predictions.items()]\n",
    "    else:\n",
    "        formatted_predictions = [{\"id\": k, \"prediction_text\": v}\n",
    "                                 for k, v in final_predictions.items()]\n",
    "\n",
    "    references = [{\"id\": ex[\"id\"], \"answers\": ex[\"answers\"]} for ex in datasets[\"validation\"]]\n",
    "\n",
    "    print(\"Filtering invalid samples\")\n",
    "    valid_references = [ref for ref in references if ref.get(\"answers\") and len(ref[\"answers\"][\"text\"]) > 0]\n",
    "    valid_pred_ids = [ref[\"id\"] for ref in valid_references]\n",
    "    valid_predictions = [pred for pred in formatted_predictions if pred[\"id\"] in valid_pred_ids]\n",
    "\n",
    "    print(\"Running evaluation\")\n",
    "    if len(valid_references) > 0:\n",
    "        metrics = metric.compute(predictions=valid_predictions, references=valid_references)\n",
    "        print(\"Evaluation results:\", metrics)\n",
    "    else:\n",
    "        print(\"Warning: no valid reference answers available for evaluation\")\n",
    "\n",
    "if __name__ == '__main__':\n",
    "    main()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "913947f0-172e-446b-b629-11601b80798f",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Starting base training (3 epochs)...\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "\n",
       "    <div>\n",
       "      \n",
       "      <progress value='189' max='189' style='width:300px; height:20px; vertical-align: middle;'></progress>\n",
       "      [189/189 02:21, Epoch 3/3]\n",
       "    </div>\n",
       "    <table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       " <tr style=\"text-align: left;\">\n",
       "      <th>Epoch</th>\n",
       "      <th>Training Loss</th>\n",
       "      <th>Validation Loss</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <td>1</td>\n",
       "      <td>3.716300</td>\n",
       "      <td>3.853903</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>2</td>\n",
       "      <td>3.074800</td>\n",
       "      <td>3.348928</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>3</td>\n",
       "      <td>2.251300</td>\n",
       "      <td>3.224374</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table><p>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "Starting extended training (5 epochs)...\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "\n",
       "    <div>\n",
       "      \n",
       "      <progress value='315' max='315' style='width:300px; height:20px; vertical-align: middle;'></progress>\n",
       "      [315/315 03:59, Epoch 5/5]\n",
       "    </div>\n",
       "    <table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       " <tr style=\"text-align: left;\">\n",
       "      <th>Epoch</th>\n",
       "      <th>Training Loss</th>\n",
       "      <th>Validation Loss</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <td>1</td>\n",
       "      <td>2.030400</td>\n",
       "      <td>2.990842</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>2</td>\n",
       "      <td>1.356300</td>\n",
       "      <td>3.033782</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>3</td>\n",
       "      <td>0.909400</td>\n",
       "      <td>3.284467</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>4</td>\n",
       "      <td>0.576700</td>\n",
       "      <td>3.434875</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>5</td>\n",
       "      <td>0.406300</td>\n",
       "      <td>3.489991</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table><p>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "Evaluating the base model...\n"
     ]
    },
    {
     "data": {
      "text/html": [],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1000/1000 [00:00<00:00, 3280.87it/s]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Base model (3 epochs) evaluation results: {'exact_match': 23.2, 'f1': 31.978466441349973}\n",
      "\n",
      "Evaluating the extended model...\n"
     ]
    },
    {
     "data": {
      "text/html": [],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1000/1000 [00:00<00:00, 3270.92it/s]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Extended model (5 epochs) evaluation results: {'exact_match': 23.2, 'f1': 31.978466441349973}\n"
     ]
    }
   ],
   "source": [
    "import torch\n",
    "from transformers import (\n",
    "    DistilBertForQuestionAnswering,\n",
    "    DistilBertTokenizerFast,\n",
    "    TrainingArguments,\n",
    "    Trainer,\n",
    "    default_data_collator\n",
    ")\n",
    "from datasets import load_dataset\n",
    "import evaluate\n",
    "# Recommended: load metrics via evaluate.load\n",
    "from evaluate import load\n",
    "import numpy as np\n",
    "from tqdm.auto import tqdm\n",
    "\n",
    "def main():\n",
    "    squad_v2 = False\n",
    "\n",
    "    # Set a random seed for reproducibility\n",
    "    torch.manual_seed(42)\n",
    "\n",
    "    # Load the dataset\n",
    "    dataset = load_dataset(\"squad_v2\" if squad_v2 else \"squad\")\n",
    "    # dataset = load_dataset(\"D:/ideaSpace/MyPython/data/datasets/squad_v2\", trust_remote_code=True)\n",
    "\n",
    "    # Load the tokenizer\n",
    "    model_path: str = \"models/distilbert-base-uncased-finetuned-squad\"\n",
    "    tokenizer = DistilBertTokenizerFast.from_pretrained(model_path)\n",
    "\n",
    "    # Preprocessing function\n",
    "    def prepare_train_features(examples):\n",
    "        # Tokenize the questions and contexts\n",
    "        tokenized_examples = tokenizer(\n",
    "            examples[\"question\"],\n",
    "            examples[\"context\"],\n",
    "            truncation=\"only_second\",  # truncate only the context, never the question\n",
    "            max_length=384,  # maximum feature length\n",
    "            stride=128,  # number of overlapping tokens between consecutive windows\n",
    "            return_overflowing_tokens=True,  # return the overflowing parts as extra features\n",
    "            return_offsets_mapping=True,  # return character-level offsets\n",
    "            padding=\"max_length\",  # pad to the maximum length\n",
    "        )\n",
    "\n",
    "        # Since one example may be split into several chunks, we need a map back to the original example\n",
    "        sample_mapping = tokenized_examples.pop(\"overflow_to_sample_mapping\")\n",
    "        # Offset mapping, used to locate the answer's start and end character positions\n",
    "        offset_mapping = tokenized_examples.pop(\"offset_mapping\")\n",
    "\n",
    "        # Label the start and end positions of the answers\n",
    "        tokenized_examples[\"start_positions\"] = []\n",
    "        tokenized_examples[\"end_positions\"] = []\n",
    "\n",
    "        for i, offsets in enumerate(offset_mapping):\n",
    "            # We will use the index of the CLS token to label impossible answers.\n",
    "            input_ids = tokenized_examples[\"input_ids\"][i]\n",
    "            cls_index = input_ids.index(tokenizer.cls_token_id)\n",
    "            # Grab the sequence corresponding to this example (to tell the context and the question apart).\n",
    "            sequence_ids = tokenized_examples.sequence_ids(i)\n",
    "            # print(\"sequence_ids for this example:\", sequence_ids)\n",
    "            # One example can give several spans; this is the index of the example containing this span.\n",
    "            sample_index = sample_mapping[i]\n",
    "            answers = examples[\"answers\"][sample_index]\n",
    "            if len(answers[\"answer_start\"]) == 0:\n",
    "                tokenized_examples[\"start_positions\"].append(cls_index)\n",
    "                tokenized_examples[\"end_positions\"].append(cls_index)\n",
    "            else:\n",
    "                start_char = answers[\"answer_start\"][0]\n",
    "                end_char = start_char + len(answers[\"text\"][0])\n",
    "\n",
    "                # Find the start and end token positions of the answer\n",
    "                token_start_index = 0\n",
    "                while sequence_ids[token_start_index] != 1:  # 1 marks the context part\n",
    "                    token_start_index += 1\n",
    "\n",
    "                token_end_index = len(input_ids) - 1\n",
    "                while sequence_ids[token_end_index] != 1:\n",
    "                    token_end_index -= 1\n",
    "\n",
    "                # Check whether the answer lies inside this span\n",
    "                if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char):\n",
    "                    tokenized_examples[\"start_positions\"].append(cls_index)\n",
    "                    tokenized_examples[\"end_positions\"].append(cls_index)\n",
    "                else:\n",
    "                    # Otherwise move to the start and end token positions of the answer\n",
    "                    while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char:\n",
    "                        token_start_index += 1\n",
    "                    tokenized_examples[\"start_positions\"].append(token_start_index - 1)\n",
    "\n",
    "                    while offsets[token_end_index][1] >= end_char:\n",
    "                        token_end_index -= 1\n",
    "                    tokenized_examples[\"end_positions\"].append(token_end_index + 1)\n",
    "\n",
    "        # Keep offset_mapping (it was popped above) for later post-processing\n",
    "        tokenized_examples[\"offset_mapping\"] = offset_mapping\n",
    "\n",
    "        # Add the example_id mapping\n",
    "        tokenized_examples[\"example_id\"] = []\n",
    "        for i in range(len(tokenized_examples[\"input_ids\"])):\n",
    "            sample_index = sample_mapping[i]\n",
    "            tokenized_examples[\"example_id\"].append(examples[\"id\"][sample_index])\n",
    "\n",
    "        return tokenized_examples\n",
    "\n",
    "    dataset[\"train\"] = dataset[\"train\"].shuffle(seed=42).select(range(1000))\n",
    "    # Preprocess the training and validation sets\n",
    "    tokenized_train = dataset[\"train\"].map(\n",
    "        prepare_train_features,\n",
    "        batched=True,\n",
    "        remove_columns=dataset[\"train\"].column_names,\n",
    "        num_proc=4,\n",
    "    )\n",
    "    dataset[\"validation\"] = dataset[\"validation\"].shuffle(seed=42).select(range(1000))\n",
    "    tokenized_val = dataset[\"validation\"].map(\n",
    "        prepare_train_features,\n",
    "        batched=True,\n",
    "        remove_columns=dataset[\"validation\"].column_names,\n",
    "        num_proc=4,\n",
    "    )\n",
    "\n",
    "    # Load the model\n",
    "    model = DistilBertForQuestionAnswering.from_pretrained(model_path)\n",
    "\n",
    "    # Training arguments - base version\n",
    "    training_args_base = TrainingArguments(\n",
    "        output_dir=\"./results_base\",\n",
    "        eval_strategy=\"epoch\",\n",
    "        learning_rate=3e-5,\n",
    "        per_device_train_batch_size=16,\n",
    "        per_device_eval_batch_size=16,\n",
    "        num_train_epochs=3,  # base number of training epochs\n",
    "        weight_decay=0.01,\n",
    "        logging_dir=\"./logs\",\n",
    "        logging_steps=10,\n",
    "        save_strategy=\"epoch\",\n",
    "        load_best_model_at_end=True,\n",
    "        report_to=\"tensorboard\",\n",
    "        dataloader_pin_memory=False,  # disable pin_memory if there is no GPU or you do not need it to speed up data loading\n",
    "    )\n",
    "\n",
    "    # Training arguments - extended version with more training\n",
    "    training_args_extended = TrainingArguments(\n",
    "        output_dir=\"./results_extended\",\n",
    "        eval_strategy=\"epoch\",\n",
    "        learning_rate=3e-5,\n",
    "        per_device_train_batch_size=16,\n",
    "        per_device_eval_batch_size=16,\n",
    "        num_train_epochs=5,  # more training epochs\n",
    "        weight_decay=0.01,\n",
    "        logging_dir=\"./logs_extended\",\n",
    "        logging_steps=10,\n",
    "        save_strategy=\"epoch\",\n",
    "        load_best_model_at_end=True,\n",
    "        report_to=\"tensorboard\",\n",
    "        dataloader_pin_memory=False,  # disable pin_memory if there is no GPU or you do not need it to speed up data loading\n",
    "    )\n",
    "\n",
    "    # Create the Trainers\n",
    "    trainer_base = Trainer(\n",
    "        model=model,\n",
    "        args=training_args_base,\n",
    "        train_dataset=tokenized_train,\n",
    "        eval_dataset=tokenized_val,\n",
    "        data_collator=default_data_collator,\n",
    "        processing_class=tokenizer,\n",
    "    )\n",
    "\n",
    "    # Load a fresh copy of the model for the extended run, so it does not\n",
    "    # continue from the weights already fine-tuned by the baseline trainer\n",
    "    model_extended = DistilBertForQuestionAnswering.from_pretrained(model_path)\n",
    "    trainer_extended = Trainer(\n",
    "        model=model_extended,\n",
    "        args=training_args_extended,\n",
    "        train_dataset=tokenized_train,\n",
    "        eval_dataset=tokenized_val,\n",
    "        data_collator=default_data_collator,\n",
    "        processing_class=tokenizer,\n",
    "    )\n",
    "\n",
    "    print(\"Starting baseline training (3 epochs)...\")\n",
    "    trainer_base.train()\n",
    "\n",
    "    print(\"\\nStarting extended training (5 epochs)...\")\n",
    "    trainer_extended.train()\n",
    "\n",
    "    def evaluate_model(trainer, dataset, split=\"validation\"):\n",
    "        # Use the preprocessed (tokenized) features\n",
    "        if split == \"train\":\n",
    "            features = tokenized_train\n",
    "        else:\n",
    "            features = tokenized_val\n",
    "\n",
    "        # Run prediction to get start/end logits\n",
    "        predictions = trainer.predict(features)\n",
    "        start_logits, end_logits = predictions.predictions\n",
    "\n",
    "        # Load the evaluation metric\n",
    "        metric = load(\"squad_v2\" if squad_v2 else \"squad\")\n",
    "\n",
    "        # Prepare the raw examples for evaluation\n",
    "        examples = dataset[split]\n",
    "\n",
    "        # Build a mapping from feature index to its source example\n",
    "        feature_to_example = {}\n",
    "        for i, feature in enumerate(features):\n",
    "            if \"offset_mapping\" not in feature:\n",
    "                print(f\"Warning: feature {i} is missing the offset_mapping field\")\n",
    "                continue\n",
    "\n",
    "            feature_to_example[i] = {\n",
    "                \"example_id\": feature[\"example_id\"],\n",
    "                \"offset_mapping\": feature[\"offset_mapping\"]\n",
    "            }\n",
    "\n",
    "        final_predictions = {}\n",
    "        all_references = []\n",
    "        for example in tqdm(examples):\n",
    "            example_id = example[\"id\"]\n",
    "            context = example[\"context\"]\n",
    "            answers = []\n",
    "\n",
    "            # Find all features derived from this example\n",
    "            feature_indices = [i for i, feat in feature_to_example.items()\n",
    "                               if feat[\"example_id\"] == example_id]\n",
    "\n",
    "            for feature_index in feature_indices:\n",
    "                start_logit = start_logits[feature_index]\n",
    "                end_logit = end_logits[feature_index]\n",
    "                offsets = feature_to_example[feature_index][\"offset_mapping\"]\n",
    "\n",
    "                # Take the argmax start/end positions as the best span\n",
    "                start_index = np.argmax(start_logit)\n",
    "                end_index = np.argmax(end_logit)\n",
    "\n",
    "                # Skip spans that fall outside the context or are inverted\n",
    "                if (start_index >= len(offsets)) or (end_index >= len(offsets)):\n",
    "                    continue\n",
    "                if end_index < start_index:\n",
    "                    continue\n",
    "\n",
    "                if offsets[start_index] is None or offsets[end_index] is None:\n",
    "                    continue\n",
    "\n",
    "                start_char = offsets[start_index][0]\n",
    "                end_char = offsets[end_index][1]\n",
    "                answer = context[start_char:end_char]\n",
    "                answers.append({\n",
    "                    \"text\": answer,\n",
    "                    \"logit_score\": start_logit[start_index] + end_logit[end_index]\n",
    "                })\n",
    "\n",
    "            # Keep the highest-scoring answer\n",
    "            if answers:\n",
    "                best_answer = max(answers, key=lambda x: x[\"logit_score\"])\n",
    "                final_predictions[example_id] = best_answer[\"text\"]\n",
    "            else:\n",
    "                final_predictions[example_id] = \"\"\n",
    "\n",
    "            # Prepare the reference answers\n",
    "            ref_answers = example[\"answers\"]\n",
    "            if len(ref_answers[\"answer_start\"]) == 0:\n",
    "                ref_answers = {\"text\": [\"\"], \"answer_start\": [0]}\n",
    "\n",
    "            all_references.append({\n",
    "                \"id\": example_id,\n",
    "                \"answers\": ref_answers\n",
    "            })\n",
    "\n",
    "        # Format predictions and references for the metric\n",
    "        # (the squad_v2 metric additionally expects a no_answer_probability field)\n",
    "        if squad_v2:\n",
    "            formatted_predictions = [{\"id\": k, \"prediction_text\": v, \"no_answer_probability\": 0.0}\n",
    "                                     for k, v in final_predictions.items()]\n",
    "        else:\n",
    "            formatted_predictions = [{\"id\": k, \"prediction_text\": v} for k, v in final_predictions.items()]\n",
    "        formatted_references = [{\"id\": x[\"id\"], \"answers\": x[\"answers\"]} for x in all_references]\n",
    "        # Keep only IDs present in both predictions and references\n",
    "        common_ids = set(final_predictions.keys()) & {x[\"id\"] for x in all_references}\n",
    "        filtered_predictions = [x for x in formatted_predictions if x[\"id\"] in common_ids]\n",
    "        filtered_references = [x for x in formatted_references if x[\"id\"] in common_ids]\n",
    "        # Compute the metrics\n",
    "        try:\n",
    "            metrics = metric.compute(\n",
    "                predictions=filtered_predictions,\n",
    "                references=filtered_references\n",
    "            )\n",
    "        except Exception as e:\n",
    "            print(f\"Error while computing metrics: {e}\")\n",
    "            metrics = {\"exact_match\": 0.0, \"f1\": 0.0}\n",
    "\n",
    "        return metrics\n",
    "\n",
    "    print(\"\\nEvaluating the baseline model...\")\n",
    "    base_metrics = evaluate_model(trainer_base, dataset, \"validation\")  # specify the split explicitly\n",
    "    print(f\"Baseline model (3 epochs) results: {base_metrics}\")\n",
    "\n",
    "    print(\"\\nEvaluating the extended model...\")\n",
    "    extended_metrics = evaluate_model(trainer_extended, dataset, \"validation\")\n",
    "    print(f\"Extended model (5 epochs) results: {extended_metrics}\")\n",
    "\n",
    "    # Save the fine-tuned models (uncomment to enable)\n",
    "    # trainer_base.save_model(\"./models/distilbert_squad_base\")\n",
    "    # trainer_extended.save_model(\"./models/distilbert_squad_extended\")\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "    main()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e215d8b0-7fb9-4311-8f5d-3e25eca4cefd",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
