{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "9d9205c7-2995-48f0-93f6-f7aa103ea39a",
   "metadata": {},
   "source": [
    "# Getting Started with Hugging Face Transformers Fine-Tuning\n",
    "\n",
    "This example walks through the main steps of fine-tuning a model with Transformers:\n",
    "\n",
    "- Downloading a dataset\n",
    "- Preprocessing the data\n",
    "- Configuring training hyperparameters\n",
    "- Setting up evaluation metrics\n",
    "- A brief introduction to the Trainer\n",
    "- Hands-on training\n",
    "- Saving the model"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c1d71bee-4bbb-474b-9c24-d373e6612364",
   "metadata": {},
   "source": [
    "# 1. Downloading the Dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "4a8152c6-cdd5-40e2-b456-c02ea6eeba87",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "os.environ[\"CUDA_VISIBLE_DEVICES\"] = \"1\"\n",
    "\n",
    "from datasets import load_dataset\n",
    "\n",
    "dataset = load_dataset(\"yelp_review_full\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "e676588d-2872-4d90-bb4c-17fd0160d952",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "DatasetDict({\n",
       "    train: Dataset({\n",
       "        features: ['label', 'text'],\n",
       "        num_rows: 650000\n",
       "    })\n",
       "    test: Dataset({\n",
       "        features: ['label', 'text'],\n",
       "        num_rows: 50000\n",
       "    })\n",
       "})"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "8ff929c8-5a30-45ac-9fdf-0d9096f8a0b1",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'label': 4,\n",
       " 'text': \"dr. goldberg offers everything i look for in a general practitioner.  he's nice and easy to talk to without being patronizing; he's always on time in seeing his patients; he's affiliated with a top-notch hospital (nyu) which my parents have explained to me is very important in case something happens and you need surgery; and you can get referrals to see specialists without having to see him first.  really, what more do you need?  i'm sitting here trying to think of any complaints i have about him, but i'm really drawing a blank.\"}"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "dataset[\"train\"][0]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "23871d30-ebe2-4b3f-a338-b7a48d69a5f6",
   "metadata": {},
   "source": [
    "### Visualizing the Dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "d0628b09-aa07-4ba9-825d-9dd022696d2f",
   "metadata": {},
   "outputs": [],
   "source": [
    "import random\n",
    "import pandas as pd\n",
    "import datasets\n",
    "from IPython.display import display, HTML"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "9cee7b21-3150-47a4-8cff-4fd8f9e268e9",
   "metadata": {},
   "outputs": [],
   "source": [
    "def show_random_elements(dataset, num_examples=10):\n",
    "    assert num_examples <= len(dataset), \"Can't pick more elements than there are in the dataset.\"\n",
    "    picks = []\n",
    "    for _ in range(num_examples):\n",
    "        pick = random.randint(0, len(dataset)-1)\n",
    "        while pick in picks:\n",
    "            pick = random.randint(0, len(dataset)-1)\n",
    "        picks.append(pick)\n",
    "    \n",
    "    df = pd.DataFrame(dataset[picks])\n",
    "    for column, typ in dataset.features.items():\n",
    "        if isinstance(typ, datasets.ClassLabel):\n",
    "            df[column] = df[column].transform(lambda i: typ.names[i])\n",
    "    display(HTML(df.to_html()))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "f9e3d413-bf6f-4df4-9419-b526a1d7ea02",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>label</th>\n",
       "      <th>text</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>4 stars</td>\n",
       "      <td>I stopped in for an oil change after calling ahead to make sure they weren't too busy. I was out within 30 minutes, just as promised. It was a good experience all around -- clean facility, competent service people, very fair price. The only thing I felt might be missing was a waiting room. For sure I'd  go back the next time I need service.</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>3 stars</td>\n",
       "      <td>I enjoyed the outside atmosphere with lushes trees, live chickens, and the lovely sitting wall. But I would only recommend this place for special occasions or times where you have all morning to grab a bite. The wait time was quoted at 30 minutes but it was really about 50 and the tables are not very comfortable once you're seated. You sit very close to the next table and the service is just ok- not spectacular. I would have enjoyed it more if they had additional shops or local stands out to look through! Beware of falling leaves while you eat though :p</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>3 stars</td>\n",
        "      <td>I usually end up here once a month or so for some reason because I find myself craving some veggies.  Lunch is pretty reasonable for around the $7 range per person, but I think dinner is a bit pricey at about $9 a pop.  Damn the stinking economy even makes getting a salad harder to do.  But this place is considered vegetarian heaven.\\n\\nWhen you first walk in, you see trays to the left or right of you.  The place is kind of like a cafeteria in the sense that you take a tray, a plate, and concoct your own salad with the fixins'.  They have a TON of stuff to choose from like spinach, romaine lettuce, pickles, peppers, cheese, croutons, even my fav alfalfa sprouts, and lots of other fresh things.  They have a few special or house recipe salads at the beginning of the line.  My fav is the Won Ton Chicken Happiness salad, I get it every time and it usually fits the bill.   I do get a little greedy sometimes though and take as much of the chicken as I can find because as you can tell, meat is sparce in this place.  But you can buy a little container of extra chicken for $1.25.  To me it's not worth it.  I just go and jack extra chicken form the Chicken Noodle Soup and cut it up to throw on my salad...ha ha.  They also usually have a decent house Caesar salad and a special salad that changes monthly according to one of their themes.  After you go down the line you and choose what you want, you get to the end and pay for your drinks and meal.  Then you get seated, or if it's not busy you can seat yourself.  \\n\\nThey also have a soup area with at least 5 or 6 different soups daily, with staples like Chicken Noodle, and Chile Con Carne.  Their Cream of Mushroom soup is great (I think it's only on certain days..Tuesday?) because it's got lots of fresh mushroom in it.  You can also make yourself a baked potato with butter, sour cream, cheese, bacon, green onions, and onions.  Not to mention they also have a bakery section that features breads, muffins, pizza bread (great with their killer Ranch), garlic bread, and even gooey chocolate cake or cobbler (dinner only).  They also got a fresh fruit bar, build your own sundae bar, and a pasta station with 3 diffferent selections to choose from.\\n\\nThe food here is pretty fresh and not bad at all.  If you enjoy a salad and soup you will like this place.  I usually find myself craving for a steak or a rotisserie chicken however after I eat here.  For some reason, not having a hunk of protein on my plate leaves me hungry.  Good thing all their food is fresh, otherwise this probably wouldn't be a great place to eat.  I can eat at least 2 plates though of their Won Ton Chicken Happiness salad.  AND at least 6-8 pieces of their pizza bread (only from the MIDDLE part...ha ha) with Ranch.  They got good Strawberry Lemonade too. \\n\\nTheir sundae bar ain't too bad even though it's standard fare from a soft serve machine.  You can make one with vanilla or chocolate ice cream, Oreo crumbles, sprinkles, chocolate, caramel, pineapple, and cherries.  They have these little, itty, bitty baby cones to put ice cream in (I guess this is to cater to the little ones).  I ALWAYS, ALWAYS try to fill one of these little cones up and it overflows, causing ice cream on my hand.  Never fails.  Maybe they can install something to make the ice cream come out a little slower?  I don't know.  \\n\\nEat here if you're craving a salad (a lot of us don't), enjoy freshness (at a bit of a price), and are vegan or vegetarian.  Actually, we all should enjoy freshness at a salad buffet.  Me, I like my salad with a hunk a meat on the side.</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>5 stars</td>\n",
       "      <td>Just tried this place for the first time tonight and we really liked it! The staff was very friendly. We ordered an appetizer and they were out of part of what it came with usually, so they substituted it with something even better. We got the orange chicken and Sing High fried rice, both of which were very good. We will be back!</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>2 star</td>\n",
       "      <td>I was excited to try this after my friend told me what great reviews it had on yelp.  I was not overly impressed.  The employees were kind and helpful.  I ordered a falafel sandwhich, stuffed grape leaves and babaganouj.  I did not care for the falafel as it had a cinnamon type flavoring in it.  I have eaten lots of falafel and this is the only one I have ever had that I didn't really care for.  The stuffed grape leaves were edible but tasted exactly like the ones I get out of a jar from Trader Joes.  The babaganouj was DELICIOUS though.   If you go I highly recommend you order this eggplant dip.  Yum!</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>5</th>\n",
       "      <td>2 star</td>\n",
       "      <td>I am reviewing the store, not the product here. I adore Apple, I have an: iPhone, iPad, MacBook Pro, etc. My old MacBook pro served me well, 6 years of kickass service but it had become so slow and I had about 23 seconds of battery life at a time, not good.\\n\\nI skipped on down down to the Mac store on Ste Catherine and it was a quiet night there. I had made an appointment at the Genius Bar so that I could get some tips on how to get all of my old files onto the new laptop that I was buying. 1500$ later and 4 different 'genius' sales people later, I had my new laptop purchased and I was heading upstairs to my genius appointment. They told me that they could in fact, not help me and that I would need to purchase the 80$ Mac One-to-One deal if I wanted help (seriously?) I made it clear that all I wanted was for them to transfer my stuff  from iPhoto and iTunes from my old Mac to my new Mac because I am a busy lady.\\n\\nFinally after telling me they would do it but that it would take 3 days, I walked out and said 'screw this' and I figured it out on my own, at home in like 2.5 minutes. Screw you Apple store, you were of zero help. I adore your products, but you should not be walking around calling yourself a 'genius' by any stretch of the imagination you money grabbing punk asses.</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>6</th>\n",
       "      <td>1 star</td>\n",
       "      <td>Ugh. Cesspool of a college bar, the only place at which I was ever, as far as I can tell, overtly discriminated against on the basis of (perceived) sexual orientation.\\n\\nI stopped in for a quick happy hour drink with two friends, one a lesbian, one bisexual; all three of us \\\"look\\\" non-heterosexual to a certain extent. Anyway, we paid for this; the old guy tending bar pointedly ignored us, at one point literally ignoring our polite \\\"Excuse me\\\"s to wait on a cute blonde in the OTHERWISE EMPTY BAR. \\n\\nI am not eager to ever return.</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>7</th>\n",
       "      <td>5 stars</td>\n",
       "      <td>Wow! What an unforgettable meal. The oysters with charisma were fantastic. Huge oysters, so fresh, prepared in three different ways. The rabbit ravioli appetizer to follow was equally fantastic. The balance of flavors was perfect. My boyfriend and I both ordered the surf and turf--their signature dish, or the Chasse et peche. This dish was so decadent and delicious. The staff is fantastic,knowledgable, and so warm and welcoming.</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>8</th>\n",
       "      <td>1 star</td>\n",
       "      <td>Pretty lame... went here for breakfast since the only other choice inside the hotel was a long line at the Starbucks.  The choices were either the $25 breakfast buffet, or off the menu.  I ended up getting the Cuban breakfast sandwich thinking that it would be interesting.  Well, it was bland and tasteless.  I don't think it was even roasted Cuban pork.  It tasted like a ham and cheese sandwich with a fried egg on top.  My wife got scrambled eggs and hash brown's, which was pretty standard.  Breakfast for two...$42.  I don't recommend this place... stand in line at Starbucks and get their breakfast sandwich,  or go across the street and see if there's anything better at the MGM!</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>9</th>\n",
       "      <td>5 stars</td>\n",
       "      <td>I've used this particular PC for both domestic and international shipping with absolutely no problems; I also go here regularly to have items faxed.  Without fail, the owners are friendly, helpful, and thorough; thanks to their thoroughness I was able to successfully dispute an item a particular business said they didn't receive via fax, which saved me time and money.</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "show_random_elements(dataset[\"train\"])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d7c8a80b-adaa-41d1-8f60-498c763ad6cb",
   "metadata": {},
   "source": [
    "## 1.1 Preprocessing the Data\n",
    "\n",
    "With the dataset downloaded locally, we use a Tokenizer to process the text; for inputs of varying length, padding and truncation strategies can be applied.\n",
    "\n",
    "The Datasets map method applies a preprocessing function to the entire dataset in one call.\n",
    "\n",
    "Below we process the whole dataset with the pad-to-max-length strategy:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "5191e745-3281-4c36-a23f-1d70b09732a0",
   "metadata": {},
   "outputs": [],
   "source": [
    "from transformers import AutoTokenizer\n",
    "\n",
    "tokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\")\n",
    "\n",
    "\n",
    "def tokenize_function(examples):\n",
    "    return tokenizer(examples[\"text\"], padding=\"max_length\", truncation=True)\n",
    "\n",
    "\n",
    "tokenized_datasets = dataset.map(tokenize_function, batched=True)"
   ]
  },
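  {
   "cell_type": "markdown",
   "id": "5c1e9f2a-7b3d-4e6f-9a8c-2d4b6e8f0a1c",
   "metadata": {},
   "source": [
    "As a quick sanity check (a minimal sketch added for illustration, using a made-up sentence), we can tokenize a single example with the same strategy and confirm that padding to max length yields sequences of the tokenizer's model_max_length (512 for bert-base-cased):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6d2f0a3b-8c4e-4f70-8b9d-3e5c7f9a1b2d",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Tokenize one hypothetical sentence with the same padding/truncation strategy\n",
    "encoded = tokenizer(\"Hello, world!\", padding=\"max_length\", truncation=True)\n",
    "\n",
    "# Every sequence is padded to the model's maximum input length\n",
    "print(len(encoded[\"input_ids\"]))  # 512"
   ]
  },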
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "58dbd245-cef5-4bab-8748-df57c704e0cf",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>label</th>\n",
       "      <th>text</th>\n",
       "      <th>input_ids</th>\n",
       "      <th>token_type_ids</th>\n",
       "      <th>attention_mask</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>2 star</td>\n",
       "      <td>I hate to do this but My last visit here was horrible which was 45 min ago! Thats how bad it was that i had to yelp it right away... Food was like normal ive been going there for years now. But the service was horrible!!! Not once did we get ask for refills and i drink water like crazy! I asked for the manager and we talked but she seem to be so defenseive lke the owners on Hells Kitchen I couldnt believe it... = [ i hope this service changes around.</td>\n",
       "      <td>[101, 146, 4819, 1106, 1202, 1142, 1133, 1422, 1314, 3143, 1303, 1108, 9210, 1134, 1108, 2532, 11241, 2403, 106, 1337, 1116, 1293, 2213, 1122, 1108, 1115, 178, 1125, 1106, 6798, 1233, 1643, 1122, 1268, 1283, 119, 119, 119, 6702, 1108, 1176, 2999, 178, 2707, 1151, 1280, 1175, 1111, 1201, 1208, 119, 1252, 1103, 1555, 1108, 9210, 106, 106, 106, 1753, 1517, 1225, 1195, 1243, 2367, 1111, 1231, 18591, 1116, 1105, 178, 3668, 1447, 1176, 4523, 106, 146, 1455, 1111, 1103, 2618, 1105, 1195, 5029, 1133, 1131, 3166, 1106, 1129, 1177, 3948, 2109, 181, 2391, 1103, 5032, 1113, 5479, 1116, 18988, ...]</td>\n",
       "      <td>[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...]</td>\n",
       "      <td>[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...]</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>4 stars</td>\n",
       "      <td>I love this place. It's super organized and very clean...for a thrift shop. It's also one of the biggest thrift stores I've ever seen. This is a great spot for clothes, they have a huge selection. Plus, they have a great discount program for frequent shoppers and they give you a 20% off coupon if you make a donation. The only downside is that the prices are a bit high. You really have to work the coupons along with the daily 50% off color items to get a good deal.</td>\n",
       "      <td>[101, 146, 1567, 1142, 1282, 119, 1135, 112, 188, 7688, 3366, 1105, 1304, 4044, 119, 119, 119, 1111, 170, 24438, 17761, 4130, 119, 1135, 112, 188, 1145, 1141, 1104, 1103, 4583, 24438, 17761, 4822, 146, 112, 1396, 1518, 1562, 119, 1188, 1110, 170, 1632, 3205, 1111, 3459, 117, 1152, 1138, 170, 3321, 4557, 119, 8696, 117, 1152, 1138, 170, 1632, 23290, 1788, 1111, 6539, 4130, 6206, 1105, 1152, 1660, 1128, 170, 1406, 110, 1228, 8707, 1320, 1191, 1128, 1294, 170, 14324, 119, 1109, 1178, 1205, 5570, 1110, 1115, 1103, 7352, 1132, 170, 2113, 1344, 119, 1192, 1541, 1138, 1106, 1250, ...]</td>\n",
       "      <td>[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...]</td>\n",
       "      <td>[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...]</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "show_random_elements(tokenized_datasets[\"train\"], num_examples=2)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ce2b29f7-dc7c-47c0-bcbb-7b2ad7d3c4e3",
   "metadata": {},
   "source": [
    "## 1.2 Sampling the Data\n",
    "\n",
    "We use 1,000 samples to demonstrate small-scale training of BERT (with the PyTorch Trainer).\n",
    "\n",
    "The shuffle() function randomly reorders the rows of the dataset. For more control over the shuffling algorithm, pass a generator argument to this function to use a different numpy.random.Generator."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "b25306c5-078e-4700-a3ff-d21a5ecddd46",
   "metadata": {},
   "outputs": [],
   "source": [
    "small_train_dataset = tokenized_datasets[\"train\"].shuffle(seed=42).select(range(1000))\n",
    "\n",
    "small_eval_dataset = tokenized_datasets[\"test\"].shuffle(seed=42).select(range(1000))"
   ]
  },
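  {
   "cell_type": "markdown",
   "id": "7e3a1b4c-9d5f-4a81-9c0e-4f6d8a0b2c3e",
   "metadata": {},
   "source": [
    "The generator argument mentioned above can be sketched as follows (alt_train_dataset is an illustrative name and is not used later): passing an explicit numpy.random.Generator makes the shuffle reproducible and lets you swap in a different bit generator:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8f4b2c5d-0e6a-4b92-8d1f-5a7e9b1c3d4f",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# An explicit Generator gives full control over the shuffling algorithm\n",
    "rng = np.random.default_rng(42)\n",
    "alt_train_dataset = tokenized_datasets[\"train\"].shuffle(generator=rng).select(range(1000))"
   ]
  },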
  {
   "cell_type": "markdown",
   "id": "0f41c360-4846-422d-a2a8-aebb85ce7f9f",
   "metadata": {},
   "source": [
    "# 2. Fine-Tuning Configuration"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "84cecbfd-e13e-4daa-88bd-8c2b11326191",
   "metadata": {},
   "source": [
    "## 2.1 Loading the BERT Model\n",
    "\n",
    "The warning tells us that the pretraining head is being discarded and that some weights (the classifier layer) are being randomly initialized.\n",
    "\n",
    "This is perfectly normal when fine-tuning: we remove the head used for the masked language modeling pretraining task and replace it with a new head for which we have no pretrained weights. The library therefore warns us that the model should be fine-tuned before being used for inference, which is exactly what we are about to do."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "48065166-7782-4cc9-900d-4d48a5b1f531",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier.bias', 'classifier.weight']\n",
      "You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\n"
     ]
    }
   ],
   "source": [
    "from transformers import AutoModelForSequenceClassification\n",
    "\n",
    "model = AutoModelForSequenceClassification.from_pretrained(\"bert-base-cased\", num_labels=5)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fc93ad43-7ee0-4ab6-b252-b8a05cf9216e",
   "metadata": {},
   "source": [
    "## 2.2 Training Hyperparameters (TrainingArguments)\n",
    "\n",
    "Full list of parameters and their defaults: https://huggingface.co/docs/transformers/v4.36.1/en/main_classes/trainer#transformers.TrainingArguments\n",
    "\n",
    "Source definition: https://github.com/huggingface/transformers/blob/v4.36.1/src/transformers/training_args.py#L161\n",
    "\n",
    "The most important setting is the path where model weights are saved (output_dir)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "826dc46a-dc96-4429-8c01-09c40b827032",
   "metadata": {},
   "outputs": [],
   "source": [
    "from transformers import TrainingArguments\n",
    "\n",
    "model_dir = \"./model/bert-base-cased-finetune-yelp\"\n",
    "\n",
    "# logging_steps defaults to 500; given our training data and step count, set it to 100\n",
    "training_args = TrainingArguments(output_dir=model_dir,\n",
    "                                  per_device_train_batch_size=16,\n",
    "                                  num_train_epochs=5,\n",
    "                                  logging_steps=100)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "43af2d92-6978-4e8d-800a-a64c384e2b60",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "TrainingArguments(\n",
      "_n_gpu=1,\n",
      "accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False},\n",
      "adafactor=False,\n",
      "adam_beta1=0.9,\n",
      "adam_beta2=0.999,\n",
      "adam_epsilon=1e-08,\n",
      "auto_find_batch_size=False,\n",
      "average_tokens_across_devices=False,\n",
      "batch_eval_metrics=False,\n",
      "bf16=False,\n",
      "bf16_full_eval=False,\n",
      "data_seed=None,\n",
      "dataloader_drop_last=False,\n",
      "dataloader_num_workers=0,\n",
      "dataloader_persistent_workers=False,\n",
      "dataloader_pin_memory=True,\n",
      "dataloader_prefetch_factor=None,\n",
      "ddp_backend=None,\n",
      "ddp_broadcast_buffers=None,\n",
      "ddp_bucket_cap_mb=None,\n",
      "ddp_find_unused_parameters=None,\n",
      "ddp_timeout=1800,\n",
      "debug=[],\n",
      "deepspeed=None,\n",
      "disable_tqdm=False,\n",
      "do_eval=False,\n",
      "do_predict=False,\n",
      "do_train=False,\n",
      "eval_accumulation_steps=None,\n",
      "eval_delay=0,\n",
      "eval_do_concat_batches=True,\n",
      "eval_on_start=False,\n",
      "eval_steps=None,\n",
      "eval_strategy=IntervalStrategy.NO,\n",
      "eval_use_gather_object=False,\n",
      "fp16=False,\n",
      "fp16_backend=auto,\n",
      "fp16_full_eval=False,\n",
      "fp16_opt_level=O1,\n",
      "fsdp=[],\n",
      "fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False},\n",
      "fsdp_min_num_params=0,\n",
      "fsdp_transformer_layer_cls_to_wrap=None,\n",
      "full_determinism=False,\n",
      "gradient_accumulation_steps=1,\n",
      "gradient_checkpointing=False,\n",
      "gradient_checkpointing_kwargs=None,\n",
      "greater_is_better=None,\n",
      "group_by_length=False,\n",
      "half_precision_backend=auto,\n",
      "hub_always_push=False,\n",
      "hub_model_id=None,\n",
      "hub_private_repo=None,\n",
      "hub_revision=None,\n",
      "hub_strategy=HubStrategy.EVERY_SAVE,\n",
      "hub_token=<HUB_TOKEN>,\n",
      "ignore_data_skip=False,\n",
      "include_for_metrics=[],\n",
      "include_inputs_for_metrics=False,\n",
      "include_num_input_tokens_seen=False,\n",
      "include_tokens_per_second=False,\n",
      "jit_mode_eval=False,\n",
      "label_names=None,\n",
      "label_smoothing_factor=0.0,\n",
      "learning_rate=5e-05,\n",
      "length_column_name=length,\n",
      "liger_kernel_config=None,\n",
      "load_best_model_at_end=False,\n",
      "local_rank=0,\n",
      "log_level=passive,\n",
      "log_level_replica=warning,\n",
      "log_on_each_node=True,\n",
      "logging_dir=./model/bert-base-cased-finetune-yelp/runs/Jul28_08-22-51_c62ca65d8985,\n",
      "logging_first_step=False,\n",
      "logging_nan_inf_filter=True,\n",
      "logging_steps=100,\n",
      "logging_strategy=IntervalStrategy.STEPS,\n",
      "lr_scheduler_kwargs={},\n",
      "lr_scheduler_type=SchedulerType.LINEAR,\n",
      "max_grad_norm=1.0,\n",
      "max_steps=-1,\n",
      "metric_for_best_model=None,\n",
      "mp_parameters=,\n",
      "neftune_noise_alpha=None,\n",
      "no_cuda=False,\n",
      "num_train_epochs=5,\n",
      "optim=OptimizerNames.ADAMW_TORCH,\n",
      "optim_args=None,\n",
      "optim_target_modules=None,\n",
      "output_dir=./model/bert-base-cased-finetune-yelp,\n",
      "overwrite_output_dir=False,\n",
      "past_index=-1,\n",
      "per_device_eval_batch_size=8,\n",
      "per_device_train_batch_size=16,\n",
      "prediction_loss_only=False,\n",
      "push_to_hub=False,\n",
      "push_to_hub_model_id=None,\n",
      "push_to_hub_organization=None,\n",
      "push_to_hub_token=<PUSH_TO_HUB_TOKEN>,\n",
      "ray_scope=last,\n",
      "remove_unused_columns=True,\n",
      "report_to=[],\n",
      "restore_callback_states_from_checkpoint=False,\n",
      "resume_from_checkpoint=None,\n",
      "run_name=./model/bert-base-cased-finetune-yelp,\n",
      "save_on_each_node=False,\n",
      "save_only_model=False,\n",
      "save_safetensors=True,\n",
      "save_steps=500,\n",
      "save_strategy=SaveStrategy.STEPS,\n",
      "save_total_limit=None,\n",
      "seed=42,\n",
      "skip_memory_metrics=True,\n",
      "tf32=None,\n",
      "torch_compile=False,\n",
      "torch_compile_backend=None,\n",
      "torch_compile_mode=None,\n",
      "torch_empty_cache_steps=None,\n",
      "torchdynamo=None,\n",
      "tpu_metrics_debug=False,\n",
      "tpu_num_cores=None,\n",
      "use_cpu=False,\n",
      "use_ipex=False,\n",
      "use_legacy_prediction_loop=False,\n",
      "use_liger_kernel=False,\n",
      "use_mps_device=False,\n",
      "warmup_ratio=0.0,\n",
      "warmup_steps=0,\n",
      "weight_decay=0.0,\n",
      ")\n"
     ]
    }
   ],
   "source": [
    "# Full hyperparameter configuration\n",
    "print(training_args)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dd7e8e7c-bc38-4bed-82df-1393f70f25a7",
   "metadata": {},
   "source": [
    "# 3. Evaluating Metrics During Training (Evaluate)\n",
    "\n",
    "The Hugging Face Evaluate library gives one-line access to dozens of evaluation methods across domains (natural language processing, computer vision, reinforcement learning, and more). The full list of supported metrics: https://huggingface.co/evaluate-metric\n",
    "\n",
    "The Trainer does not automatically evaluate model performance during training, so we need to pass it a function that computes and reports metrics.\n",
    "\n",
    "The Evaluate library provides a simple accuracy function, which you can load with evaluate.load:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "ddbb6699-9a36-4a4e-8635-5dbd8c444221",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import evaluate\n",
    "\n",
    "metric = evaluate.load(\"accuracy\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "39b7dfa3-2168-4aad-b583-efb08ddfb40c",
   "metadata": {},
   "source": [
    "Next, call the compute function to calculate prediction accuracy.\n",
    "\n",
    "Before passing predictions to compute, we need to convert the logits into predicted labels (all Transformers models return logits)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "1448c11a-23cf-4c34-98ba-21ec5236050e",
   "metadata": {},
   "outputs": [],
   "source": [
    "def compute_metrics(eval_pred):\n",
    "    logits, labels = eval_pred\n",
    "    predictions = np.argmax(logits, axis=-1)\n",
    "    return metric.compute(predictions=predictions, references=labels)"
   ]
  },
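  {
   "cell_type": "markdown",
   "id": "9a5c3d6e-1f7b-4ca3-9e2a-6b8f0c2d4e5a",
   "metadata": {},
   "source": [
    "To see what compute_metrics does, we can call it on made-up logits for three examples (dummy values, purely for illustration): np.argmax picks the highest-scoring class in each row, and here two of the three predictions match the labels:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0b6d4e7f-2a8c-4db4-8f3b-7c9a1d3e5f6b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical logits for 3 examples over 5 classes\n",
    "dummy_logits = np.array([[0.2, 1.5, -0.3, 0.0, 0.1],\n",
    "                         [2.0, 0.1, 0.0, 0.3, -1.2],\n",
    "                         [0.0, 0.1, 0.2, 0.3, 3.0]])\n",
    "dummy_labels = np.array([1, 0, 2])\n",
    "\n",
    "# argmax turns each row of logits into a predicted class: [1, 0, 4]\n",
    "compute_metrics((dummy_logits, dummy_labels))"
   ]
  },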
  {
   "cell_type": "markdown",
   "id": "ac0452b4-cda5-4cad-b57d-457824bad285",
   "metadata": {},
   "source": [
    "## 3.1 Monitoring Metrics During Training\n",
    "\n",
    "To monitor how evaluation metrics evolve during training, set the eval_strategy parameter in TrainingArguments so that metrics are reported at the end of each epoch."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "decce7c3-7f86-4e39-be23-96a90d3eb574",
   "metadata": {},
   "outputs": [],
   "source": [
    "from transformers import TrainingArguments, Trainer\n",
    "\n",
    "training_args = TrainingArguments(output_dir=model_dir,\n",
    "                                  eval_strategy=\"epoch\", \n",
    "                                  per_device_train_batch_size=16,\n",
    "                                  num_train_epochs=3,\n",
    "                                  logging_steps=30)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d469ce65-bb09-4481-8cca-a63eb8ef44d5",
   "metadata": {},
   "source": [
    "# 4. Starting Training\n",
    "\n",
    "Instantiate the Trainer.\n",
    "\n",
    "A kernel-version warning may appear here; it does not affect running this example."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "3c6ea0e9-3234-4e1f-9796-c84e4d88bc22",
   "metadata": {},
   "outputs": [],
   "source": [
    "trainer = Trainer(\n",
    "    model=model,\n",
    "    args=training_args,\n",
    "    train_dataset=small_train_dataset,\n",
    "    eval_dataset=small_eval_dataset,\n",
    "    compute_metrics=compute_metrics\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e6cfc1a1-4aa9-4b87-ace8-52f81d6183ab",
   "metadata": {},
   "source": [
    "## 4.1 Checking GPU Usage with nvidia-smi\n",
    "\n",
    "To watch GPU usage in real time, poll with the watch command: watch -n 1 nvidia-smi"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "e61f9990-2d56-41da-a44c-6f9272e7cedc",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Mon Jul 28 08:22:55 2025       \n",
      "+-----------------------------------------------------------------------------------------+\n",
      "| NVIDIA-SMI 550.127.05             Driver Version: 550.127.05     CUDA Version: 12.4     |\n",
      "|-----------------------------------------+------------------------+----------------------+\n",
      "| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |\n",
      "| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |\n",
      "|                                         |                        |               MIG M. |\n",
      "|=========================================+========================+======================|\n",
      "|   0  NVIDIA GeForce RTX 4090        Off |   00000000:2E:00.0 Off |                  Off |\n",
      "| 45%   29C    P8              6W /  450W |   24190MiB /  24564MiB |      0%      Default |\n",
      "|                                         |                        |                  N/A |\n",
      "+-----------------------------------------+------------------------+----------------------+\n",
      "|   1  NVIDIA GeForce RTX 4090        Off |   00000000:3A:00.0 Off |                  Off |\n",
      "| 45%   28C    P2             19W /  450W |     859MiB /  24564MiB |      0%      Default |\n",
      "|                                         |                        |                  N/A |\n",
      "+-----------------------------------------+------------------------+----------------------+\n",
      "|   2  NVIDIA GeForce RTX 4090        Off |   00000000:3B:00.0 Off |                  Off |\n",
      "| 45%   29C    P8             13W /  450W |    1655MiB /  24564MiB |      0%      Default |\n",
      "|                                         |                        |                  N/A |\n",
      "+-----------------------------------------+------------------------+----------------------+\n",
      "|   3  NVIDIA GeForce RTX 4090        Off |   00000000:3C:00.0 Off |                  Off |\n",
      "| 44%   29C    P8              9W /  450W |   24081MiB /  24564MiB |      0%      Default |\n",
      "|                                         |                        |                  N/A |\n",
      "+-----------------------------------------+------------------------+----------------------+\n",
      "|   4  NVIDIA GeForce RTX 4090        Off |   00000000:AD:00.0 Off |                  Off |\n",
      "| 44%   31C    P8             16W /  450W |   23867MiB /  24564MiB |      0%      Default |\n",
      "|                                         |                        |                  N/A |\n",
      "+-----------------------------------------+------------------------+----------------------+\n",
      "|   5  NVIDIA GeForce RTX 4090        Off |   00000000:AE:00.0 Off |                  Off |\n",
      "| 44%   30C    P8             10W /  450W |   20889MiB /  24564MiB |      0%      Default |\n",
      "|                                         |                        |                  N/A |\n",
      "+-----------------------------------------+------------------------+----------------------+\n",
      "|   6  NVIDIA GeForce RTX 4090        Off |   00000000:BD:00.0 Off |                  Off |\n",
      "| 43%   26C    P8              9W /  450W |   23267MiB /  24564MiB |      0%      Default |\n",
      "|                                         |                        |                  N/A |\n",
      "+-----------------------------------------+------------------------+----------------------+\n",
      "|   7  NVIDIA GeForce RTX 4090        Off |   00000000:BE:00.0 Off |                  Off |\n",
      "| 44%   28C    P8              6W /  450W |   23267MiB /  24564MiB |      0%      Default |\n",
      "|                                         |                        |                  N/A |\n",
      "+-----------------------------------------+------------------------+----------------------+\n",
      "                                                                                         \n",
      "+-----------------------------------------------------------------------------------------+\n",
      "| Processes:                                                                              |\n",
      "|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |\n",
      "|        ID   ID                                                               Usage      |\n",
      "|=========================================================================================|\n",
      "+-----------------------------------------------------------------------------------------+\n"
     ]
    }
   ],
   "source": [
    "!nvidia-smi"
   ]
  },
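  {
   "cell_type": "markdown",
   "id": "5b6c7d8e-9f0a-4b1c-8d2e-3f4a5b6c7d8a",
   "metadata": {},
   "source": [
    "To check memory from Python instead of the shell, torch exposes per-device counters. Note these report only memory managed by this process's PyTorch allocator, not the machine-wide figures nvidia-smi shows:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5b6c7d8e-9f0a-4b1c-8d2e-3f4a5b6c7d8b",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "def gpu_mem_report():\n",
    "    \"\"\"Memory allocated/reserved by this process per visible GPU, in MiB.\"\"\"\n",
    "    if not torch.cuda.is_available():\n",
    "        return \"no CUDA device visible\"\n",
    "    lines = []\n",
    "    for i in range(torch.cuda.device_count()):\n",
    "        allocated = torch.cuda.memory_allocated(i) / 2**20\n",
    "        reserved = torch.cuda.memory_reserved(i) / 2**20\n",
    "        lines.append(f\"cuda:{i}: {allocated:.0f} MiB allocated, {reserved:.0f} MiB reserved\")\n",
    "    return \"\\n\".join(lines)\n",
    "\n",
    "print(gpu_mem_report())"
   ]
  },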
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "7e91cc3e-5e80-4d42-9a04-1fb4a45dca46",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "\n",
       "    <div>\n",
       "      \n",
       "      <progress value='189' max='189' style='width:300px; height:20px; vertical-align: middle;'></progress>\n",
       "      [189/189 00:43, Epoch 3/3]\n",
       "    </div>\n",
       "    <table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       " <tr style=\"text-align: left;\">\n",
       "      <th>Epoch</th>\n",
       "      <th>Training Loss</th>\n",
       "      <th>Validation Loss</th>\n",
       "      <th>Accuracy</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <td>1</td>\n",
       "      <td>1.362000</td>\n",
       "      <td>1.135202</td>\n",
       "      <td>0.507000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>2</td>\n",
       "      <td>1.012800</td>\n",
       "      <td>0.992836</td>\n",
       "      <td>0.567000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>3</td>\n",
       "      <td>0.666000</td>\n",
       "      <td>0.967813</td>\n",
       "      <td>0.606000</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table><p>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "TrainOutput(global_step=189, training_loss=1.0699586187090193, metrics={'train_runtime': 44.011, 'train_samples_per_second': 68.165, 'train_steps_per_second': 4.294, 'total_flos': 789354427392000.0, 'train_loss': 1.0699586187090193, 'epoch': 3.0})"
      ]
     },
     "execution_count": 18,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "trainer.train()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "731e307c-840d-45e0-877c-a13f5b7a088e",
   "metadata": {},
   "outputs": [],
   "source": [
    "small_test_dataset = tokenized_datasets[\"test\"].shuffle(seed=64).select(range(1000))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "67a82281-3222-4f55-9559-ec9a8dfe9628",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "\n",
       "    <div>\n",
       "      \n",
       "      <progress value='125' max='125' style='width:300px; height:20px; vertical-align: middle;'></progress>\n",
       "      [125/125 00:03]\n",
       "    </div>\n",
       "    "
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "{'eval_loss': 1.01746666431427,\n",
       " 'eval_accuracy': 0.576,\n",
       " 'eval_runtime': 3.6944,\n",
       " 'eval_samples_per_second': 270.678,\n",
       " 'eval_steps_per_second': 33.835,\n",
       " 'epoch': 3.0}"
      ]
     },
     "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "trainer.evaluate(small_test_dataset)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "eee81d30-e5e3-4145-b985-78eaea544276",
   "metadata": {},
   "source": [
     "## 4.2 Saving the Model and Training State\n",
     "\n",
     "- Use the trainer.save_model method to save the model; it can be reloaded later with from_pretrained()\n",
     "- Use the trainer.save_state method to save the training state"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "96295318-400b-49e1-a82d-34bf94cb0b88",
   "metadata": {},
   "outputs": [],
   "source": [
    "trainer.save_model(model_dir)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "1b312d80-1684-4213-b979-2700a3b7d752",
   "metadata": {},
   "outputs": [],
   "source": [
     "# trainer.save_state()  # would write trainer_state.json (log history, global step) to output_dir\n",
     "trainer.model.save_pretrained(\"./\")"
   ]
  },
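  {
   "cell_type": "markdown",
   "id": "2d3e4f5a-6b7c-4d8e-9f0a-1b2c3d4e5f6a",
   "metadata": {},
   "source": [
    "To verify the round trip, the saved weights can be reloaded with from_pretrained (a sketch; it assumes the save cell above has run and that `model_dir` still points at the same directory):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2d3e4f5a-6b7c-4d8e-9f0a-1b2c3d4e5f6b",
   "metadata": {},
   "outputs": [],
   "source": [
    "from transformers import AutoModelForSequenceClassification\n",
    "\n",
    "# from_pretrained reads the files trainer.save_model wrote to model_dir:\n",
    "# config.json plus the weights file\n",
    "reloaded_model = AutoModelForSequenceClassification.from_pretrained(model_dir)\n",
    "reloaded_model.eval()  # switch to inference mode"
   ]
  },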
  {
   "cell_type": "markdown",
   "id": "e9136366-7da8-4b32-a958-9ff54f466b74",
   "metadata": {},
   "source": [
     "# 5. Training on the Full Dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "id": "bfea572a-d094-42fb-b505-93310e31527d",
   "metadata": {},
   "outputs": [],
   "source": [
    "all_train_dataset = tokenized_datasets[\"train\"].shuffle(seed=42)\n",
    "all_eval_dataset = tokenized_datasets[\"test\"].shuffle(seed=42)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "12f75d8c-8f11-4ff3-8d65-a341e67d8182",
   "metadata": {},
   "outputs": [],
   "source": [
    "training_args = TrainingArguments(output_dir=model_dir,\n",
    "                                  eval_strategy=\"epoch\", \n",
    "                                  per_device_train_batch_size=16,\n",
    "                                  num_train_epochs=3,\n",
    "                                  logging_steps=30)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "id": "f92f7591-e838-4ac2-a485-3441a8ecd9bd",
   "metadata": {},
   "outputs": [],
   "source": [
     "from transformers import Trainer\n",
     "\n",
    "trainer = Trainer(\n",
    "    model=model,\n",
    "    args=training_args,\n",
    "    train_dataset=all_train_dataset,\n",
    "    eval_dataset=all_eval_dataset,\n",
    "    compute_metrics=compute_metrics\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "id": "d5463e53-7f95-4140-93c2-ffed7a705485",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "\n",
       "    <div>\n",
       "      \n",
       "      <progress value='121875' max='121875' style='width:300px; height:20px; vertical-align: middle;'></progress>\n",
       "      [121875/121875 5:57:47, Epoch 3/3]\n",
       "    </div>\n",
       "    <table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       " <tr style=\"text-align: left;\">\n",
       "      <th>Epoch</th>\n",
       "      <th>Training Loss</th>\n",
       "      <th>Validation Loss</th>\n",
       "      <th>Accuracy</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <td>1</td>\n",
       "      <td>0.805700</td>\n",
       "      <td>0.749012</td>\n",
       "      <td>0.672300</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>2</td>\n",
       "      <td>0.727500</td>\n",
       "      <td>0.726184</td>\n",
       "      <td>0.682880</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>3</td>\n",
       "      <td>0.643200</td>\n",
       "      <td>0.731320</td>\n",
       "      <td>0.691180</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table><p>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "IOPub message rate exceeded.\n",
      "The Jupyter server will temporarily stop sending output\n",
      "to the client in order to avoid crashing it.\n",
      "To change this limit, set the config variable\n",
      "`--ServerApp.iopub_msg_rate_limit`.\n",
      "\n",
      "Current values:\n",
      "ServerApp.iopub_msg_rate_limit=1000.0 (msgs/sec)\n",
      "ServerApp.rate_limit_window=3.0 (secs)\n",
       "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "TrainOutput(global_step=121875, training_loss=0.7149184527196639, metrics={'train_runtime': 21467.2336, 'train_samples_per_second': 90.836, 'train_steps_per_second': 5.677, 'total_flos': 5.130803778048e+17, 'train_loss': 0.7149184527196639, 'epoch': 3.0})"
      ]
     },
     "execution_count": 26,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "trainer.train()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "id": "6f10b430-4c37-41ba-9216-b7b45c8785bf",
   "metadata": {},
   "outputs": [],
   "source": [
    "all_test_dataset = tokenized_datasets[\"test\"].shuffle(seed=64)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "id": "c59f8613-f2f7-40c8-8457-884da377a255",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "\n",
       "    <div>\n",
       "      \n",
       "      <progress value='6250' max='6250' style='width:300px; height:20px; vertical-align: middle;'></progress>\n",
       "      [6250/6250 03:06]\n",
       "    </div>\n",
       "    "
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "{'eval_loss': 0.7313200831413269,\n",
       " 'eval_accuracy': 0.69118,\n",
       " 'eval_runtime': 186.9421,\n",
       " 'eval_samples_per_second': 267.462,\n",
       " 'eval_steps_per_second': 33.433,\n",
       " 'epoch': 3.0}"
      ]
     },
     "execution_count": 28,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "trainer.evaluate(all_test_dataset)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
