yqzhangjx committed on
Commit
76d705d
1 Parent(s): ef8247a

Upload fine-tune-quickstart.ipynb

Files changed (1): fine-tune-quickstart.ipynb (added, +1032 lines)
{
"cells": [
{
"cell_type": "markdown",
"id": "90c6730f-5d76-450b-9788-ec883d024f57",
"metadata": {},
"source": [
"# Getting Started with Fine-Tuning Hugging Face Transformers\n",
"\n",
"This example walks through the main steps of fine-tuning a model with Transformers, including:\n",
"- downloading the dataset\n",
"- preprocessing the data\n",
"- configuring training hyperparameters\n",
"- setting up evaluation metrics\n",
"- a brief introduction to the Trainer\n",
"- hands-on training\n",
"- saving the model"
]
},
{
"cell_type": "markdown",
"id": "aa0b1e12-1921-4438-8d5d-9760a629dcfe",
"metadata": {},
"source": [
"## The YelpReviewFull Dataset\n",
"\n",
"**Hugging Face dataset: [YelpReviewFull](https://huggingface.co/datasets/yelp_review_full)**\n",
"\n",
"### Dataset Summary\n",
"\n",
"The Yelp reviews dataset consists of reviews from Yelp. It was extracted from the Yelp Dataset Challenge 2015 data.\n",
"\n",
"### Supported Tasks and Leaderboards\n",
"Text classification, sentiment classification: this dataset is mainly used for text classification: given the text, predict the sentiment.\n",
"\n",
"### Languages\n",
"The reviews are mainly written in English.\n",
"\n",
"### Dataset Structure\n",
"\n",
"#### Data Instances\n",
"A typical data point consists of a text and its corresponding label.\n",
"\n",
"An example from the YelpReviewFull test set looks as follows:\n",
"\n",
"```json\n",
"{\n",
" 'label': 0,\n",
" 'text': 'I got \\'new\\' tires from them and within two weeks got a flat. I took my car to a local mechanic to see if i could get the hole patched, but they said the reason I had a flat was because the previous patch had blown - WAIT, WHAT? I just got the tire and never needed to have it patched? This was supposed to be a new tire. \\\\nI took the tire over to Flynn\\'s and they told me that someone punctured my tire, then tried to patch it. So there are resentful tire slashers? I find that very unlikely. After arguing with the guy and telling him that his logic was far fetched he said he\\'d give me a new tire \\\\\"this time\\\\\". \\\\nI will never go back to Flynn\\'s b/c of the way this guy treated me and the simple fact that they gave me a used tire!'\n",
"}\n",
"```\n",
"\n",
"#### Data Fields\n",
"\n",
"- 'text': The review text, escaped with double quotes (\"); any internal double quote is escaped by two double quotes (\"\"). Newlines are escaped with a backslash followed by an \"n\" character, i.e. \"\\n\".\n",
"- 'label': Corresponds to the score of the review (between 1 and 5).\n",
"\n",
"#### Data Splits\n",
"\n",
"The Yelp reviews full star dataset was constructed by randomly taking 130,000 training samples and 10,000 test samples for each star rating from 1 to 5. In total there are 650,000 training samples and 50,000 test samples.\n",
"\n",
"## Downloading the Dataset"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "9aa8ae5e-f57b-40cb-b929-16b172eed9a2",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": ""
},
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"3"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import os\n",
"os.environ[\"CUDA_DEVICE_ORDER\"]=\"PCI_BUS_ID\" # see issue #152\n",
"os.environ[\"CUDA_VISIBLE_DEVICES\"]=\"1,2,3\"\n",
"\n",
"import torch\n",
"torch.cuda.device_count()"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "bbf72d6c-7ea5-4ee1-969a-c5060b9cb2d4",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/usr/local/lib/python3.9/dist-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
" from .autonotebook import tqdm as notebook_tqdm\n"
]
}
],
"source": [
"from datasets import load_dataset\n",
"\n",
"dataset = load_dataset(\"yelp_review_full\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "ec6fc806-1395-42dd-8121-a6e98a95cf01",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"DatasetDict({\n",
" train: Dataset({\n",
" features: ['label', 'text'],\n",
" num_rows: 650000\n",
" })\n",
" test: Dataset({\n",
" features: ['label', 'text'],\n",
" num_rows: 50000\n",
" })\n",
"})"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"dataset"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "c94ad529-1604-48bd-8c8d-aa2f3bca6200",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'label': 0,\n",
" 'text': \"Owning a driving range inside the city limits is like a license to print money. I don't think I ask much out of a driving range. Decent mats, clean balls and accessible hours. Hell you need even less people now with the advent of the machine that doles out the balls. This place has none of them. It is april and there are no grass tees yet. BTW they opened for the season this week although it has been golfing weather for a month. The mats look like the carpet at my 107 year old aunt Irene's house. Worn and thread bare. Let's talk about the hours. This place is equipped with lights yet they only sell buckets of balls until 730. It is still light out. Finally lets you have the pit to hit into. When I arrived I wasn't sure if this was a driving range or an excavation site for a mastodon or a strip mining operation. There is no grass on the range. Just mud. Makes it a good tool to figure out how far you actually are hitting the ball. Oh, they are cash only also.\\\\n\\\\nBottom line, this place sucks. The best hope is that the owner sells it to someone that actually wants to make money and service golfers in Pittsburgh.\"}"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"dataset[\"train\"][10]"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "6dc45997-e391-456f-b0b9-d3193b0f6a9d",
"metadata": {},
"outputs": [],
"source": [
"import random\n",
"import pandas as pd\n",
"import datasets\n",
"from IPython.display import display, HTML"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "9e2ecebb-d5d1-456d-967c-842a79fdd622",
"metadata": {},
"outputs": [],
"source": [
"def show_random_elements(dataset, num_examples=10):\n",
" assert num_examples <= len(dataset), \"Can't pick more elements than there are in the dataset.\"\n",
" picks = []\n",
" for _ in range(num_examples):\n",
" pick = random.randint(0, len(dataset)-1)\n",
" while pick in picks:\n",
" pick = random.randint(0, len(dataset)-1)\n",
" picks.append(pick)\n",
" \n",
" df = pd.DataFrame(dataset[picks])\n",
" for column, typ in dataset.features.items():\n",
" if isinstance(typ, datasets.ClassLabel):\n",
" df[column] = df[column].transform(lambda i: typ.names[i])\n",
" display(HTML(df.to_html()))"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "1af560b6-7d21-499e-9b82-114be371a98a",
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>label</th>\n",
" <th>text</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>4 stars</td>\n",
" <td>Great pizza and Sweet Chili wings in The Brewer's Cafe. My wife and I ordered their pizza. I have tried MANY pizzas. My wife and I were pleasantly surprised! My only complaints about this place would be that the menu is just a tad too small. Another tip: Anything fried comes out PIPING HOT!!! ;-)</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>2 star</td>\n",
" <td>This is the exact reason why Yelp at times is not a credible resource. You see legitimate users with legitimate reviews frequently get filtered, and you see 4-5 star ratings for restaurants that provide a low quality product. \\n\\nFor years, I kept reading on Yelp that \\\"Blueberry Hill\\\" has the \\\"greatest hand breaded chicken fried steak in all of Las Vegas\\\". There was nothing true about that statement. The chicken fried steak was nothing more than the processed precooked version that you get at Costco and Smart N Final. All they do is reheat it and serve. I was very disappointed on account of all the good reviews on yelp about how the steak was a \\\"real sirloin that was hand battered\\\". \\n\\nThe side salad was merely lettuce with 1 ring of red onion, served in a beaten up plastic bowl. No tomatoes, no cheese, no croutons...nothing. The Ranch dressing tasted like water mixed with mayonnaise. \\n\\nThe biscuit that came with our steaks tasted like garlic bread. \\n\\nThe corn bread that came with our steak never came out. The server forgot to bring it out. We never got it. \\n\\nOn a positive note. The Buser was very hard working and made up for the poor quality service that the server provided. The server was not rude, but she was very forgetful and never came around one time to check on us. You need to flag down the server to get her attention. The Buser compensated for the lack of service by coming around to refill our coffee and water. The server was usually found sitting at the counter and reading a book.\\n\\nWhen I asked the server for Tabasco, she forgot. 10 minutes later I asked her again and she finally got it. When I asked the server for Jam, she forgot. When I asked her for saltine crackers, she forgot. She did bring out the check immediately though before we even touched our food. \\n\\nIn conclusion. The bill came out quicker than the food. The buser was the hardest working guy here. The food was processed and low quality. The prices were pretty high. Yelp has failed us all.</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>2 star</td>\n",
" <td>For the $18 ticket price, this aquarium was way smaller than I expected. I wish we had taken our time checking things out. We sped through each exhibit thinking there would be more impressive things ahead, but we reached the end quickly and thought, \\\"Is that it?\\\" My nieces had a blast, but they are 4 and 3 and were also amazed by the monorail.\\n\\nThe jellyfish tank and the stingray pool were cool, but nothing stood out to make this place a must-do. One of the employees was more interested in talking about the Lakers instead of the piranhas.\\n\\nIf you're in the area and you need to entertain some kids, it might be ok. If not, skip it and hit the pool instead.</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>4 stars</td>\n",
" <td>This was a last minute, drive by decision that fit our timing on a late weekday afternoon. What a wonderful surprise! Our timing must have been good, cuz we got right in with no wait.\\n\\nWe got in on the All You Can Eat sushi lunch for 20 bucks. The rolls were really good and the staff was super attentive. \\n\\nTiger roll, Mount Charelston roll - thumbs up. The spicy scallop? Seems to be a different standard out west, and not my favorite Tsu Kasa style. The tuna was sublime. Seaweed salad - the perfect serving size. Shrimp Tempura - amazingly perfect. Tried the Ikura (Salmon Roe) but realized I hadn't quite evolved that far.\\n\\nI would go back to this place again and again.</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>5 stars</td>\n",
" <td>Not sure why people are writing bad reviews over things out of the clubs control. Going over a few things \\n\\n1.) the most impressive part of this club is the DJs. I've been here twice and both times every dj playing was spot on. \\n\\n2) I personally like the dress code. An upscale gay bar on the strip is much needed. \\n\\n3) I do wish it wasn't in ballys, it's a bit of an annoyance being right in the middle of the chaos of the strip but overall was easy to get to. \\n\\n4) best advice is to try and get on the guest list. Drinks aren't cheap but this is vegas. Find me a cocktail in a nightclub on the strip under 10 bucks and we can say this place is expensive \\n\\n\\nKeep up the great work!</td>\n",
" </tr>\n",
" <tr>\n",
" <th>5</th>\n",
" <td>5 stars</td>\n",
" <td>Love the juice bar! Try the kaleaid. They now have bottled cold pressed juice. It's expensive, but so good.</td>\n",
" </tr>\n",
" <tr>\n",
" <th>6</th>\n",
" <td>3 stars</td>\n",
" <td>As someone who doesn't drink I am left to base my review on the ambiance and food. Despite being in a South Scottsdale strip mall the crowd is fun and engaging. As has been mentioned by the majority of the other reviews, the food isn't really the draw of this place. Still a decent way to pass an evening with friends watching games.</td>\n",
" </tr>\n",
" <tr>\n",
" <th>7</th>\n",
" <td>4 stars</td>\n",
" <td>One of my favorite places for Indian food. Their lunch buffet selection is good and all of their food is delicious! Staff is very friendly as well.</td>\n",
" </tr>\n",
" <tr>\n",
" <th>8</th>\n",
" <td>4 stars</td>\n",
" <td>I loved my sandwich. The Bobbie tastes like Thanksgiving! It's turkey, w/ stuffing, and cranberry on fresh bread. For reals! Extra yum! ...The two dudes that made our sandwiches were cool.</td>\n",
" </tr>\n",
" <tr>\n",
" <th>9</th>\n",
" <td>1 star</td>\n",
" <td>Do Not Go here They Just Butcherd my Cuticles the guy didn't speak any English and the owners wife was yelling at me because I was upset with my service.. Then she locked me in the place so I called the police on her and her husband.. They Tryed to Chase Me And Then Locked Me inside.. And I even payed her in full and she didn't even finish her job Because she was yelling so loud in the place.. DO NOT WASTE YOUR TIME AND MONEY..LESSON LEARNED NEVER TO GO TO PLACES LIKE THIS EVER AGAIN..</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_random_elements(dataset[\"train\"])"
]
},
{
"cell_type": "markdown",
"id": "c9df7cd0-23cd-458f-b2b5-f025c3b9fe62",
"metadata": {},
"source": [
"## Preprocessing the Data\n",
"\n",
"With the dataset downloaded locally, use a Tokenizer to process the text; for inputs of varying length, padding and truncation strategies can be applied.\n",
"\n",
"The Datasets `map` method supports applying a preprocessing function over the entire dataset in one pass.\n",
"\n",
"Below, the pad-to-max-length strategy is used to process the whole dataset:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "8bf2b342-e1dd-4ab6-ad57-28eb2513ae38",
"metadata": {},
"outputs": [],
"source": [
"from transformers import AutoTokenizer\n",
"\n",
"tokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\")\n",
"\n",
"\n",
"def tokenize_function(examples):\n",
" return tokenizer(examples[\"text\"], padding=\"max_length\", truncation=True)\n",
"\n",
"\n",
"tokenized_datasets = dataset.map(tokenize_function, batched=True)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "47a415a8-cd15-4a8c-851b-9b4740ef8271",
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>label</th>\n",
" <th>text</th>\n",
" <th>input_ids</th>\n",
" <th>token_type_ids</th>\n",
" <th>attention_mask</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>3 stars</td>\n",
" <td>Visited Hillstone to give them another try a couple weeks ago. Finally had the time to sit down and write an update.\\n\\nAs usual the food was great. The service, while better this occasion was still a bit off. Two examples:\\n\\n1) Length of time for wait staff to revisit our table for refreshments was fairly lengthy in comparison to the old Houston's.\\n\\n2) When asked about the sauce used in the Thai Tuna Roll, the waitress looked very blank and then said it was in the roll. She replied, tuna and some sort of Thai sauce. Noooo kidding.\\n\\nI will give this place another chance and another star because I really like their food. I will more than likely return. However, I will update with new reviews until it's back to the place I remember.</td>\n",
" <td>[101, 159, 26868, 1906, 5377, 4793, 1106, 1660, 1172, 1330, 2222, 170, 2337, 2277, 2403, 119, 4428, 1125, 1103, 1159, 1106, 3465, 1205, 1105, 3593, 1126, 11984, 119, 165, 183, 165, 183, 23390, 4400, 1103, 2094, 1108, 1632, 119, 1109, 1555, 117, 1229, 1618, 1142, 6116, 1108, 1253, 170, 2113, 1228, 119, 1960, 5136, 131, 165, 183, 165, 183, 1475, 114, 16758, 1104, 1159, 1111, 3074, 2546, 1106, 1231, 9356, 2875, 1412, 1952, 1111, 1231, 2087, 21298, 4385, 1108, 6751, 12628, 1107, 7577, 1106, 1103, 1385, 4666, 112, 188, 119, 165, 183, 165, 183, 1477, 114, 1332, 1455, 1164, 1103, ...]</td>\n",
" <td>[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...]</td>\n",
" <td>[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...]</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_random_elements(tokenized_datasets[\"train\"], num_examples=1)"
]
},
{
"cell_type": "markdown",
"id": "1c33d153-f729-4f04-972c-a764c1cbbb8b",
"metadata": {},
"source": [
"### Data Sampling\n",
"\n",
"Use 1,000 samples to demonstrate small-scale training on BERT (with the PyTorch Trainer).\n",
"\n",
"The `shuffle()` function randomly rearranges the column values. For more control over the algorithm used to shuffle the dataset, you can specify the generator parameter of this function to use a different numpy.random.Generator."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "a17317d8-3c6a-467f-843d-87491f600db1",
"metadata": {},
"outputs": [],
"source": [
"# small_train_dataset = tokenized_datasets[\"train\"].shuffle(seed=42).select(range(1000))\n",
"# small_eval_dataset = tokenized_datasets[\"test\"].shuffle(seed=42).select(range(1000))"
]
},
{
"cell_type": "markdown",
"id": "d3b65d63-2d3a-4a56-bc31-6e88a29e9dec",
"metadata": {},
"source": [
"## Fine-Tuning Configuration\n",
"\n",
"### Loading the BERT Model\n",
"\n",
"The warning tells us that some weights are being discarded (the `vocab_transform` and `vocab_layer_norm` layers) and that some other weights are being randomly initialized (the `pre_classifier` and `classifier` layers). This is perfectly normal when fine-tuning: we are removing the head used for the masked language modeling pretraining task and replacing it with a new head for which we have no pretrained weights, so the library warns us that the model should be fine-tuned before being used for inference, which is exactly what we are about to do."
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "4d2af4df-abd4-4a4b-94b6-b0e7375304ed",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier.weight', 'classifier.bias']\n",
"You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\n"
]
}
],
"source": [
"from transformers import AutoModelForSequenceClassification\n",
"\n",
"model = AutoModelForSequenceClassification.from_pretrained(\"bert-base-cased\", num_labels=5)"
]
},
{
"cell_type": "markdown",
"id": "b44014df-b52c-4c72-9e9f-54424725a473",
"metadata": {},
"source": [
"### Training Hyperparameters (TrainingArguments)\n",
"\n",
"Full list of parameters and their defaults: https://huggingface.co/docs/transformers/v4.36.1/en/main_classes/trainer#transformers.TrainingArguments\n",
"\n",
"Source definition: https://github.com/huggingface/transformers/blob/v4.36.1/src/transformers/training_args.py#L161\n",
"\n",
"**The most important setting: the model checkpoint output path (output_dir)**"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "98c01d5c-de72-4ff0-b11d-e07ac5346888",
"metadata": {},
"outputs": [],
"source": [
"# from transformers import TrainingArguments\n",
"\n",
"# model_dir = \"models/bert-base-cased\"\n",
"\n",
"# # logging_steps defaults to 500; given our training data and step size, set it to 100\n",
"# training_args = TrainingArguments(output_dir=f\"{model_dir}/test_trainer\",\n",
"# logging_dir=f\"{model_dir}/test_trainer/runs\",\n",
"# logging_steps=100)\n",
"# # print the full hyperparameter configuration\n",
"# print(training_args)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0ce03480-3aaa-48ea-a0c6-a177b8d8e34f",
"metadata": {
"collapsed": true,
"jupyter": {
"outputs_hidden": true
}
},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"id": "7ebd3365-d359-4ab4-a300-4717590cc240",
"metadata": {},
"source": [
"### Evaluation Metrics During Training (Evaluate)\n",
"\n",
"The **[Hugging Face Evaluate library](https://huggingface.co/docs/evaluate/index)** gives you access, with a single line of code, to dozens of evaluation methods across different domains (natural language processing, computer vision, reinforcement learning, and more). Currently supported **full list of metrics: https://huggingface.co/evaluate-metric**\n",
"\n",
"The Trainer does not automatically evaluate model performance during training, so we need to pass it a function that computes and reports metrics.\n",
"\n",
"The Evaluate library provides a simple accuracy function, which you can load with `evaluate.load`:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "2a8ef138-5bf2-41e5-8c68-df8e11f4e98f",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import evaluate\n",
"\n",
"metric = evaluate.load(\"accuracy\")"
]
},
{
"cell_type": "markdown",
"id": "70d406c0-56d0-4a54-9c6c-e126ab7f5254",
"metadata": {},
"source": [
"\n",
"Next, call the `compute` function to calculate the accuracy of the predictions.\n",
"\n",
"Before passing the predictions to compute, we need to convert the logits into predictions (**all Transformers models return logits**)."
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "f46d2e59-1ebf-43d2-bc86-6b57a4d24d19",
"metadata": {},
"outputs": [],
"source": [
"def compute_metrics(eval_pred):\n",
" logits, labels = eval_pred\n",
" predictions = np.argmax(logits, axis=-1)\n",
" return metric.compute(predictions=predictions, references=labels)"
]
},
{
"cell_type": "markdown",
"id": "e2feba67-9ca9-4793-9a15-3eaa426df2a1",
"metadata": {},
"source": [
"#### Monitoring Metrics During Training\n",
"\n",
"Typically, to monitor how evaluation metrics change during training, we can set the `evaluation_strategy` parameter in `TrainingArguments` so that metrics are reported at the end of each epoch."
]
},
{
"cell_type": "code",
"execution_count": 62,
"id": "afaaee18-4986-4e39-8ad9-b8d413ab4cd1",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": ""
},
"tags": []
},
"outputs": [],
"source": [
"from transformers import TrainingArguments, Trainer\n",
"model_dir = \"models/bert-base-cased\"\n",
"batch_size = 14\n",
"\n",
"training_args = TrainingArguments(\n",
" output_dir=f\"{model_dir}/test_trainer\",\n",
" evaluation_strategy=\"epoch\", \n",
" logging_dir=f\"{model_dir}/test_trainer/runs\",\n",
" logging_steps=500,\n",
" save_total_limit=3,\n",
" per_device_train_batch_size=batch_size,\n",
" per_device_eval_batch_size=batch_size,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "d47d6981-e444-4c0f-a7cb-dd7f2ba8df12",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": ""
},
"tags": []
},
"source": [
"## Starting Training\n",
"\n",
"### Instantiating the Trainer\n",
"\n",
"`kernel version` issue: it does not affect running this example for now"
]
},
{
"cell_type": "code",
"execution_count": 63,
"id": "ca1d12ac-89dc-4c30-8282-f859724c0062",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": ""
},
"tags": []
},
"outputs": [],
"source": [
"small_train_dataset = tokenized_datasets[\"train\"].shuffle(seed=42).select(range(1000))\n",
"small_eval_dataset = tokenized_datasets[\"test\"].shuffle(seed=42).select(range(1000))\n",
"\n",
"trainer = Trainer(\n",
" model=model,\n",
" args=training_args,\n",
" train_dataset=tokenized_datasets[\"train\"],\n",
" eval_dataset=small_eval_dataset,\n",
" compute_metrics=compute_metrics,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 64,
"id": "9b3c069d-a0dc-4f43-aea0-6cb8799643f3",
"metadata": {},
"outputs": [],
"source": [
"# trainer.args"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "449eb845-cff7-40ba-8915-38de79248840",
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"id": "a833e0db-1168-4a3c-8b75-bfdcef8c5157",
"metadata": {},
"source": [
"## Monitoring GPU Usage with nvidia-smi\n",
"\n",
"To watch GPU usage in real time, use the `watch` command to poll: `watch -n 1 nvidia-smi`:\n",
"\n",
"```shell\n",
"Every 1.0s: nvidia-smi Wed Dec 20 14:37:41 2023\n",
"\n",
"Wed Dec 20 14:37:41 2023\n",
"+---------------------------------------------------------------------------------------+\n",
"| NVIDIA-SMI 535.129.03 Driver Version: 535.129.03 CUDA Version: 12.2 |\n",
"|-----------------------------------------+----------------------+----------------------+\n",
"| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |\n",
"| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |\n",
"| | | MIG M. |\n",
"|=========================================+======================+======================|\n",
"| 0 Tesla T4 Off | 00000000:00:0D.0 Off | 0 |\n",
"| N/A 64C P0 69W / 70W | 6665MiB / 15360MiB | 98% Default |\n",
"| | | N/A |\n",
"+-----------------------------------------+----------------------+----------------------+\n",
"\n",
"+---------------------------------------------------------------------------------------+\n",
"| Processes: |\n",
"| GPU GI CI PID Type Process name GPU Memory |\n",
"| ID ID Usage |\n",
"|=======================================================================================|\n",
"| 0 N/A N/A 18395 C /root/miniconda3/bin/python 6660MiB |\n",
"+---------------------------------------------------------------------------------------+\n",
"```"
]
},
653
+ {
654
+ "cell_type": "code",
655
+ "execution_count": 65,
656
+ "id": "accfe921-471d-481a-96da-c491cdebad0c",
657
+ "metadata": {
658
+ "editable": true,
659
+ "slideshow": {
660
+ "slide_type": ""
661
+ },
662
+ "tags": []
663
+ },
664
+ "outputs": [
665
+ {
666
+ "data": {
667
+ "text/html": [
668
+ "\n",
669
+ " <div>\n",
670
+ " \n",
671
+ " <progress value='46431' max='46431' style='width:300px; height:20px; vertical-align: middle;'></progress>\n",
672
+ " [46431/46431 17:33:04, Epoch 3/3]\n",
673
+ " </div>\n",
674
+ " <table border=\"1\" class=\"dataframe\">\n",
675
+ " <thead>\n",
676
+ " <tr style=\"text-align: left;\">\n",
677
+ " <th>Epoch</th>\n",
678
+ " <th>Training Loss</th>\n",
679
+ " <th>Validation Loss</th>\n",
680
+ " <th>Accuracy</th>\n",
681
+ " </tr>\n",
682
+ " </thead>\n",
683
+ " <tbody>\n",
684
+ " <tr>\n",
685
+ " <td>1</td>\n",
686
+ " <td>0.727000</td>\n",
687
+ " <td>0.694410</td>\n",
688
+ " <td>0.703000</td>\n",
689
+ " </tr>\n",
690
+ " <tr>\n",
691
+ " <td>2</td>\n",
692
+ " <td>0.633200</td>\n",
693
+ " <td>0.691635</td>\n",
694
+ " <td>0.710000</td>\n",
695
+ " </tr>\n",
696
+ " <tr>\n",
697
+ " <td>3</td>\n",
698
+ " <td>0.528600</td>\n",
699
+ " <td>0.732436</td>\n",
700
+ " <td>0.711000</td>\n",
701
+ " </tr>\n",
702
+ " </tbody>\n",
703
+ "</table><p>"
704
+ ],
705
+ "text/plain": [
706
+ "<IPython.core.display.HTML object>"
707
+ ]
708
+ },
709
+ "metadata": {},
710
+ "output_type": "display_data"
711
+ },
712
+ {
713
+ "name": "stderr",
714
+ "output_type": "stream",
715
+ "text": [
716
+ "/usr/local/lib/python3.9/dist-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.\n",
+ " warnings.warn('Was asked to gather along dimension 0, but all '\n",
+ "IOPub message rate exceeded.\n",
+ "The Jupyter server will temporarily stop sending output\n",
+ "to the client in order to avoid crashing it.\n",
+ "To change this limit, set the config variable\n",
+ "`--ServerApp.iopub_msg_rate_limit`.\n",
+ "\n",
+ "Current values:\n",
+ "ServerApp.iopub_msg_rate_limit=1000.0 (msgs/sec)\n",
+ "ServerApp.rate_limit_window=3.0 (secs)\n",
+ "\n",
+ "/usr/local/lib/python3.9/dist-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.\n",
+ " warnings.warn('Was asked to gather along dimension 0, but all '\n"
+ ]
+ },
+ {
+ "data": {
+ "text/plain": [
+ "TrainOutput(global_step=46431, training_loss=0.6399251460484753, metrics={'train_runtime': 63185.9462, 'train_samples_per_second': 30.861, 'train_steps_per_second': 0.735, 'total_flos': 5.130803778048e+17, 'train_loss': 0.6399251460484753, 'epoch': 3.0})"
+ ]
+ },
+ "execution_count": 65,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "trainer.train(False)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 66,
+ "id": "6d581099-37a4-4470-b051-1ada38554089",
+ "metadata": {
+ "editable": true,
+ "slideshow": {
+ "slide_type": ""
+ },
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "small_test_dataset = tokenized_datasets[\"test\"].shuffle(seed=64).select(range(100))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 67,
+ "id": "ffb47eab-1370-491e-8a84-6d5347a350b2",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "/usr/local/lib/python3.9/dist-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.\n",
+ " warnings.warn('Was asked to gather along dimension 0, but all '\n"
+ ]
+ },
+ {
+ "data": {
+ "text/html": [
+ "\n",
+ " <div>\n",
+ " \n",
+ " <progress value='3' max='3' style='width:300px; height:20px; vertical-align: middle;'></progress>\n",
+ " [3/3 00:00]\n",
+ " </div>\n",
+ " "
+ ],
+ "text/plain": [
+ "<IPython.core.display.HTML object>"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/plain": [
+ "{'eval_loss': 0.8645618557929993,\n",
+ " 'eval_accuracy': 0.65,\n",
+ " 'eval_runtime': 1.3182,\n",
+ " 'eval_samples_per_second': 75.861,\n",
+ " 'eval_steps_per_second': 2.276,\n",
+ " 'epoch': 3.0}"
+ ]
+ },
+ "execution_count": 67,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "trainer.evaluate(small_test_dataset)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "27a55686-7c43-4ab8-a5cd-0e77f14c7c52",
+ "metadata": {},
+ "source": [
+ "### Save the model and the training state\n",
+ "\n",
+ "- Use `trainer.save_model` to save the model; it can later be reloaded with `from_pretrained()`\n",
+ "- Use `trainer.save_state` to save the training state"
+ ]
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 68,
+ "id": "ad0cbc14-9ef7-450f-a1a3-4f92b6486f41",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "trainer.save_model(f\"{model_dir}/finetuned-trainer\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 69,
+ "id": "badf5868-2847-439d-a73e-42d1cca67b5e",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "trainer.save_state()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "61828934-01da-4fc3-9e75-8d754c25dfbc",
+ "metadata": {},
+ "source": [
+ "## Homework: train on the full YelpReviewFull dataset and see how high the accuracy can go"
+ ]
965
+ },
966
+ {
967
+ "cell_type": "code",
968
+ "execution_count": 74,
969
+ "id": "6ee2580a-7a5a-46ae-a28b-b41e9e838eb1",
970
+ "metadata": {},
971
+ "outputs": [
972
+ {
973
+ "name": "stderr",
974
+ "output_type": "stream",
975
+ "text": [
976
+ "model.safetensors: 100%|██████████| 433M/433M [00:15<00:00, 28.0MB/s] \n"
977
+ ]
978
+ },
979
+ {
980
+ "data": {
981
+ "text/plain": [
982
+ "CommitInfo(commit_url='https://huggingface.co/yqzhangjx/bert-base-cased-for-yelp/commit/ef8247a2eb2c3e93a70f0198591833256f6d197c', commit_message='Upload BertForSequenceClassification', commit_description='', oid='ef8247a2eb2c3e93a70f0198591833256f6d197c', pr_url=None, pr_revision=None, pr_num=None)"
983
+ ]
984
+ },
985
+ "execution_count": 74,
986
+ "metadata": {},
987
+ "output_type": "execute_result"
988
+ }
989
+ ],
990
+ "source": [
991
+ "model.push_to_hub(\"yqzhangjx/bert-base-cased-for-yelp\", token=\"XXX\")"
992
+ ]
993
+ },
994
+ {
995
+ "cell_type": "code",
996
+ "execution_count": null,
997
+ "id": "478f8d8e-2597-4a6c-a84c-d66e3d231e1d",
998
+ "metadata": {},
999
+ "outputs": [],
1000
+ "source": []
1001
+ },
1002
+ {
1003
+ "cell_type": "code",
1004
+ "execution_count": null,
1005
+ "id": "561af3da-d720-4478-99de-b72d7419fb37",
1006
+ "metadata": {},
1007
+ "outputs": [],
1008
+ "source": []
1009
+ }
1010
+ ],
1011
+ "metadata": {
1012
+ "kernelspec": {
1013
+ "display_name": "Python 3 (ipykernel)",
1014
+ "language": "python",
1015
+ "name": "python3"
1016
+ },
1017
+ "language_info": {
1018
+ "codemirror_mode": {
1019
+ "name": "ipython",
1020
+ "version": 3
1021
+ },
1022
+ "file_extension": ".py",
1023
+ "mimetype": "text/x-python",
1024
+ "name": "python",
1025
+ "nbconvert_exporter": "python",
1026
+ "pygments_lexer": "ipython3",
1027
+ "version": "3.9.18"
1028
+ }
1029
+ },
1030
+ "nbformat": 4,
1031
+ "nbformat_minor": 5
1032
+ }
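The `trainer.evaluate(small_test_dataset)` cell above reports an `eval_accuracy` key. A minimal sketch of how such a metric function is typically wired into the `Trainer` (via its `compute_metrics` argument), assuming plain argmax accuracy over logits; the function name and toy data here are illustrative, not taken from the notebook:

```python
import numpy as np

def compute_metrics(eval_pred):
    """Trainer-style metric: accuracy from a (logits, labels) pair.

    The Trainer calls compute_metrics with the model's raw logits and
    the gold labels; accuracy is the fraction of argmax predictions
    that match the labels.
    """
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {"accuracy": float((predictions == labels).mean())}

# Toy check: the second example is misclassified, so accuracy is 0.5.
logits = np.array([[2.0, 0.1], [0.3, 0.9]])
labels = np.array([0, 0])
print(compute_metrics((logits, labels)))  # {'accuracy': 0.5}
```

Returning a plain dict keyed by metric name is what lets the Trainer surface the value as `eval_accuracy` in its evaluation output.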