arjunpatel committed on
Commit
91a9511
1 Parent(s): c96bae9

minor updates and example script from huggingface

language_modeling-tf.ipynb ADDED
@@ -0,0 +1,1745 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "markdown",
5
+ "metadata": {
6
+ "id": "X4cRE8IbIrIV"
7
+ },
8
+ "source": [
9
+ "If you're opening this Notebook on colab, you will probably need to install 🤗 Transformers and 🤗 Datasets. Uncomment the following cell and run it."
10
+ ]
11
+ },
12
+ {
13
+ "cell_type": "code",
14
+ "execution_count": 1,
15
+ "metadata": {
16
+ "colab": {
17
+ "base_uri": "https://localhost:8080/",
18
+ "height": 1000
19
+ },
20
+ "id": "MOsHUjgdIrIW",
21
+ "outputId": "f84a093e-147f-470e-aad9-80fb51193c8e"
22
+ },
23
+ "outputs": [],
24
+ "source": [
25
+ "#! pip install transformers\n",
26
+ "#! pip install datasets\n",
27
+ "#! pip install huggingface_hub"
28
+ ]
29
+ },
30
+ {
31
+ "cell_type": "markdown",
32
+ "metadata": {},
33
+ "source": [
34
+ "If you're opening this notebook locally, make sure your environment has an install from the latest version of those libraries.\n",
35
+ "\n",
36
+ "To be able to share your model with the community and generate results like the one shown in the picture below via the inference API, there are a few more steps to follow.\n",
37
+ "\n",
38
+ "First you have to store your authentication token from the Hugging Face website (sign up [here](https://huggingface.co/join) if you haven't already!) then run the following cell and input your token:"
39
+ ]
40
+ },
41
+ {
42
+ "cell_type": "code",
43
+ "execution_count": 2,
44
+ "metadata": {},
45
+ "outputs": [
46
+ {
47
+ "data": {
48
+ "application/vnd.jupyter.widget-view+json": {
49
+ "model_id": "9dbff25b935149db8796a354c89fdcc3",
50
+ "version_major": 2,
51
+ "version_minor": 0
52
+ },
53
+ "text/plain": [
54
+ "VBox(children=(HTML(value='<center>\\n<img src=https://huggingface.co/front/assets/huggingface_logo-noborder.sv…"
55
+ ]
56
+ },
57
+ "metadata": {},
58
+ "output_type": "display_data"
59
+ }
60
+ ],
61
+ "source": [
62
+ "from huggingface_hub import notebook_login\n",
63
+ "\n",
64
+ "notebook_login()"
65
+ ]
66
+ },
67
+ {
68
+ "cell_type": "markdown",
69
+ "metadata": {},
70
+ "source": [
71
+ "Then you need to install Git-LFS and setup Git if you haven't already. Uncomment the following instructions and adapt with your name and email:"
72
+ ]
73
+ },
74
+ {
75
+ "cell_type": "code",
76
+ "execution_count": null,
77
+ "metadata": {},
78
+ "outputs": [],
79
+ "source": [
80
+ "# !apt install git-lfs\n",
81
+ "# !git config --global user.email \"you@example.com\"\n",
82
+ "# !git config --global user.name \"Your Name\""
83
+ ]
84
+ },
85
+ {
86
+ "cell_type": "markdown",
87
+ "metadata": {},
88
+ "source": [
89
+ "Make sure your version of Transformers is at least 4.16.0 since some of the functionality we use was only introduced in that version."
90
+ ]
91
+ },
92
+ {
93
+ "cell_type": "code",
94
+ "execution_count": 3,
95
+ "metadata": {},
96
+ "outputs": [
97
+ {
98
+ "name": "stdout",
99
+ "output_type": "stream",
100
+ "text": [
101
+ "4.18.0\n"
102
+ ]
103
+ }
104
+ ],
105
+ "source": [
106
+ "import transformers\n",
107
+ "\n",
108
+ "print(transformers.__version__)"
109
+ ]
110
+ },
111
+ {
112
+ "cell_type": "markdown",
113
+ "metadata": {
114
+ "id": "HFASsisvIrIb"
115
+ },
116
+ "source": [
117
+ "You can find a script version of this notebook to fine-tune your model in a distributed fashion using multiple GPUs or TPUs [here](https://github.com/huggingface/transformers/tree/master/examples/language-modeling)."
118
+ ]
119
+ },
120
+ {
121
+ "cell_type": "markdown",
122
+ "metadata": {
123
+ "id": "a3KD3WXU3l-O"
124
+ },
125
+ "source": [
126
+ "# Fine-tuning a language model"
127
+ ]
128
+ },
129
+ {
130
+ "cell_type": "markdown",
131
+ "metadata": {
132
+ "id": "JAscNNUD3l-P"
133
+ },
134
+ "source": [
135
+ "In this notebook, we'll see how to fine-tune one of the [🤗 Transformers](https://github.com/huggingface/transformers) model on a language modeling task. We will cover two types of language modeling tasks which are:\n",
136
+ "\n",
137
+ "- Causal language modeling: the model has to predict the next token in the sentence (so the labels are the same as the inputs shifted to the right). To make sure the model does not cheat, its attention computations are masked so that tokens cannot attend to tokens to their right, as this would result in label leakage.\n",
138
+ "\n",
139
+ "![Widget inference representing the causal language modeling task](images/causal_language_modeling.png)\n",
140
+ "\n",
141
+ "- Masked language modeling: the model has to predict some tokens that are masked in the input. It still has access to the whole sentence, so it can use the tokens before and after the masked tokens to predict their value.\n",
142
+ "\n",
143
+ "![Widget inference representing the masked language modeling task](images/masked_language_modeling.png)\n",
144
+ "\n",
145
+ "We will see how to easily load and preprocess the dataset for each one of those tasks, and how to use Keras to fine-tune a model on it.\n",
146
+ "\n",
147
+ "A script version of this notebook you can directly run on a distributed environment or on TPU is available in our [examples folder](https://github.com/huggingface/transformers/tree/master/examples)."
148
+ ]
149
+ },
150
+ {
151
+ "cell_type": "markdown",
152
+ "metadata": {
153
+ "id": "1r_n9OWV3l-Q"
154
+ },
155
+ "source": [
156
+ "## Preparing the dataset"
157
+ ]
158
+ },
159
+ {
160
+ "cell_type": "markdown",
161
+ "metadata": {
162
+ "id": "kswRMhPc3l-Q"
163
+ },
164
+ "source": [
165
+ "For each of those tasks, we will use the [Wikitext 2]() dataset as an example. You can load it very easily with the 🤗 Datasets library."
166
+ ]
167
+ },
168
+ {
169
+ "cell_type": "code",
170
+ "execution_count": 4,
171
+ "metadata": {
172
+ "id": "n2ZRs1cL3l-R",
173
+ "outputId": "11151c56-be90-4d11-e7df-db85e745ca5c"
174
+ },
175
+ "outputs": [
176
+ {
177
+ "name": "stderr",
178
+ "output_type": "stream",
179
+ "text": [
180
+ "Reusing dataset wikitext (/Users/ArjunPatel/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126)\n"
181
+ ]
182
+ },
183
+ {
184
+ "data": {
185
+ "application/vnd.jupyter.widget-view+json": {
186
+ "model_id": "c4380fbf717e4b7aa0c6a7512335950c",
187
+ "version_major": 2,
188
+ "version_minor": 0
189
+ },
190
+ "text/plain": [
191
+ " 0%| | 0/3 [00:00<?, ?it/s]"
192
+ ]
193
+ },
194
+ "metadata": {},
195
+ "output_type": "display_data"
196
+ }
197
+ ],
198
+ "source": [
199
+ "from datasets import load_dataset\n",
200
+ "\n",
201
+ "datasets = load_dataset(\"wikitext\", \"wikitext-2-raw-v1\")"
202
+ ]
203
+ },
204
+ {
205
+ "cell_type": "markdown",
206
+ "metadata": {
207
+ "id": "f1-9jepM3l-W"
208
+ },
209
+ "source": [
210
+ "You can replace the dataset above with any dataset hosted on [the hub](https://huggingface.co/datasets) or use your own files. Just uncomment the following cell and replace the paths with your own input files:"
211
+ ]
212
+ },
213
+ {
214
+ "cell_type": "code",
215
+ "execution_count": 5,
216
+ "metadata": {
217
+ "id": "uxSaGa_l3l-W"
218
+ },
219
+ "outputs": [],
220
+ "source": [
221
+ "# datasets = load_dataset(\"text\", data_files={\"train\": path_to_train.txt, \"validation\": path_to_validation.txt}"
222
+ ]
223
+ },
224
+ {
225
+ "cell_type": "markdown",
226
+ "metadata": {
227
+ "id": "jY1SwIrY3l-a"
228
+ },
229
+ "source": [
230
+ "You can also load datasets from a csv or a JSON file, see the [full documentation](https://huggingface.co/docs/datasets/loading_datasets.html#from-local-files) for more information."
231
+ ]
232
+ },
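+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "For example, loading from CSV or JSON looks like the following sketch. The file names here are hypothetical placeholders, so adapt the paths to your own data:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Hedged sketches with placeholder file names; adapt to your own data\n",
+ "# datasets = load_dataset(\"csv\", data_files={\"train\": \"my_train.csv\", \"validation\": \"my_valid.csv\"})\n",
+ "# datasets = load_dataset(\"json\", data_files={\"train\": \"my_train.json\", \"validation\": \"my_valid.json\"})"
+ ]
+ },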
233
+ {
234
+ "cell_type": "markdown",
235
+ "metadata": {
236
+ "id": "u3EtYfeHIrIz"
237
+ },
238
+ "source": [
239
+ "To access an actual element, you need to select a split first, then give an index:"
240
+ ]
241
+ },
242
+ {
243
+ "cell_type": "code",
244
+ "execution_count": 6,
245
+ "metadata": {
246
+ "id": "X6HrpprwIrIz",
247
+ "outputId": "d7670bc0-42e4-4c09-8a6a-5c018ded7d95"
248
+ },
249
+ "outputs": [
250
+ {
251
+ "data": {
252
+ "text/plain": [
253
+ "{'text': ' The game \\'s battle system , the BliTZ system , is carried over directly from Valkyira Chronicles . During missions , players select each unit using a top @-@ down perspective of the battlefield map : once a character is selected , the player moves the character around the battlefield in third @-@ person . A character can only act once per @-@ turn , but characters can be granted multiple turns at the expense of other characters \\' turns . Each character has a field and distance of movement limited by their Action Gauge . Up to nine characters can be assigned to a single mission . During gameplay , characters will call out if something happens to them , such as their health points ( HP ) getting low or being knocked out by enemy attacks . Each character has specific \" Potentials \" , skills unique to each character . They are divided into \" Personal Potential \" , which are innate skills that remain unaltered unless otherwise dictated by the story and can either help or impede a character , and \" Battle Potentials \" , which are grown throughout the game and always grant boons to a character . To learn Battle Potentials , each character has a unique \" Masters Table \" , a grid @-@ based skill table that can be used to acquire and link different skills . Characters also have Special Abilities that grant them temporary boosts on the battlefield : Kurt can activate \" Direct Command \" and move around the battlefield without depleting his Action Point gauge , the character Reila can shift into her \" Valkyria Form \" and become invincible , while Imca can target multiple enemy units with her heavy weapon . \\n'}"
254
+ ]
255
+ },
256
+ "execution_count": 6,
257
+ "metadata": {},
258
+ "output_type": "execute_result"
259
+ }
260
+ ],
261
+ "source": [
262
+ "datasets[\"train\"][10]"
263
+ ]
264
+ },
265
+ {
266
+ "cell_type": "markdown",
267
+ "metadata": {
268
+ "id": "WHUmphG3IrI3"
269
+ },
270
+ "source": [
271
+ "To get a sense of what the data looks like, the following function will show some examples picked randomly in the dataset."
272
+ ]
273
+ },
274
+ {
275
+ "cell_type": "code",
276
+ "execution_count": 7,
277
+ "metadata": {
278
+ "id": "ur5sNUcZ3l-g"
279
+ },
280
+ "outputs": [],
281
+ "source": [
282
+ "from datasets import ClassLabel\n",
283
+ "import random\n",
284
+ "import pandas as pd\n",
285
+ "from IPython.display import display, HTML\n",
286
+ "\n",
287
+ "\n",
288
+ "def show_random_elements(dataset, num_examples=10):\n",
289
+ " assert num_examples <= len(\n",
290
+ " dataset\n",
291
+ " ), \"Can't pick more elements than there are in the dataset.\"\n",
292
+ " picks = []\n",
293
+ " for _ in range(num_examples):\n",
294
+ " pick = random.randint(0, len(dataset) - 1)\n",
295
+ " while pick in picks:\n",
296
+ " pick = random.randint(0, len(dataset) - 1)\n",
297
+ " picks.append(pick)\n",
298
+ "\n",
299
+ " df = pd.DataFrame(dataset[picks])\n",
300
+ " for column, typ in dataset.features.items():\n",
301
+ " if isinstance(typ, ClassLabel):\n",
302
+ " df[column] = df[column].transform(lambda i: typ.names[i])\n",
303
+ " display(HTML(df.to_html()))"
304
+ ]
305
+ },
306
+ {
307
+ "cell_type": "code",
308
+ "execution_count": 8,
309
+ "metadata": {
310
+ "id": "1Uk8NROQ3l-k",
311
+ "outputId": "a822dcec-51e3-4dba-c73c-dba9e0301726"
312
+ },
313
+ "outputs": [
314
+ {
315
+ "data": {
316
+ "text/html": [
317
+ "<table border=\"1\" class=\"dataframe\">\n",
318
+ " <thead>\n",
319
+ " <tr style=\"text-align: right;\">\n",
320
+ " <th></th>\n",
321
+ " <th>text</th>\n",
322
+ " </tr>\n",
323
+ " </thead>\n",
324
+ " <tbody>\n",
325
+ " <tr>\n",
326
+ " <th>0</th>\n",
327
+ " <td>Today , Lady Rosebery is a mere footnote in the long history of her husband 's family , rather as Consuelo Vanderbilt is regarded in the Spencer @-@ Churchill family . Her husband , once one of the \" most celebrated figures in Britain , \" is a minor figure in British history . Thus , Hannah , Countess of Rosebery , in her day celebrated in the worlds of politics , philanthropy , and high society , is largely unknown and forgotten . \\n</td>\n",
328
+ " </tr>\n",
329
+ " <tr>\n",
330
+ " <th>1</th>\n",
331
+ " <td>Agujaceratops - ( Texas , USA ) \\n</td>\n",
332
+ " </tr>\n",
333
+ " <tr>\n",
334
+ " <th>2</th>\n",
335
+ " <td>The city of Galveston is situated on Galveston Island , a barrier island off the Texas Gulf coast near the mainland coast . Made up of mostly sand @-@ sized particles and smaller amounts of finer mud sediments and larger gravel @-@ sized sediments , the island is unstable , affected by water and weather , and can shift its boundaries through erosion . \\n</td>\n",
336
+ " </tr>\n",
337
+ " <tr>\n",
338
+ " <th>3</th>\n",
339
+ " <td>Although ceratopsians are generally considered herbivorous , a few paleontologists , such as Darren Naish and Mark Witton , have speculated online that at least some ceratopsians may have been opportunistically omnivorous . \\n</td>\n",
340
+ " </tr>\n",
341
+ " <tr>\n",
342
+ " <th>4</th>\n",
343
+ " <td></td>\n",
344
+ " </tr>\n",
345
+ " <tr>\n",
346
+ " <th>5</th>\n",
347
+ " <td></td>\n",
348
+ " </tr>\n",
349
+ " <tr>\n",
350
+ " <th>6</th>\n",
351
+ " <td>= = = Menu , coup and North Vietnamese offensive = = = \\n</td>\n",
352
+ " </tr>\n",
353
+ " <tr>\n",
354
+ " <th>7</th>\n",
355
+ " <td>It was for his leadership and bravery during these actions that Andrew was awarded the Victoria Cross ( VC ) at the age of 20 . The citation read as follows : \\n</td>\n",
356
+ " </tr>\n",
357
+ " <tr>\n",
358
+ " <th>8</th>\n",
359
+ " <td>= = Death of Clement XIII = = \\n</td>\n",
360
+ " </tr>\n",
361
+ " <tr>\n",
362
+ " <th>9</th>\n",
363
+ " <td>= = = In the media = = = \\n</td>\n",
364
+ " </tr>\n",
365
+ " </tbody>\n",
366
+ "</table>"
367
+ ],
368
+ "text/plain": [
369
+ "<IPython.core.display.HTML object>"
370
+ ]
371
+ },
372
+ "metadata": {},
373
+ "output_type": "display_data"
374
+ }
375
+ ],
376
+ "source": [
377
+ "show_random_elements(datasets[\"train\"])"
378
+ ]
379
+ },
380
+ {
381
+ "cell_type": "markdown",
382
+ "metadata": {
383
+ "id": "CKerdF353l-o"
384
+ },
385
+ "source": [
386
+ "As we can see, some of the texts are a full paragraph of a Wikipedia article while others are just titles or empty lines."
387
+ ]
388
+ },
389
+ {
390
+ "cell_type": "markdown",
391
+ "metadata": {
392
+ "id": "JEA1ju653l-p"
393
+ },
394
+ "source": [
395
+ "## Causal Language modeling"
396
+ ]
397
+ },
398
+ {
399
+ "cell_type": "markdown",
400
+ "metadata": {
401
+ "id": "v5GTGKZS3l-q"
402
+ },
403
+ "source": [
404
+ "For causal language modeling (CLM) we are going to take all the texts in our dataset, tokenize them and concatenate them. Then we will split them into examples of a fixed sequence length. This way the model will receive chunks of contiguous text that may look like:\n",
405
+ "```\n",
406
+ "part of text 1\n",
407
+ "```\n",
408
+ "or \n",
409
+ "```\n",
410
+ "end of text 1 [BOS_TOKEN] beginning of text 2\n",
411
+ "```\n",
412
+ "depending on whether they span multiple original texts or not. The labels will be the same as the inputs, shifted to the right.\n",
413
+ "\n",
414
+ "We will use the [`distilgpt2`](https://huggingface.co/distilgpt2) model for this example. You can pick any of the checkpoints listed [here](https://huggingface.co/models?filter=causal-lm) instead:"
415
+ ]
416
+ },
417
+ {
418
+ "cell_type": "code",
419
+ "execution_count": 9,
420
+ "metadata": {
421
+ "id": "-WGBCO343l-q"
422
+ },
423
+ "outputs": [],
424
+ "source": [
425
+ "model_checkpoint = \"distilgpt2\"\n"
426
+ ]
427
+ },
428
+ {
429
+ "cell_type": "markdown",
430
+ "metadata": {
431
+ "id": "5io6fY_d3l-u"
432
+ },
433
+ "source": [
434
+ "To tokenize all our texts with the same vocabulary that was used when training the model, we have to download a pretrained tokenizer. This is all done by the `AutoTokenizer` class:"
435
+ ]
436
+ },
437
+ {
438
+ "cell_type": "code",
439
+ "execution_count": 10,
440
+ "metadata": {
441
+ "id": "iAYlS40Z3l-v"
442
+ },
443
+ "outputs": [],
444
+ "source": [
445
+ "from transformers import AutoTokenizer\n",
446
+ "\n",
447
+ "tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)"
448
+ ]
449
+ },
450
+ {
451
+ "cell_type": "markdown",
452
+ "metadata": {
453
+ "id": "rpOiBrJ13l-y"
454
+ },
455
+ "source": [
456
+ "We can now call the tokenizer on all our texts. This is very simple, using the [`map`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map) method from the Datasets library. First we define a function that calls the tokenizer on our texts:"
457
+ ]
458
+ },
459
+ {
460
+ "cell_type": "code",
461
+ "execution_count": 11,
462
+ "metadata": {
463
+ "id": "lS2m25YM3l-z"
464
+ },
465
+ "outputs": [],
466
+ "source": [
467
+ "def tokenize_function(examples):\n",
468
+ " return tokenizer(examples[\"text\"])"
469
+ ]
470
+ },
471
+ {
472
+ "cell_type": "markdown",
473
+ "metadata": {
474
+ "id": "M9xVAa3s3l-2"
475
+ },
476
+ "source": [
477
+ "Then we apply it to all the splits in our `datasets` object, using `batched=True` and 4 processes to speed up the preprocessing. We won't need the `text` column afterward, so we discard it."
478
+ ]
479
+ },
480
+ {
481
+ "cell_type": "code",
482
+ "execution_count": 12,
483
+ "metadata": {
484
+ "id": "NVAO0H8u3l-3",
485
+ "outputId": "30d88b8a-e353-4e13-f709-8e5e06ef747b"
486
+ },
487
+ "outputs": [
488
+ {
489
+ "name": "stdout",
490
+ "output_type": "stream",
491
+ "text": [
492
+ " "
493
+ ]
494
+ },
495
+ {
496
+ "name": "stderr",
497
+ "output_type": "stream",
498
+ "text": [
499
+ "Loading cached processed dataset at /Users/ArjunPatel/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-61391423a2766fc9.arrow\n"
500
+ ]
501
+ },
502
+ {
503
+ "name": "stdout",
504
+ "output_type": "stream",
505
+ "text": [
506
+ " "
507
+ ]
508
+ },
509
+ {
510
+ "name": "stderr",
511
+ "output_type": "stream",
512
+ "text": [
513
+ "Loading cached processed dataset at /Users/ArjunPatel/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-5ceac15e651919d2.arrow\n"
514
+ ]
515
+ },
516
+ {
517
+ "name": "stdout",
518
+ "output_type": "stream",
519
+ "text": [
520
+ " "
521
+ ]
522
+ },
523
+ {
524
+ "name": "stderr",
525
+ "output_type": "stream",
526
+ "text": [
527
+ "Loading cached processed dataset at /Users/ArjunPatel/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-b81e39451b6b2f7e.arrow\n"
528
+ ]
529
+ },
530
+ {
531
+ "name": "stdout",
532
+ "output_type": "stream",
533
+ "text": [
534
+ " "
535
+ ]
536
+ },
537
+ {
538
+ "name": "stderr",
539
+ "output_type": "stream",
540
+ "text": [
541
+ "Loading cached processed dataset at /Users/ArjunPatel/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-1bcda98ae382df67.arrow\n"
542
+ ]
543
+ },
544
+ {
545
+ "name": "stdout",
546
+ "output_type": "stream",
547
+ "text": [
548
+ " "
549
+ ]
550
+ },
551
+ {
552
+ "name": "stderr",
553
+ "output_type": "stream",
554
+ "text": [
555
+ "Loading cached processed dataset at /Users/ArjunPatel/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-fa4442bf92b4768b.arrow\n"
556
+ ]
557
+ },
558
+ {
559
+ "name": "stdout",
560
+ "output_type": "stream",
561
+ "text": [
562
+ " "
563
+ ]
564
+ },
565
+ {
566
+ "name": "stderr",
567
+ "output_type": "stream",
568
+ "text": [
569
+ "Loading cached processed dataset at /Users/ArjunPatel/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-aa2a4366053b507c.arrow\n"
570
+ ]
571
+ },
572
+ {
573
+ "name": "stdout",
574
+ "output_type": "stream",
575
+ "text": [
576
+ " "
577
+ ]
578
+ },
579
+ {
580
+ "name": "stderr",
581
+ "output_type": "stream",
582
+ "text": [
583
+ "Loading cached processed dataset at /Users/ArjunPatel/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-962e11e2efef61ea.arrow\n"
584
+ ]
585
+ },
586
+ {
587
+ "name": "stdout",
588
+ "output_type": "stream",
589
+ "text": [
590
+ " "
591
+ ]
592
+ },
593
+ {
594
+ "name": "stderr",
595
+ "output_type": "stream",
596
+ "text": [
597
+ "Loading cached processed dataset at /Users/ArjunPatel/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-9a86568f88be8e85.arrow\n"
598
+ ]
599
+ },
600
+ {
601
+ "name": "stdout",
602
+ "output_type": "stream",
603
+ "text": [
604
+ " "
605
+ ]
606
+ },
607
+ {
608
+ "name": "stderr",
609
+ "output_type": "stream",
610
+ "text": [
611
+ "Loading cached processed dataset at /Users/ArjunPatel/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-9f392100036d7e36.arrow\n"
612
+ ]
613
+ },
614
+ {
615
+ "name": "stdout",
616
+ "output_type": "stream",
617
+ "text": [
618
+ " "
619
+ ]
620
+ },
621
+ {
622
+ "name": "stderr",
623
+ "output_type": "stream",
624
+ "text": [
625
+ "Loading cached processed dataset at /Users/ArjunPatel/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-bcce0e8f19f73037.arrow\n"
626
+ ]
627
+ },
628
+ {
629
+ "name": "stdout",
630
+ "output_type": "stream",
631
+ "text": [
632
+ " "
633
+ ]
634
+ },
635
+ {
636
+ "name": "stderr",
637
+ "output_type": "stream",
638
+ "text": [
639
+ "Loading cached processed dataset at /Users/ArjunPatel/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-9776e9836e6e1ee0.arrow\n"
640
+ ]
641
+ },
642
+ {
643
+ "name": "stdout",
644
+ "output_type": "stream",
645
+ "text": [
646
+ " "
647
+ ]
648
+ },
649
+ {
650
+ "name": "stderr",
651
+ "output_type": "stream",
652
+ "text": [
653
+ "Loading cached processed dataset at /Users/ArjunPatel/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-55693ec29a40f3cb.arrow\n"
654
+ ]
655
+ }
656
+ ],
657
+ "source": [
658
+ "tokenized_datasets = datasets.map(\n",
659
+ " tokenize_function, batched=True, num_proc=4, remove_columns=[\"text\"]\n",
660
+ ")"
661
+ ]
662
+ },
663
+ {
664
+ "cell_type": "markdown",
665
+ "metadata": {
666
+ "id": "8qik3J_C3l-7"
667
+ },
668
+ "source": [
669
+ "If we now look at an element of our datasets, we will see the text have been replaced by the `input_ids` the model will need:"
670
+ ]
671
+ },
672
+ {
673
+ "cell_type": "code",
674
+ "execution_count": 13,
675
+ "metadata": {
676
+ "id": "nYv_mcKk3l-7",
677
+ "outputId": "8334734c-0f86-4e18-ec17-4216a2d5dd18"
678
+ },
679
+ "outputs": [
680
+ {
681
+ "data": {
682
+ "text/plain": [
683
+ "{'input_ids': [796, 569, 18354, 7496, 17740, 6711, 796, 220, 198],\n",
684
+ " 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1]}"
685
+ ]
686
+ },
687
+ "execution_count": 13,
688
+ "metadata": {},
689
+ "output_type": "execute_result"
690
+ }
691
+ ],
692
+ "source": [
693
+ "tokenized_datasets[\"train\"][1]"
694
+ ]
695
+ },
696
+ {
697
+ "cell_type": "markdown",
698
+ "metadata": {
699
+ "id": "obvgcXda3l--"
700
+ },
701
+ "source": [
702
+ "Now for the harder part: We need to concatenate all our texts together, and then split the result into chunks of a fixed size, which we will call `block_size`. To do this, we will use the `map` method again, with the option `batched=True`. When we use `batched=True`, the function we pass to `map()` will be passed multiple inputs at once, allowing us to group them into more or fewer examples than we had in the input. This allows us to create our new fixed-length samples.\n",
703
+ "\n",
704
+ "We can use any `block_size` up to the the maximum length our model was pretrained with, which for models in the `gpt2` family is usually something in the range 512-1024. This might be a bit too big to fit in your GPU RAM, though, so let's use something a bit smaller: 128."
705
+ ]
706
+ },
707
+ {
708
+ "cell_type": "code",
709
+ "execution_count": 14,
710
+ "metadata": {
711
+ "id": "DVHs5aCA3l-_"
712
+ },
713
+ "outputs": [],
714
+ "source": [
715
+ "# block_size = tokenizer.model_max_length\n",
716
+ "block_size = 128"
717
+ ]
718
+ },
719
+ {
720
+ "cell_type": "markdown",
721
+ "metadata": {
722
+ "id": "RpNfGiMw3l_A"
723
+ },
724
+ "source": [
725
+ "Then we write the preprocessing function that will group our texts:"
726
+ ]
727
+ },
728
+ {
729
+ "cell_type": "code",
730
+ "execution_count": 15,
731
+ "metadata": {
732
+ "id": "iaAJy5Hu3l_B"
733
+ },
734
+ "outputs": [],
735
+ "source": [
736
+ "def group_texts(examples):\n",
737
+ " # Concatenate all texts.\n",
738
+ " concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}\n",
739
+ " total_length = len(concatenated_examples[list(examples.keys())[0]])\n",
740
+ " # We drop the small remainder, though you could add padding instead if the model supports it\n",
741
+ " # In this, as in all things, we advise you to follow your heart\n",
742
+ " total_length = (total_length // block_size) * block_size\n",
743
+ " # Split by chunks of max_len.\n",
744
+ " result = {\n",
745
+ " k: [t[i : i + block_size] for i in range(0, total_length, block_size)]\n",
746
+ " for k, t in concatenated_examples.items()\n",
747
+ " }\n",
748
+ " result[\"labels\"] = result[\"input_ids\"].copy()\n",
749
+ " return result"
750
+ ]
751
+ },
752
+ {
753
+ "cell_type": "markdown",
754
+ "metadata": {
755
+ "id": "LGJWXtNv3l_C"
756
+ },
757
+ "source": [
758
+ "Note that we duplicate the inputs for our labels, without shifting them, even though we told you the labels need to be shifted! This is because CausalLM models in the 🤗 Transformers library automatically apply right-shifting to the inputs, so we don't need to do it manually.\n",
759
+ "\n",
760
+ "Also note that by default, the `map` method will send a batch of 1,000 examples to be treated by the preprocessing function. So here, we will drop the remainder to make the concatenated tokenized texts a multiple of `block_size` every 1,000 examples. You can adjust this behavior by passing a higher batch size (which will also be processed slower). You can also speed-up the preprocessing by using multiprocessing:"
761
+ ]
762
+ },
763
+ {
764
+ "cell_type": "code",
765
+ "execution_count": 16,
766
+ "metadata": {
767
+ "id": "gXUSfBrq3l_C",
768
+ "outputId": "34e55885-3d8f-4f05-cbdb-706ce56a25f8"
769
+ },
770
+ "outputs": [
771
+ {
772
+ "name": "stdout",
773
+ "output_type": "stream",
774
+ "text": [
775
+ " "
776
+ ]
777
+ },
778
+ {
779
+ "name": "stderr",
780
+ "output_type": "stream",
781
+ "text": [
782
+ "Loading cached processed dataset at /Users/ArjunPatel/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-7b71dd2271728f79.arrow\n"
783
+ ]
784
+ },
785
+ {
786
+ "name": "stdout",
787
+ "output_type": "stream",
788
+ "text": [
789
+ " "
790
+ ]
791
+ },
792
+ {
793
+ "name": "stderr",
794
+ "output_type": "stream",
795
+ "text": [
796
+ "Loading cached processed dataset at /Users/ArjunPatel/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-cee53a8f6793ac14.arrow\n"
797
+ ]
798
+ },
799
+ {
800
+ "name": "stdout",
801
+ "output_type": "stream",
802
+ "text": [
803
+ " "
804
+ ]
805
+ },
806
+ {
807
+ "name": "stderr",
808
+ "output_type": "stream",
809
+ "text": [
810
+ "Loading cached processed dataset at /Users/ArjunPatel/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-03de660721d6e90f.arrow\n"
811
+ ]
812
+ },
813
+ {
814
+ "name": "stdout",
815
+ "output_type": "stream",
816
+ "text": [
817
+ " "
818
+ ]
819
+ },
820
+ {
821
+ "name": "stderr",
822
+ "output_type": "stream",
823
+ "text": [
824
+ "Loading cached processed dataset at /Users/ArjunPatel/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-1aa9f24edffd33bf.arrow\n"
825
+ ]
826
+ },
827
+ {
828
+ "name": "stdout",
829
+ "output_type": "stream",
830
+ "text": [
831
+ " "
832
+ ]
833
+ },
834
+ {
835
+ "name": "stderr",
836
+ "output_type": "stream",
837
+ "text": [
838
+ "Loading cached processed dataset at /Users/ArjunPatel/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-be9266f35a58e0d1.arrow\n"
839
+ ]
840
+ },
841
+ {
842
+ "name": "stdout",
843
+ "output_type": "stream",
844
+ "text": [
845
+ " "
846
+ ]
847
+ },
848
+ {
849
+ "name": "stderr",
850
+ "output_type": "stream",
851
+ "text": [
852
+ "Loading cached processed dataset at /Users/ArjunPatel/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-a6201b62855b0506.arrow\n"
853
+ ]
854
+ },
855
+ {
856
+ "name": "stdout",
857
+ "output_type": "stream",
858
+ "text": [
859
+ " "
860
+ ]
861
+ },
862
+ {
863
+ "name": "stderr",
864
+ "output_type": "stream",
865
+ "text": [
866
+ "Loading cached processed dataset at /Users/ArjunPatel/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-f208c1a35aa5450a.arrow\n"
867
+ ]
868
+ },
869
+ {
870
+ "name": "stdout",
871
+ "output_type": "stream",
872
+ "text": [
873
+ " "
874
+ ]
875
+ },
876
+ {
877
+ "name": "stderr",
878
+ "output_type": "stream",
879
+ "text": [
880
+ "Loading cached processed dataset at /Users/ArjunPatel/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-233fc6217e931151.arrow\n"
881
+ ]
882
+ },
883
+ {
884
+ "name": "stdout",
885
+ "output_type": "stream",
886
+ "text": [
887
+ " "
888
+ ]
889
+ },
890
+ {
891
+ "name": "stderr",
892
+ "output_type": "stream",
893
+ "text": [
894
+ "Loading cached processed dataset at /Users/ArjunPatel/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-865d7a7e5760a6af.arrow\n"
895
+ ]
896
+ },
897
+ {
898
+ "name": "stdout",
899
+ "output_type": "stream",
900
+ "text": [
901
+ " "
902
+ ]
903
+ },
904
+ {
905
+ "name": "stderr",
906
+ "output_type": "stream",
907
+ "text": [
908
+ "Loading cached processed dataset at /Users/ArjunPatel/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-996d57e28a0c3daa.arrow\n"
909
+ ]
910
+ },
911
+ {
912
+ "name": "stdout",
913
+ "output_type": "stream",
914
+ "text": [
915
+ " "
916
+ ]
917
+ },
918
+ {
919
+ "name": "stderr",
920
+ "output_type": "stream",
921
+ "text": [
922
+ "Loading cached processed dataset at /Users/ArjunPatel/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-2b587ab7ed92bd6d.arrow\n"
923
+ ]
924
+ },
925
+ {
926
+ "name": "stdout",
927
+ "output_type": "stream",
928
+ "text": [
929
+ " "
930
+ ]
931
+ },
932
+ {
933
+ "name": "stderr",
934
+ "output_type": "stream",
935
+ "text": [
936
+ "Loading cached processed dataset at /Users/ArjunPatel/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-81447c56f742f510.arrow\n"
937
+ ]
938
+ }
939
+ ],
940
+ "source": [
941
+ "lm_datasets = tokenized_datasets.map(\n",
942
+ " group_texts,\n",
943
+ " batched=True,\n",
944
+ " batch_size=1000,\n",
945
+ " num_proc=4,\n",
946
+ ")"
947
+ ]
948
+ },
949
+ {
950
+ "cell_type": "markdown",
951
+ "metadata": {
952
+ "id": "6n84V8Gc3l_G"
953
+ },
954
+ "source": [
955
+ "And we can check our datasets have changed: now the samples contain chunks of `block_size` contiguous tokens, potentially spanning several of our original texts."
956
+ ]
957
+ },
958
+ {
959
+ "cell_type": "code",
960
+ "execution_count": 17,
961
+ "metadata": {
962
+ "id": "hTeGCLl_3l_G",
963
+ "outputId": "ab381a07-f92e-4b14-f7b6-e4af5513d7c4"
964
+ },
965
+ "outputs": [
966
+ {
967
+ "data": {
968
+ "text/plain": [
969
+ "' game and follows the \" Nameless \", a penal military unit serving the nation of Gallia during the Second Europan War who perform secret black operations and are pitted against the Imperial unit \" Calamaty Raven \". \\n The game began development in 2010, carrying over a large portion of the work done on Valkyria Chronicles II. While it retained the standard features of the series, it also underwent multiple adjustments, such as making the game more forgiving for series newcomers. Character designer Raita Honjou and composer Hitoshi Sakimoto both returned from previous entries, along with Valkyria Chronicles II director Takeshi Oz'"
970
+ ]
971
+ },
972
+ "execution_count": 17,
973
+ "metadata": {},
974
+ "output_type": "execute_result"
975
+ }
976
+ ],
977
+ "source": [
978
+ "tokenizer.decode(lm_datasets[\"train\"][1][\"input_ids\"])"
979
+ ]
980
+ },
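+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a quick sanity check of the shifting logic mentioned earlier (a sketch only, not part of the training pipeline): the labels we stored are identical to the inputs, and the internal shift means the prediction at position `i` is scored against the token at position `i + 1`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "example = lm_datasets[\"train\"][1]\n",
+ "input_ids, labels = example[\"input_ids\"], example[\"labels\"]\n",
+ "assert input_ids == labels  # group_texts copied them verbatim\n",
+ "# What the internal shift is equivalent to: the model's prediction at\n",
+ "# position i is compared against the token at position i + 1.\n",
+ "print(tokenizer.decode(input_ids[:6]), \"->\", tokenizer.decode([labels[6]]))"
+ ]
+ },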
981
+ {
982
+ "cell_type": "markdown",
983
+ "metadata": {
984
+ "id": "iEmeQ7Xm3l_H"
985
+ },
986
+ "source": [
987
+ "Now that the data has been cleaned, we're ready to initialize our model:"
988
+ ]
989
+ },
990
+ {
991
+ "cell_type": "code",
992
+ "execution_count": 18,
993
+ "metadata": {
994
+ "id": "sPqQA3TT3l_I"
995
+ },
996
+ "outputs": [
997
+ {
998
+ "data": {
999
+ "application/vnd.jupyter.widget-view+json": {
1000
+ "model_id": "ff73baa0c0764c60846c0dd310506dfc",
1001
+ "version_major": 2,
1002
+ "version_minor": 0
1003
+ },
1004
+ "text/plain": [
1005
+ "Downloading: 0%| | 0.00/313M [00:00<?, ?B/s]"
1006
+ ]
1007
+ },
1008
+ "metadata": {},
1009
+ "output_type": "display_data"
1010
+ },
1011
+ {
1012
+ "name": "stderr",
1013
+ "output_type": "stream",
1014
+ "text": [
1015
+ "2022-05-09 20:46:18.219552: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA\n",
1016
+ "To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\n",
1017
+ "2022-05-09 20:46:18.230340: W tensorflow/python/util/util.cc:368] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.\n",
1018
+ "All model checkpoint layers were used when initializing TFGPT2LMHeadModel.\n",
1019
+ "\n",
1020
+ "All the layers of TFGPT2LMHeadModel were initialized from the model checkpoint at distilgpt2.\n",
1021
+ "If your task is similar to the task the model of the checkpoint was trained on, you can already use TFGPT2LMHeadModel for predictions without further training.\n"
1022
+ ]
1023
+ }
1024
+ ],
1025
+ "source": [
1026
+ "from transformers import TFAutoModelForCausalLM\n",
1027
+ "\n",
1028
+ "model = TFAutoModelForCausalLM.from_pretrained(model_checkpoint)"
1029
+ ]
1030
+ },
1031
+ {
1032
+ "cell_type": "markdown",
1033
+ "metadata": {
1034
+ "id": "VyPQTOF_3l_J"
1035
+ },
1036
+ "source": [
1037
+ "Once we've done that, it's time for our optimizer! We can initialize our `AdamWeightDecay` optimizer directly, or we can use the `create_optimizer` function to generate an `AdamWeightDecay` optimizer with a learning rate schedule. In this case, we'll just stick with a constant learning rate for simplicity, so let's just use `AdamWeightDecay`."
1038
+ ]
1039
+ },
1040
+ {
1041
+ "cell_type": "code",
1042
+ "execution_count": 19,
1043
+ "metadata": {
1044
+ "id": "jElf8LJ33l_K"
1045
+ },
1046
+ "outputs": [],
1047
+ "source": [
1048
+ "from transformers import create_optimizer, AdamWeightDecay"
1049
+ ]
1050
+ },
1051
+ {
1052
+ "cell_type": "code",
1053
+ "execution_count": 20,
1054
+ "metadata": {
1055
+ "id": "YbSwEhQ63l_L"
1056
+ },
1057
+ "outputs": [
1058
+ {
1059
+ "name": "stderr",
1060
+ "output_type": "stream",
1061
+ "text": [
1062
+ "/Users/ArjunPatel/.local/lib/python3.7/site-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.\n",
1063
+ " super(Adam, self).__init__(name, **kwargs)\n"
1064
+ ]
1065
+ }
1066
+ ],
1067
+ "source": [
1068
+ "optimizer = AdamWeightDecay(lr=2e-5, weight_decay_rate=0.01)"
1069
+ ]
1070
+ },
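+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "If you'd rather use a learning rate schedule, a hedged sketch with `create_optimizer` might look like the cell below. We leave it commented out since the total step count depends on the batched dataset we only build later, so the number here is a placeholder."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Sketch only: num_train_steps should be steps_per_epoch * num_epochs,\n",
+ "# computed from the tf.data dataset we create below (placeholder value here).\n",
+ "# optimizer, lr_schedule = create_optimizer(\n",
+ "#     init_lr=2e-5,\n",
+ "#     num_warmup_steps=0,\n",
+ "#     num_train_steps=1166,  # placeholder: one epoch over the training set\n",
+ "#     weight_decay_rate=0.01,\n",
+ "# )"
+ ]
+ },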
1071
+ {
1072
+ "cell_type": "markdown",
1073
+ "metadata": {},
1074
+ "source": [
1075
+ "Note that most models on the Hub compute loss internally, so we actually don't have to specify anything there! Leaving the loss field blank will cause the model to read the `loss` head as its loss value.\n",
1076
+ "\n",
1077
+ "This is an unusual quirk of TensorFlow models in 🤗 Transformers, so it's worth elaborating on in a little more detail. All 🤗 Transformers models are capable of computing an appropriate loss for their task internally (for example, a CausalLM model will use a cross-entropy loss). To do this, the labels must be provided in the input dict (or equivalently, in the `columns` argument to `to_tf_dataset()`), so that they are visible to the model during the forward pass.\n",
1078
+ "\n",
1079
+ "This is quite different from the standard Keras way of handling losses, where labels are passed separately and not visible to the main body of the model, and loss is handled by a function that the user passes to `compile()`, which uses the model outputs and the label to compute a loss value.\n",
1080
+ "\n",
1081
+ "The approach we take is that if the user does not pass a loss to `compile()`, the model will assume you want the **internal** loss. If you are doing this, you should make sure that the labels column(s) are included in the **input dict** or in the `columns` argument to `to_tf_dataset`.\n",
1082
+ "\n",
1083
+ "If you want to use your own loss, that is of course possible too! If you do this, you should make sure your labels column(s) are passed like normal labels, either as the **second argument** to `model.fit()`, or in the `label_cols` argument to `to_tf_dataset`. "
1084
+ ]
1085
+ },
1086
+ {
1087
+ "cell_type": "code",
1088
+ "execution_count": 22,
1089
+ "metadata": {},
1090
+ "outputs": [
1091
+ {
1092
+ "name": "stderr",
1093
+ "output_type": "stream",
1094
+ "text": [
1095
+ "No loss specified in compile() - the model's internal loss computation will be used as the loss. Don't panic - this is a common way to train TensorFlow models in Transformers! To disable this behaviour, please pass a loss argument, or explicitly pass `loss=None` if you do not want your model to compute a loss.\n"
1096
+ ]
1097
+ }
1098
+ ],
1099
+ "source": [
1100
+ "import tensorflow as tf\n",
1101
+ "\n",
1102
+ "model.compile(optimizer=optimizer)"
1103
+ ]
1104
+ },
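+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "For completeness, here is a hedged sketch of the other path described above: supplying your own loss to `compile()`. With an explicit loss you are responsible for the right-shift yourself, and the labels must be passed as real Keras labels (for example via `label_cols` in `to_tf_dataset`). We keep the internal loss in this notebook, so the cell below is left commented out."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Sketch only: not run in this notebook.\n",
+ "# def shifted_sparse_ce(labels, logits):\n",
+ "#     # Score logits at position i against the token at position i + 1;\n",
+ "#     # the internal loss does this shift for us, here we do it by hand.\n",
+ "#     return tf.keras.losses.sparse_categorical_crossentropy(\n",
+ "#         labels[:, 1:], logits[:, :-1], from_logits=True\n",
+ "#     )\n",
+ "#\n",
+ "# model.compile(optimizer=optimizer, loss=shifted_sparse_ce)"
+ ]
+ },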
1105
+ {
1106
+ "cell_type": "markdown",
1107
+ "metadata": {
1108
+ "id": "sZRbT9ui3l_N"
1109
+ },
1110
+ "source": [
1111
+ "Next, we convert our datasets to `tf.data.Dataset`, which Keras understands natively. `Dataset` objects have a built-in method for this. Because all our inputs are the same length, no padding is required, so we can use the DefaultDataCollator. Note that our data collators are designed to work for multiple frameworks, so ensure you set the `return_tensors='tf'` argument to get Tensorflow tensors out - you don't want to accidentally get a load of `torch.Tensor` objects in the middle of your nice TF code!"
1112
+ ]
1113
+ },
1114
+ {
1115
+ "cell_type": "code",
1116
+ "execution_count": 23,
1117
+ "metadata": {
1118
+ "id": "OEuqwIra3l_N"
1119
+ },
1120
+ "outputs": [
1121
+ {
1122
+ "name": "stdout",
1123
+ "output_type": "stream",
1124
+ "text": [
1125
+ "WARNING:tensorflow:AutoGraph could not transform <function TensorflowDatasetMixin.to_tf_dataset.<locals>.fetch_function at 0x7fd6e30aa830> and will run it as-is.\n",
1126
+ "Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\n",
1127
+ "Cause: module 'gast' has no attribute 'Constant'\n",
1128
+ "To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\n",
1129
+ "WARNING: AutoGraph could not transform <function TensorflowDatasetMixin.to_tf_dataset.<locals>.fetch_function at 0x7fd6e30aa830> and will run it as-is.\n",
1130
+ "Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\n",
1131
+ "Cause: module 'gast' has no attribute 'Constant'\n",
1132
+ "To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\n",
1133
+ "WARNING:tensorflow:AutoGraph could not transform <function TensorflowDatasetMixin.to_tf_dataset.<locals>.ensure_shapes at 0x7fd6f2d50f80> and will run it as-is.\n",
1134
+ "Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\n",
1135
+ "Cause: module 'gast' has no attribute 'Constant'\n",
1136
+ "To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\n",
1137
+ "WARNING: AutoGraph could not transform <function TensorflowDatasetMixin.to_tf_dataset.<locals>.ensure_shapes at 0x7fd6f2d50f80> and will run it as-is.\n",
1138
+ "Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\n",
1139
+ "Cause: module 'gast' has no attribute 'Constant'\n",
1140
+ "To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\n",
1141
+ "WARNING:tensorflow:AutoGraph could not transform <function TensorflowDatasetMixin.to_tf_dataset.<locals>.fetch_function at 0x7fd6e30ddd40> and will run it as-is.\n",
1142
+ "Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\n",
1143
+ "Cause: module 'gast' has no attribute 'Constant'\n",
1144
+ "To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\n",
1145
+ "WARNING: AutoGraph could not transform <function TensorflowDatasetMixin.to_tf_dataset.<locals>.fetch_function at 0x7fd6e30ddd40> and will run it as-is.\n",
1146
+ "Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\n",
1147
+ "Cause: module 'gast' has no attribute 'Constant'\n",
1148
+ "To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\n",
1149
+ "WARNING:tensorflow:AutoGraph could not transform <function TensorflowDatasetMixin.to_tf_dataset.<locals>.ensure_shapes at 0x7fd6e2b88050> and will run it as-is.\n",
1150
+ "Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\n",
1151
+ "Cause: module 'gast' has no attribute 'Constant'\n",
1152
+ "To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\n",
1153
+ "WARNING: AutoGraph could not transform <function TensorflowDatasetMixin.to_tf_dataset.<locals>.ensure_shapes at 0x7fd6e2b88050> and will run it as-is.\n",
1154
+ "Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\n",
1155
+ "Cause: module 'gast' has no attribute 'Constant'\n",
1156
+ "To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\n"
1157
+ ]
1158
+ }
1159
+ ],
1160
+ "source": [
1161
+ "from transformers import DefaultDataCollator\n",
1162
+ "\n",
1163
+ "data_collator = DefaultDataCollator(return_tensors=\"tf\")\n",
1164
+ "\n",
1165
+ "train_set = lm_datasets[\"train\"].to_tf_dataset(\n",
1166
+ " columns=[\"attention_mask\", \"input_ids\", \"labels\"],\n",
1167
+ " shuffle=True,\n",
1168
+ " batch_size=16,\n",
1169
+ " collate_fn=data_collator,\n",
1170
+ ")\n",
1171
+ "validation_set = lm_datasets[\"validation\"].to_tf_dataset(\n",
1172
+ " columns=[\"attention_mask\", \"input_ids\", \"labels\"],\n",
1173
+ " shuffle=False,\n",
1174
+ " batch_size=16,\n",
1175
+ " collate_fn=data_collator,\n",
1176
+ ")"
1177
+ ]
1178
+ },
1179
+ {
1180
+ "cell_type": "markdown",
1181
+ "metadata": {
1182
+ "id": "6Vvz34Td3l_O"
1183
+ },
1184
+ "source": [
1185
+ "Now we can train our model. We can also add a callback to sync up our model with the Hub - this allows us to resume training from other machines and even test the model's inference quality midway through training! If you don't want to do this, simply remove the callbacks argument in the call to `fit()`. "
1186
+ ]
1187
+ },
1188
+ {
1189
+ "cell_type": "code",
1190
+ "execution_count": 24,
1191
+ "metadata": {
1192
+ "id": "NyZvu_MF3l_P",
1193
+ "outputId": "b69d0931-7f1f-4f2d-fdb8-09d37c7418bb"
1194
+ },
1195
+ "outputs": [
1196
+ {
1197
+ "name": "stderr",
1198
+ "output_type": "stream",
1199
+ "text": [
1200
+ "Cloning https://huggingface.co/arjunpatel/distilgpt2-finetuned-wikitext2 into local empty directory.\n"
1201
+ ]
1202
+ },
1203
+ {
1204
+ "name": "stdout",
1205
+ "output_type": "stream",
1206
+ "text": [
1207
+ "WARNING:tensorflow:AutoGraph could not transform <function Model.make_train_function.<locals>.train_function at 0x7fd6e30fb200> and will run it as-is.\n",
1208
+ "Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\n",
1209
+ "Cause: 'arguments' object has no attribute 'posonlyargs'\n",
1210
+ "To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\n",
1211
+ "WARNING: AutoGraph could not transform <function Model.make_train_function.<locals>.train_function at 0x7fd6e30fb200> and will run it as-is.\n",
1212
+ "Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\n",
1213
+ "Cause: 'arguments' object has no attribute 'posonlyargs'\n",
1214
+ "To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\n",
1215
+ "WARNING:tensorflow:AutoGraph could not transform <bound method TFGPT2LMHeadModel.call of <transformers.models.gpt2.modeling_tf_gpt2.TFGPT2LMHeadModel object at 0x7fd72024f990>> and will run it as-is.\n",
1216
+ "Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\n",
1217
+ "Cause: module 'gast' has no attribute 'Constant'\n",
1218
+ "To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\n",
1219
+ "WARNING: AutoGraph could not transform <bound method TFGPT2LMHeadModel.call of <transformers.models.gpt2.modeling_tf_gpt2.TFGPT2LMHeadModel object at 0x7fd72024f990>> and will run it as-is.\n",
1220
+ "Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\n",
1221
+ "Cause: module 'gast' has no attribute 'Constant'\n",
1222
+ "To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\n",
1223
+ "WARNING:tensorflow:AutoGraph could not transform <bound method TFGPT2MainLayer.call of <transformers.models.gpt2.modeling_tf_gpt2.TFGPT2MainLayer object at 0x7fd7203394d0>> and will run it as-is.\n",
1224
+ "Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\n",
1225
+ "Cause: module 'gast' has no attribute 'Constant'\n",
1226
+ "To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\n",
1227
+ "WARNING: AutoGraph could not transform <bound method TFGPT2MainLayer.call of <transformers.models.gpt2.modeling_tf_gpt2.TFGPT2MainLayer object at 0x7fd7203394d0>> and will run it as-is.\n",
1228
+ "Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\n",
1229
+ "Cause: module 'gast' has no attribute 'Constant'\n",
1230
+ "To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\n",
1231
+ "WARNING:tensorflow:AutoGraph could not transform <bound method TFSharedEmbeddings.call of <transformers.modeling_tf_utils.TFSharedEmbeddings object at 0x7fd720332e10>> and will run it as-is.\n",
1232
+ "Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\n",
1233
+ "Cause: 'arguments' object has no attribute 'posonlyargs'\n",
1234
+ "To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\n",
1235
+ "WARNING: AutoGraph could not transform <bound method TFSharedEmbeddings.call of <transformers.modeling_tf_utils.TFSharedEmbeddings object at 0x7fd720332e10>> and will run it as-is.\n",
1236
+ "Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\n",
1237
+ "Cause: 'arguments' object has no attribute 'posonlyargs'\n",
1238
+ "To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\n",
1239
+ "WARNING:tensorflow:AutoGraph could not transform <bound method TFBlock.call of <transformers.models.gpt2.modeling_tf_gpt2.TFBlock object at 0x7fd7203bd690>> and will run it as-is.\n",
1240
+ "Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\n",
1241
+ "Cause: module 'gast' has no attribute 'Constant'\n",
1242
+ "To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\n",
1243
+ "WARNING: AutoGraph could not transform <bound method TFBlock.call of <transformers.models.gpt2.modeling_tf_gpt2.TFBlock object at 0x7fd7203bd690>> and will run it as-is.\n",
1244
+ "Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\n",
1245
+ "Cause: module 'gast' has no attribute 'Constant'\n",
1246
+ "To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\n",
1247
+ "WARNING:tensorflow:AutoGraph could not transform <bound method TFAttention.call of <transformers.models.gpt2.modeling_tf_gpt2.TFAttention object at 0x7fd7203bd990>> and will run it as-is.\n",
1248
+ "Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\n",
1249
+ "Cause: module 'gast' has no attribute 'Constant'\n",
1250
+ "To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\n",
1251
+ "WARNING: AutoGraph could not transform <bound method TFAttention.call of <transformers.models.gpt2.modeling_tf_gpt2.TFAttention object at 0x7fd7203bd990>> and will run it as-is.\n",
1252
+ "Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\n",
1253
+ "Cause: module 'gast' has no attribute 'Constant'\n",
1254
+ "To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\n",
1255
+ "WARNING:tensorflow:AutoGraph could not transform <bound method TFConv1D.call of <transformers.modeling_tf_utils.TFConv1D object at 0x7fd7203bd6d0>> and will run it as-is.\n",
1256
+ "Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\n",
1257
+ "Cause: 'arguments' object has no attribute 'posonlyargs'\n",
1258
+ "To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\n",
1259
+ "WARNING: AutoGraph could not transform <bound method TFConv1D.call of <transformers.modeling_tf_utils.TFConv1D object at 0x7fd7203bd6d0>> and will run it as-is.\n",
1260
+ "Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\n",
1261
+ "Cause: 'arguments' object has no attribute 'posonlyargs'\n",
1262
+ "To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\n",
1263
+ "WARNING:tensorflow:AutoGraph could not transform <bound method TFMLP.call of <transformers.models.gpt2.modeling_tf_gpt2.TFMLP object at 0x7fd7204c2a10>> and will run it as-is.\n",
1264
+ "Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\n",
1265
+ "Cause: 'arguments' object has no attribute 'posonlyargs'\n",
1266
+ "To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\n",
1267
+ "WARNING: AutoGraph could not transform <bound method TFMLP.call of <transformers.models.gpt2.modeling_tf_gpt2.TFMLP object at 0x7fd7204c2a10>> and will run it as-is.\n",
1268
+ "Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\n",
1269
+ "Cause: 'arguments' object has no attribute 'posonlyargs'\n",
1270
+ "To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\n",
1271
+ "WARNING:tensorflow:AutoGraph could not transform <function dummy_loss at 0x7fd7202ce710> and will run it as-is.\n",
1272
+ "Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\n",
1273
+ "Cause: 'arguments' object has no attribute 'posonlyargs'\n",
1274
+ "To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\n",
1275
+ "WARNING: AutoGraph could not transform <function dummy_loss at 0x7fd7202ce710> and will run it as-is.\n",
1276
+ "Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.\n",
1277
+ "Cause: 'arguments' object has no attribute 'posonlyargs'\n",
1278
+ "To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert\n",
1279
+ " 7/1166 [..............................] - ETA: 1:24:49 - loss: 4.5316"
1280
+ ]
1281
+ },
1282
+ {
1283
+ "ename": "KeyboardInterrupt",
1284
+ "evalue": "",
1285
+ "output_type": "error",
1286
+ "traceback": [
1287
+ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
1288
+ "\u001b[0;31mKeyboardInterrupt\u001b[0m Traceback (most recent call last)",
1289
+ "\u001b[0;32m/var/folders/vj/m14m1x1j47b8nvnmkfkf20ph0000gn/T/ipykernel_10410/3702115951.py\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[1;32m 15\u001b[0m \u001b[0mcallbacks\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m[\u001b[0m\u001b[0mtensorboard_callback\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mpush_to_hub_callback\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 16\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 17\u001b[0;31m \u001b[0mmodel\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mfit\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtrain_set\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mvalidation_data\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mvalidation_set\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mepochs\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mcallbacks\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mcallbacks\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m",
1290
+ "\u001b[0;32m~/.local/lib/python3.7/site-packages/keras/utils/traceback_utils.py\u001b[0m in \u001b[0;36merror_handler\u001b[0;34m(*args, **kwargs)\u001b[0m\n\u001b[1;32m 62\u001b[0m \u001b[0mfiltered_tb\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 63\u001b[0m \u001b[0;32mtry\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 64\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mfn\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 65\u001b[0m \u001b[0;32mexcept\u001b[0m \u001b[0mException\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0me\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0;31m# pylint: disable=broad-except\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 66\u001b[0m \u001b[0mfiltered_tb\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0m_process_traceback_frames\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0me\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m__traceback__\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
1291
+ "\u001b[0;32m~/.local/lib/python3.7/site-packages/keras/engine/training.py\u001b[0m in \u001b[0;36mfit\u001b[0;34m(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)\u001b[0m\n\u001b[1;32m 1382\u001b[0m _r=1):\n\u001b[1;32m 1383\u001b[0m \u001b[0mcallbacks\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mon_train_batch_begin\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mstep\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1384\u001b[0;31m \u001b[0mtmp_logs\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mtrain_function\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0miterator\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 1385\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mdata_handler\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mshould_sync\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1386\u001b[0m \u001b[0mcontext\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0masync_wait\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
1292
+ "\u001b[0;32m~/.local/lib/python3.7/site-packages/tensorflow/python/util/traceback_utils.py\u001b[0m in \u001b[0;36merror_handler\u001b[0;34m(*args, **kwargs)\u001b[0m\n\u001b[1;32m 148\u001b[0m \u001b[0mfiltered_tb\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 149\u001b[0m \u001b[0;32mtry\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 150\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mfn\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 151\u001b[0m \u001b[0;32mexcept\u001b[0m \u001b[0mException\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0me\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 152\u001b[0m \u001b[0mfiltered_tb\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0m_process_traceback_frames\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0me\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m__traceback__\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
1293
+ "\u001b[0;32m~/.local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py\u001b[0m in \u001b[0;36m__call__\u001b[0;34m(self, *args, **kwds)\u001b[0m\n\u001b[1;32m 913\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 914\u001b[0m \u001b[0;32mwith\u001b[0m \u001b[0mOptionalXlaContext\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_jit_compile\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 915\u001b[0;31m \u001b[0mresult\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_call\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwds\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 916\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 917\u001b[0m \u001b[0mnew_tracing_count\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mexperimental_get_tracing_count\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
1294
+ "\u001b[0;32m~/.local/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py\u001b[0m in \u001b[0;36m_call\u001b[0;34m(self, *args, **kwds)\u001b[0m\n\u001b[1;32m 945\u001b[0m \u001b[0;31m# In this case we have created variables on the first call, so we run the\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 946\u001b[0m \u001b[0;31m# defunned version which is guaranteed to never create variables.\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 947\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_stateless_fn\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwds\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;31m# pylint: disable=not-callable\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 948\u001b[0m \u001b[0;32melif\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_stateful_fn\u001b[0m \u001b[0;32mis\u001b[0m \u001b[0;32mnot\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 949\u001b[0m \u001b[0;31m# Release the lock early so that multiple threads can perform the call\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
1295
+ "\u001b[0;32m~/.local/lib/python3.7/site-packages/tensorflow/python/eager/function.py\u001b[0m in \u001b[0;36m__call__\u001b[0;34m(self, *args, **kwargs)\u001b[0m\n\u001b[1;32m 2955\u001b[0m filtered_flat_args) = self._maybe_define_function(args, kwargs)\n\u001b[1;32m 2956\u001b[0m return graph_function._call_flat(\n\u001b[0;32m-> 2957\u001b[0;31m filtered_flat_args, captured_inputs=graph_function.captured_inputs) # pylint: disable=protected-access\n\u001b[0m\u001b[1;32m 2958\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 2959\u001b[0m \u001b[0;34m@\u001b[0m\u001b[0mproperty\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
1296
+ "\u001b[0;32m~/.local/lib/python3.7/site-packages/tensorflow/python/eager/function.py\u001b[0m in \u001b[0;36m_call_flat\u001b[0;34m(self, args, captured_inputs, cancellation_manager)\u001b[0m\n\u001b[1;32m 1852\u001b[0m \u001b[0;31m# No tape is watching; skip to running the function.\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1853\u001b[0m return self._build_call_outputs(self._inference_function.call(\n\u001b[0;32m-> 1854\u001b[0;31m ctx, args, cancellation_manager=cancellation_manager))\n\u001b[0m\u001b[1;32m 1855\u001b[0m forward_backward = self._select_forward_and_backward_functions(\n\u001b[1;32m 1856\u001b[0m \u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
1297
+ "\u001b[0;32m~/.local/lib/python3.7/site-packages/tensorflow/python/eager/function.py\u001b[0m in \u001b[0;36mcall\u001b[0;34m(self, ctx, args, cancellation_manager)\u001b[0m\n\u001b[1;32m 502\u001b[0m \u001b[0minputs\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 503\u001b[0m \u001b[0mattrs\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mattrs\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 504\u001b[0;31m ctx=ctx)\n\u001b[0m\u001b[1;32m 505\u001b[0m \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 506\u001b[0m outputs = execute.execute_with_cancellation(\n",
1298
+ "\u001b[0;32m~/.local/lib/python3.7/site-packages/tensorflow/python/eager/execute.py\u001b[0m in \u001b[0;36mquick_execute\u001b[0;34m(op_name, num_outputs, inputs, attrs, ctx, name)\u001b[0m\n\u001b[1;32m 53\u001b[0m \u001b[0mctx\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mensure_initialized\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 54\u001b[0m tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,\n\u001b[0;32m---> 55\u001b[0;31m inputs, attrs, num_outputs)\n\u001b[0m\u001b[1;32m 56\u001b[0m \u001b[0;32mexcept\u001b[0m \u001b[0mcore\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_NotOkStatusException\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0me\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 57\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mname\u001b[0m \u001b[0;32mis\u001b[0m \u001b[0;32mnot\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
1299
+ "\u001b[0;31mKeyboardInterrupt\u001b[0m: "
1300
+ ]
1301
+ }
1302
+ ],
1303
+ "source": [
1304
+ "from transformers.keras_callbacks import PushToHubCallback\n",
1305
+ "from tensorflow.keras.callbacks import TensorBoard\n",
1306
+ "\n",
1307
+ "model_name = model_checkpoint.split(\"/\")[-1]\n",
1308
+ "push_to_hub_model_id = f\"{model_name}-finetuned-wikitext2\"\n",
1309
+ "\n",
1310
+ "tensorboard_callback = TensorBoard(log_dir=\"./clm_model_save/logs\")\n",
1311
+ "\n",
1312
+ "push_to_hub_callback = PushToHubCallback(\n",
1313
+ " output_dir=\"./clm_model_save\",\n",
1314
+ " tokenizer=tokenizer,\n",
1315
+ " hub_model_id=push_to_hub_model_id,\n",
1316
+ ")\n",
1317
+ "\n",
1318
+ "callbacks = [tensorboard_callback, push_to_hub_callback]\n",
1319
+ "\n",
1320
+ "model.fit(train_set, validation_data=validation_set, epochs=1, callbacks=callbacks)"
1321
+ ]
1322
+ },
1323
+ {
1324
+ "cell_type": "markdown",
1325
+ "metadata": {
1326
+ "id": "3APq-vUc3l_R"
1327
+ },
1328
+ "source": [
1329
+ "Once the training is completed, we can evaluate our model and get its cross-entropy loss on the validation set like this:"
1330
+ ]
1331
+ },
1332
+ {
1333
+ "cell_type": "code",
1334
+ "execution_count": 22,
1335
+ "metadata": {
1336
+ "id": "diKZnB1I3l_R",
1337
+ "outputId": "9b3ac725-0117-4830-f380-a555ee57c8cf"
1338
+ },
1339
+ "outputs": [
1340
+ {
1341
+ "name": "stdout",
1342
+ "output_type": "stream",
1343
+ "text": [
1344
+ "121/121 [==============================] - 4s 33ms/step - loss: 3.6752\n"
1345
+ ]
1346
+ }
1347
+ ],
1348
+ "source": [
1349
+ "eval_loss = model.evaluate(validation_set)"
1350
+ ]
1351
+ },
1352
+ {
1353
+ "cell_type": "markdown",
1354
+ "metadata": {},
1355
+ "source": [
1356
+ "The quality of language models is often measured in 'perplexity' rather than cross-entropy. To convert to perplexity, we simply raise e to the power of the cross-entropy loss."
1357
+ ]
1358
+ },
1359
+ {
1360
+ "cell_type": "code",
1361
+ "execution_count": 23,
1362
+ "metadata": {},
1363
+ "outputs": [
1364
+ {
1365
+ "name": "stdout",
1366
+ "output_type": "stream",
1367
+ "text": [
1368
+ "Perplexity: 39.46\n"
1369
+ ]
1370
+ }
1371
+ ],
1372
+ "source": [
1373
+ "import math\n",
1374
+ "\n",
1375
+ "print(f\"Perplexity: {math.exp(eval_loss):.2f}\")"
1376
+ ]
1377
+ },
1378
+ {
1379
+ "cell_type": "markdown",
1380
+ "metadata": {},
1381
+ "source": [
1382
+ "If you saved the model with the callback, you can now share this model with all your friends, family, favorite pets: they can all load it with the identifier `\"your-username/the-name-you-picked\"` so for instance:\n",
1383
+ "\n",
1384
+ "```python\n",
1385
+ "from transformers import AutoModelForCausalLM\n",
1386
+ "\n",
1387
+ "model = AutoModelForCausalLM.from_pretrained(\"sgugger/my-awesome-model\")\n",
1388
+ "```"
1389
+ ]
1390
+ },
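+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "If you want to see your model in action, a quick illustrative sketch like the following generates text from the fine-tuned checkpoint - substitute your own repo name for the placeholder:\n",
+ "\n",
+ "```python\n",
+ "from transformers import TFAutoModelForCausalLM, AutoTokenizer\n",
+ "\n",
+ "# Placeholder identifier - use the name you picked above\n",
+ "checkpoint = \"your-username/the-name-you-picked\"\n",
+ "tokenizer = AutoTokenizer.from_pretrained(checkpoint)\n",
+ "model = TFAutoModelForCausalLM.from_pretrained(checkpoint)\n",
+ "\n",
+ "# Encode a prompt and sample a continuation\n",
+ "inputs = tokenizer(\"The history of the\", return_tensors=\"tf\")\n",
+ "outputs = model.generate(inputs[\"input_ids\"], max_length=50, do_sample=True)\n",
+ "print(tokenizer.decode(outputs[0]))\n",
+ "```"
+ ]
+ },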
1391
+ {
1392
+ "cell_type": "markdown",
1393
+ "metadata": {
1394
+ "id": "q-EIELH43l_T"
1395
+ },
1396
+ "source": [
1397
+ "## Masked language modeling"
1398
+ ]
1399
+ },
1400
+ {
1401
+ "cell_type": "markdown",
1402
+ "metadata": {
1403
+ "id": "LWk97-Ny3l_T"
1404
+ },
1405
+ "source": [
1406
+ "For masked language modeling (MLM) we are going to use the same preprocessing as before for our dataset with one additional step: we will randomly mask some tokens (by replacing them by `[MASK]`) and the labels will be adjusted to only include the masked tokens (we don't have to predict the non-masked tokens).\n",
1407
+ "\n",
1408
+ "We will use the [`distilroberta-base`](https://huggingface.co/distilroberta-base) model for this example. You can pick any of the checkpoints listed [here](https://huggingface.co/models?filter=masked-lm) instead:"
1409
+ ]
1410
+ },
1411
+ {
1412
+ "cell_type": "code",
1413
+ "execution_count": 24,
1414
+ "metadata": {
1415
+ "id": "QRTpmyCc3l_T"
1416
+ },
1417
+ "outputs": [],
1418
+ "source": [
1419
+ "model_checkpoint = \"distilroberta-base\""
1420
+ ]
1421
+ },
1422
+ {
1423
+ "cell_type": "markdown",
1424
+ "metadata": {
1425
+ "id": "12F1ulgT3l_V"
1426
+ },
1427
+ "source": [
1428
+ "We can apply the same tokenization function as before, we just need to update our tokenizer to use the checkpoint we just picked. Don't panic about the warnings about inputs being too long for the model - remember that we'll be breaking them into shorter chunks right afterwards!"
1429
+ ]
1430
+ },
1431
+ {
1432
+ "cell_type": "code",
1433
+ "execution_count": 25,
1434
+ "metadata": {
1435
+ "id": "h8RCYcvr3l_V",
1436
+ "outputId": "a5ffeb0a-71da-4b27-e57a-c62f1927562e"
1437
+ },
1438
+ "outputs": [
1439
+ {
1440
+ "name": "stderr",
1441
+ "output_type": "stream",
1442
+ "text": [
1443
+ "Token indices sequence length is longer than the specified maximum sequence length for this model (544 > 512). Running this sequence through the model will result in indexing errors\n",
1444
+ "Token indices sequence length is longer than the specified maximum sequence length for this model (560 > 512). Running this sequence through the model will result in indexing errors\n",
1445
+ "Token indices sequence length is longer than the specified maximum sequence length for this model (528 > 512). Running this sequence through the model will result in indexing errors\n",
1446
+ "Token indices sequence length is longer than the specified maximum sequence length for this model (638 > 512). Running this sequence through the model will result in indexing errors\n",
1447
+ "Token indices sequence length is longer than the specified maximum sequence length for this model (522 > 512). Running this sequence through the model will result in indexing errors\n"
1448
+ ]
1449
+ }
1450
+ ],
1451
+ "source": [
1452
+ "tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)\n",
1453
+ "tokenized_datasets = datasets.map(\n",
1454
+ " tokenize_function, batched=True, num_proc=4, remove_columns=[\"text\"]\n",
1455
+ ")"
1456
+ ]
1457
+ },
1458
+ {
1459
+ "cell_type": "markdown",
1460
+ "metadata": {
1461
+ "id": "MTuy8UUs3l_X"
1462
+ },
1463
+ "source": [
1464
+ "And now, we group texts together and chunk them into samples of length `block_size`. You can skip this step if your dataset is composed of individual sentences."
1465
+ ]
1466
+ },
1467
+ {
1468
+ "cell_type": "code",
1469
+ "execution_count": 26,
1470
+ "metadata": {
1471
+ "id": "LVYPMwEs3l_X",
1472
+ "outputId": "e71ed7f1-b182-4643-a8fb-3d731c70e40b"
1473
+ },
1474
+ "outputs": [],
1475
+ "source": [
1476
+ "lm_datasets = tokenized_datasets.map(\n",
1477
+ " group_texts,\n",
1478
+ " batched=True,\n",
1479
+ " batch_size=1000,\n",
1480
+ " num_proc=4,\n",
1481
+ ")"
1482
+ ]
1483
+ },
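+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a quick sanity check, every example should now be exactly `block_size` tokens long, and decoding one shows text that may start mid-sentence - that's expected. For instance:\n",
+ "\n",
+ "```python\n",
+ "sample = lm_datasets[\"train\"][0]\n",
+ "print(len(sample[\"input_ids\"]))  # should equal block_size\n",
+ "print(tokenizer.decode(sample[\"input_ids\"])[:200])\n",
+ "```"
+ ]
+ },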
1484
+ {
1485
+ "cell_type": "markdown",
1486
+ "metadata": {
1487
+ "id": "nFJ49iHJ3l_Z"
1488
+ },
1489
+ "source": [
1490
+ "The rest is very similar to what we had, with two exceptions. First we use a model suitable for masked LM:"
1491
+ ]
1492
+ },
1493
+ {
1494
+ "cell_type": "code",
1495
+ "execution_count": 27,
1496
+ "metadata": {
1497
+ "id": "PM10A9Za3l_Z",
1498
+ "outputId": "fff2d5bb-397d-4d5d-9aa9-933090cb6680"
1499
+ },
1500
+ "outputs": [
1501
+ {
1502
+ "name": "stderr",
1503
+ "output_type": "stream",
1504
+ "text": [
1505
+ "All model checkpoint layers were used when initializing TFRobertaForMaskedLM.\n",
1506
+ "\n",
1507
+ "All the layers of TFRobertaForMaskedLM were initialized from the model checkpoint at distilroberta-base.\n",
1508
+ "If your task is similar to the task the model of the checkpoint was trained on, you can already use TFRobertaForMaskedLM for predictions without further training.\n"
1509
+ ]
1510
+ }
1511
+ ],
1512
+ "source": [
1513
+ "from transformers import TFAutoModelForMaskedLM\n",
1514
+ "\n",
1515
+ "model = TFAutoModelForMaskedLM.from_pretrained(model_checkpoint)"
1516
+ ]
1517
+ },
1518
+ {
1519
+ "cell_type": "markdown",
1520
+ "metadata": {},
1521
+ "source": [
1522
+ "We redefine our `optimizer` as we did with the CLM model, and we compile the model. We're using the internal loss again, like we did before."
1523
+ ]
1524
+ },
1525
+ {
1526
+ "cell_type": "code",
1527
+ "execution_count": 28,
1528
+ "metadata": {},
1529
+ "outputs": [
1530
+ {
1531
+ "name": "stderr",
1532
+ "output_type": "stream",
1533
+ "text": [
1534
+ "/home/matt/miniconda3/envs/tensorflow28/lib/python3.10/site-packages/keras/optimizer_v2/adam.py:105: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.\n",
1535
+ " super(Adam, self).__init__(name, **kwargs)\n",
1536
+ "No loss specified in compile() - the model's internal loss computation will be used as the loss. Don't panic - this is a common way to train TensorFlow models in Transformers! Please ensure your labels are passed as keys in the input dict so that they are accessible to the model during the forward pass. To disable this behaviour, please pass a loss argument, or explicitly pass loss=None if you do not want your model to compute a loss.\n"
1537
+ ]
1538
+ }
1539
+ ],
1540
+ "source": [
1541
+ "from transformers import create_optimizer, AdamWeightDecay\n",
1542
+ "import tensorflow as tf\n",
1543
+ "\n",
1544
+ "optimizer = AdamWeightDecay(lr=2e-5, weight_decay_rate=0.01)\n",
1545
+ "\n",
1546
+ "model.compile(optimizer=optimizer)"
1547
+ ]
1548
+ },
1549
+ {
1550
+ "cell_type": "markdown",
1551
+ "metadata": {
1552
+ "id": "z6uuUnvz3l_b"
1553
+ },
1554
+ "source": [
1555
+ "Finally, we use a special `data_collator`. The `data_collator` is a function that is responsible for taking the samples and batching them in tensors. In the previous example, we had nothing special to do, so we just used the default for this argument. Here we want to randomly mask tokens. We could do it as a pre-processing step (like the tokenization) but then the tokens would always be masked the same way at each epoch. By doing this step inside the `data_collator`, we ensure this random masking is done in a new way each time we go over the data.\n",
1556
+ "\n",
1557
+ "To do this masking for us, the library provides a `DataCollatorForLanguageModeling`. We can adjust the probability of the masking. Note that our data collators are designed to work for multiple frameworks, so ensure you set the `return_tensors='tf'` argument to get Tensorflow tensors out - you don't want to accidentally get a load of `torch.Tensor` objects in the middle of your nice TF code!"
1558
+ ]
1559
+ },
1560
+ {
1561
+ "cell_type": "code",
1562
+ "execution_count": 29,
1563
+ "metadata": {
1564
+ "id": "nRZ-5v_P3l_b"
1565
+ },
1566
+ "outputs": [],
1567
+ "source": [
1568
+ "from transformers import DataCollatorForLanguageModeling\n",
1569
+ "\n",
1570
+ "data_collator = DataCollatorForLanguageModeling(\n",
1571
+ " tokenizer=tokenizer, mlm_probability=0.15, return_tensors=\"tf\"\n",
1572
+ ")"
1573
+ ]
1574
+ },
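+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To see the random masking in action, you can try something like this quick sketch, which runs the collator on a couple of examples and decodes the result - rerunning it should mask different tokens each time:\n",
+ "\n",
+ "```python\n",
+ "# The collator pads the examples, randomly selects ~15% of tokens for\n",
+ "# prediction (most of them become the mask token), and builds labels that\n",
+ "# are -100 everywhere except at the selected positions.\n",
+ "samples = [{\"input_ids\": lm_datasets[\"train\"][i][\"input_ids\"]} for i in range(2)]\n",
+ "batch = data_collator(samples)\n",
+ "print(tokenizer.decode(batch[\"input_ids\"][0]))\n",
+ "```"
+ ]
+ },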
1575
+ {
1576
+ "cell_type": "markdown",
1577
+ "metadata": {
1578
+ "id": "bqHnWcYC3l_d"
1579
+ },
1580
+ "source": [
1581
+ "Now we generate our datasets as before. Remember to pass the `data_collator` you just made to the `collate_fn` argument."
1582
+ ]
1583
+ },
1584
+ {
1585
+ "cell_type": "code",
1586
+ "execution_count": 30,
1587
+ "metadata": {},
1588
+ "outputs": [],
1589
+ "source": [
1590
+ "train_set = lm_datasets[\"train\"].to_tf_dataset(\n",
1591
+ " columns=[\"attention_mask\", \"input_ids\", \"labels\"],\n",
1592
+ " shuffle=True,\n",
1593
+ " batch_size=16,\n",
1594
+ " collate_fn=data_collator,\n",
1595
+ ")\n",
1596
+ "\n",
1597
+ "validation_set = lm_datasets[\"validation\"].to_tf_dataset(\n",
1598
+ " columns=[\"attention_mask\", \"input_ids\", \"labels\"],\n",
1599
+ " shuffle=False,\n",
1600
+ " batch_size=16,\n",
1601
+ " collate_fn=data_collator,\n",
1602
+ ")"
1603
+ ]
1604
+ },
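+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "If you'd like to peek at what the model will receive, you can pull one batch from the `tf.data.Dataset` and inspect the tensor shapes, for instance:\n",
+ "\n",
+ "```python\n",
+ "batch = next(iter(train_set))\n",
+ "print({key: value.shape for key, value in batch.items()})\n",
+ "```"
+ ]
+ },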
1605
+ {
1606
+ "cell_type": "markdown",
1607
+ "metadata": {},
1608
+ "source": [
1609
+ "And now we fit our model! As before, we can use a callback to sync with the hub during training. You can remove this if you don't want to!"
1610
+ ]
1611
+ },
1612
+ {
1613
+ "cell_type": "code",
1614
+ "execution_count": 32,
1615
+ "metadata": {
1616
+ "id": "V-Y3gNqV3l_d"
1617
+ },
1618
+ "outputs": [
1619
+ {
1620
+ "name": "stderr",
1621
+ "output_type": "stream",
1622
+ "text": [
1623
+ "/home/matt/PycharmProjects/notebooks/examples/mlm_model_save is already a clone of https://huggingface.co/Rocketknight1/distilroberta-base-finetuned-wikitext2. Make sure you pull the latest changes with `repo.git_pull()`.\n"
1624
+ ]
1625
+ },
1626
+ {
1627
+ "name": "stdout",
1628
+ "output_type": "stream",
1629
+ "text": [
1630
+ "1202/1202 [==============================] - ETA: 0s - loss: 1.9043"
1631
+ ]
1632
+ },
1633
+ {
1634
+ "name": "stderr",
1635
+ "output_type": "stream",
1636
+ "text": [
1637
+ "Several commits (2) will be pushed upstream.\n"
1638
+ ]
1639
+ },
1640
+ {
1641
+ "name": "stdout",
1642
+ "output_type": "stream",
1643
+ "text": [
1644
+ "1202/1202 [==============================] - 138s 110ms/step - loss: 1.9043 - val_loss: 1.7174\n"
1645
+ ]
1646
+ },
1647
+ {
1648
+ "data": {
1649
+ "text/plain": [
1650
+ "<keras.callbacks.History at 0x7f96e3be36a0>"
1651
+ ]
1652
+ },
1653
+ "execution_count": 32,
1654
+ "metadata": {},
1655
+ "output_type": "execute_result"
1656
+ }
1657
+ ],
1658
+ "source": [
1659
+ "from transformers.keras_callbacks import PushToHubCallback\n",
1660
+ "\n",
1661
+ "model_name = model_checkpoint.split(\"/\")[-1]\n",
1662
+ "push_to_hub_model_id = f\"{model_name}-finetuned-wikitext2\"\n",
1663
+ "\n",
1664
+ "callback = PushToHubCallback(\n",
1665
+ " output_dir=\"./mlm_model_save\",\n",
1666
+ " tokenizer=tokenizer,\n",
1667
+ " hub_model_id=push_to_hub_model_id,\n",
1668
+ ")\n",
1669
+ "\n",
1670
+ "model.fit(train_set, validation_data=validation_set, epochs=1, callbacks=[callback])"
1671
+ ]
1672
+ },
1673
+ {
1674
+ "cell_type": "markdown",
1675
+ "metadata": {
1676
+ "id": "KDBi0reX3l_g"
1677
+ },
1678
+ "source": [
1679
+ "Like before, we can evaluate our model on the validation set and compute perplexity. The perplexity is much lower than for the CLM objective because for the MLM objective, we only have to make predictions for the masked tokens (which represent 15% of the total here) while having access to the rest of the tokens. It's thus an easier task for the model."
1680
+ ]
1681
+ },
1682
+ {
1683
+ "cell_type": "code",
1684
+ "execution_count": 33,
1685
+ "metadata": {
1686
+ "id": "4hSaANqj3l_g",
1687
+ "outputId": "eeeb8727-2e27-4aeb-ac71-c98123214661"
1688
+ },
1689
+ "outputs": [
1690
+ {
1691
+ "name": "stdout",
1692
+ "output_type": "stream",
1693
+ "text": [
1694
+ "125/125 [==============================] - 4s 32ms/step - loss: 1.7101\n",
1695
+ "Perplexity: 5.53\n"
1696
+ ]
1697
+ }
1698
+ ],
1699
+ "source": [
1700
+ "import math\n",
1701
+ "\n",
1702
+ "eval_results = model.evaluate(validation_set)\n",
1703
+ "print(f\"Perplexity: {math.exp(eval_results):.2f}\")"
1704
+ ]
1705
+ },
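+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "For a quick qualitative check, you can try the fine-tuned model on a masked sentence with the `fill-mask` pipeline (note that RoBERTa-style tokenizers use `<mask>` rather than `[MASK]`), for example:\n",
+ "\n",
+ "```python\n",
+ "from transformers import pipeline\n",
+ "\n",
+ "# The pipeline picks up the TF framework automatically from the model instance\n",
+ "mask_filler = pipeline(\"fill-mask\", model=model, tokenizer=tokenizer)\n",
+ "for prediction in mask_filler(\"The capital of France is <mask>.\"):\n",
+ "    print(prediction[\"token_str\"], prediction[\"score\"])\n",
+ "```"
+ ]
+ },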
1706
+ {
1707
+ "cell_type": "markdown",
1708
+ "metadata": {},
1709
+ "source": [
1710
+ "If you used the callback, you can now share this model with all your friends, family or favorite pets: they can all load it with the identifier `\"your-username/the-name-you-picked\"` so for instance:\n",
1711
+ "\n",
1712
+ "```python\n",
1713
+ "from transformers import AutoModelForMaskedLM\n",
1714
+ "\n",
1715
+ "model = AutoModelForMaskedLM.from_pretrained(\"your-username/my-awesome-model\")\n",
1716
+ "```"
1717
+ ]
1718
+ }
1719
+ ],
1720
+ "metadata": {
1721
+ "colab": {
1722
+ "name": "Fine-tune a language model",
1723
+ "provenance": []
1724
+ },
1725
+ "kernelspec": {
1726
+ "display_name": "Python 3 (ipykernel)",
1727
+ "language": "python",
1728
+ "name": "python3"
1729
+ },
1730
+ "language_info": {
1731
+ "codemirror_mode": {
1732
+ "name": "ipython",
1733
+ "version": 3
1734
+ },
1735
+ "file_extension": ".py",
1736
+ "mimetype": "text/x-python",
1737
+ "name": "python",
1738
+ "nbconvert_exporter": "python",
1739
+ "pygments_lexer": "ipython3",
1740
+ "version": "3.7.13"
1741
+ }
1742
+ },
1743
+ "nbformat": 4,
1744
+ "nbformat_minor": 4
1745
+ }
language_modeling.ipynb ADDED
@@ -0,0 +1,250 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "code",
5
+ "execution_count": 1,
6
+ "id": "efb481c2-3b37-44c2-885a-cdba7766e932",
7
+ "metadata": {},
8
+ "outputs": [],
9
+ "source": [
10
+ "import pandas as pd\n",
11
+ "import numpy as np\n",
12
+ "\n",
13
+ "\n",
14
+ "df = pd.read_csv(\"data/moves.csv\")"
15
+ ]
16
+ },
17
+ {
18
+ "cell_type": "code",
19
+ "execution_count": 2,
20
+ "id": "04ba24c5-1f22-4e4c-8f92-82267234f9b1",
21
+ "metadata": {},
22
+ "outputs": [
23
+ {
24
+ "data": {
25
+ "text/html": [
26
+ "<div>\n",
27
+ "<style scoped>\n",
28
+ " .dataframe tbody tr th:only-of-type {\n",
29
+ " vertical-align: middle;\n",
30
+ " }\n",
31
+ "\n",
32
+ " .dataframe tbody tr th {\n",
33
+ " vertical-align: top;\n",
34
+ " }\n",
35
+ "\n",
36
+ " .dataframe thead th {\n",
37
+ " text-align: right;\n",
38
+ " }\n",
39
+ "</style>\n",
40
+ "<table border=\"1\" class=\"dataframe\">\n",
41
+ " <thead>\n",
42
+ " <tr style=\"text-align: right;\">\n",
43
+ " <th></th>\n",
44
+ " <th>Unnamed: 0</th>\n",
45
+ " <th>Name</th>\n",
46
+ " <th>Type</th>\n",
47
+ " <th>Cat.</th>\n",
48
+ " <th>PP</th>\n",
49
+ " <th>Att.</th>\n",
50
+ " <th>Acc.</th>\n",
51
+ " <th>Effect</th>\n",
52
+ " </tr>\n",
53
+ " </thead>\n",
54
+ " <tbody>\n",
55
+ " <tr>\n",
56
+ " <th>0</th>\n",
57
+ " <td>0</td>\n",
58
+ " <td>Accelerock</td>\n",
59
+ " <td>NaN</td>\n",
60
+ " <td>NaN</td>\n",
61
+ " <td>20</td>\n",
62
+ " <td>40</td>\n",
63
+ " <td>100</td>\n",
64
+ " <td>The user smashes into the target at high speed...</td>\n",
65
+ " </tr>\n",
66
+ " <tr>\n",
67
+ " <th>1</th>\n",
68
+ " <td>2</td>\n",
69
+ " <td>Acrobatics</td>\n",
70
+ " <td>NaN</td>\n",
71
+ " <td>NaN</td>\n",
72
+ " <td>15</td>\n",
73
+ " <td>55</td>\n",
74
+ " <td>100</td>\n",
75
+ " <td>The user nimbly strikes the target. If the use...</td>\n",
76
+ " </tr>\n",
77
+ " <tr>\n",
78
+ " <th>2</th>\n",
79
+ " <td>3</td>\n",
80
+ " <td>Aerial Ace</td>\n",
81
+ " <td>NaN</td>\n",
82
+ " <td>NaN</td>\n",
83
+ " <td>20</td>\n",
84
+ " <td>60</td>\n",
85
+ " <td>101</td>\n",
86
+ " <td>The user confounds the target with speed, then...</td>\n",
87
+ " </tr>\n",
88
+ " <tr>\n",
89
+ " <th>3</th>\n",
90
+ " <td>5</td>\n",
91
+ " <td>Anchor Shot</td>\n",
92
+ " <td>NaN</td>\n",
93
+ " <td>NaN</td>\n",
94
+ " <td>20</td>\n",
95
+ " <td>80</td>\n",
96
+ " <td>100</td>\n",
97
+ " <td>The user entangles the target with its anchor ...</td>\n",
98
+ " </tr>\n",
99
+ " <tr>\n",
100
+ " <th>4</th>\n",
101
+ " <td>6</td>\n",
102
+ " <td>Aqua Jet</td>\n",
103
+ " <td>NaN</td>\n",
104
+ " <td>NaN</td>\n",
105
+ " <td>20</td>\n",
106
+ " <td>40</td>\n",
107
+ " <td>100</td>\n",
108
+ " <td>The user lunges at the target at a speed that ...</td>\n",
109
+ " </tr>\n",
110
+ " <tr>\n",
111
+ " <th>...</th>\n",
112
+ " <td>...</td>\n",
113
+ " <td>...</td>\n",
114
+ " <td>...</td>\n",
115
+ " <td>...</td>\n",
116
+ " <td>...</td>\n",
117
+ " <td>...</td>\n",
118
+ " <td>...</td>\n",
119
+ " <td>...</td>\n",
120
+ " </tr>\n",
121
+ " <tr>\n",
122
+ " <th>738</th>\n",
123
+ " <td>255</td>\n",
124
+ " <td>Withdraw</td>\n",
125
+ " <td>NaN</td>\n",
126
+ " <td>NaN</td>\n",
127
+ " <td>40</td>\n",
128
+ " <td>0</td>\n",
129
+ " <td>101</td>\n",
130
+ " <td>The user withdraws its body into its hard shel...</td>\n",
131
+ " </tr>\n",
132
+ " <tr>\n",
133
+ " <th>739</th>\n",
134
+ " <td>256</td>\n",
135
+ " <td>Wonder Room</td>\n",
136
+ " <td>NaN</td>\n",
137
+ " <td>NaN</td>\n",
138
+ " <td>10</td>\n",
139
+ " <td>0</td>\n",
140
+ " <td>101</td>\n",
141
+ " <td>The user creates a bizarre area in which Pokém...</td>\n",
142
+ " </tr>\n",
143
+ " <tr>\n",
144
+ " <th>740</th>\n",
145
+ " <td>257</td>\n",
146
+ " <td>Work Up</td>\n",
147
+ " <td>NaN</td>\n",
148
+ " <td>NaN</td>\n",
149
+ " <td>30</td>\n",
150
+ " <td>0</td>\n",
151
+ " <td>101</td>\n",
152
+ " <td>The user is roused, and its Attack and Sp. Atk...</td>\n",
153
+ " </tr>\n",
154
+ " <tr>\n",
155
+ " <th>741</th>\n",
156
+ " <td>258</td>\n",
157
+ " <td>Worry Seed</td>\n",
158
+ " <td>NaN</td>\n",
159
+ " <td>NaN</td>\n",
160
+ " <td>10</td>\n",
161
+ " <td>0</td>\n",
162
+ " <td>100</td>\n",
163
+ " <td>A seed that causes worry is planted on the tar...</td>\n",
164
+ " </tr>\n",
165
+ " <tr>\n",
166
+ " <th>742</th>\n",
167
+ " <td>259</td>\n",
168
+ " <td>Yawn</td>\n",
169
+ " <td>NaN</td>\n",
170
+ " <td>NaN</td>\n",
171
+ " <td>10</td>\n",
172
+ " <td>0</td>\n",
173
+ " <td>101</td>\n",
174
+ " <td>The user lets loose a huge yawn that lulls the...</td>\n",
175
+ " </tr>\n",
176
+ " </tbody>\n",
177
+ "</table>\n",
178
+ "<p>743 rows × 8 columns</p>\n",
179
+ "</div>"
180
+ ],
181
+ "text/plain": [
182
+ " Unnamed: 0 Name Type Cat. PP Att. Acc. \\\n",
183
+ "0 0 Accelerock NaN NaN 20 40 100 \n",
184
+ "1 2 Acrobatics NaN NaN 15 55 100 \n",
185
+ "2 3 Aerial Ace NaN NaN 20 60 101 \n",
186
+ "3 5 Anchor Shot NaN NaN 20 80 100 \n",
187
+ "4 6 Aqua Jet NaN NaN 20 40 100 \n",
188
+ ".. ... ... ... ... .. ... ... \n",
189
+ "738 255 Withdraw NaN NaN 40 0 101 \n",
190
+ "739 256 Wonder Room NaN NaN 10 0 101 \n",
191
+ "740 257 Work Up NaN NaN 30 0 101 \n",
192
+ "741 258 Worry Seed NaN NaN 10 0 100 \n",
193
+ "742 259 Yawn NaN NaN 10 0 101 \n",
194
+ "\n",
195
+ " Effect \n",
196
+ "0 The user smashes into the target at high speed... \n",
197
+ "1 The user nimbly strikes the target. If the use... \n",
198
+ "2 The user confounds the target with speed, then... \n",
199
+ "3 The user entangles the target with its anchor ... \n",
200
+ "4 The user lunges at the target at a speed that ... \n",
201
+ ".. ... \n",
202
+ "738 The user withdraws its body into its hard shel... \n",
203
+ "739 The user creates a bizarre area in which Pokém... \n",
204
+ "740 The user is roused, and its Attack and Sp. Atk... \n",
205
+ "741 A seed that causes worry is planted on the tar... \n",
206
+ "742 The user lets loose a huge yawn that lulls the... \n",
207
+ "\n",
208
+ "[743 rows x 8 columns]"
209
+ ]
210
+ },
211
+ "execution_count": 2,
212
+ "metadata": {},
213
+ "output_type": "execute_result"
214
+ }
215
+ ],
216
+ "source": [
217
+ "df"
218
+ ]
219
+ },
220
+ {
221
+ "cell_type": "code",
222
+ "execution_count": null,
223
+ "id": "e23c088f-4196-4980-af58-e4eaa80fbd5a",
224
+ "metadata": {},
225
+ "outputs": [],
226
+ "source": []
227
+ }
228
+ ],
229
+ "metadata": {
230
+ "kernelspec": {
231
+ "display_name": "Python 3 (ipykernel)",
232
+ "language": "python",
233
+ "name": "python3"
234
+ },
235
+ "language_info": {
236
+ "codemirror_mode": {
237
+ "name": "ipython",
238
+ "version": 3
239
+ },
240
+ "file_extension": ".py",
241
+ "mimetype": "text/x-python",
242
+ "name": "python",
243
+ "nbconvert_exporter": "python",
244
+ "pygments_lexer": "ipython3",
245
+ "version": "3.7.13"
246
+ }
247
+ },
248
+ "nbformat": 4,
249
+ "nbformat_minor": 5
250
+ }
move_scraper.ipynb CHANGED
@@ -302,7 +302,7 @@
302
  },
303
  {
304
  "cell_type": "code",
305
- "execution_count": 90,
306
  "metadata": {},
307
  "outputs": [
308
  {
@@ -346,6 +346,7 @@
346
  " unusable_moves = len(moves.Effect.apply(lambda x: \"This move can't be used\" in x))\n",
347
  " print(\"Removing some old moves... Found \", unusable_moves)\n",
348
  " moves = moves[moves.Effect.apply(lambda x: \"This move can't be used\" not in x)]\n",
 
349
  " return moves\n",
350
  "\n",
351
  "\n",
@@ -360,28 +361,197 @@
360
  "special_df = create_moves_df(special_moves)\n",
361
  "status_df = create_moves_df(status_moves)\n",
362
  "\n",
363
- "moves = pd.concat([physical_df, special_df, status_df])\n"
 
364
  ]
365
  },
366
  {
367
  "cell_type": "code",
368
- "execution_count": 93,
369
  "metadata": {},
370
  "outputs": [
371
  {
372
  "data": {
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
373
  "text/plain": [
374
- "743"
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
375
  ]
376
  },
377
- "execution_count": 93,
378
  "metadata": {},
379
  "output_type": "execute_result"
380
  }
381
  ],
382
- "source": [
383
- "len(moves)"
384
- ]
385
  },
386
  {
387
  "cell_type": "code",
@@ -393,7 +563,7 @@
393
  ],
394
  "metadata": {
395
  "kernelspec": {
396
- "display_name": "Python 3",
397
  "language": "python",
398
  "name": "python3"
399
  },
@@ -407,7 +577,7 @@
407
  "name": "python",
408
  "nbconvert_exporter": "python",
409
  "pygments_lexer": "ipython3",
410
- "version": "3.8.3"
411
  }
412
  },
413
  "nbformat": 4,
 
302
  },
303
  {
304
  "cell_type": "code",
305
+ "execution_count": 95,
306
  "metadata": {},
307
  "outputs": [
308
  {
 
346
  " unusable_moves = len(moves.Effect.apply(lambda x: \"This move can't be used\" in x))\n",
347
  " print(\"Removing some old moves... Found \", unusable_moves)\n",
348
  " moves = moves[moves.Effect.apply(lambda x: \"This move can't be used\" not in x)]\n",
349
+ " moves = moves.replace(\"--\", 0)\n",
350
  " return moves\n",
351
  "\n",
352
  "\n",
 
361
  "special_df = create_moves_df(special_moves)\n",
362
  "status_df = create_moves_df(status_moves)\n",
363
  "\n",
364
+ "moves = pd.concat([physical_df, special_df, status_df])\n",
365
+ "moves.to_csv(\"data/moves.csv\")"
366
  ]
367
  },
368
  {
369
  "cell_type": "code",
370
+ "execution_count": 96,
371
  "metadata": {},
372
  "outputs": [
373
  {
374
  "data": {
375
+ "text/html": [
376
+ "<div>\n",
377
+ "<style scoped>\n",
378
+ " .dataframe tbody tr th:only-of-type {\n",
379
+ " vertical-align: middle;\n",
380
+ " }\n",
381
+ "\n",
382
+ " .dataframe tbody tr th {\n",
383
+ " vertical-align: top;\n",
384
+ " }\n",
385
+ "\n",
386
+ " .dataframe thead th {\n",
387
+ " text-align: right;\n",
388
+ " }\n",
389
+ "</style>\n",
390
+ "<table border=\"1\" class=\"dataframe\">\n",
391
+ " <thead>\n",
392
+ " <tr style=\"text-align: right;\">\n",
393
+ " <th></th>\n",
394
+ " <th>Name</th>\n",
395
+ " <th>Type</th>\n",
396
+ " <th>Cat.</th>\n",
397
+ " <th>PP</th>\n",
398
+ " <th>Att.</th>\n",
399
+ " <th>Acc.</th>\n",
400
+ " <th>Effect</th>\n",
401
+ " </tr>\n",
402
+ " </thead>\n",
403
+ " <tbody>\n",
404
+ " <tr>\n",
405
+ " <th>0</th>\n",
406
+ " <td>Accelerock</td>\n",
407
+ " <td></td>\n",
408
+ " <td></td>\n",
409
+ " <td>20</td>\n",
410
+ " <td>40</td>\n",
411
+ " <td>100</td>\n",
412
+ " <td>The user smashes into the target at high speed...</td>\n",
413
+ " </tr>\n",
414
+ " <tr>\n",
415
+ " <th>2</th>\n",
416
+ " <td>Acrobatics</td>\n",
417
+ " <td></td>\n",
418
+ " <td></td>\n",
419
+ " <td>15</td>\n",
420
+ " <td>55</td>\n",
421
+ " <td>100</td>\n",
422
+ " <td>The user nimbly strikes the target. If the use...</td>\n",
423
+ " </tr>\n",
424
+ " <tr>\n",
425
+ " <th>3</th>\n",
426
+ " <td>Aerial Ace</td>\n",
427
+ " <td></td>\n",
428
+ " <td></td>\n",
429
+ " <td>20</td>\n",
430
+ " <td>60</td>\n",
431
+ " <td>101</td>\n",
432
+ " <td>The user confounds the target with speed, then...</td>\n",
433
+ " </tr>\n",
434
+ " <tr>\n",
435
+ " <th>5</th>\n",
436
+ " <td>Anchor Shot</td>\n",
437
+ " <td></td>\n",
438
+ " <td></td>\n",
439
+ " <td>20</td>\n",
440
+ " <td>80</td>\n",
441
+ " <td>100</td>\n",
442
+ " <td>The user entangles the target with its anchor ...</td>\n",
443
+ " </tr>\n",
444
+ " <tr>\n",
445
+ " <th>6</th>\n",
446
+ " <td>Aqua Jet</td>\n",
447
+ " <td></td>\n",
448
+ " <td></td>\n",
449
+ " <td>20</td>\n",
450
+ " <td>40</td>\n",
451
+ " <td>100</td>\n",
452
+ " <td>The user lunges at the target at a speed that ...</td>\n",
453
+ " </tr>\n",
454
+ " <tr>\n",
455
+ " <th>...</th>\n",
456
+ " <td>...</td>\n",
457
+ " <td>...</td>\n",
458
+ " <td>...</td>\n",
459
+ " <td>...</td>\n",
460
+ " <td>...</td>\n",
461
+ " <td>...</td>\n",
462
+ " <td>...</td>\n",
463
+ " </tr>\n",
464
+ " <tr>\n",
465
+ " <th>255</th>\n",
466
+ " <td>Withdraw</td>\n",
467
+ " <td></td>\n",
468
+ " <td></td>\n",
469
+ " <td>40</td>\n",
470
+ " <td>0</td>\n",
471
+ " <td>101</td>\n",
472
+ " <td>The user withdraws its body into its hard shel...</td>\n",
473
+ " </tr>\n",
474
+ " <tr>\n",
475
+ " <th>256</th>\n",
476
+ " <td>Wonder Room</td>\n",
477
+ " <td></td>\n",
478
+ " <td></td>\n",
479
+ " <td>10</td>\n",
480
+ " <td>0</td>\n",
481
+ " <td>101</td>\n",
482
+ " <td>The user creates a bizarre area in which Pokém...</td>\n",
483
+ " </tr>\n",
484
+ " <tr>\n",
485
+ " <th>257</th>\n",
486
+ " <td>Work Up</td>\n",
487
+ " <td></td>\n",
488
+ " <td></td>\n",
489
+ " <td>30</td>\n",
490
+ " <td>0</td>\n",
491
+ " <td>101</td>\n",
492
+ " <td>The user is roused, and its Attack and Sp. Atk...</td>\n",
493
+ " </tr>\n",
494
+ " <tr>\n",
495
+ " <th>258</th>\n",
496
+ " <td>Worry Seed</td>\n",
497
+ " <td></td>\n",
498
+ " <td></td>\n",
499
+ " <td>10</td>\n",
500
+ " <td>0</td>\n",
501
+ " <td>100</td>\n",
502
+ " <td>A seed that causes worry is planted on the tar...</td>\n",
503
+ " </tr>\n",
504
+ " <tr>\n",
505
+ " <th>259</th>\n",
506
+ " <td>Yawn</td>\n",
507
+ " <td></td>\n",
508
+ " <td></td>\n",
509
+ " <td>10</td>\n",
510
+ " <td>0</td>\n",
511
+ " <td>101</td>\n",
512
+ " <td>The user lets loose a huge yawn that lulls the...</td>\n",
513
+ " </tr>\n",
514
+ " </tbody>\n",
515
+ "</table>\n",
516
+ "<p>743 rows × 7 columns</p>\n",
517
+ "</div>"
518
+ ],
519
  "text/plain": [
520
+ "0 Name Type Cat. PP Att. Acc. \\\n",
521
+ "0 Accelerock 20 40 100 \n",
522
+ "2 Acrobatics 15 55 100 \n",
523
+ "3 Aerial Ace 20 60 101 \n",
524
+ "5 Anchor Shot 20 80 100 \n",
525
+ "6 Aqua Jet 20 40 100 \n",
526
+ ".. ... ... ... .. ... ... \n",
527
+ "255 Withdraw 40 0 101 \n",
528
+ "256 Wonder Room 10 0 101 \n",
529
+ "257 Work Up 30 0 101 \n",
530
+ "258 Worry Seed 10 0 100 \n",
531
+ "259 Yawn 10 0 101 \n",
532
+ "\n",
533
+ "0 Effect \n",
534
+ "0 The user smashes into the target at high speed... \n",
535
+ "2 The user nimbly strikes the target. If the use... \n",
536
+ "3 The user confounds the target with speed, then... \n",
537
+ "5 The user entangles the target with its anchor ... \n",
538
+ "6 The user lunges at the target at a speed that ... \n",
539
+ ".. ... \n",
540
+ "255 The user withdraws its body into its hard shel... \n",
541
+ "256 The user creates a bizarre area in which Pokém... \n",
542
+ "257 The user is roused, and its Attack and Sp. Atk... \n",
543
+ "258 A seed that causes worry is planted on the tar... \n",
544
+ "259 The user lets loose a huge yawn that lulls the... \n",
545
+ "\n",
546
+ "[743 rows x 7 columns]"
547
  ]
548
  },
549
+ "execution_count": 96,
550
  "metadata": {},
551
  "output_type": "execute_result"
552
  }
553
  ],
554
+ "source": []
 
 
555
  },
556
  {
557
  "cell_type": "code",
 
563
  ],
564
  "metadata": {
565
  "kernelspec": {
566
+ "display_name": "Python 3 (ipykernel)",
567
  "language": "python",
568
  "name": "python3"
569
  },
 
577
  "name": "python",
578
  "nbconvert_exporter": "python",
579
  "pygments_lexer": "ipython3",
580
+ "version": "3.7.11"
581
  }
582
  },
583
  "nbformat": 4,
rnn_generator.ipynb CHANGED
The diff for this file is too large to render. See raw diff