wuyangming committed
Commit
9693451
1 Parent(s): afecc76

language_modeling_ipynb

language_modeling_ipynb.ipynb ADDED
The diff for this file is too large to render.
 
language_modeling_ipynb.py ADDED
# -*- coding: utf-8 -*-
"""Copy of "language_modeling.ipynb"

Automatically generated by Colab.

Original file is located at
    https://colab.research.google.com/drive/1baqtirf_2hHx2-byvSi0iZo4g_5Rm_nZ
"""

# Transformers installation
! pip install transformers datasets
# To install from source instead of the last release, comment the command above and uncomment the following one.
# ! pip install git+https://github.com/huggingface/transformers.git

"""# Causal language modeling

There are two types of language modeling, causal and masked. This guide illustrates causal language modeling.
Causal language models are frequently used for text generation. You can use these models for creative applications like
choosing your own text adventure or an intelligent coding assistant like Copilot or CodeParrot.
"""

#@title
from IPython.display import HTML

HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/Vpjb1lu0MDk?rel=0&amp;controls=0&amp;showinfo=0" frameborder="0" allowfullscreen></iframe>')

"""Causal language modeling predicts the next token in a sequence of tokens, and the model can only attend to tokens on
the left. This means the model cannot see future tokens. GPT-2 is an example of a causal language model.
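
To make "can only attend to tokens on the left" concrete, here is a minimal sketch (added for illustration, not part of
the original guide) of the causal attention mask such a model applies internally:

```py
import torch

seq_len = 5
# Lower-triangular boolean mask: position i may attend to positions 0..i only.
causal_mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
print(causal_mask)  # row 4 sees tokens 0-4; no row sees a future token
```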

This guide will show you how to:

1. Finetune [DistilGPT2](https://huggingface.co/distilgpt2) on the [r/askscience](https://www.reddit.com/r/askscience/) subset of the [ELI5](https://huggingface.co/datasets/eli5) dataset.
2. Use your finetuned model for inference.

<Tip>
You can finetune other architectures for causal language modeling following the same steps in this guide.
Choose one of the following architectures:

<!--This tip is automatically generated by `make fix-copies`, do not fill manually!-->
[BART](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/bart), [BERT](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/bert), [Bert Generation](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/bert-generation), [BigBird](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/big_bird), [BigBird-Pegasus](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/bigbird_pegasus), [BioGpt](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/biogpt), [Blenderbot](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/blenderbot), [BlenderbotSmall](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/blenderbot-small), [BLOOM](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/bloom), [CamemBERT](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/camembert), [CodeGen](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/codegen), [CPM-Ant](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/cpmant), [CTRL](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/ctrl), [Data2VecText](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/data2vec-text), [ELECTRA](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/electra), [ERNIE](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/ernie), [GIT](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/git), [GPT-Sw3](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/gpt-sw3), [OpenAI GPT-2](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/gpt2), [GPTBigCode](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/gpt_bigcode), [GPT Neo](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/gpt_neo), [GPT NeoX](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/gpt_neox), [GPT NeoX Japanese](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/gpt_neox_japanese), [GPT-J](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/gptj), [LLaMA](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/llama), [Marian](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/marian), [mBART](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/mbart), [MEGA](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/mega), [Megatron-BERT](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/megatron-bert), [MVP](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/mvp), [OpenLlama](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/open-llama), [OpenAI GPT](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/openai-gpt), [OPT](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/opt), [Pegasus](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/pegasus), [PLBart](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/plbart), [ProphetNet](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/prophetnet), [QDQBert](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/qdqbert), [Reformer](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/reformer), [RemBERT](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/rembert), [RoBERTa](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/roberta), [RoBERTa-PreLayerNorm](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/roberta-prelayernorm), [RoCBert](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/roc_bert), [RoFormer](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/roformer), [RWKV](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/rwkv), [Speech2Text2](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/speech_to_text_2), [Transformer-XL](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/transfo-xl), [TrOCR](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/trocr), [XGLM](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/xglm), [XLM](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/xlm), [XLM-ProphetNet](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/xlm-prophetnet), [XLM-RoBERTa](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/xlm-roberta), [XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/xlm-roberta-xl), [XLNet](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/xlnet), [X-MOD](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/xmod)

<!--End of the generated tip-->

</Tip>

Before you begin, make sure you have all the necessary libraries installed:

```bash
pip install transformers datasets evaluate
```

We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:
"""

from huggingface_hub import notebook_login

notebook_login()

"""## Load ELI5 dataset

Start by loading a smaller subset of the r/askscience subset of the ELI5 dataset from the 🤗 Datasets library.
This'll give you a chance to experiment and make sure everything works before spending more time training on the full dataset.
"""

from datasets import load_dataset

eli5 = load_dataset("eli5", split="train_asks[:5000]")

"""Split the dataset's `train_asks` split into a train and test set with the [train_test_split](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.train_test_split) method:"""

eli5 = eli5.train_test_split(test_size=0.2)

"""Then take a look at an example:"""

eli5["train"][0]

"""While this may look like a lot, you're only really interested in the `text` field. What's cool about language modeling
tasks is you don't need labels (also known as an unsupervised task) because the next word *is* the label.

## Preprocess
"""

#@title
from IPython.display import HTML

HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/ma1TrR7gE7I?rel=0&amp;controls=0&amp;showinfo=0" frameborder="0" allowfullscreen></iframe>')

"""The next step is to load a DistilGPT2 tokenizer to process the `text` subfield:"""

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")

"""You'll notice from the example above that the `text` field is actually nested inside `answers`. This means you'll need to
extract the `text` subfield from its nested structure with the [`flatten`](https://huggingface.co/docs/datasets/process.html#flatten) method:
"""

eli5 = eli5.flatten()
eli5["train"][0]

"""Each subfield is now a separate column, as indicated by the `answers` prefix, and the `text` field is now a list. Instead
of tokenizing each sentence separately, convert the list to a string so you can jointly tokenize them.

Here is a first preprocessing function to join the list of strings for each example and tokenize the result:
"""

def preprocess_function(examples):
    return tokenizer([" ".join(x) for x in examples["answers.text"]])

"""To apply this preprocessing function over the entire dataset, use the 🤗 Datasets [map](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.map) method. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once, and increasing the number of processes with `num_proc`. Remove any columns you don't need:"""

tokenized_eli5 = eli5.map(
    preprocess_function,
    batched=True,
    num_proc=4,
    remove_columns=eli5["train"].column_names,
)
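
"""Optionally (a quick check added here, not part of the original guide), peek at a few tokenized lengths to see why the
grouping step below is needed; many joined answers are longer than DistilGPT2's 1024-token context window:
"""

# Lengths of the first 100 tokenized training examples; expect a wide spread, some well over 1024.
sample_lengths = [len(ids) for ids in tokenized_eli5["train"][:100]["input_ids"]]
print(min(sample_lengths), max(sample_lengths))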

"""This dataset contains the token sequences, but some of these are longer than the maximum input length for the model.

You can now use a second preprocessing function to
- concatenate all the sequences
- split the concatenated sequences into shorter chunks defined by `block_size`, which should be both shorter than the maximum input length and short enough for your GPU RAM.
"""

block_size = 128


def group_texts(examples):
    # Concatenate all texts.
    concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated_examples[list(examples.keys())[0]])
    # We drop the small remainder; we could add padding instead if the model supported it.
    # You can customize this part to your needs.
    if total_length >= block_size:
        total_length = (total_length // block_size) * block_size
    # Split by chunks of block_size.
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated_examples.items()
    }
    # For causal LM, the labels are the inputs; the model shifts them internally.
    result["labels"] = result["input_ids"].copy()
    return result

"""Apply the `group_texts` function over the entire dataset:"""

lm_dataset = tokenized_eli5.map(group_texts, batched=True, num_proc=4)
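
"""Each example should now be exactly `block_size` tokens long, since `group_texts` drops the remainder (an optional
sanity check, not part of the original guide):
"""

assert all(len(ids) == block_size for ids in lm_dataset["train"][:10]["input_ids"])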

"""Now create a batch of examples using [DataCollatorForLanguageModeling](https://huggingface.co/docs/transformers/main/en/main_classes/data_collator#transformers.DataCollatorForLanguageModeling). It's more efficient to *dynamically pad* the
sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.

Use the end-of-sequence token as the padding token and set `mlm=False`. This will use the inputs as labels shifted to the right by one element:
"""

from transformers import DataCollatorForLanguageModeling

tokenizer.pad_token = tokenizer.eos_token
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
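
"""To see what the collator produces (an optional check, not part of the original guide), collate two examples and
inspect the result; with `mlm=False` the collator copies `input_ids` into `labels`, and the one-position shift happens
inside the model's forward pass:
"""

batch = data_collator([lm_dataset["train"][i] for i in range(2)])
print(batch["input_ids"].shape, batch["labels"].shape)  # both (2, block_size)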

"""## Train

<Tip>

If you aren't familiar with finetuning a model with the [Trainer](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.Trainer), take a look at the [basic tutorial](https://huggingface.co/docs/transformers/main/en/tasks/../training#train-with-pytorch-trainer)!

</Tip>

You're ready to start training your model now! Load DistilGPT2 with [AutoModelForCausalLM](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoModelForCausalLM):
"""

from transformers import AutoModelForCausalLM, TrainingArguments, Trainer

model = AutoModelForCausalLM.from_pretrained("distilgpt2")
175
+
176
+ """At this point, only three steps remain:
177
+
178
+ 1. Define your training hyperparameters in [TrainingArguments](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.TrainingArguments). The only required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model).
179
+ 2. Pass the training arguments to [Trainer](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.Trainer) along with the model, datasets, and data collator.
180
+ 3. Call [train()](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.Trainer.train) to finetune your model.
181
+ """
182
+
183
+ training_args = TrainingArguments(
184
+ output_dir="my_awesome_eli5_clm-model",
185
+ evaluation_strategy="epoch",
186
+ learning_rate=2e-5,
187
+ weight_decay=0.01,
188
+ push_to_hub=True,
189
+ )
190
+
191
+ trainer = Trainer(
192
+ model=model,
193
+ args=training_args,
194
+ train_dataset=lm_dataset["train"],
195
+ eval_dataset=lm_dataset["test"],
196
+ data_collator=data_collator,
197
+ )
198
+
199
+ trainer.train()

"""Once training is completed, use the [evaluate()](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.Trainer.evaluate) method to evaluate your model and get its perplexity, the exponential of the evaluation cross-entropy loss:"""

import math

eval_results = trainer.evaluate()
print(f"Perplexity: {math.exp(eval_results['eval_loss']):.2f}")

"""Then share your model to the Hub with the [push_to_hub()](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.Trainer.push_to_hub) method so everyone can use your model:"""

trainer.push_to_hub()

"""<Tip>

For a more in-depth example of how to finetune a model for causal language modeling, take a look at the corresponding
[PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb)
or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb).

</Tip>

## Inference

Great, now that you've finetuned a model, you can use it for inference!

Come up with a prompt you'd like to generate text from:
"""

prompt = "Somatic hypermutation allows the immune system to"

"""The simplest way to try out your finetuned model for inference is to use it in a [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline). Instantiate a `pipeline` for text generation with your model, and pass your text to it:"""

from transformers import pipeline

generator = pipeline("text-generation", model="my_awesome_eli5_clm-model")
generator(prompt)
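
"""The pipeline also forwards generation keyword arguments to the underlying `generate()` call, so you can control the
output length and sampling behavior directly (an optional variation, not part of the original guide):
"""

generator(prompt, max_new_tokens=50, do_sample=True, top_k=50)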

"""Tokenize the text and return the `input_ids` as PyTorch tensors:"""

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("my_awesome_eli5_clm-model")
inputs = tokenizer(prompt, return_tensors="pt").input_ids

"""Use the [generate()](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.GenerationMixin.generate) method to generate text.
For more details about the different text generation strategies and parameters for controlling generation, check out the [Text generation strategies](https://huggingface.co/docs/transformers/main/en/tasks/../generation_strategies) page.
"""

from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("my_awesome_eli5_clm-model")
outputs = model.generate(inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)

"""Decode the generated token ids back into text:"""

tokenizer.batch_decode(outputs, skip_special_tokens=True)
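
"""Sampling produces different text on every run. If you want reproducible output (an optional variation, not part of
the original guide), switch to a deterministic strategy such as beam search:
"""

# Beam search: deterministic, often more fluent, but less diverse than sampling.
beam_outputs = model.generate(inputs, max_new_tokens=100, num_beams=4, do_sample=False)
tokenizer.batch_decode(beam_outputs, skip_special_tokens=True)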