# Bert2GPT2 Summarization with 🤗 EncoderDecoder Framework

This model is a Bert2GPT2 model fine-tuned on summarization.

Bert2GPT2 is an `EncoderDecoderModel`, meaning that the encoder is a `bert-base-uncased`
BERT model and the decoder is a `gpt2` GPT2 model. Leveraging the [EncoderDecoder Framework](https://huggingface.co/transformers/model_doc/encoderdecoder.html#encoder-decoder-models), the
two pretrained models can simply be loaded into the framework via:

```python
bert2gpt2 = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "gpt2")
```

The decoder of an `EncoderDecoder` model needs cross-attention layers and usually makes use of causal
masking for auto-regressive generation.
Thus, `bert2gpt2` was fine-tuned on the `CNN/Daily Mail` dataset and the resulting model
`bert2gpt2-cnn_dailymail-fp16` is uploaded here.
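
As a quick sanity check (not part of the original card), the snippet below is a minimal sketch showing that loading the two checkpoints via `from_encoder_decoder_pretrained` switches the GPT2 decoder into decoder mode and adds cross-attention layers; those cross-attention weights start out randomly initialized, which is why fine-tuning is needed. It assumes the decoder config exposes the `is_decoder` and `add_cross_attention` flags:

```python
from transformers import EncoderDecoderModel

# minimal sketch: load the raw encoder/decoder pair as above
bert2gpt2 = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "gpt2")

# the decoder config is switched to decoder mode (causal masking) and
# cross-attention layers are added on top of the pretrained GPT2 weights
print(bert2gpt2.decoder.config.is_decoder)           # expected: True
print(bert2gpt2.decoder.config.add_cross_attention)  # expected: True
```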

## Example

The model is by no means a state-of-the-art model, but nevertheless
produces reasonable summarization results. It was mainly fine-tuned
as a proof-of-concept for the 🤗 EncoderDecoder Framework.

The model can be used as follows:

```python
from transformers import BertTokenizer, GPT2Tokenizer, EncoderDecoderModel

model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2gpt2-cnn_dailymail-fp16")
# reuse tokenizer from bert2bert encoder-decoder model
bert_tokenizer = BertTokenizer.from_pretrained("patrickvonplaten/bert2bert-cnn_dailymail-fp16")

article = """(CNN)Sigma Alpha Epsilon is under fire for a video showing party-bound fraternity members singing a racist chant. SAE's national chapter suspended the students, but University of Oklahoma President David Boren took it a step further, saying the university's affiliation with the fraternity is permanently done. The news is shocking, but it's not the first time SAE has faced controversy. SAE was founded March 9, 1856, at the University of Alabama, five years before the American Civil War, according to the fraternity website. When the war began, the group had fewer than 400 members, of which "369 went to war for the Confederate States and seven for the Union Army," the website says. The fraternity now boasts more than 200,000 living alumni, along with about 15,000 undergraduates populating 219 chapters and 20 "colonies" seeking full membership at universities. SAE has had to work hard to change recently after a string of member deaths, many blamed on the hazing of new recruits, SAE national President Bradley Cohen wrote in a message on the fraternity's website. The fraternity's website lists more than 130 chapters cited or suspended for "health and safety incidents" since 2010. At least 30 of the incidents involved hazing, and dozens more involved alcohol. However, the list is missing numerous incidents from recent months. Among them, according to various media outlets: Yale University banned the SAEs from campus activities last month after members allegedly tried to interfere with a sexual misconduct investigation connected to an initiation rite. Stanford University in December suspended SAE housing privileges after finding sorority members attending a fraternity function were subjected to graphic sexual content. And Johns Hopkins University in November suspended the fraternity for underage drinking. "The media has labeled us as the 'nation's deadliest fraternity,' " Cohen said. In 2011, for example, a student died while being coerced into excessive alcohol consumption, according to a lawsuit. SAE's previous insurer dumped the fraternity. "As a result, we are paying Lloyd's of London the highest insurance rates in the Greek-letter world," Cohen said. Universities have turned down SAE's attempts to open new chapters, and the fraternity had to close 12 in 18 months over hazing incidents."""

input_ids = bert_tokenizer(article, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)

# we need the GPT2 tokenizer to decode the generated output tokens
gpt2_tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

print(gpt2_tokenizer.decode(output_ids[0], skip_special_tokens=True))
# should produce
# SAE's national chapter suspended the students, but university president says it's permanent.
# The fraternity has had to deal with a string of incidents since 2010.
# SAE has more than 200,000 members, many of whom are students.
# A student died while being coerced into drinking alcohol.
```
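
If a GPU is available, the same example can also be run in half precision, which matches the fp16 setup this checkpoint was trained with. The continuation below is only a sketch and not part of the original card; it assumes a CUDA device and reuses `article`, `model`, `bert_tokenizer`, and `gpt2_tokenizer` from the snippet above:

```python
# continuation of the snippet above -- assumes a CUDA-capable GPU is available
model = model.half().to("cuda")

input_ids = bert_tokenizer(article, return_tensors="pt").input_ids.to("cuda")
output_ids = model.generate(input_ids)

print(gpt2_tokenizer.decode(output_ids[0], skip_special_tokens=True))
```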

## Training script

**IMPORTANT**: In order for this code to work, make sure you check out the branch
[more_general_trainer_metric](https://github.com/huggingface/transformers/tree/more_general_trainer_metric), which slightly adapts
the `Trainer` for `EncoderDecoderModel`s according to this PR: https://github.com/huggingface/transformers/pull/5840.
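
One quick way to verify that the adapted branch (rather than a stock release) is installed is to check for the `predict_from_generate` argument that the training script below passes to `TrainingArguments`, since stock releases do not accept it. This check is only a sketch and is not part of the original card:

```python
# minimal sketch (not from the original card): the training script below passes
# predict_from_generate=True to TrainingArguments, which only exists on the adapted
# branch, so its presence is a quick way to verify the right install.
import inspect

from transformers import TrainingArguments

if "predict_from_generate" in inspect.signature(TrainingArguments.__init__).parameters:
    print("adapted `more_general_trainer_metric` branch detected")
else:
    print("stock transformers detected -- the training script below will not run as-is")
```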

The following code shows the complete training script that was used to fine-tune
`bert2gpt2-cnn_dailymail-fp16` for reproducibility. Training took ~11h on a standard GPU.

```python
#!/usr/bin/env python3
import nlp
import logging
from transformers import BertTokenizer, GPT2Tokenizer, EncoderDecoderModel, Trainer, TrainingArguments

logging.basicConfig(level=logging.INFO)

model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-cased", "gpt2")
# cache is currently not supported by EncoderDecoder framework
model.decoder.config.use_cache = False
bert_tokenizer = BertTokenizer.from_pretrained("bert-base-cased")

# CLS token will work as BOS token
bert_tokenizer.bos_token = bert_tokenizer.cls_token

# SEP token will work as EOS token
bert_tokenizer.eos_token = bert_tokenizer.sep_token


# make sure GPT2 appends EOS at the beginning and end
def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
    outputs = [self.bos_token_id] + token_ids_0 + [self.eos_token_id]
    return outputs


GPT2Tokenizer.build_inputs_with_special_tokens = build_inputs_with_special_tokens
gpt2_tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
# set pad_token_id to unk_token_id -> be careful here as unk_token_id == eos_token_id == bos_token_id
gpt2_tokenizer.pad_token = gpt2_tokenizer.unk_token


# set decoding params
model.config.decoder_start_token_id = gpt2_tokenizer.bos_token_id
model.config.eos_token_id = gpt2_tokenizer.eos_token_id
model.config.max_length = 142
model.config.min_length = 56
model.config.no_repeat_ngram_size = 3
model.config.early_stopping = True
model.config.length_penalty = 2.0
model.config.num_beams = 4

# load train and validation data
train_dataset = nlp.load_dataset("cnn_dailymail", "3.0.0", split="train")
val_dataset = nlp.load_dataset("cnn_dailymail", "3.0.0", split="validation[:5%]")

# load rouge for validation
rouge = nlp.load_metric("rouge", experiment_id=1)

encoder_length = 512
decoder_length = 128
batch_size = 16


# map data correctly
def map_to_encoder_decoder_inputs(batch):    # Tokenizer will automatically set [BOS] <text> [EOS]
    # use bert tokenizer here for encoder
    inputs = bert_tokenizer(batch["article"], padding="max_length", truncation=True, max_length=encoder_length)
    # force summarization <= 128
    outputs = gpt2_tokenizer(batch["highlights"], padding="max_length", truncation=True, max_length=decoder_length)

    batch["input_ids"] = inputs.input_ids
    batch["attention_mask"] = inputs.attention_mask
    batch["decoder_input_ids"] = outputs.input_ids
    batch["labels"] = outputs.input_ids.copy()
    batch["decoder_attention_mask"] = outputs.attention_mask

    # complicated list comprehension here because pad_token_id alone is not good enough to know whether label should be excluded or not
    batch["labels"] = [
        [-100 if mask == 0 else token for mask, token in mask_and_tokens] for mask_and_tokens in [zip(masks, labels) for masks, labels in zip(batch["decoder_attention_mask"], batch["labels"])]
    ]

    assert all([len(x) == encoder_length for x in inputs.input_ids])
    assert all([len(x) == decoder_length for x in outputs.input_ids])

    return batch


def compute_metrics(pred):
    labels_ids = pred.label_ids
    pred_ids = pred.predictions

    # all unnecessary tokens are removed
    pred_str = gpt2_tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
    labels_ids[labels_ids == -100] = gpt2_tokenizer.eos_token_id
    label_str = gpt2_tokenizer.batch_decode(labels_ids, skip_special_tokens=True)

    rouge_output = rouge.compute(predictions=pred_str, references=label_str, rouge_types=["rouge2"])["rouge2"].mid

    return {
        "rouge2_precision": round(rouge_output.precision, 4),
        "rouge2_recall": round(rouge_output.recall, 4),
        "rouge2_fmeasure": round(rouge_output.fmeasure, 4),
    }


# make train dataset ready
train_dataset = train_dataset.map(
    map_to_encoder_decoder_inputs, batched=True, batch_size=batch_size, remove_columns=["article", "highlights"],
)
train_dataset.set_format(
    type="torch", columns=["input_ids", "attention_mask", "decoder_input_ids", "decoder_attention_mask", "labels"],
)

# same for validation dataset
val_dataset = val_dataset.map(
    map_to_encoder_decoder_inputs, batched=True, batch_size=batch_size, remove_columns=["article", "highlights"],
)
val_dataset.set_format(
    type="torch", columns=["input_ids", "attention_mask", "decoder_input_ids", "decoder_attention_mask", "labels"],
)

# set training arguments - these params are not really tuned, feel free to change
training_args = TrainingArguments(
    output_dir="./",
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    predict_from_generate=True,
    evaluate_during_training=True,
    do_train=True,
    do_eval=True,
    logging_steps=1000,
    save_steps=1000,
    eval_steps=1000,
    overwrite_output_dir=True,
    warmup_steps=2000,
    save_total_limit=10,
    fp16=True,
)

# instantiate trainer
trainer = Trainer(
    model=model,
    args=training_args,
    compute_metrics=compute_metrics,
    train_dataset=train_dataset,
    eval_dataset=val_dataset,
)

# start training
trainer.train()
```

## Evaluation

The following script evaluates the model on the test set of CNN/Daily Mail.

```python
#!/usr/bin/env python3
import nlp
from transformers import BertTokenizer, GPT2Tokenizer, EncoderDecoderModel

model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2gpt2-cnn_dailymail-fp16")
model.to("cuda")

bert_tokenizer = BertTokenizer.from_pretrained("bert-base-cased")

# CLS token will work as BOS token
bert_tokenizer.bos_token = bert_tokenizer.cls_token

# SEP token will work as EOS token
bert_tokenizer.eos_token = bert_tokenizer.sep_token


# make sure GPT2 appends EOS at the beginning and end
def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
    outputs = [self.bos_token_id] + token_ids_0 + [self.eos_token_id]
    return outputs


GPT2Tokenizer.build_inputs_with_special_tokens = build_inputs_with_special_tokens
gpt2_tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
# set pad_token_id to unk_token_id -> be careful here as unk_token_id == eos_token_id == bos_token_id
gpt2_tokenizer.pad_token = gpt2_tokenizer.unk_token


# set decoding params
model.config.decoder_start_token_id = gpt2_tokenizer.bos_token_id
model.config.eos_token_id = gpt2_tokenizer.eos_token_id
model.config.max_length = 142
model.config.min_length = 56
model.config.no_repeat_ngram_size = 3
model.config.early_stopping = True
model.config.length_penalty = 2.0
model.config.num_beams = 4

test_dataset = nlp.load_dataset("cnn_dailymail", "3.0.0", split="test")
batch_size = 64


# map data correctly
def generate_summary(batch):
    # Tokenizer will automatically set [BOS] <text> [EOS]
    # cut off at BERT max length 512
    inputs = bert_tokenizer(batch["article"], padding="max_length", truncation=True, max_length=512, return_tensors="pt")
    input_ids = inputs.input_ids.to("cuda")
    attention_mask = inputs.attention_mask.to("cuda")

    outputs = model.generate(input_ids, attention_mask=attention_mask)

    # all special tokens will be removed
    output_str = gpt2_tokenizer.batch_decode(outputs, skip_special_tokens=True)

    batch["pred"] = output_str

    return batch


results = test_dataset.map(generate_summary, batched=True, batch_size=batch_size, remove_columns=["article"])

# load rouge for validation
rouge = nlp.load_metric("rouge")

pred_str = results["pred"]
label_str = results["highlights"]

rouge_output = rouge.compute(predictions=pred_str, references=label_str, rouge_types=["rouge2"])["rouge2"].mid

print(rouge_output)
```

The obtained results should be:

|                    | Rouge2 (mid) precision | Rouge2 (mid) recall | Rouge2 (mid) fmeasure |
|--------------------|:----------------------:|:-------------------:|:---------------------:|
| **CNN/Daily Mail** | 14.42                  | 16.99               | **15.16**             |