ydshieh committed on
Commit 3eb3e05
1 Parent(s): 8da3271

upload TF checkpoint

Files changed (6)
  1. README.md +52 -0
  2. config.json +168 -0
  3. special_tokens_map.json +1 -0
  4. tf_model.h5 +3 -0
  5. tokenizer_config.json +1 -0
  6. vocab.txt +0 -0
README.md ADDED
@@ -0,0 +1,52 @@
# Bert2Bert Summarization with 🤗 EncoderDecoder Framework

This model is a Bert2Bert model fine-tuned on summarization.

Bert2Bert is an `EncoderDecoderModel`, meaning that both the encoder and the decoder are `bert-base-uncased`
BERT models. Leveraging the [EncoderDecoder framework](https://huggingface.co/transformers/model_doc/encoderdecoder.html#encoder-decoder-models), the
two pretrained models can simply be loaded into the framework via:

```python
from transformers import EncoderDecoderModel

bert2bert = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")
```

The decoder of an `EncoderDecoder` model needs cross-attention layers and usually makes use of causal
masking for auto-regressive generation.
Thus, `bert2bert` was fine-tuned on the `CNN/Daily Mail` dataset, and the resulting model
`bert2bert-cnn_dailymail-fp16` is uploaded here.
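
These settings are visible in this checkpoint's `config.json` below: the decoder carries `"is_decoder": true` and `"add_cross_attention": true`, while the encoder does not. As a quick sanity check, here is a minimal sketch (not specific to this checkpoint) that inspects the same flags on a freshly warm-started model:

```python
from transformers import EncoderDecoderModel

# from_encoder_decoder_pretrained re-configures the decoder copy of BERT
# as a causal decoder with cross-attention layers.
bert2bert = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)

print(bert2bert.config.decoder.is_decoder)           # True
print(bert2bert.config.decoder.add_cross_attention)  # True
print(bert2bert.config.encoder.is_decoder)           # False
```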

## Example

The model is by no means a state-of-the-art model, but nevertheless
produces reasonable summarization results. It was mainly fine-tuned
as a proof-of-concept for the 🤗 EncoderDecoder Framework.

The model can be used as follows:

```python
from transformers import BertTokenizer, EncoderDecoderModel

model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert-cnn_dailymail-fp16")
tokenizer = BertTokenizer.from_pretrained("patrickvonplaten/bert2bert-cnn_dailymail-fp16")

article = """(CNN)Sigma Alpha Epsilon is under fire for a video showing party-bound fraternity members singing a racist chant. SAE's national chapter suspended the students, but University of Oklahoma President David Boren took it a step further, saying the university's affiliation with the fraternity is permanently done. The news is shocking, but it's not the first time SAE has faced controversy. SAE was founded March 9, 1856, at the University of Alabama, five years before the American Civil War, according to the fraternity website. When the war began, the group had fewer than 400 members, of which "369 went to war for the Confederate States and seven for the Union Army," the website says. The fraternity now boasts more than 200,000 living alumni, along with about 15,000 undergraduates populating 219 chapters and 20 "colonies" seeking full membership at universities. SAE has had to work hard to change recently after a string of member deaths, many blamed on the hazing of new recruits, SAE national President Bradley Cohen wrote in a message on the fraternity's website. The fraternity's website lists more than 130 chapters cited or suspended for "health and safety incidents" since 2010. At least 30 of the incidents involved hazing, and dozens more involved alcohol. However, the list is missing numerous incidents from recent months. Among them, according to various media outlets: Yale University banned the SAEs from campus activities last month after members allegedly tried to interfere with a sexual misconduct investigation connected to an initiation rite. Stanford University in December suspended SAE housing privileges after finding sorority members attending a fraternity function were subjected to graphic sexual content. And Johns Hopkins University in November suspended the fraternity for underage drinking. "The media has labeled us as the 'nation's deadliest fraternity,' " Cohen said. In 2011, for example, a student died while being coerced into excessive alcohol consumption, according to a lawsuit. SAE's previous insurer dumped the fraternity. "As a result, we are paying Lloyd's of London the highest insurance rates in the Greek-letter world," Cohen said. Universities have turned down SAE's attempts to open new chapters, and the fraternity had to close 12 in 18 months over hazing incidents."""

input_ids = tokenizer(article, return_tensors="pt").input_ids
output_ids = model.generate(input_ids)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
# should produce
# sae was founded in 1856, five years before the civil war. the fraternity has had to work hard to change recently. the university of oklahoma president says the university's affiliation with the fraternity is permanently done. the sae has had a string of members in recent months.
```
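
Since this commit adds a TensorFlow checkpoint (`tf_model.h5`), the same repository should also be usable from TensorFlow. A minimal sketch, assuming a `transformers` version that ships `TFEncoderDecoderModel` (the `config.json` here was written by `4.12.0.dev0`) and reusing the `article` string from the example above:

```python
from transformers import BertTokenizer, TFEncoderDecoderModel

model = TFEncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert-cnn_dailymail-fp16")
tokenizer = BertTokenizer.from_pretrained("patrickvonplaten/bert2bert-cnn_dailymail-fp16")

# Same pipeline as above, but with TensorFlow tensors.
input_ids = tokenizer(article, return_tensors="tf").input_ids
output_ids = model.generate(input_ids)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```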

## Training script

Please follow this tutorial to see how to warm-start a BERT2BERT model:
https://colab.research.google.com/drive/1WIk2bxglElfZewOHboPFNj8H44_VAyKE?usp=sharing

The obtained results should be:

| - | Rouge2 - mid - precision | Rouge2 - mid - recall | Rouge2 - mid - fmeasure |
|----------|:-------------:|:------:|:------:|
| **CNN/Daily Mail** | 16.12 | 17.07 | **16.1** |
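
For reference, the warm-starting recipe in that notebook sets the special tokens and generation hyperparameters that this checkpoint stores in its `config.json` (see below). A minimal sketch of that configuration step; the numeric values are taken from the `config.json` in this repository, the rest is an assumption about the training setup:

```python
from transformers import BertTokenizerFast, EncoderDecoderModel

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
bert2bert = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)

# Special tokens: [CLS] (id 101) starts decoding, [SEP] (id 102) ends it,
# matching "decoder_start_token_id" and "eos_token_id" in config.json.
bert2bert.config.decoder_start_token_id = tokenizer.cls_token_id
bert2bert.config.eos_token_id = tokenizer.sep_token_id
bert2bert.config.pad_token_id = tokenizer.pad_token_id

# Generation hyperparameters as stored in config.json.
bert2bert.config.max_length = 142
bert2bert.config.min_length = 56
bert2bert.config.no_repeat_ngram_size = 3
```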
config.json ADDED
@@ -0,0 +1,168 @@
{
  "_name_or_path": "patrickvonplaten/bert2bert-cnn_dailymail-fp16",
  "architectures": [
    "EncoderDecoderModel"
  ],
  "decoder": {
    "_name_or_path": "",
    "add_cross_attention": true,
    "architectures": [
      "BertLMHeadModel"
    ],
    "attention_probs_dropout_prob": 0.1,
    "bad_words_ids": null,
    "bos_token_id": null,
    "chunk_size_feed_forward": 0,
    "classifier_dropout": null,
    "cross_attention_hidden_size": null,
    "decoder_start_token_id": null,
    "diversity_penalty": 0.0,
    "do_sample": false,
    "early_stopping": false,
    "encoder_no_repeat_ngram_size": 0,
    "eos_token_id": null,
    "finetuning_task": null,
    "forced_bos_token_id": null,
    "forced_eos_token_id": null,
    "gradient_checkpointing": false,
    "hidden_act": "gelu",
    "hidden_dropout_prob": 0.1,
    "hidden_size": 768,
    "id2label": {
      "0": "LABEL_0",
      "1": "LABEL_1"
    },
    "initializer_range": 0.02,
    "intermediate_size": 3072,
    "is_decoder": true,
    "is_encoder_decoder": false,
    "label2id": {
      "LABEL_0": 0,
      "LABEL_1": 1
    },
    "layer_norm_eps": 1e-12,
    "length_penalty": 1.0,
    "max_length": 20,
    "max_position_embeddings": 512,
    "min_length": 0,
    "model_type": "bert",
    "no_repeat_ngram_size": 0,
    "num_attention_heads": 12,
    "num_beam_groups": 1,
    "num_beams": 1,
    "num_hidden_layers": 12,
    "num_return_sequences": 1,
    "output_attentions": false,
    "output_hidden_states": false,
    "output_scores": false,
    "pad_token_id": 0,
    "position_embedding_type": "absolute",
    "prefix": null,
    "problem_type": null,
    "pruned_heads": {},
    "remove_invalid_values": false,
    "repetition_penalty": 1.0,
    "return_dict": false,
    "return_dict_in_generate": false,
    "sep_token_id": null,
    "task_specific_params": null,
    "temperature": 1.0,
    "tie_encoder_decoder": false,
    "tie_word_embeddings": true,
    "tokenizer_class": null,
    "top_k": 50,
    "top_p": 1.0,
    "torch_dtype": "float32",
    "torchscript": false,
    "transformers_version": "4.12.0.dev0",
    "type_vocab_size": 2,
    "use_bfloat16": false,
    "use_cache": true,
    "vocab_size": 30522
  },
  "decoder_start_token_id": 101,
  "encoder": {
    "_name_or_path": "",
    "add_cross_attention": false,
    "architectures": [
      "BertModel"
    ],
    "attention_probs_dropout_prob": 0.1,
    "bad_words_ids": null,
    "bos_token_id": null,
    "chunk_size_feed_forward": 0,
    "classifier_dropout": null,
    "cross_attention_hidden_size": null,
    "decoder_start_token_id": null,
    "diversity_penalty": 0.0,
    "do_sample": false,
    "early_stopping": false,
    "encoder_no_repeat_ngram_size": 0,
    "eos_token_id": null,
    "finetuning_task": null,
    "forced_bos_token_id": null,
    "forced_eos_token_id": null,
    "gradient_checkpointing": false,
    "hidden_act": "gelu",
    "hidden_dropout_prob": 0.1,
    "hidden_size": 768,
    "id2label": {
      "0": "LABEL_0",
      "1": "LABEL_1"
    },
    "initializer_range": 0.02,
    "intermediate_size": 3072,
    "is_decoder": false,
    "is_encoder_decoder": false,
    "label2id": {
      "LABEL_0": 0,
      "LABEL_1": 1
    },
    "layer_norm_eps": 1e-12,
    "length_penalty": 1.0,
    "max_length": 20,
    "max_position_embeddings": 512,
    "min_length": 0,
    "model_type": "bert",
    "no_repeat_ngram_size": 0,
    "num_attention_heads": 12,
    "num_beam_groups": 1,
    "num_beams": 1,
    "num_hidden_layers": 12,
    "num_return_sequences": 1,
    "output_attentions": false,
    "output_hidden_states": false,
    "output_scores": false,
    "pad_token_id": 0,
    "position_embedding_type": "absolute",
    "prefix": null,
    "problem_type": null,
    "pruned_heads": {},
    "remove_invalid_values": false,
    "repetition_penalty": 1.0,
    "return_dict": false,
    "return_dict_in_generate": false,
    "sep_token_id": null,
    "task_specific_params": null,
    "temperature": 1.0,
    "tie_encoder_decoder": false,
    "tie_word_embeddings": true,
    "tokenizer_class": null,
    "top_k": 50,
    "top_p": 1.0,
    "torch_dtype": "float32",
    "torchscript": false,
    "transformers_version": "4.12.0.dev0",
    "type_vocab_size": 2,
    "use_bfloat16": false,
    "use_cache": true,
    "vocab_size": 30522
  },
  "eos_token_id": 102,
  "is_encoder_decoder": true,
  "max_length": 142,
  "min_length": 56,
  "model_type": "encoder-decoder",
  "no_repeat_ngram_size": 3,
  "transformers_version": null
}
special_tokens_map.json ADDED
@@ -0,0 +1 @@
{"bos_token": "[CLS]", "eos_token": "[SEP]", "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8a03dc57c014477a54ddf1fcd22a5e15da412fad9ca4739301ca555db2acc96e
size 990154000
tokenizer_config.json ADDED
@@ -0,0 +1 @@
{"do_lower_case": true, "model_max_length": 512}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff