hhou435 committed on
Commit
5a7e11a
1 Parent(s): 6f7c1f0
README.md ADDED
@@ -0,0 +1,183 @@
+ ---
+ language: Chinese
+ datasets: CLUECorpusSmall
+ widget:
+ - text: "中国的首都是[MASK]"
+
+ ---
+ # Chinese word-based RoBERTa Miniatures
+
+ ## Model description
+
+ This is a set of 5 Chinese word-based RoBERTa models pre-trained with [UER-py](https://arxiv.org/abs/1909.05658).
+
+ [Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released 5 Chinese word-based RoBERTa models. To make the results easy to reproduce, we used a publicly available corpus and word segmentation tool and provide all of the training details.
+
+ You can download the 5 Chinese RoBERTa miniatures either from the [UER-py GitHub page](https://github.com/dbiir/UER-py/) or via HuggingFace from the links below:
+
+ | Model | Link |
+ | -------- | :-----------------------: |
+ | **Tiny** | [**2/128 (Tiny)**][2_128] |
+ | **Mini** | [**4/256 (Mini)**][4_256] |
+ | **Small** | [**4/512 (Small)**][4_512] |
+ | **Medium** | [**8/512 (Medium)**][8_512] |
+ | **Base** | [**12/768 (Base)**][12_768] |
+
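+ Here, the notation 2/128 and so on gives the number of layers and the hidden size of each checkpoint. Any of the five models can be loaded by the repository name it links to; the sketch below uses the Tiny checkpoint and is only illustrative, so substitute another repository name from the list at the end of this card for a different size:
+
+ ```python
+ from transformers import AlbertTokenizer, BertModel
+
+ # Tiny (2/128) checkpoint; any of the five repositories listed at the
+ # bottom of this card can be used here instead.
+ tokenizer = AlbertTokenizer.from_pretrained('uer/roberta-tiny-word-chinese-cluecorpussmall')
+ model = BertModel.from_pretrained('uer/roberta-tiny-word-chinese-cluecorpussmall')
+ ```
+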
+ ## How to use
+
+ You can use this model directly with a pipeline for masked language modeling:
+
+ ```python
+ >>> from transformers import pipeline
+ >>> unmasker = pipeline('fill-mask', model='uer/roberta-base-word-chinese-cluecorpussmall')
+ >>> unmasker("[MASK]的首都是北京。")
+ [
+     {'sequence': '中国 的首都是北京。',
+      'score': 0.21525809168815613,
+      'token': 2873,
+      'token_str': '中国'},
+     {'sequence': '北京 的首都是北京。',
+      'score': 0.15194718539714813,
+      'token': 9502,
+      'token_str': '北京'},
+     {'sequence': '我们 的首都是北京。',
+      'score': 0.08854265511035919,
+      'token': 4215,
+      'token_str': '我们'},
+     {'sequence': '美国 的首都是北京。',
+      'score': 0.06808705627918243,
+      'token': 7810,
+      'token_str': '美国'},
+     {'sequence': '日本 的首都是北京。',
+      'score': 0.06071401759982109,
+      'token': 7788,
+      'token_str': '日本'}
+ ]
+ ```
+
+ BertTokenizer does not support sentencepiece, so we use AlbertTokenizer here.
+
+ Here is how to use this model to get the features of a given text in PyTorch:
+
+ ```python
+ from transformers import AlbertTokenizer, BertModel
+ tokenizer = AlbertTokenizer.from_pretrained('uer/roberta-base-word-chinese-cluecorpussmall')
+ model = BertModel.from_pretrained("uer/roberta-base-word-chinese-cluecorpussmall")
+ text = "用你喜欢的任何文本替换我。"  # "Replace me with any text you like."
+ encoded_input = tokenizer(text, return_tensors='pt')
+ output = model(**encoded_input)
+ ```
+
+ and in TensorFlow:
+
+ ```python
+ from transformers import AlbertTokenizer, TFBertModel
+ tokenizer = AlbertTokenizer.from_pretrained('uer/roberta-base-word-chinese-cluecorpussmall')
+ model = TFBertModel.from_pretrained("uer/roberta-base-word-chinese-cluecorpussmall")
+ text = "用你喜欢的任何文本替换我。"  # "Replace me with any text you like."
+ encoded_input = tokenizer(text, return_tensors='tf')
+ output = model(encoded_input)
+ ```
+
+ ## Training data
+
+ [CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data. Google's [sentencepiece](https://github.com/google/sentencepiece) is used for word segmentation. The sentencepiece model is trained on the CLUECorpusSmall corpus:
+
+ ```python
+ >>> import sentencepiece as spm
+ >>> spm.SentencePieceTrainer.train(input='cluecorpussmall.txt',
+                                    model_prefix='cluecorpussmall_spm',
+                                    vocab_size=100000,
+                                    max_sentence_length=1024,
+                                    max_sentencepiece_length=6,
+                                    user_defined_symbols=['[MASK]','[unused1]','[unused2]',
+                                        '[unused3]','[unused4]','[unused5]','[unused6]',
+                                        '[unused7]','[unused8]','[unused9]','[unused10]'],
+                                    pad_id=0,
+                                    pad_piece='[PAD]',
+                                    unk_id=1,
+                                    unk_piece='[UNK]',
+                                    bos_id=2,
+                                    bos_piece='[CLS]',
+                                    eos_id=3,
+                                    eos_piece='[SEP]',
+                                    train_extremely_large_corpus=True
+                                    )
+ ```
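+
+ As a quick check of the resulting word-level vocabulary, the trained sentencepiece model can be loaded and used to segment a sentence. A minimal sketch, assuming the cluecorpussmall_spm.model file produced by the command above:
+
+ ```python
+ >>> import sentencepiece as spm
+ >>> sp = spm.SentencePieceProcessor(model_file='cluecorpussmall_spm.model')
+ >>> # Word-based segmentation: pieces such as '中国' are whole words,
+ >>> # not single characters.
+ >>> sp.encode("中国的首都是北京。", out_type=str)
+ ```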
+
+ ## Training procedure
+
+ Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud TI-ONE](https://cloud.tencent.com/product/tione/). We pre-train for 1,000,000 steps with a sequence length of 128 and then for 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters for the different model sizes.
+
+ Taking word-based RoBERTa-Medium as an example:
+
+ Stage 1:
+
+ ```
+ python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
+                       --spm_model_path models/cluecorpussmall_spm.model \
+                       --dataset_path cluecorpussmall_word_seq128_dataset.pt \
+                       --processes_num 32 --seq_length 128 \
+                       --dynamic_masking --target mlm
+ ```
+
+ ```
+ python3 pretrain.py --dataset_path cluecorpussmall_word_seq128_dataset.pt \
+                     --spm_model_path models/cluecorpussmall_spm.model \
+                     --config_path models/bert/medium_config.json \
+                     --output_model_path models/cluecorpussmall_word_roberta_medium_seq128_model.bin \
+                     --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
+                     --total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
+                     --learning_rate 1e-4 --batch_size 64 \
+                     --embedding word_pos_seg --encoder transformer --mask fully_visible --target mlm --tie_weights
+ ```
+
+ Stage 2:
+
+ ```
+ python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
+                       --spm_model_path models/cluecorpussmall_spm.model \
+                       --dataset_path cluecorpussmall_word_seq512_dataset.pt \
+                       --processes_num 32 --seq_length 512 \
+                       --dynamic_masking --target mlm
+ ```
+
+ ```
+ python3 pretrain.py --dataset_path cluecorpussmall_word_seq512_dataset.pt \
+                     --pretrained_model_path models/cluecorpussmall_word_roberta_medium_seq128_model.bin-1000000 \
+                     --spm_model_path models/cluecorpussmall_spm.model \
+                     --config_path models/bert/medium_config.json \
+                     --output_model_path models/cluecorpussmall_word_roberta_medium_seq512_model.bin \
+                     --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
+                     --total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
+                     --learning_rate 5e-5 --batch_size 16 \
+                     --embedding word_pos_seg --encoder transformer --mask fully_visible --target mlm --tie_weights
+ ```
+
+ Finally, we convert the pre-trained model into Hugging Face's format:
+
+ ```
+ python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_word_roberta_medium_seq512_model.bin-250000 \
+                                                         --output_model_path pytorch_model.bin \
+                                                         --layers_num 8 --target mlm
+ ```
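+
+ The converted weights can then be loaded with transformers for a quick sanity check. A minimal sketch, assuming pytorch_model.bin is placed in a local directory (here called converted/, an illustrative path) together with config.json and the sentencepiece model saved as spiece.model:
+
+ ```python
+ from transformers import AlbertTokenizer, BertForMaskedLM
+
+ # converted/ is a hypothetical local directory holding pytorch_model.bin,
+ # config.json and spiece.model from the steps above.
+ tokenizer = AlbertTokenizer.from_pretrained('./converted')
+ model = BertForMaskedLM.from_pretrained('./converted')
+ ```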
+
+ ### BibTeX entry and citation info
+
+ ```
+ @article{zhao2019uer,
+   title={UER: An Open-Source Toolkit for Pre-training Models},
+   author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
+   journal={EMNLP-IJCNLP 2019},
+   pages={241},
+   year={2019}
+ }
+ ```
+
+ [2_128]:https://huggingface.co/uer/roberta-tiny-word-chinese-cluecorpussmall
+ [4_256]:https://huggingface.co/uer/roberta-mini-word-chinese-cluecorpussmall
+ [4_512]:https://huggingface.co/uer/roberta-small-word-chinese-cluecorpussmall
+ [8_512]:https://huggingface.co/uer/roberta-medium-word-chinese-cluecorpussmall
+ [12_768]:https://huggingface.co/uer/roberta-base-word-chinese-cluecorpussmall
config.json ADDED
@@ -0,0 +1,20 @@
+ {
+   "architectures": [
+     "BertForMaskedLM"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 128,
+   "initializer_range": 0.02,
+   "intermediate_size": 512,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 2,
+   "num_hidden_layers": 2,
+   "pad_token_id": 0,
+   "tokenizer_class": "AlbertTokenizer",
+   "vocab_size": 100000
+ }
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:10bf8c085fe2f3a597e9c29e32982d63fab3f6642ac190f684e02bd7706510d7
+ size 53538951
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": "[CLS]", "eos_token": "[SEP]", "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
spiece.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b9a190932eeea14788ec1abd3405b2fb612b52622945ccb3bb68d67fd586dfcc
+ size 1991738
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:decfb62cde01f4e4cc23efba2cf4de449ceb9e762c455db6f2f08045bd33a13a
+ size 104787448
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": false, "remove_space": true, "keep_accents": false, "bos_token": "[CLS]", "eos_token": "[SEP]", "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "model_max_length": 512}