uer committed on
Commit 24d115b
1 Parent(s): 6f96607

Update README.md

Files changed (1): README.md (+95, -67)
README.md CHANGED
---
language: Chinese
datasets: CLUECorpusSmall
widget:
- text: "北京是[MASK]国的首都。"
---

  This is the set of 24 Chinese RoBERTa models pre-trained by [UER-py](https://www.aclweb.org/anthology/D19-3041.pdf).

[Turc et al.](https://arxiv.org/abs/1908.08962) showed that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we release the 24 Chinese RoBERTa models. To make it easy to reproduce our results, we use a publicly available corpus and provide all training details.

You can download the 24 Chinese RoBERTa miniatures either from the [UER-py GitHub page](https://github.com/dbiir/UER-py/), or via Hugging Face from the links below:

|          | H=128                     | H=256                     | H=512                       | H=768                       |
| -------- | :-----------------------: | :-----------------------: | :-------------------------: | :-------------------------: |
| **L=2**  | [**2/128 (Tiny)**][2_128] | [2/256]                   | [2/512]                     | [2/768]                     |
| **L=4**  | [4/128]                   | [**4/256 (Mini)**][4_256] | [**4/512 (Small)**][4_512]  | [4/768]                     |
| **L=6**  | [6/128]                   | [6/256]                   | [6/512]                     | [6/768]                     |
| **L=8**  | [8/128]                   | [8/256]                   | [**8/512 (Medium)**][8_512] | [8/768]                     |
| **L=10** | [10/128]                  | [10/256]                  | [10/512]                    | [10/768]                    |
| **L=12** | [12/128]                  | [12/256]                  | [12/512]                    | [**12/768 (Base)**][12_768] |
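
All of the miniatures appear to share the repo naming pattern `uer/chinese_roberta_L-{layers}_H-{hidden}` (inferred from the links in this card, not an official guarantee), so any cell of the table can be loaded the same way:

```python
from transformers import BertModel, BertTokenizer

# Load RoBERTa-Mini (L=4, H=256); change the two numbers to pick another miniature.
name = "uer/chinese_roberta_L-4_H-256"
tokenizer = BertTokenizer.from_pretrained(name)
model = BertModel.from_pretrained(name)
```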

Here are the scores on the development set of six Chinese tasks:

| Model          | Score | douban | chnsenticorp | lcqmc | tnews(CLUE) | iflytek(CLUE) | ocnli(CLUE) |
| -------------- | :---: | :----: | :----------: | :---: | :---------: | :-----------: | :---------: |
| RoBERTa-Tiny   | 72.3  | 83.0   | 91.4         | 81.8  | 62.0        | 55.0          | 60.3        |
| RoBERTa-Mini   | 75.7  | 84.8   | 93.7         | 86.1  | 63.9        | 58.3          | 67.4        |
| RoBERTa-Small  | 76.8  | 86.5   | 93.4         | 86.5  | 65.1        | 59.4          | 69.7        |
| RoBERTa-Medium | 77.8  | 87.6   | 94.8         | 88.1  | 65.6        | 59.5          | 71.2        |
| RoBERTa-Base   | 79.5  | 89.1   | 95.2         | 89.2  | 67.0        | 60.9          | 75.5        |
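
The Score column matches the unweighted average of the six per-task scores; for example, for RoBERTa-Tiny:

```python
tiny = [83.0, 91.4, 81.8, 62.0, 55.0, 60.3]  # douban ... ocnli(CLUE)
print(sum(tiny) / len(tiny))  # ≈ 72.25, reported as 72.3
```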

For each task, we selected the best fine-tuning hyperparameters from the lists below, fine-tuning with a sequence length of 128 (a sketch of the search loop follows the list):

- epochs: 3, 5, 8
- batch sizes: 32, 64
- learning rates: 3e-5, 1e-4, 3e-4
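
A minimal sketch of that 18-run grid search, assuming a hypothetical `run_fine_tuning` helper (not a UER-py or transformers API) that fine-tunes once and returns the development-set score:

```python
import itertools

def run_fine_tuning(epochs, batch_size, learning_rate, seq_length=128):
    """Hypothetical stand-in for one task-specific fine-tuning run."""
    raise NotImplementedError("plug in your own fine-tuning loop here")

best = None
for epochs, bs, lr in itertools.product([3, 5, 8], [32, 64], [3e-5, 1e-4, 3e-4]):
    dev_score = run_fine_tuning(epochs, bs, lr)
    if best is None or dev_score > best[0]:
        best = (dev_score, epochs, bs, lr)
```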

## How to use

You can use this model directly with a pipeline for masked language modeling (taking RoBERTa-Medium as an example):

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='uer/chinese_roberta_L-8_H-512')
>>> unmasker("中国的首都是[MASK]京。")
[
    {'sequence': '[CLS] 中 国 的 首 都 是 北 京 。 [SEP]',
     'score': 0.8701988458633423,
     'token': 1266,
     'token_str': '北'},
    {'sequence': '[CLS] 中 国 的 首 都 是 南 京 。 [SEP]',
     'score': 0.1194809079170227,
     'token': 1298,
     'token_str': '南'},
    {'sequence': '[CLS] 中 国 的 首 都 是 东 京 。 [SEP]',
     'score': 0.0037803512532263994,
     'token': 691,
     'token_str': '东'},
    {'sequence': '[CLS] 中 国 的 首 都 是 普 京 。 [SEP]',
     'score': 0.0017127094324678183,
     'token': 3249,
     'token_str': '普'},
    {'sequence': '[CLS] 中 国 的 首 都 是 京 。 [SEP]',
     'score': 0.001687526935711503,
     'token': 3307,
     'token_str': ''}
]
```
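
More candidates can be requested with the `top_k` argument accepted by recent versions of the fill-mask pipeline (our addition, not from the original card):

```python
>>> unmasker("中国的首都是[MASK]京。", top_k=10)
```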

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512')
model = BertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")
text = "用你喜欢的任何文本替换我。"  # "Replace me with any text you like."
encoded_input = tokenizer(text, return_tensors='pt')
  output = model(**encoded_input)
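```

Continuing the snippet above (our addition, not part of the original card): `output.last_hidden_state` holds one 512-dimensional vector per token, and mean pooling gives a single sentence vector:

```python
# Average the per-token vectors into one sentence embedding of shape (1, 512).
sentence_embedding = output.last_hidden_state.mean(dim=1)
```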
 
and in TensorFlow:

```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512')
model = TFBertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")
text = "用你喜欢的任何文本替换我。"  # "Replace me with any text you like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

## Training data

[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as the training data. We found that models pre-trained on CLUECorpusSmall outperform those pre-trained on CLUECorpus2020, even though CLUECorpus2020 is much larger.

## Training procedure

Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud TI-ONE](https://cloud.tencent.com/product/tione/). We pre-train for 1,000,000 steps with a sequence length of 128, and then for 250,000 additional steps with a sequence length of 512. Taking RoBERTa-Medium as an example:

Stage 1:

```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
                      --vocab_path models/google_zh_vocab.txt \
                      --dataset_path cluecorpussmall_seq128_dataset.pt \
                      --processes_num 32 --seq_length 128 \
                      --dynamic_masking --target mlm
```

```
python3 pretrain.py --dataset_path cluecorpussmall_seq128_dataset.pt \
                    --vocab_path models/google_zh_vocab.txt \
                    --config_path models/bert_medium_config.json \
                    --output_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
                    --learning_rate 1e-4 --batch_size 64 \
                    --tie_weights --embedding word_pos_seg --encoder transformer --mask fully_visible --target mlm
```

Stage 2:

```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
                      --vocab_path models/google_zh_vocab.txt \
                      --dataset_path cluecorpussmall_seq512_dataset.pt \
                      --processes_num 32 --seq_length 512 \
                      --dynamic_masking --target mlm
```

```
python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \
                    --pretrained_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin-1000000 \
                    --vocab_path models/google_zh_vocab.txt \
                    --config_path models/bert_medium_config.json \
                    --output_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
                    --learning_rate 5e-5 --batch_size 16 \
                    --tie_weights --embedding word_pos_seg --encoder transformer --mask fully_visible --target mlm
```

Finally, we convert the pre-trained model into Hugging Face format:

```
python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin-250000 \
                                                        --output_model_path pytorch_model.bin \
                                                        --layers_num 8 --target mlm
```
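
As a quick sanity check (our suggestion, not part of the original recipe), the converted weights can be loaded back with transformers; `model_dir` is a hypothetical directory containing `pytorch_model.bin` and a matching `config.json`:

```python
from transformers import BertForMaskedLM

model = BertForMaskedLM.from_pretrained("model_dir")  # loads the converted checkpoint
```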

### BibTeX entry and citation info

[2_128]: https://huggingface.co/uer/chinese_roberta_L-2_H-128
[4_256]: https://huggingface.co/uer/chinese_roberta_L-4_H-256
[4_512]: https://huggingface.co/uer/chinese_roberta_L-4_H-512
[8_512]: https://huggingface.co/uer/chinese_roberta_L-8_H-512
[12_768]: https://huggingface.co/uer/chinese_roberta_L-12_H-768