---
datasets: CLUECorpusSmall
widget:
- text: "北京是[MASK]国的首都。"
---

# Chinese Whole Word Masking RoBERTa Miniatures

## Model description

This is the set of 6 Chinese Whole Word Masking RoBERTa models pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658). The models can also be pre-trained with [TencentPretrain](https://github.com/Tencent/TencentPretrain), introduced in [this paper](https://arxiv.org/abs/2212.06385), which inherits UER-py to support models with more than one billion parameters and extends it to a multimodal pre-training framework.

[Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the 6 Chinese Whole Word Masking RoBERTa models. To make the results easy to reproduce, we used a publicly available corpus and word segmentation tool, and provided all the training details.

You can download the 6 Chinese RoBERTa miniatures either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo) or via HuggingFace from the links below:

|            | Link                           |
| ---------- | :----------------------------: |
| **Tiny**   | [**2/128 (Tiny)**][2_128]      |
| **Mini**   | [**4/256 (Mini)**][4_256]      |
| **Small**  | [**4/512 (Small)**][4_512]     |
| **Medium** | [**8/512 (Medium)**][8_512]    |
| **Base**   | [**12/768 (Base)**][12_768]    |
| **Large**  | [**24/1024 (Large)**][24_1024] |
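
The name of each miniature encodes its depth and width: for example, 8/512 (Medium) has 8 layers and a hidden size of 512. As a quick illustrative check (the Medium repository is used here only as an example), the configuration of any of the checkpoints above can be inspected:

```python
# Illustrative check: the "8/512 (Medium)" naming maps onto the checkpoint's configuration.
from transformers import AutoConfig

config = AutoConfig.from_pretrained('uer/roberta-medium-wwm-chinese-cluecorpussmall')
print(config.num_hidden_layers, config.hidden_size)  # expected: 8 512
```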

Here are the scores on the development set of six Chinese tasks:

| Model              | Score | book_review | chnsenticorp | lcqmc | tnews(CLUE) | iflytek(CLUE) | ocnli(CLUE) |
| ------------------ | :---: | :---------: | :----------: | :---: | :---------: | :-----------: | :---------: |
| RoBERTa-Tiny-WWM   | 72.2  | 83.7        | 91.8         | 81.8  | 62.1        | 55.4          | 58.6        |
| RoBERTa-Mini-WWM   | 76.3  | 86.4        | 93.0         | 86.8  | 64.4        | 58.7          | 68.8        |
| RoBERTa-Small-WWM  | 77.6  | 88.1        | 93.8         | 87.2  | 65.2        | 59.6          | 71.4        |
| RoBERTa-Medium-WWM | 78.6  | 89.3        | 94.4         | 88.8  | 66.0        | 59.9          | 73.2        |
| RoBERTa-Base-WWM   | 80.2  | 90.6        | 95.8         | 89.4  | 67.5        | 61.8          | 76.2        |
| RoBERTa-Large-WWM  | 81.1  | 91.1        | 95.8         | 90.0  | 68.5        | 62.1          | 79.1        |

For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained with a sequence length of 128 (see the illustrative fine-tuning sketch after the list):

- epochs: 3, 5, 8
- batch sizes: 32, 64
- learning rates: 3e-5, 1e-4, 3e-4

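The exact fine-tuning scripts are not included in this card. The following is only a minimal sketch with the Hugging Face `Trainer`, using one point from the grid above (3 epochs, batch size 32, learning rate 3e-5); the repository name, the toy texts and labels, and the number of labels are placeholders for your own task data:

```python
# Illustrative fine-tuning sketch only; the toy texts/labels below are placeholders.
import torch
from transformers import (BertTokenizer, BertForSequenceClassification,
                          Trainer, TrainingArguments)

model_name = "uer/roberta-medium-wwm-chinese-cluecorpussmall"
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForSequenceClassification.from_pretrained(model_name, num_labels=2)

texts = ["这本书很好看。", "剧情太拖沓了。"]   # placeholder sentences
labels = [1, 0]                                # placeholder labels

class ToyDataset(torch.utils.data.Dataset):
    """Wraps tokenized placeholder examples for the Trainer."""
    def __init__(self, texts, labels):
        self.encodings = tokenizer(texts, truncation=True,
                                   padding="max_length", max_length=128)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

training_args = TrainingArguments(
    output_dir="roberta-medium-wwm-finetuned",  # placeholder output directory
    num_train_epochs=3,                         # grid: 3, 5, 8
    per_device_train_batch_size=32,             # grid: 32, 64
    learning_rate=3e-5,                         # grid: 3e-5, 1e-4, 3e-4
)

trainer = Trainer(model=model, args=training_args,
                  train_dataset=ToyDataset(texts, labels))
trainer.train()
```
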
## How to use

You can use this model directly with a pipeline for masked language modeling:

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='uer/roberta-tiny-wwm-chinese-cluecorpussmall')
>>> unmasker("北京是[MASK]国的首都。")
[
    {'score': 0.294228732585907,
     'token': 704,
     'token_str': '中',
     'sequence': '北 京 是 中 国 的 首 都 。'},
    {'score': 0.19691626727581024,
     'token': 1266,
     'token_str': '北',
     'sequence': '北 京 是 北 国 的 首 都 。'},
    {'score': 0.1070084273815155,
     'token': 7506,
     'token_str': '韩',
     'sequence': '北 京 是 韩 国 的 首 都 。'},
    {'score': 0.031527262181043625,
     'token': 2769,
     'token_str': '我',
     'sequence': '北 京 是 我 国 的 首 都 。'},
    {'score': 0.023054633289575577,
     'token': 1298,
     'token_str': '南',
     'sequence': '北 京 是 南 国 的 首 都 。'}
]
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('uer/roberta-base-wwm-chinese-cluecorpussmall')
model = BertModel.from_pretrained("uer/roberta-base-wwm-chinese-cluecorpussmall")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in TensorFlow:

```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('uer/roberta-base-wwm-chinese-cluecorpussmall')
model = TFBertModel.from_pretrained("uer/roberta-base-wwm-chinese-cluecorpussmall")
text = "用你喜欢的任何文本替换我。"
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
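
In both frameworks, the returned output object exposes the features as attributes. A minimal PyTorch sketch (the commented shapes assume the Base model, whose hidden size is 768):

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('uer/roberta-base-wwm-chinese-cluecorpussmall')
model = BertModel.from_pretrained('uer/roberta-base-wwm-chinese-cluecorpussmall')

with torch.no_grad():
    output = model(**tokenizer("用你喜欢的任何文本替换我。", return_tensors='pt'))

# Token-level features: (batch_size, sequence_length, hidden_size); hidden_size is 768 for Base.
print(output.last_hidden_state.shape)
# Pooled [CLS] feature: (batch_size, hidden_size).
print(output.pooler_output.shape)
```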

## Training procedure

Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes.

[jieba](https://github.com/fxsjy/jieba) is used as the word segmentation tool.

Taking Whole Word Masking RoBERTa-Medium as an example:

Stage1:

```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
```

```
python3 pretrain.py --dataset_path cluecorpussmall_word_seq128_dataset.pt \
                    --vocab_path models/google_zh_vocab.txt \
                    --config_path models/bert/medium_config.json \
                    --output_model_path models/cluecorpussmall_wwm_roberta_medium_seq128_model.bin \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
                    --learning_rate 1e-4 --batch_size 64 \
                    --whole_word_masking \
                    --data_processor mlm --target mlm
```

Stage2:

```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
```

```
python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \
                    --vocab_path models/google_zh_vocab.txt \
                    --pretrained_model_path models/cluecorpussmall_wwm_roberta_medium_seq128_model.bin-1000000 \
                    --config_path models/bert/medium_config.json \
                    --output_model_path models/cluecorpussmall_wwm_roberta_medium_seq512_model.bin \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
                    --learning_rate 5e-5 --batch_size 16 \
                    --whole_word_masking \
                    --data_processor mlm --target mlm
```

Finally, we convert the pre-trained model into Hugging Face's format:

```
python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_wwm_roberta_medium_seq512_model.bin \
                                                        --output_model_path pytorch_model.bin \
                                                        --layers_num 8 --type mlm
```
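
As an optional sanity check that is not part of the procedure above, the converted weights can be loaded back with `transformers`, assuming a `config.json` and `vocab.txt` are placed next to `pytorch_model.bin` in an output directory (here the hypothetical `converted_model/`):

```python
# Hypothetical sanity check: load the converted checkpoint and run one masked prediction.
# Assumes converted_model/ contains pytorch_model.bin, config.json and vocab.txt.
from transformers import BertTokenizer, BertForMaskedLM, pipeline

tokenizer = BertTokenizer.from_pretrained("converted_model")
model = BertForMaskedLM.from_pretrained("converted_model")

unmasker = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(unmasker("北京是[MASK]国的首都。")[0])
```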

### BibTeX entry and citation info

```
  pages={217},
  year={2023}
}
```

[2_128]:https://huggingface.co/uer/roberta-tiny-wwm-chinese-cluecorpussmall
[4_256]:https://huggingface.co/uer/roberta-mini-wwm-chinese-cluecorpussmall
[4_512]:https://huggingface.co/uer/roberta-small-wwm-chinese-cluecorpussmall
[8_512]:https://huggingface.co/uer/roberta-medium-wwm-chinese-cluecorpussmall
[12_768]:https://huggingface.co/uer/roberta-base-wwm-chinese-cluecorpussmall
[24_1024]:https://huggingface.co/uer/roberta-large-wwm-chinese-cluecorpussmall