---
language: Chinese
datasets: CLUECorpusSmall
widget:
- text: "中国的首都是[MASK]"
---
# Chinese word-based RoBERTa Miniatures

## Model description

This is a set of five Chinese word-based RoBERTa models pre-trained with [UER-py](https://arxiv.org/abs/1909.05658).

[Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we release five Chinese word-based RoBERTa models. To help users reproduce the results, we use a publicly available corpus and word segmentation tool, and provide all training details.

You can download the five Chinese word-based RoBERTa miniatures either from the [UER-py Github page](https://github.com/dbiir/UER-py/) or via HuggingFace from the links below:

|          |           Link           |
| -------- | :-----------------------: |
| **Tiny**  | [**2/128 (Tiny)**][2_128] |
| **Mini**  | [**4/256 (Mini)**][4_256] |
| **Small**  | [**4/512 (Small)**][4_512] |
| **Medium**  | [**8/512 (Medium)**][8_512] |
| **Base** | [**12/768 (Base)**][12_768] |

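All five miniatures are used in the same way; as a minimal sketch, any of them can be loaded by substituting its size name into the repository id (the pairing of AlbertTokenizer with BertModel is explained in the next section):

```python
from transformers import AlbertTokenizer, BertModel

# Any of the five sizes can be loaded by substituting its name
# (tiny, mini, small, medium, base) into the repository id.
tokenizer = AlbertTokenizer.from_pretrained('uer/roberta-tiny-word-chinese-cluecorpussmall')
model = BertModel.from_pretrained('uer/roberta-tiny-word-chinese-cluecorpussmall')
```
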
## How to use

You can use this model directly with a pipeline for masked language modeling:

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='uer/roberta-base-word-chinese-cluecorpussmall')
>>> unmasker("[MASK]的首都是北京。")
[
    {'sequence': '中国 的首都是北京。',
     'score': 0.21525809168815613, 
     'token': 2873, 
     'token_str': '中国'}, 
    {'sequence': '北京 的首都是北京。', 
     'score': 0.15194718539714813, 
     'token': 9502, 
     'token_str': '北京'}, 
    {'sequence': '我们 的首都是北京。', 
     'score': 0.08854265511035919, 
     'token': 4215, 
     'token_str': '我们'},
    {'sequence': '美国 的首都是北京。', 
     'score': 0.06808705627918243, 
     'token': 7810, 
     'token_str': '美国'}, 
    {'sequence': '日本 的首都是北京。', 
     'score': 0.06071401759982109, 
     'token': 7788, 
     'token_str': '日本'}
]
```

Since BertTokenizer does not support sentencepiece, we use AlbertTokenizer here.
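
As a quick check of the word-based vocabulary, tokenizing a sentence should yield whole words rather than single characters (a sketch; the exact segmentation depends on the sentencepiece model):

```python
from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained('uer/roberta-base-word-chinese-cluecorpussmall')
# The tokens should be whole words rather than single characters;
# the exact segmentation depends on the sentencepiece model.
print(tokenizer.tokenize("中国的首都是北京。"))
```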

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import AlbertTokenizer, BertModel
tokenizer = AlbertTokenizer.from_pretrained('uer/roberta-base-word-chinese-cluecorpussmall')
model = BertModel.from_pretrained("uer/roberta-base-word-chinese-cluecorpussmall")
text = "用你喜欢的任何文本替换我。"  # "Replace me with any text you like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in TensorFlow:

```python
from transformers import AlbertTokenizer, TFBertModel
tokenizer = AlbertTokenizer.from_pretrained('uer/roberta-base-word-chinese-cluecorpussmall')
model = TFBertModel.from_pretrained("uer/roberta-base-word-chinese-cluecorpussmall")
text = "用你喜欢的任何文本替换我。"  # "Replace me with any text you like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
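
In both frameworks, `output` is a standard transformers model output whose token-level features live in `last_hidden_state`:

```python
# Token-level features; for the Base model (hidden size 768) the
# shape is (1, sequence_length, 768).
features = output.last_hidden_state
```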

## Training data

[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as the training data, and Google's [sentencepiece](https://github.com/google/sentencepiece) is used for word segmentation. The sentencepiece model is trained on the CLUECorpusSmall corpus:

```python
import sentencepiece as spm

spm.SentencePieceTrainer.train(input='cluecorpussmall.txt',
                               model_prefix='cluecorpussmall_spm',
                               vocab_size=100000,
                               max_sentence_length=1024,
                               max_sentencepiece_length=6,
                               user_defined_symbols=['[MASK]','[unused1]','[unused2]',
                                                     '[unused3]','[unused4]','[unused5]','[unused6]',
                                                     '[unused7]','[unused8]','[unused9]','[unused10]'],
                               pad_id=0,
                               pad_piece='[PAD]',
                               unk_id=1,
                               unk_piece='[UNK]',
                               bos_id=2,
                               bos_piece='[CLS]',
                               eos_id=3,
                               eos_piece='[SEP]',
                               train_extremely_large_corpus=True)
```
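
Once trained, the resulting `cluecorpussmall_spm.model` can be loaded to inspect the segmentation, e.g.:

```python
import sentencepiece as spm

# Load the trained segmentation model and split a sentence into word pieces.
sp = spm.SentencePieceProcessor(model_file='cluecorpussmall_spm.model')
print(sp.encode("中国的首都是北京。", out_type=str))
```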

## Training procedure

Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud TI-ONE](https://cloud.tencent.com/product/tione/). We pre-train for 1,000,000 steps with a sequence length of 128, and then for 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters across all model sizes.

Taking word-based RoBERTa-Medium as an example:

Stage 1:

```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
                      --spm_model_path models/cluecorpussmall_spm.model \
                      --dataset_path cluecorpussmall_word_seq128_dataset.pt \
                      --processes_num 32 --seq_length 128 \
                      --dynamic_masking --target mlm
```

```
python3 pretrain.py --dataset_path cluecorpussmall_word_seq128_dataset.pt \
                    --spm_model_path models/cluecorpussmall_spm.model \
                    --config_path models/bert/medium_config.json \
                    --output_model_path models/cluecorpussmall_word_roberta_medium_seq128_model.bin \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
                    --learning_rate 1e-4 --batch_size 64 \
                    --embedding word_pos_seg --encoder transformer --mask fully_visible --target mlm --tie_weights
```

Stage 2:

```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
                      --spm_model_path models/cluecorpussmall_spm.model \
                      --dataset_path cluecorpussmall_word_seq512_dataset.pt \
                      --processes_num 32 --seq_length 512 \
                      --dynamic_masking --target mlm
```

```
python3 pretrain.py --dataset_path cluecorpussmall_word_seq512_dataset.pt \
                    --pretrained_model_path models/cluecorpussmall_word_roberta_medium_seq128_model.bin-1000000 \
                    --spm_model_path models/cluecorpussmall_spm.model \
                    --config_path models/bert/medium_config.json \
                    --output_model_path models/cluecorpussmall_word_roberta_medium_seq512_model.bin \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
                    --learning_rate 5e-5 --batch_size 16 \
                    --embedding word_pos_seg --encoder transformer --mask fully_visible --target mlm --tie_weights
```

Finally, we convert the pre-trained model into Hugging Face's format:

```
python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_word_roberta_medium_seq512_model.bin-250000 \
                                                        --output_model_path pytorch_model.bin \
                                                        --layers_num 8 --target mlm
```
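
The converted weights can then be sanity-checked by loading them with transformers; a minimal sketch, assuming `pytorch_model.bin` has been placed in a hypothetical directory `./converted` together with the matching `config.json` and the sentencepiece model saved as `spiece.model`:

```python
from transformers import AlbertTokenizer, BertForMaskedLM

# './converted' is a hypothetical directory containing pytorch_model.bin,
# config.json, and the sentencepiece model saved as spiece.model.
tokenizer = AlbertTokenizer.from_pretrained('./converted')
model = BertForMaskedLM.from_pretrained('./converted')
```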

### BibTeX entry and citation info

```
@article{zhao2019uer,
  title={UER: An Open-Source Toolkit for Pre-training Models},
  author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
  journal={EMNLP-IJCNLP 2019},
  pages={241},
  year={2019}
}
```

[2_128]:https://huggingface.co/uer/roberta-tiny-word-chinese-cluecorpussmall
[4_256]:https://huggingface.co/uer/roberta-mini-word-chinese-cluecorpussmall
[4_512]:https://huggingface.co/uer/roberta-small-word-chinese-cluecorpussmall
[8_512]:https://huggingface.co/uer/roberta-medium-word-chinese-cluecorpussmall
[12_768]:https://huggingface.co/uer/roberta-base-word-chinese-cluecorpussmall