---
language: Chinese
datasets: CLUECorpus
widget: 
- text: "北京是[MASK]国的首都。"
---


# Chinese RoBERTa Miniatures

## Model description

This is the set of 24 Chinese RoBERTa models pre-trained with [UER-py](https://www.aclweb.org/anthology/D19-3041.pdf).

You can download the 24 Chinese RoBERTa miniatures either from the [UER-py GitHub page](https://github.com/dbiir/UER-py/), or via HuggingFace from the links below:

|   |H=128|H=256|H=512|H=768|
|---|:---:|:---:|:---:|:---:|
| **L=2**  |[**2/128 (Tiny)**][2_128]|[2/256][2_256]|2/512|[2/768][2_768]|
| **L=4**  |4/128|[**4/256 (Mini)**][4_256]|[**4/512 (Small)**][4_512]|4/768|
| **L=6**  |6/128|6/256|6/512|6/768|
| **L=8**  |8/128|8/256|[**8/512 (Medium)**][8_512]|8/768|
| **L=10** |10/128|10/256|10/512|10/768|
| **L=12** |[12/128][12_128]|12/256|12/512|**12/768 (Base)**|
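
The linked checkpoints in the table follow a uniform naming scheme, `uer/chinese_roberta_L-{layers}_H-{hidden}`. Assuming that scheme holds across the whole grid (an assumption; only some cells above are linked), the full set of 24 repository names can be enumerated with a short sketch:

```python
# Sketch: enumerate the 24 miniature checkpoint names, assuming the
# uniform "uer/chinese_roberta_L-{L}_H-{H}" naming used by the links above.
layers = [2, 4, 6, 8, 10, 12]
hidden_sizes = [128, 256, 512, 768]

checkpoints = [
    f"uer/chinese_roberta_L-{l}_H-{h}"
    for l in layers
    for h in hidden_sizes
]

print(len(checkpoints))   # 24
print(checkpoints[0])     # uer/chinese_roberta_L-2_H-128
```

Any of these names can then be passed to `from_pretrained` in the examples below.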

## How to use

You can use this model directly with a pipeline for masked language modeling:

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='uer/chinese_roberta_L-2_H-768')
>>> unmasker("中国的首都是[MASK]京。")
[
    {'sequence': '[CLS] 中 国 的 首 都 是 北 京 。 [SEP]',
     'score': 0.6976630091667175, 
     'token': 1266,
     'token_str': '北'}, 
    {'sequence': '[CLS] 中 国 的 首 都 是 东 京 。 [SEP]', 
     'score': 0.2517661452293396,
     'token': 691, 
     'token_str': '东'}, 
    {'sequence': '[CLS] 中 国 的 首 都 是 南 京 。 [SEP]',
     'score': 0.04122703894972801,
     'token': 1298,
     'token_str': '南'}, 
    {'sequence': '[CLS] 中 国 的 首 都 是 吴 京 。 [SEP]',
     'score': 0.0015233848243951797,
     'token': 1426,
     'token_str': '吴'}, 
    {'sequence': '[CLS] 中 国 的 首 都 是 普 京 。 [SEP]', 
     'score': 0.001429844880476594, 
     'token': 3249, 
     'token_str': '普'}
]
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-2_H-768')
model = BertModel.from_pretrained('uer/chinese_roberta_L-2_H-768')
text = "用你喜欢的任何文本替换我。"  # "Replace me with any text you like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in TensorFlow:

```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-2_H-768')
model = TFBertModel.from_pretrained('uer/chinese_roberta_L-2_H-768')
text = "用你喜欢的任何文本替换我。"  # "Replace me with any text you like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```



## Training data

CLUECorpus2020 and CLUECorpusSmall are used as training data.

## Training procedure

Models are pre-trained with [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud TI-ONE](https://cloud.tencent.com/product/tione/). We pre-train for 1,000,000 steps with a sequence length of 128, and then for an additional 250,000 steps with a sequence length of 512.
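
As a back-of-envelope check on the scale of this schedule, the token counts for the two stages can be worked out from the step counts, sequence lengths, and batch sizes in the commands below (this assumes `--batch_size` is per GPU and that all `world_size` GPUs contribute to each step, which may not match UER-py's exact semantics):

```python
# Rough token count for the two pre-training stages described above,
# assuming --batch_size is per GPU across world_size GPUs.
world_size = 8

# Stage 1: 1,000,000 steps, sequence length 128, batch size 64 per GPU
stage1_tokens = 1_000_000 * 64 * world_size * 128   # 65,536,000,000

# Stage 2: 250,000 steps, sequence length 512, batch size 16 per GPU
stage2_tokens = 250_000 * 16 * world_size * 512     # 16,384,000,000

total = stage1_tokens + stage2_tokens
print(f"{total / 1e9:.1f}B tokens")                 # 81.9B tokens
```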

Stage 1:
```
python3 preprocess.py --corpus_path corpora/cluecorpus.txt \
                      --vocab_path models/google_zh_vocab.txt \
                      --dataset_path cluecorpus_seq128_dataset.pt \
                      --processes_num 32 --seq_length 128 \
                      --dynamic_masking --target mlm
```
```
python3 pretrain.py --dataset_path cluecorpus_seq128_dataset.pt \
                    --vocab_path models/google_zh_vocab.txt \
                    --config_path models/bert_l2h768_config.json \
                    --output_model_path models/cluecorpus_roberta_l2h768_seq128_model.bin \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
                    --learning_rate 1e-4 --batch_size 64 \
                    --tie_weights --encoder bert --target mlm
```
Stage 2:
```
python3 preprocess.py --corpus_path corpora/cluecorpus.txt \
                      --vocab_path models/google_zh_vocab.txt \
                      --dataset_path cluecorpus_seq512_dataset.pt \
                      --processes_num 32 --seq_length 512 \
                      --dynamic_masking --target mlm
```
```
python3 pretrain.py --dataset_path cluecorpus_seq512_dataset.pt \
                    --pretrained_model_path models/cluecorpus_roberta_l2h768_seq128_model.bin-1000000 \
                    --vocab_path models/google_zh_vocab.txt \
                    --config_path models/bert_l2h768_config.json \
                    --output_model_path models/cluecorpus_roberta_l2h768_seq512_model.bin \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
                    --learning_rate 5e-5 --batch_size 16 \
                    --tie_weights --encoder bert --target mlm
```

### BibTeX entry and citation info

```
@article{zhao2019uer,
  title={UER: An Open-Source Toolkit for Pre-training Models},
  author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
  journal={EMNLP-IJCNLP 2019},
  pages={241},
  year={2019}
}
```

[2_128]: https://huggingface.co/uer/chinese_roberta_L-2_H-128
[2_256]: https://huggingface.co/uer/chinese_roberta_L-2_H-256
[2_768]: https://huggingface.co/uer/chinese_roberta_L-2_H-768
[4_256]: https://huggingface.co/uer/chinese_roberta_L-4_H-256
[4_512]: https://huggingface.co/uer/chinese_roberta_L-4_H-512
[8_512]: https://huggingface.co/uer/chinese_roberta_L-8_H-512
[12_128]: https://huggingface.co/uer/chinese_roberta_L-12_H-128