Update README.md
README.md
CHANGED
@@ -12,7 +12,7 @@ widget:
 
 This is the set of 24 Chinese RoBERTa models pre-trained by [UER-py](https://www.aclweb.org/anthology/D19-3041.pdf).
 
-[Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released 24 Chinese RoBERTa models. In order to facilitate users to reproduce the results, we used the publicly available corpus and provided all training details.
+[Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the 24 Chinese RoBERTa models. In order to facilitate users to reproduce the results, we used the publicly available corpus and provided all training details.
 
 You can download the 24 Chinese RoBERTa miniatures either from the [UER-py Github page](https://github.com/dbiir/UER-py/), or via HuggingFace from the links below:
 
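The model names referenced throughout (e.g. `uer/chinese_roberta_L-8_H-512`) encode depth and width. A small sketch enumerating the full set, under the assumption that the 24 miniatures span the same grid as Turc et al.'s BERT miniatures (layers L in {2, 4, 6, 8, 10, 12} crossed with hidden sizes H in {128, 256, 512, 768}):

```python
# Enumerate the 24 repo names, assuming the uer/chinese_roberta_L-{layers}_H-{hidden}
# naming pattern seen in the usage examples and the Turc et al. size grid.
layers = [2, 4, 6, 8, 10, 12]
hidden_sizes = [128, 256, 512, 768]

model_names = [
    f"uer/chinese_roberta_L-{l}_H-{h}"
    for l in layers
    for h in hidden_sizes
]

assert len(model_names) == 24
print(model_names[0])  # uer/chinese_roberta_L-2_H-128
```

Under this grid, BERT-Medium in the table below corresponds to `L-8_H-512`, matching the model used in the "How to use" examples.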
@@ -31,7 +31,7 @@ Here are scores on the devlopment set of six Chinese tasks:
 |---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
 |BERT-Tiny|72.3|83.0|91.4|81.8|62.0|55.0|60.3|
 |BERT-Mini|75.7|84.8|93.7|86.1|63.9|58.3|67.4|
-|BERT-Small|
+|BERT-Small|76.8|86.5|93.4|86.5|65.1|59.4|69.7|
 |BERT-Medium|77.8|87.6|94.8|88.1|65.6|59.5|71.2|
 |BERT-Base|79.5|89.1|95.2|89.2|67.0|60.9|75.5|
 
@@ -42,11 +42,11 @@ For each task, we selected the best fine-tuning hyperparameters from the lists b
 
 ## How to use
 
-You can use this model directly with a pipeline for masked language modeling:
+You can use this model directly with a pipeline for masked language modeling (take the case of BERT-Medium):
 
 ```python
 >>> from transformers import pipeline
->>> unmasker = pipeline('fill-mask', model='uer/chinese_roberta_L-
+>>> unmasker = pipeline('fill-mask', model='uer/chinese_roberta_L-8_H-512')
 >>> unmasker("中国的首都是[MASK]京。")
 [
     {'sequence': '[CLS] 中 国 的 首 都 是 北 京 。 [SEP]',
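The fill-mask pipeline returns a list of candidate dicts ordered by score; besides the `'sequence'` shown above, each dict carries `'score'`, `'token'`, and `'token_str'` fields. A model-free sketch of picking out the top candidate, using a hand-written placeholder result (the scores are illustrative, not real model output):

```python
# Placeholder for what unmasker("中国的首都是[MASK]京。") returns: a list of
# candidate dicts. Scores here are made up for illustration.
result = [
    {'sequence': '[CLS] 中 国 的 首 都 是 北 京 。 [SEP]', 'score': 0.99, 'token_str': '北'},
    {'sequence': '[CLS] 中 国 的 首 都 是 南 京 。 [SEP]', 'score': 0.01, 'token_str': '南'},
]

# Candidates come back sorted by score, so result[0] is the best guess;
# max() makes the selection criterion explicit.
best = max(result, key=lambda c: c['score'])
print(best['token_str'])  # 北
```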
@@ -76,8 +76,8 @@ Here is how to use this model to get the features of a given text in PyTorch:
 
 ```python
 from transformers import BertTokenizer, BertModel
-tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-
-model = BertModel.from_pretrained("uer/chinese_roberta_L-
+tokenizer = BertTokenizer.from_pretrained('uer/chinese_roberta_L-8_H-512')
+model = BertModel.from_pretrained("uer/chinese_roberta_L-8_H-512")
 text = "用你喜欢的任何文本替换我。"
 encoded_input = tokenizer(text, return_tensors='pt')
 output = model(**encoded_input)
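`output.last_hidden_state` in the snippet above has shape `[batch, seq_len, hidden]`; a common way to reduce it to one sentence vector is mean pooling over the non-padding positions. A dependency-free sketch of that arithmetic, using tiny hand-made lists as stand-ins for the real hidden states and `encoded_input['attention_mask']`:

```python
# Stand-ins for output.last_hidden_state[0] (seq_len x hidden, here 3 x 2)
# and the matching attention mask; real values would come from the model.
hidden_states = [
    [1.0, 2.0],  # token 0
    [3.0, 4.0],  # token 1
    [5.0, 6.0],  # padding position
]
attention_mask = [1, 1, 0]  # padding positions are excluded from the average

# Masked mean pooling: average each hidden dimension over unmasked tokens only.
n = sum(attention_mask)
sentence_vector = [
    sum(vec[d] for vec, m in zip(hidden_states, attention_mask) if m) / n
    for d in range(len(hidden_states[0]))
]
print(sentence_vector)  # [2.0, 3.0]
```

With real tensors the same reduction is a couple of lines of masked `sum`/division in PyTorch; the sketch only shows which positions contribute.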
@@ -102,7 +102,8 @@ CLUECorpusSmall is used as training data. We found that models pre-trained on CL
 
 Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud TI-ONE](https://cloud.tencent.com/product/tione/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512.
 
-
+Taking the case of BERT-Medium:
+Stage1
 ```
 python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
           --vocab_path models/google_zh_vocab.txt \
@@ -113,14 +114,14 @@ python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
 ```
 python3 pretrain.py --dataset_path cluecorpussmall_seq128_dataset.pt \
           --vocab_path models/google_zh_vocab.txt \
-          --config_path models/
-          --output_model_path models/
+          --config_path models/bert_medium_config.json \
+          --output_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin \
           --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
           --total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
           --learning_rate 1e-4 --batch_size 64 \
           --tie_weights --embedding word_pos_seg --encoder transformer --mask fully_visible --target mlm
 ```
 Stage2
 ```
 python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
           --vocab_path models/google_zh_vocab.txt \
@@ -130,10 +131,10 @@ python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
 ```
 ```
 python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \
-          --pretrained_model_path models/
+          --pretrained_model_path models/cluecorpussmall_roberta_medium_seq128_model.bin-1000000 \
           --vocab_path models/google_zh_vocab.txt \
-          --config_path models/
-          --output_model_path models/
+          --config_path models/bert_medium_config.json \
+          --output_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin \
           --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
           --total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
           --learning_rate 5e-5 --batch_size 16 \
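The two pretraining stages trade per-GPU batch size against sequence length. A quick sanity check of the numbers in the commands above, assuming UER-py's `--batch_size` is per process (one batch per GPU rank per step):

```python
# Effective sequences and tokens per step for each pretraining stage,
# assuming --batch_size is per GPU (8 GPUs via --world_size 8).
gpus = 8

stage1 = {"batch_size": 64, "seq_len": 128, "steps": 1_000_000}
stage2 = {"batch_size": 16, "seq_len": 512, "steps": 250_000}

for name, s in [("stage1", stage1), ("stage2", stage2)]:
    seqs_per_step = s["batch_size"] * gpus
    tokens_per_step = seqs_per_step * s["seq_len"]
    print(name, seqs_per_step, tokens_per_step)

# stage1: 512 sequences/step, 65536 tokens/step
# stage2: 128 sequences/step, 65536 tokens/step
```

Under this reading, shrinking the batch from 64 to 16 while quadrupling the sequence length keeps the token throughput per step constant between the two stages.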