Update README.md

README.md

This is the set of 5 Chinese word-based RoBERTa models pre-trained by [UER-py](https://github.com/dbiir/UER-py/).

We use a sentencepiece model to segment Chinese text into words and use the segmented text to train these word-based RoBERTa models. The Base model can be downloaded via HuggingFace from the link [roberta-base-word-chinese-cluecorpussmall](https://huggingface.co/uer/roberta-base-word-chinese-cluecorpussmall).

We found some bugs in the Hosted Inference API: if the target is a single word, the entire sentence is displayed, while if the target consists of multiple words, only the target words are displayed. To see the predictions correctly, we recommend using the JSON output in the lower left corner of the Hosted Inference API.
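
For a raw view of the predictions, the model can also be queried over the Inference API and the JSON read directly. A minimal sketch (the endpoint follows the standard `api-inference.huggingface.co` pattern, and the access token is a placeholder):

```
>>> import requests
>>> API_URL = "https://api-inference.huggingface.co/models/uer/roberta-base-word-chinese-cluecorpussmall"
>>> headers = {"Authorization": "Bearer YOUR_HF_TOKEN"}  # placeholder token
>>> requests.post(API_URL, headers=headers, json={"inputs": "[MASK]的首都是北京。"}).json()
```
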
You can download the 5 Chinese RoBERTa miniatures either from the [UER-py Github page](https://github.com/dbiir/UER-py/), or via HuggingFace from the links below:

|            |            Link             |
| ---------- | :-------------------------: |
| **Tiny**   |  [**2/128 (Tiny)**][2_128]  |
| **Medium** | [**8/512 (Medium)**][8_512] |
| **Base**   | [**12/768 (Base)**][12_768] |

## How to use
You can use this model directly with a pipeline for masked language modeling:
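
A minimal example, assuming the word-based Medium miniature `uer/roberta-medium-word-chinese-cluecorpussmall` is used:

```
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='uer/roberta-medium-word-chinese-cluecorpussmall')
>>> unmasker("[MASK]的首都是北京。")
```
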
## Training data
[CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data. Google's [sentencepiece](https://github.com/google/sentencepiece) is used for word segmentation. The sentencepiece model is trained on the CLUECorpusSmall corpus:

```
>>> import sentencepiece as spm
>>> spm.SentencePieceTrainer.train(input='cluecorpussmall.txt',
                                   model_prefix='cluecorpussmall_spm',
                                   vocab_size=100000,
                                   max_sentence_length=1024,
                                   max_sentencepiece_length=6,
                                   # ... (additional arguments not shown)
                                   )
```
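
The trained sentencepiece model can then be loaded to check the word segmentation; a small sketch (the file name follows the `model_prefix` above):

```
>>> import sentencepiece as spm
>>> sp = spm.SentencePieceProcessor(model_file='cluecorpussmall_spm.model')
>>> sp.encode('北京是中国的首都。', out_type=str)  # returns a list of word pieces
```
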
## Training procedure
Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud TI-ONE](https://cloud.tencent.com/product/tione/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes.

Taking the case of word-based RoBERTa-Medium:

Stage1:

```
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
                      --spm_model_path models/cluecorpussmall_spm.model \
                      --dataset_path cluecorpussmall_word_seq128_dataset.pt \
                      --processes_num 32 --seq_length 128 \
                      --dynamic_masking --target mlm
```

```
python3 pretrain.py --dataset_path cluecorpussmall_word_seq128_dataset.pt \
                    --spm_model_path models/cluecorpussmall_spm.model \
                    --config_path models/bert/medium_config.json \
                    --output_model_path models/cluecorpussmall_word_roberta_medium_seq128_model.bin \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \
                    --learning_rate 1e-4 --batch_size 64 \
                    --embedding word_pos_seg --encoder transformer --mask fully_visible --target mlm --tie_weights
```

Stage2:
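
The command that builds the seq-512 dataset is omitted above; it presumably mirrors the Stage1 preprocessing with a longer sequence length, for example:

```
# assumed to mirror Stage1: only --seq_length and the dataset path change
python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \
                      --spm_model_path models/cluecorpussmall_spm.model \
                      --dataset_path cluecorpussmall_word_seq512_dataset.pt \
                      --processes_num 32 --seq_length 512 \
                      --dynamic_masking --target mlm
```
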
```
python3 pretrain.py --dataset_path cluecorpussmall_word_seq512_dataset.pt \
                    --pretrained_model_path models/cluecorpussmall_word_roberta_medium_seq128_model.bin-1000000 \
                    --spm_model_path models/cluecorpussmall_spm.model \
                    --config_path models/bert/medium_config.json \
                    --output_model_path models/cluecorpussmall_word_roberta_medium_seq512_model.bin \
                    --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \
                    --total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \
                    --learning_rate 5e-5 --batch_size 16 \
                    --embedding word_pos_seg --encoder transformer --mask fully_visible --target mlm --tie_weights
```

Finally, we convert the pre-trained model into Huggingface's format:

```
python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_word_roberta_medium_seq512_model.bin-250000 \
                                                        --output_model_path pytorch_model.bin \
                                                        --layers_num 8 --target mlm
```
|