uer committed
Commit: 3eaf964
Parent: 6c92148

Update README.md

Files changed (1): README.md (+4 -4)
README.md CHANGED
@@ -12,11 +12,11 @@ widget:
 
 ## Model description
 
- This is the set of 24 Chinese RoBERTa models pre-trained by [UER-py](https://arxiv.org/abs/1909.05658).
+ This is the set of 24 Chinese RoBERTa models pre-trained by [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658).
 
 [Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the 24 Chinese RoBERTa models. In order to facilitate users in reproducing the results, we used the publicly available corpus and provided all training details.
 
- You can download the 24 Chinese RoBERTa miniatures either from the [UER-py Github page](https://github.com/dbiir/UER-py/), or via HuggingFace from the links below:
+ You can download the 24 Chinese RoBERTa miniatures either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo), or via HuggingFace from the links below:
 
 | | H=128 | H=256 | H=512 | H=768 |
 | -------- | :-----------------------: | :-----------------------: | :-------------------------: | :-------------------------: |
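For orientation, here is a minimal sketch of trying one of these miniatures with the `transformers` fill-mask pipeline. The model id `uer/chinese_roberta_L-8_H-512` is assumed from the naming scheme of the links above; the model card's own usage section remains the authoritative reference.

```python
# Minimal sketch (assumed model id): query the 8-layer, hidden-size-512
# miniature through the Hugging Face fill-mask pipeline.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="uer/chinese_roberta_L-8_H-512")
print(unmasker("北京是[MASK]国的首都。"))  # top candidates for the masked character
```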
@@ -103,7 +103,7 @@ output = model(encoded_input)
 
 ## Training procedure
 
- Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud TI-ONE](https://cloud.tencent.com/product/tione/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes.
+ Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes.
 
 Taking the case of RoBERTa-Medium
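To make the size naming concrete: RoBERTa-Medium in this grid is the 8-layer, hidden-size-512 configuration (matching `--layers_num 8` in the conversion command below). A minimal sketch of that shape, assuming the head count and feed-forward size follow the usual BERT recipe (H/64 heads, 4×H feed-forward) and the Google Chinese BERT vocabulary of 21,128 tokens:

```python
# Minimal sketch of the assumed RoBERTa-Medium shape (L=8, H=512);
# heads = H/64 and feed-forward = 4*H follow the standard BERT recipe,
# and vocab_size=21128 is the Google Chinese BERT vocabulary (assumptions).
from transformers import BertConfig, BertForMaskedLM

config = BertConfig(
    vocab_size=21128,
    num_hidden_layers=8,
    hidden_size=512,
    num_attention_heads=8,
    intermediate_size=2048,
)
model = BertForMaskedLM(config)  # randomly initialized; pre-training supplies the weights
print(f"parameters: {sum(p.numel() for p in model.parameters()):,}")
```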
 
@@ -153,7 +153,7 @@ python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \
 Finally, we convert the pre-trained model into Huggingface's format:
 
 ```
- python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin-250000 \
+ python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_roberta_medium_seq512_model.bin-250000 \
                                                          --output_model_path pytorch_model.bin \
                                                          --layers_num 8 --target mlm
 ```
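As a quick sanity check after conversion, the exported `pytorch_model.bin` can be loaded back with `transformers`. A minimal sketch, assuming the file is placed in a hypothetical `./roberta-medium/` directory together with a matching `config.json` and `vocab.txt` (which the conversion script itself does not produce):

```python
# Minimal sketch: load the converted checkpoint and predict a masked token.
# "./roberta-medium" is a hypothetical directory holding pytorch_model.bin
# plus a matching config.json and vocab.txt.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("./roberta-medium")
model = BertForMaskedLM.from_pretrained("./roberta-medium")

inputs = tokenizer("北京是[MASK]国的首都。", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, sequence_length, vocab_size)

mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
predicted_id = logits[0, mask_pos].argmax().item()
print(tokenizer.convert_ids_to_tokens([predicted_id]))
```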