Update README.md
README.md CHANGED
@@ -4,6 +4,7 @@ datasets: CLUECorpusSmall
widget:
- text: "北京是[MASK]国的首都。"

+
---


@@ -11,7 +12,7 @@ widget:

## Model description

-This is the set of 24 Chinese RoBERTa models pre-trained by [UER-py](https://
+This is the set of 24 Chinese RoBERTa models pre-trained by [UER-py](https://arxiv.org/abs/1909.05658).

[Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the 24 Chinese RoBERTa models. In order to facilitate users to reproduce the results, we used the publicly available corpus and provided all training details.

@@ -19,12 +20,12 @@ You can download the 24 Chinese RoBERTa miniatures either from the [UER-py Githu

|          | H=128                     | H=256                     | H=512                       | H=768                       |
| -------- | :-----------------------: | :-----------------------: | :-------------------------: | :-------------------------: |
-| **L=2**  | [**2/128 (Tiny)**][2_128] |
-| **L=4**  |
-| **L=6**  |
-| **L=8**  |
-| **L=10** |
-| **L=12** |
+| **L=2**  | [**2/128 (Tiny)**][2_128] | [2/256][2_256]            | [2/512][2_512]              | [2/768][2_768]              |
+| **L=4**  | [4/128][4_128]            | [**4/256 (Mini)**][4_256] | [**4/512 (Small)**][4_512]  | [4/768][4_768]              |
+| **L=6**  | [6/128][6_128]            | [6/256][6_256]            | [6/512][6_512]              | [6/768][6_768]              |
+| **L=8**  | [8/128][8_128]            | [8/256][8_256]            | [**8/512 (Medium)**][8_512] | [8/768][8_768]              |
+| **L=10** | [10/128][10_128]          | [10/256][10_256]          | [10/512][10_512]            | [10/768][10_768]            |
+| **L=12** | [12/128][12_128]          | [12/256][12_256]          | [12/512][12_512]            | [**12/768 (Base)**][12_768] |

Here are scores on the development set of six Chinese tasks:

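The table above links each of the 24 checkpoints by its layer count (L) and hidden size (H). As a minimal usage sketch (not part of the README change itself, and assuming the `uer/chinese_roberta_L-12_H-768` repository from the [12_768] reference together with the widget sentence from the front matter), the fill-mask pipeline from `transformers` can query any of them:

```python
from transformers import pipeline

# Assumption: "uer/chinese_roberta_L-12_H-768" is the Base checkpoint linked as
# [12_768] at the bottom of this change; any other repository from the table
# should work the same way.
unmasker = pipeline("fill-mask", model="uer/chinese_roberta_L-12_H-768")

# Widget example from the YAML front matter:
# "Beijing is the capital of [MASK] country."
print(unmasker("北京是[MASK]国的首都。"))
```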
@@ -102,7 +103,7 @@ output = model(encoded_input)

## Training procedure

-Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud TI-ONE](https://cloud.tencent.com/product/tione/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512.
+Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud TI-ONE](https://cloud.tencent.com/product/tione/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes.

Taking the case of RoBERTa-Medium

@@ -169,8 +170,27 @@ python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path model
}
```

-[2_128]:
-[
-[
-[
-[
+[2_128]:https://huggingface.co/uer/chinese_roberta_L-2_H-128
+[2_256]:https://huggingface.co/uer/chinese_roberta_L-2_H-256
+[2_512]:https://huggingface.co/uer/chinese_roberta_L-2_H-512
+[2_768]:https://huggingface.co/uer/chinese_roberta_L-2_H-768
+[4_128]:https://huggingface.co/uer/chinese_roberta_L-4_H-128
+[4_256]:https://huggingface.co/uer/chinese_roberta_L-4_H-256
+[4_512]:https://huggingface.co/uer/chinese_roberta_L-4_H-512
+[4_768]:https://huggingface.co/uer/chinese_roberta_L-4_H-768
+[6_128]:https://huggingface.co/uer/chinese_roberta_L-6_H-128
+[6_256]:https://huggingface.co/uer/chinese_roberta_L-6_H-256
+[6_512]:https://huggingface.co/uer/chinese_roberta_L-6_H-512
+[6_768]:https://huggingface.co/uer/chinese_roberta_L-6_H-768
+[8_128]:https://huggingface.co/uer/chinese_roberta_L-8_H-128
+[8_256]:https://huggingface.co/uer/chinese_roberta_L-8_H-256
+[8_512]:https://huggingface.co/uer/chinese_roberta_L-8_H-512
+[8_768]:https://huggingface.co/uer/chinese_roberta_L-8_H-768
+[10_128]:https://huggingface.co/uer/chinese_roberta_L-10_H-128
+[10_256]:https://huggingface.co/uer/chinese_roberta_L-10_H-256
+[10_512]:https://huggingface.co/uer/chinese_roberta_L-10_H-512
+[10_768]:https://huggingface.co/uer/chinese_roberta_L-10_H-768
+[12_128]:https://huggingface.co/uer/chinese_roberta_L-12_H-128
+[12_256]:https://huggingface.co/uer/chinese_roberta_L-12_H-256
+[12_512]:https://huggingface.co/uer/chinese_roberta_L-12_H-512
+[12_768]:https://huggingface.co/uer/chinese_roberta_L-12_H-768
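All 24 reference links added above follow one repository naming pattern, `uer/chinese_roberta_L-{layers}_H-{hidden}`, so a checkpoint can also be selected programmatically. A minimal sketch, assuming these repositories load with the standard `transformers` BERT classes (an assumption; the usage section of the README is not shown in this diff):

```python
from transformers import BertTokenizer, BertModel

# Assumption: repository ids follow the pattern visible in the 24 links above.
layers, hidden = 8, 512  # the Medium configuration from the table
repo_id = f"uer/chinese_roberta_L-{layers}_H-{hidden}"

tokenizer = BertTokenizer.from_pretrained(repo_id)
model = BertModel.from_pretrained(repo_id)

# Encode a sentence and take the final hidden states as text features.
encoded_input = tokenizer("北京是中国的首都。", return_tensors="pt")
output = model(**encoded_input)
print(output.last_hidden_state.shape)  # (1, sequence_length, 512) for H=512
```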