Update README.md
README.md (changed)
```diff
@@ -27,7 +27,7 @@ It topped FewCLUE and ZeroCLUE benchmarks in 2021, solving NLU tasks, was the la
 | 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
 | :----: | :----: | :----: | :----: | :----: | :----: |
-| 通用 General | 自然语言理解 NLU | 二郎神 Erlangshen |
+| 通用 General | 自然语言理解 NLU | 二郎神 Erlangshen | MegatronBERT | 1.3B | - |
 
 ## 模型信息 Model Information
 
@@ -39,13 +39,13 @@ We follow [Megatron-LM](https://github.com/NVIDIA/Megatron-LM), using 32 A100s a
 
 ### 成就 Achievements
 
-1.2021年11月10
-2.2022年1月24
-3.在2022年7月10
+1.2021年11月10日,Erlangshen-MegatronBert-1.3B在FewCLUE上取得第一。其中,它在CHIDF(成语填空)和TNEWS(新闻分类)子任务中的表现优于人类表现。此外,它在CHIDF(成语填空)、CSLDCP(学科文献分类)、OCNLI(自然语言推理)任务中均名列前茅。
+2.2022年1月24日,Erlangshen-MegatronBert-1.3B在CLUE基准测试的ZeroCLUE中取得第一。具体到子任务,我们在CSLDCP(主题文献分类)、TNEWS(新闻分类)、IFLYTEK(应用描述分类)、CSL(抽象关键字识别)和CLUEWSC(指代消歧)任务中取得第一。
+3.在2022年7月10日,Erlangshen-MegatronBert-1.3B在CLUE基准的语义匹配任务中取得第一。
 
-1.On November 10, 2021, Erlangshen topped the FewCLUE benchmark. Among them, our Erlangshen outperformed human performance in CHIDF (idiom fill-in-the-blank) and TNEWS (news classification) subtasks. In addition, our Erlangshen ranked the top in CHIDF (idiom fill-in-the-blank), CSLDCP (subject literature classification), and OCNLI (natural language inference) tasks.
-2.On January 24, 2022, Erlangshen-
-3.Erlangshen topped the CLUE benchmark semantic matching task on July 10, 2022.
+1.On November 10, 2021, Erlangshen-MegatronBert-1.3B topped the FewCLUE benchmark. It outperformed human performance on the CHIDF (idiom fill-in-the-blank) and TNEWS (news classification) subtasks, and ranked first in the CHIDF (idiom fill-in-the-blank), CSLDCP (subject literature classification), and OCNLI (natural language inference) tasks.
+2.On January 24, 2022, Erlangshen-MegatronBert-1.3B topped the ZeroCLUE benchmark, ranking first in the CSLDCP (subject literature classification), TNEWS (news classification), IFLYTEK (application description classification), CSL (abstract keyword recognition), and CLUEWSC (referential disambiguation) subtasks.
+3.Erlangshen-MegatronBert-1.3B topped the CLUE benchmark semantic matching task on July 10, 2022.
 
 ### 下游效果 Performance
```