Update README.md
README.md
## 简介 Brief Introduction

善于处理NLU任务,采用全词掩码的,中文版的3.2亿参数DeBERTa-v2-Large。

The Chinese DeBERTa-v2-Large with 320M parameters, pre-trained with Whole Word Masking and good at solving NLU tasks.

## 模型分类 Model Taxonomy

## 模型信息 Model Information

参考论文 Reference Paper: [DeBERTa: Decoding-enhanced BERT with Disentangled Attention](https://readpaper.com/paper/3033187248)

为了得到一个中文版的DeBERTa-v2-large(320M),我们用悟道语料库(180G版本)进行预训练。我们在MLM中使用了全词掩码(wwm)的方式。具体地,我们在预训练阶段中使用了[封神框架](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen),大概花费了8张A100(80G)约7天。

To get a Chinese DeBERTa-v2-large (320M), we pre-trained it on the WuDao Corpora (180 GB version). We employed Whole Word Masking (WWM) in the MLM objective. Specifically, we used the [fengshen framework](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen) during pre-training, which took about 7 days on 8 A100 (80 GB) GPUs.
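
To make the whole-word-masking idea concrete, below is a minimal toy sketch of WWM for Chinese text. It is an illustration only, not the actual data pipeline used for pre-training (that lives in the fengshen framework); the use of `jieba` for word segmentation and the 15% masking rate are assumptions made for this example.

```python
# Toy sketch of Whole Word Masking (WWM) for Chinese text — illustration only,
# NOT the real pre-training pipeline. The segmenter (jieba) and the 15% rate
# are assumptions made for this example.
import random

import jieba  # assumed Chinese word segmenter, only for this illustration


def whole_word_mask(sentence: str, mask_prob: float = 0.15, mask_token: str = "[MASK]") -> str:
    """Mask every character of a chosen word together, instead of masking characters independently."""
    out = []
    for word in jieba.cut(sentence):
        if random.random() < mask_prob:
            # The whole word is hidden, so the model must recover it as a unit.
            out.append(mask_token * len(word))
        else:
            out.append(word)
    return "".join(out)


print(whole_word_mask("我们用悟道语料库对模型进行预训练"))
```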
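
For completeness, a minimal sketch of loading the resulting checkpoint as a fill-mask model with Hugging Face `transformers` might look like the following. The repository ID below is a placeholder (the actual hub ID is not stated in this diff), and `[MASK]` is assumed to be the tokenizer's mask token.

```python
# Minimal sketch: query the pre-trained model through its MLM head with transformers.
# "IDEA-CCNL/<model-id>" is a placeholder — substitute the checkpoint's real hub ID.
from transformers import AutoModelForMaskedLM, AutoTokenizer, FillMaskPipeline

model_id = "IDEA-CCNL/<model-id>"  # placeholder, not taken from this README

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

fill_mask = FillMaskPipeline(model=model, tokenizer=tokenizer)
print(fill_mask("生活的真谛是[MASK]。"))  # top predictions for the masked position
```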