wanng committed
Commit cd703a4
Parent: 2ce5650

Update README.md

Files changed (1): README.md (+1, -1)
README.md CHANGED
@@ -32,7 +32,7 @@ Good at solving NLU tasks, adopting sentence piece, Chinese DeBERTa-v2 with 186M

  ## 模型信息 Model Information

- To obtain a Chinese DeBERTa-v2 (186M), we pre-trained it on the WuDao Corpora (180 GB version). We used SentencePiece for tokenization (vocabulary size: about 128,000). Specifically, the pre-training phase used the [Fengshen framework](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen) and took about 21 days on 8 3090TI (40G) GPUs.
+ To obtain a Chinese DeBERTa-v2 (186M), we pre-trained it on the WuDao Corpora (180 GB version). We used SentencePiece for tokenization (vocabulary size: about 128,000). Specifically, the pre-training phase used the [Fengshen framework](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen) and took about 21 days on 8 3090TI (24G) GPUs.

  To get a Chinese DeBERTa-v2 (186M), we use WuDao Corpora (180 GB version) for pre-training. We employ SentencePiece as the tokenizer (vocabulary size: around 128,000). Specifically, we use the [Fengshen framework](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen) in the pre-training phase, which took about 21 days with 8 3090TI (24G) GPUs.
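
Since the README this commit touches is a model card, a short usage sketch can make the description above concrete. The snippet below is a minimal fill-mask probe with Hugging Face `transformers`; the repo id `IDEA-CCNL/Erlangshen-DeBERTa-v2-186M-Chinese-SentencePiece` is an assumption inferred from the card's wording (186M parameters, SentencePiece tokenizer), not something stated in the diff.

```python
# Minimal sketch, not part of the commit: load the checkpoint and run a
# fill-mask probe. The repo id is assumed from the card's naming; replace
# it with the actual Hub id of this repository.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

repo_id = "IDEA-CCNL/Erlangshen-DeBERTa-v2-186M-Chinese-SentencePiece"  # assumed Hub id

tokenizer = AutoTokenizer.from_pretrained(repo_id)      # SentencePiece vocab, ~128k entries
model = AutoModelForMaskedLM.from_pretrained(repo_id)   # 186M-parameter DeBERTa-v2 encoder

text = "生活的真谛是[MASK]。"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position and decode the highest-scoring token for it.
mask_positions = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_ids = logits[0, mask_positions].argmax(dim=-1)
print(tokenizer.decode(predicted_ids))
```

`AutoModelForMaskedLM` is a natural entry point here because the card describes a pre-trained encoder; for the NLU tasks the card mentions, one would typically swap in `AutoModelForSequenceClassification` and fine-tune.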