wanng committed on
Commit 33623e3
1 Parent(s): 2ff9b3e

Update README.md

Files changed (1)
  1. README.md +51 -10
README.md CHANGED
@@ -12,17 +12,42 @@ inference: true
widget:
- text: "中国首都位于<mask>。"
---
- # Erlangshen-DeBERTa-v2-186M-Chinese-SentencePiece,one model of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
- The 186 million parameter deberta-V2 base model, using 180G Chinese data, 8 3090TI(24G) training for 21 days,which is a encoder-only transformer structure. Consumed totally 500M samples.
- We pretrained a 128000 vocab from train datasets using sentence piece. And achieve a better in downstream task.
- ## Task Description
- Erlangshen-DeBERTa-v2-186M-Chinese-SentencePiece is pre-trained by bert like mask task from Deberta [paper](https://readpaper.com/paper/3033187248)
- ## Usage
12
  widget:
13
  - text: "中国首都位于<mask>。"
14
  ---
 
15
 
16
+ # Erlangshen-DeBERTa-v2-186M-Chinese-SentencePiece
17
 
18
+ - Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
19
+ - Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)
20
 
21
+ ## 简介 Brief Introduction
22
 
23
+ 善于处理NLU任务,采用sentence piece分词的,中文版的1.86亿参数DeBERTa-v2
24
 
25
+ Good at solving NLU tasks, adopting sentence piece, Chinese DeBERTa-v2 with 186M parameters.
26
+
27
+ ## 模型分类 Model Taxonomy
28
+
29
+ | 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
30
+ | :----: | :----: | :----: | :----: | :----: | :----: |
31
+ | 通用 General | 自然语言理解 NLU | 二郎神 Erlangshen | DeBERTa-v2 | 186M | Chinese-SentencePiece |
32
+
33
+ ## 模型信息 Model Information
34
+
35
+ 为了得到一个中文版的DeBERTa-v2(186M),我们用悟道语料库(180G版本)进行预训练。我们使用了Sentence Piece的方式分词(词表大小:约128000)。具体地,我们在预训练阶段中使用了[封神框架](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen)大概花费了8张3090TI(40G)约21天。
36
+
37
+ To get a Chinese DeBERTa-v2 (186M), we use WuDao Corpora (180 GB version) for pre-training. We employ the sentence piece as the tokenizer (vocabulary size: around 128,000). Specifically, we use the [fengshen framework](https://github.com/IDEA-CCNL/Fengshenbang-LM/tree/main/fengshen) in the pre-training phase which cost about 21 days with 8 3090TI(24G) GPUs.
38
+
39
+ ### 下游效果 Performance
40
+
41
+ 我们展示了下列下游任务的结果(dev集):
42
+
43
+ We present the results (dev set) on the following tasks:
44
+
45
+ | Model | OCNLI | CMNLI |
46
+ | ---------------------------------------------------- | ------ | ------ |
47
+ | RoBERTa-base | 0.743 | 0.7973 |
48
+ | **Erlangshen-DeBERTa-v2-186M-Chinese-SentencePiece** | 0.7625 | 0.81 |
49
+
50
+ ## 使用 Usage
51
 
52
  ```python
53
  from transformers import AutoModelForMaskedLM, AutoTokenizer, FillMaskPipeline
 
@@ -44,15 +69,31 @@ We present the dev results on some tasks.
| RoBERTa-base | 0.743 | 0.7973 |
| **Erlangshen-DeBERTa-v2-186M-Chinese-SentencePiece** | 0.7625 | 0.81 |
- ## Citation
- If you find the resource is useful, please cite the following website in your paper.
+ ## Citation
+
+ If you use this resource in your work, please cite our [paper](https://arxiv.org/abs/2209.02970):
+
+ ```text
+ @article{fengshenbang,
+ author = {Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen and Ruyi Gan and Jiaxing Zhang},
+ title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
+ journal = {CoRR},
+ volume = {abs/2209.02970},
+ year = {2022}
+ }
```
+
+ You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
+
+ ```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
- year={2022},
+ year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
- ```
+ ```
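
The card says the ~128,000-token vocabulary was trained on the pre-training corpus with SentencePiece, but gives no recipe. Below is a minimal sketch using the `sentencepiece` library; `corpus.txt` and every option other than `vocab_size` are illustrative assumptions, not the card's actual settings.

```python
# Hypothetical sketch of training a ~128k SentencePiece vocabulary.
# corpus.txt and all options other than vocab_size are assumptions.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="corpus.txt",          # placeholder path to raw pre-training text
    model_prefix="erlangshen_sp",
    vocab_size=128000,           # matches the ~128,000 vocabulary in the card
    character_coverage=0.9995,   # a common setting for Chinese corpora
    model_type="unigram",        # SentencePiece default; the card does not say
)

# Load the trained model and inspect a sample segmentation.
sp = spm.SentencePieceProcessor(model_file="erlangshen_sp.model")
print(sp.encode("中国首都位于北京。", out_type=str))
```

The resulting `.model` file is what a slow (SentencePiece-backed) Hugging Face tokenizer would wrap.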
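The OCNLI/CMNLI dev numbers imply a standard sequence-classification fine-tune, which neither version of the card spells out. The skeleton below uses the `clue` dataset on the Hub; the repo id, hyperparameters, and preprocessing are placeholders, not the settings behind the reported results.

```python
# Hedged OCNLI fine-tuning skeleton; hyperparameters are placeholders and
# not the settings behind the dev results reported in the card.
import numpy as np
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_id = "IDEA-CCNL/Erlangshen-DeBERTa-v2-186M-Chinese-SentencePiece"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=3)

# OCNLI as distributed in the CLUE benchmark: premise/hypothesis pairs, 3 labels.
ocnli = load_dataset("clue", "ocnli")

def tokenize(batch):
    return tokenizer(batch["sentence1"], batch["sentence2"],
                     truncation=True, max_length=128)

ocnli = ocnli.map(tokenize, batched=True)

def accuracy(eval_pred):
    logits, labels = eval_pred
    return {"accuracy": float((np.argmax(logits, axis=-1) == labels).mean())}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ocnli-erlangshen", num_train_epochs=3),
    train_dataset=ocnli["train"],
    eval_dataset=ocnli["validation"],
    tokenizer=tokenizer,       # enables dynamic padding via the default collator
    compute_metrics=accuracy,
)
trainer.train()
print(trainer.evaluate())
```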
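The Usage snippet in the README is cut off by the diff context above. Here is a minimal sketch of how those imports are typically wired together for fill-mask inference; the Hub repo id is inferred from the model name and is not confirmed by this diff.

```python
# Hedged fill-mask sketch; the repo id is inferred from the model name
# and may differ from the actual Hub id.
from transformers import AutoModelForMaskedLM, AutoTokenizer, FillMaskPipeline

model_id = "IDEA-CCNL/Erlangshen-DeBERTa-v2-186M-Chinese-SentencePiece"  # assumed
# use_fast=False: a SentencePiece checkpoint may ship only a slow tokenizer.
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Fill the mask, mirroring the widget example in the card header; the widget
# uses <mask>, and tokenizer.mask_token keeps the string robust either way.
fill_mask = FillMaskPipeline(model=model, tokenizer=tokenizer)
text = "中国首都位于{}。".format(tokenizer.mask_token)
for pred in fill_mask(text, top_k=5):
    print(pred["token_str"], round(pred["score"], 4))
```

Predictions come back sorted by score, so the first candidate is the model's best guess for the masked span.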