gxy committed on
Commit
41af8ad
1 Parent(s): f22e343

FEAT: first commit

README.md CHANGED
@@ -1,3 +1,58 @@
  ---
+ language:
+ - zh
+
  license: apache-2.0
+
+ tags:
+ - bert
+
+ inference: true
+
+ widget:
+ - text: "中国首都位于[MASK]。"
  ---
+ # Erlangshen-DeBERTa-v2-186M-Chinese-SentencePiece, one of the [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM) models
+
+ A 186-million-parameter DeBERTa-v2 base model with an encoder-only transformer structure, pretrained on 180 GB of Chinese data for 21 days on 8 RTX 3090 Ti (24 GB) GPUs, consuming 500M samples in total.
+
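+ As a rough sanity check (not part of the original card), the 186M figure can be reproduced by instantiating a backbone from the hyperparameters in this repository's `config.json` and counting parameters:
+
+ ```python
+ from transformers import DebertaV2Config, DebertaV2Model
+
+ # Hyperparameters copied from config.json; extra keys such as position_buckets
+ # are stored on the config object and picked up by the DeBERTa-v2 model code.
+ config = DebertaV2Config(
+     vocab_size=128128,
+     hidden_size=768,
+     num_hidden_layers=12,
+     num_attention_heads=12,
+     intermediate_size=3072,
+     max_position_embeddings=512,
+     type_vocab_size=0,
+     relative_attention=True,
+     position_buckets=256,
+     norm_rel_ebd="layer_norm",
+     share_att_key=True,
+     pos_att_type="c2p|p2c",
+     conv_kernel_size=3,
+     conv_act="gelu",
+     position_biased_input=False,
+ )
+ model = DebertaV2Model(config)
+ print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.0f}M parameters")  # ~186M
+ ```
+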
+ We trained a 128,000-token vocabulary on the training data with SentencePiece, which yields better performance on downstream tasks.
+
+ ## Task Description
+
+ Erlangshen-DeBERTa-v2-186M-Chinese-SentencePiece is pre-trained with a BERT-style masked language modeling task, following the DeBERTa [paper](https://readpaper.com/paper/3033187248).
+
+ ## Usage
+
+ ```python
+ from transformers import AutoModelForMaskedLM, AutoTokenizer, FillMaskPipeline
+
+ tokenizer = AutoTokenizer.from_pretrained('IDEA-CCNL/Erlangshen-DeBERTa-v2-186M-Chinese-SentencePiece', use_fast=False)
+ model = AutoModelForMaskedLM.from_pretrained('IDEA-CCNL/Erlangshen-DeBERTa-v2-186M-Chinese-SentencePiece')
+
+ # This model's mask token is <mask> (see special_tokens_map.json), so build the
+ # prompt from tokenizer.mask_token instead of hard-coding [MASK].
+ text = f'中国首都位于{tokenizer.mask_token}。'
+ fillmask_pipe = FillMaskPipeline(model, tokenizer)
+ print(fillmask_pipe(text, top_k=10))
+ ```
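+
+ For reference, the pipeline call above is roughly equivalent to the following hand-rolled version (a minimal sketch, not part of the original card, reusing `model`, `tokenizer`, and `text` from the snippet above):
+
+ ```python
+ import torch
+
+ # Run a forward pass and decode the highest-scoring token at the masked position.
+ inputs = tokenizer(text, return_tensors='pt')
+ with torch.no_grad():
+     logits = model(**inputs).logits
+ mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
+ top_id = logits[0, mask_pos].argmax(dim=-1)
+ print(tokenizer.decode(top_id))
+ ```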
+
+ ## Finetune
+
+ We report dev-set results on some tasks.
+
+ | Model | OCNLI | CMNLI |
+ | ---------------------------------------------------- | ------ | ------ |
+ | RoBERTa-base | 0.743 | 0.7973 |
+ | **Erlangshen-DeBERTa-v2-186M-Chinese-SentencePiece** | 0.7625 | 0.81 |
+
+ ## Citation
+
+ If you find this resource useful, please cite the following website in your paper.
+
+ ```
+ @misc{Fengshenbang-LM,
+   title={Fengshenbang-LM},
+   author={IDEA-CCNL},
+   year={2022},
+   howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
+ }
+ ```
added_tokens.json ADDED
@@ -0,0 +1 @@
+ {"<mask>": 128000}
config.json ADDED
@@ -0,0 +1,25 @@
+ {
+   "model_type": "deberta-v2",
+   "attention_probs_dropout_prob": 0.1,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "max_position_embeddings": 512,
+   "relative_attention": true,
+   "position_buckets": 256,
+   "norm_rel_ebd": "layer_norm",
+   "share_att_key": true,
+   "pos_att_type": "c2p|p2c",
+   "conv_kernel_size": 3,
+   "conv_act": "gelu",
+   "layer_norm_eps": 1e-7,
+   "max_relative_positions": -1,
+   "position_biased_input": false,
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "type_vocab_size": 0,
+   "num_labels": 15,
+   "vocab_size": 128128
+ }
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dbceb61b7dc9acbfc4119b4f43caeae34f45204af4a68101dab6bbf5beefc121
+ size 372721635
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "sep_token": "</s>", "pad_token": "<pad>", "cls_token": "<s>", "mask_token": "<mask>"}
spm.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ec224fdac5b0511c6b901a79c0a4355f069b624dfc04400860d5d8fad4404e40
+ size 2390173
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,15 @@
+ {
+   "do_lower_case": false,
+   "bos_token": "<s>",
+   "eos_token": "</s>",
+   "unk_token": "<unk>",
+   "sep_token": "</s>",
+   "pad_token": "<pad>",
+   "cls_token": "<s>",
+   "mask_token": "<mask>",
+   "split_by_punct": false,
+   "sp_model_kwargs": {},
+   "special_tokens_map_file": null,
+   "name_or_path": "/cognitive_comp/gaoxinyu/pretrained_model/deberta-base-sp",
+   "tokenizer_class": "DebertaV2Tokenizer"
+ }