Pengcheng He committed
Commit 8632ca6
1 Parent(s): 66fb451

DeBERTa XLarge-v2 MNLI model

Files changed (5)
  1. README.md +43 -0
  2. config.json +28 -0
  3. pytorch_model.bin +3 -0
  4. spm.model +3 -0
  5. tokenizer_config.json +3 -0
README.md ADDED
@@ -0,0 +1,43 @@
+ ---
+ thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
+ license: mit
+ ---
+
+ ## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
+
+ [DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. With these two improvements, DeBERTa outperforms RoBERTa on a majority of NLU tasks with 80GB of training data.
+
+ Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
+
+ This is the DeBERTa V2 XLarge model fine-tuned on the MNLI task. It has 24 layers and a hidden size of 1536, with 900M parameters in total.
+
+ | MNLI-m | MNLI-mm |
+ |--------|---------|
+ | 91.74  | 91.59   |
+
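+ As a quick usage sketch (assuming this checkpoint is published on the Hugging Face Hub as `microsoft/deberta-v2-xlarge-mnli`, and that `transformers` and `sentencepiece` are installed), the model can be loaded for MNLI-style inference like this:
+
+ ```python
+ # Minimal sketch; "microsoft/deberta-v2-xlarge-mnli" is assumed to be the
+ # published Hub id for this checkpoint.
+ import torch
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer
+
+ model_id = "microsoft/deberta-v2-xlarge-mnli"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForSequenceClassification.from_pretrained(model_id)
+ model.eval()
+
+ # MNLI is a sentence-pair task: premise and hypothesis are encoded together.
+ premise = "A soccer game with multiple males playing."
+ hypothesis = "Some men are playing a sport."
+ inputs = tokenizer(premise, hypothesis, return_tensors="pt")
+
+ with torch.no_grad():
+     logits = model(**inputs).logits
+
+ # Read the label order from the checkpoint's config rather than assuming
+ # a fixed contradiction/neutral/entailment ordering.
+ pred = logits.argmax(dim=-1).item()
+ print(model.config.id2label[pred])
+ ```
+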
+ #### Fine-tuning on NLU tasks
+
+ We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.
+
+ | Model                 | SQuAD 1.1 (F1/EM) | SQuAD 2.0 (F1/EM) | MNLI-m   | SST-2 | QNLI | CoLA | RTE  | MRPC | QQP  | STS-B |
+ |-----------------------|-------------------|-------------------|----------|-------|------|------|------|------|------|-------|
+ | BERT-Large            | 90.9/84.1         | 81.8/79.0         | 86.6     | 93.2  | 92.3 | 60.6 | 70.4 | 88.0 | 91.3 | 90.0  |
+ | RoBERTa-Large         | 94.6/88.9         | 89.4/86.5         | 90.2     | 96.4  | 93.9 | 68.0 | 86.6 | 90.9 | 92.2 | 92.4  |
+ | XLNet-Large           | 95.1/89.7         | 90.6/87.9         | 90.8     | 97.0  | 94.9 | 69.0 | 85.9 | 90.8 | 92.3 | 92.5  |
+ | DeBERTa-Large         | 95.5/90.1         | 90.7/88.0         | 91.1     | 96.5  | 95.3 | 69.5 | 88.1 | 92.5 | 92.3 | 92.5  |
+ | **DeBERTa-XLarge-V2** | -                 | -                 | **91.7** | -     | -    | -    | -    | -    | -    | -     |
+
+ ### Citation
+
+ If you find DeBERTa useful for your work, please cite the following paper:
+
+ ```latex
+ @misc{he2020deberta,
+     title={DeBERTa: Decoding-enhanced BERT with Disentangled Attention},
+     author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
+     year={2020},
+     eprint={2006.03654},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```
config.json ADDED
@@ -0,0 +1,28 @@
+ {
+   "attention_probs_dropout_prob": 0.1,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 1536,
+   "initializer_range": 0.02,
+   "intermediate_size": 6144,
+   "max_position_embeddings": 512,
+   "relative_attention": true,
+   "position_buckets": 256,
+   "norm_rel_ebd": "layer_norm",
+   "share_att_key": true,
+   "pos_att_type": "p2c|c2p",
+   "layer_norm_eps": 1e-7,
+   "conv_kernel_size": 3,
+   "conv_act": "gelu",
+   "max_relative_positions": -1,
+   "position_biased_input": false,
+   "num_attention_heads": 24,
+   "attention_head_size": 64,
+   "num_hidden_layers": 24,
+   "type_vocab_size": 0,
+   "vocab_size": 128100,
+   "pooling": {
+     "dropout": 0,
+     "hidden_act": "gelu"
+   }
+ }
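As a sanity check on the README's "900M parameters in total" figure, here is a rough back-of-the-envelope count from the config values above; it is a sketch that ignores biases, LayerNorms, relative-position embeddings, and the convolution layer, so it undershoots slightly:

```python
# Rough parameter count from config.json values (approximation only).
hidden, intermediate, layers, vocab = 1536, 6144, 24, 128100

embeddings = vocab * hidden            # ~197M token-embedding weights
attention = 4 * hidden * hidden        # Q, K, V, and output projections
ffn = 2 * hidden * intermediate        # the two feed-forward projections
per_layer = attention + ffn            # ~28.3M per transformer block

total = embeddings + layers * per_layer
print(f"~{total / 1e6:.0f}M parameters")  # ~876M, consistent with the stated 900M
```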
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:03cc991edd1ed915780bf330e436d91bbdda7fc207048f58996f1a9fbe87a312
+ size 1773982994
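The three lines above are a Git LFS pointer: the repository stores only the digest and size, and `git lfs` fetches the actual weights. A small sketch to verify a downloaded copy against the pointer's digest (the local filename is an assumption):

```python
# Verify downloaded weights against the sha256 digest from the LFS pointer.
import hashlib

expected = "03cc991edd1ed915780bf330e436d91bbdda7fc207048f58996f1a9fbe87a312"
sha = hashlib.sha256()
with open("pytorch_model.bin", "rb") as f:  # assumed local path
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha.update(chunk)
assert sha.hexdigest() == expected, "digest mismatch"
```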
spm.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5598d5e96f339a8d980c15f9afd405a2e5e1be7db41de3ed13b0f03fac1e8c17
+ size 2447305
tokenizer_config.json ADDED
@@ -0,0 +1,3 @@
+ {
+   "do_lower_case": false
+ }
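`"do_lower_case": false` means the SentencePiece tokenizer is case-sensitive. A quick sketch to confirm, reusing the assumed Hub id from the README example:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge-mnli")
# Case is preserved, so these produce different token sequences.
print(tok.tokenize("Hello World"))
print(tok.tokenize("hello world"))
```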