Pengcheng He committed on
Commit e464797
1 Parent(s): 766641b

Improve README

Files changed (2)
  1. README.md +16 -13
  2. pytorch_model.bin +1 -1
README.md CHANGED
@@ -7,28 +7,31 @@ thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
  license: mit
  ---

- ## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
+ ## DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing

  [DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. With those two improvements, DeBERTa outperforms RoBERTa on a majority of NLU tasks with 80GB of training data.

- Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
+ In [DeBERTa V3](https://arxiv.org/abs/2111.09543), we further improved the efficiency of DeBERTa using ELECTRA-style pre-training with gradient-disentangled embedding sharing. Compared to DeBERTa, our V3 version significantly improves the model performance on downstream tasks. You can find more technical details about the new model in our [paper](https://arxiv.org/abs/2111.09543).

- In [DeBERTa V3](https://arxiv.org/abs/2111.09543), we replaced the MLM objective with the RTD (Replaced Token Detection) objective introduced by ELECTRA for pre-training, along with some innovations to be described in our upcoming paper. Compared to DeBERTa-V2, our V3 version significantly improves the model performance on downstream tasks. You can find a brief introduction to the model in Appendix A11 of our original [paper](https://arxiv.org/abs/2006.03654), but we will provide more details in a separate write-up.
+ Please check the [official repository](https://github.com/microsoft/DeBERTa) for more implementation details and updates.

- The DeBERTa V3 base model comes with 12 layers and a hidden size of 768. Its total parameter count is 183M, since we use a vocabulary containing 128K tokens, which introduces 98M parameters in the embedding layer. This model was trained using the same 160GB data as DeBERTa V2.
+ The DeBERTa V3 base model comes with 12 layers and a hidden size of 768. It has only 86M backbone parameters, with a vocabulary containing 128K tokens that introduces 98M parameters in the embedding layer. This model was trained using the same 160GB data as DeBERTa V2.


  #### Fine-tuning on NLU tasks

- We present the dev results on SQuAD 1.1/2.0 and MNLI tasks.
+ We present the dev results on SQuAD 2.0 and MNLI tasks.

- | Model                | SQuAD 1.1 | SQuAD 2.0 | MNLI-m   |
- |----------------------|-----------|-----------|----------|
- | RoBERTa-base         | 91.5/84.6 | 83.7/80.5 | 87.6     |
- | XLNet-base           | -/-       | -/80.2    | 86.8     |
- | DeBERTa-base         | 93.1/87.2 | 86.2/83.1 | 88.8     |
- | **DeBERTa-v3-base**  | 93.9/88.4 | 88.4/85.4 | 90.6     |
- | DeBERTa-v3-base+SiFT | -/-       | -/-       | **91.0** |
+ | Model                  | Vocabulary (K) | Backbone #Params (M) | SQuAD 2.0 (F1/EM) | MNLI-m/mm (Acc) |
+ |------------------------|----------------|----------------------|-------------------|-----------------|
+ | RoBERTa-base           | 50             | 86                   | 83.7/80.5         | 87.6/-          |
+ | XLNet-base             | 32             | 92                   | -/80.2            | 86.8/-          |
+ | ELECTRA-base           | 30             | 86                   | -/80.5            | 88.8/-          |
+ | DeBERTa-base           | 50             | 100                  | 86.2/83.1         | 88.8/88.5       |
+ | DeBERTa-v3-base        | 128            | 86                   | 88.4/85.4         | 90.6/90.7       |
+ | DeBERTa-v3-base + SiFT | 128            | 86                   | -/-               | 91.0/-          |

+ We present the dev results on SQuAD 1.1/2.0 and MNLI tasks.
+
  #### Fine-tuning with HF transformers

@@ -67,7 +70,7 @@ python -m torch.distributed.launch --nproc_per_node=${num_gpus} \

  ### Citation

- If you find DeBERTa useful for your work, please cite the following paper:
+ If you find DeBERTa useful for your work, please cite the following papers:

  ``` latex
  @misc{he2021debertav3,
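
The parameter breakdown in the updated README (86M backbone plus 98M embedding) can be sanity-checked directly: a 128K-token vocabulary times a hidden size of 768 gives roughly 98M embedding weights, with the remainder in the transformer backbone. The snippet below is a minimal sketch, not part of this commit; it assumes the `transformers`, `sentencepiece`, and `torch` packages are installed and that the Hub checkpoint `microsoft/deberta-v3-base` is reachable.

``` python
# Sanity check of the README's parameter breakdown; not part of this commit.
# Assumes: pip install transformers sentencepiece torch
from transformers import AutoModel

model = AutoModel.from_pretrained("microsoft/deberta-v3-base")

# Vocabulary embedding matrix: ~128K tokens x 768 hidden dims ~= 98M weights.
embedding = model.embeddings.word_embeddings.weight.numel()
total = sum(p.numel() for p in model.parameters())

print(f"embedding parameters: {embedding / 1e6:.0f}M")           # ~98M
print(f"backbone parameters:  {(total - embedding) / 1e6:.0f}M")  # ~86M
```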
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:8a7d0474c09c9e86cbe66ea6f8ac9e6d11670389829b586cd467ccfab7a24f29
+ oid sha256:691d48a2800b926a19e3051def466fc2cca4f59a15e42ce4a0cf7f1b380b5e33
  size 371146213
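
For `pytorch_model.bin`, only the Git LFS pointer changed: the `size` is identical (371146213 bytes) while the `oid`, the SHA-256 of the weights file, differs, so the commit swaps in a checkpoint of the same byte size with different contents. A hypothetical sketch for verifying a downloaded checkpoint against the new pointer, using only the Python standard library:

``` python
# Hypothetical integrity check of a downloaded pytorch_model.bin against the
# LFS pointer in this commit; hashlib and os are in the standard library.
import hashlib
import os

EXPECTED_OID = "691d48a2800b926a19e3051def466fc2cca4f59a15e42ce4a0cf7f1b380b5e33"
EXPECTED_SIZE = 371146213

def sha256_file(path, chunk_size=1 << 20):
    """Hash the file in 1 MiB chunks so a ~371 MB checkpoint never sits in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

path = "pytorch_model.bin"
assert os.path.getsize(path) == EXPECTED_SIZE, "size mismatch"
assert sha256_file(path) == EXPECTED_OID, "sha256 mismatch"
print("checkpoint matches the LFS pointer in commit e464797")
```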