---
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
pipeline_tag: zero-shot-classification
---

## DeBERTa: Decoding-enhanced BERT with Disentangled Attention

[DeBERTa](https://arxiv.org/abs/2006.03654) improves on BERT and RoBERTa using disentangled attention and an enhanced mask decoder, and it outperforms both on a majority of NLU tasks with 80GB of training data. Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.

This is the DeBERTa V2 XXLarge model, with 48 layers and a hidden size of 1536. It has 1.5B parameters in total and was trained on 160GB of raw data.
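
The checkpoint loads with the standard `transformers` auto classes. Below is a minimal feature-extraction sketch, not an official usage recipe; it assumes `torch`, `transformers`, and `sentencepiece` are installed:

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Load the pretrained DeBERTa V2 XXLarge checkpoint referenced above.
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xxlarge")
model = AutoModel.from_pretrained("microsoft/deberta-v2-xxlarge")

# Encode an example sentence and extract the encoder hidden states.
inputs = tokenizer("DeBERTa uses disentangled attention.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (1, sequence_length, 1536)
```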

### Fine-tuning on NLU tasks

We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.

| Model                      | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP | STS-B |
|----------------------------|-----------|-----------|-----------|-------|------|------|-----|------|-----|-------|
 
#### Notes

- <sup>1</sup> Following RoBERTa, for RTE, MRPC, and STS-B we fine-tune starting from the MNLI fine-tuned models: [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), and [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results on SST-2/QQP/QNLI/SQuAD v2.0 also improve slightly when starting from MNLI fine-tuned models; however, for those four tasks we only report the numbers obtained by fine-tuning from the pretrained base models.
- <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, we recommend using **DeepSpeed**, as it is faster and saves memory.

Run with `DeepSpeed`:

```bash
pip install datasets
pip install deepspeed

# Download the DeepSpeed config file
wget https://huggingface.co/microsoft/deberta-v2-xxlarge/resolve/main/ds_config.json -O ds_config.json

export TASK_NAME=mnli
output_dir="ds_results"
num_gpus=8
batch_size=8

python -m torch.distributed.launch --nproc_per_node=${num_gpus} \
  run_glue.py \
  --model_name_or_path microsoft/deberta-v2-xxlarge \
  --task_name $TASK_NAME \
  --do_train \
  --do_eval \
  --max_seq_length 256 \
  --per_device_train_batch_size ${batch_size} \
  --learning_rate 3e-6 \
  --num_train_epochs 3 \
  --output_dir $output_dir \
  --overwrite_output_dir \
  --logging_steps 10 \
  --logging_dir $output_dir \
  --deepspeed ds_config.json
```
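
The hosted `ds_config.json` is the file actually used above. Purely as an illustration of what such a config tends to contain, the sketch below writes a minimal ZeRO stage 2 + fp16 config; the `"auto"` values are filled in by the HF `Trainer` from its own arguments. The settings here are assumptions of the same general shape, not the contents of the hosted file:

```python
import json

# Illustrative DeepSpeed config: ZeRO stage 2 with fp16.
# NOT the exact hosted ds_config.json -- a minimal stand-in of the same shape.
ds_config = {
    "fp16": {"enabled": "auto"},
    "zero_optimization": {
        "stage": 2,
        "overlap_comm": True,          # overlap gradient reduction with backprop
        "contiguous_gradients": True,  # reduce memory fragmentation
    },
    "train_batch_size": "auto",               # resolved by the HF Trainer
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
}

with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```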

You can also run with `--sharded_ddp`:

```bash
cd transformers/examples/text-classification/
export TASK_NAME=mnli

python -m torch.distributed.launch --nproc_per_node=8 run_glue.py \
  --model_name_or_path microsoft/deberta-v2-xxlarge \
  --task_name $TASK_NAME --do_train --do_eval --max_seq_length 256 --per_device_train_batch_size 8 \
  --learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ \
  --overwrite_output_dir --sharded_ddp --fp16
```
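
Since this card is tagged `zero-shot-classification`, here is a minimal sketch of serving that pipeline with the MNLI fine-tuned checkpoint linked in the notes above; the example text and candidate labels are illustrative:

```python
from transformers import pipeline

# Zero-shot classification backed by an MNLI fine-tuned DeBERTa checkpoint
# (the DeBERTa-V2-XXLarge-MNLI model linked in the notes).
classifier = pipeline(
    "zero-shot-classification",
    model="microsoft/deberta-v2-xxlarge-mnli",
)

result = classifier(
    "The new DeBERTa release improves results on several GLUE tasks.",
    candidate_labels=["machine learning", "sports", "politics"],
)
print(result["labels"][0])  # highest-scoring label
```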

### Citation

If you find DeBERTa useful for your work, please cite the following paper:

```bibtex
@inproceedings{he2021deberta,
  title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
  author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
  booktitle={International Conference on Learning Representations},
  year={2021},
  url={https://openreview.net/forum?id=XPZIaotutsD}
}
```