NDugar committed on
Commit
9eb624d
1 Parent(s): 0be1e78

Update README.md

Files changed (1)
  1. README.md +1 -67
README.md CHANGED
@@ -9,70 +9,4 @@ thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
  license: mit
  pipeline_tag: zero-shot-classification
  ---
- ## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
- [DeBERTa](https://arxiv.org/abs/2006.03654) improves on the BERT and RoBERTa models with disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks with 80GB of training data.
- Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
- This is the DeBERTa V2 xxlarge model, with 48 layers and a hidden size of 1536. It has 1.5B parameters in total and was trained on 160GB of raw data.
- ### Fine-tuning on NLU tasks
- We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.
- | Model | SQuAD 1.1 (F1/EM) | SQuAD 2.0 (F1/EM) | MNLI-m/mm (Acc) | SST-2 (Acc) | QNLI (Acc) | CoLA (MCC) | RTE (Acc) | MRPC (Acc/F1) | QQP (Acc/F1) | STS-B (P/S corr) |
- |------|------|------|------|------|------|------|------|------|------|------|
- | BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- |
- | RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- |
- | XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- |
- | [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 |
- | [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7|
- | [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9|
- |**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** |
- --------
- #### Notes.
- - <sup>1</sup> Following RoBERTa, for RTE, MRPC, and STS-B we fine-tune these tasks starting from [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), and [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results on SST-2/QQP/QNLI/SQuAD v2 also improve slightly when starting from the MNLI fine-tuned models; however, for those 4 tasks we only report numbers fine-tuned from the pretrained base models.
- - <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, we recommend using **DeepSpeed**, as it is faster and saves memory.
-
- Run with `DeepSpeed`:
- ```bash
- pip install datasets
- pip install deepspeed
- # Download the DeepSpeed config file
- wget https://huggingface.co/microsoft/deberta-v2-xxlarge/resolve/main/ds_config.json -O ds_config.json
- export TASK_NAME=mnli
- output_dir="ds_results"
- num_gpus=8
- batch_size=8
- python -m torch.distributed.launch --nproc_per_node=${num_gpus} \
- run_glue.py \
- --model_name_or_path microsoft/deberta-v2-xxlarge \
- --task_name $TASK_NAME \
- --do_train \
- --do_eval \
- --max_seq_length 256 \
- --per_device_train_batch_size ${batch_size} \
- --learning_rate 3e-6 \
- --num_train_epochs 3 \
- --output_dir $output_dir \
- --overwrite_output_dir \
- --logging_steps 10 \
- --logging_dir $output_dir \
- --deepspeed ds_config.json
- ```
- You can also run with `--sharded_ddp`:
- ```bash
- cd transformers/examples/text-classification/
- export TASK_NAME=mnli
- python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge \
- --task_name $TASK_NAME --do_train --do_eval --max_seq_length 256 --per_device_train_batch_size 8 \
- --learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16
- ```
- ### Citation
- If you find DeBERTa useful for your work, please cite the following paper:
- ```latex
- @inproceedings{
- he2021deberta,
- title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
- author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
- booktitle={International Conference on Learning Representations},
- year={2021},
- url={https://openreview.net/forum?id=XPZIaotutsD}
- }
- ```
+ I tried to train v3 xl on MNLI using my own training code and got this result.
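
For reference, below is a minimal sketch of what fine-tuning a DeBERTa-v3 checkpoint on MNLI with a custom script could look like using the Hugging Face `Trainer`. This is not the author's actual training code: the checkpoint name `microsoft/deberta-v3-large` is an assumed stand-in for "v3 xl", and the hyperparameters simply mirror the DeepSpeed example removed from the README above.

```python
# Hypothetical MNLI fine-tuning sketch (not the author's script).
# Assumes: pip install transformers datasets sentencepiece
import numpy as np
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "microsoft/deberta-v3-large"  # assumed stand-in for "v3 xl"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

# MNLI: premise/hypothesis pairs with 3 labels (entailment/neutral/contradiction).
raw = load_dataset("glue", "mnli")

def preprocess(batch):
    return tokenizer(batch["premise"], batch["hypothesis"],
                     truncation=True, max_length=256)

encoded = raw.map(preprocess, batched=True)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"accuracy": float((preds == labels).mean())}

# Hyperparameters mirror the removed README example (batch size 8, lr 3e-6, 3 epochs).
args = TrainingArguments(
    output_dir="mnli_results",
    per_device_train_batch_size=8,
    learning_rate=3e-6,
    num_train_epochs=3,
    fp16=True,  # requires a CUDA GPU; drop this flag on CPU
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation_matched"],
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
print(trainer.evaluate())
```

At xlarge/xxlarge scale, a single-process `Trainer` run like this may not fit in memory; the DeepSpeed launch shown in the removed section is the more realistic option in that case.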