NDugar committed
Commit
264b0ad
1 Parent(s): e22942b

Update README.md

Files changed (1)
  1. README.md +2 -24
README.md CHANGED
@@ -9,11 +9,6 @@ thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
  license: mit
  pipeline_tag: zero-shot-classification
  ---
- ## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
- [DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models with disentangled attention and an enhanced mask decoder. With 80GB of training data, it outperforms BERT and RoBERTa on the majority of NLU tasks.
- Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
- This is the DeBERTa large model fine-tuned on the MNLI task.
- #### Fine-tuning on NLU tasks
  We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.
  | Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP | STS-B |
  |---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------|
@@ -28,22 +23,5 @@ We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.
  --------
  #### Notes.
  - <sup>1</sup> Following RoBERTa, for RTE, MRPC, and STS-B, we fine-tune those tasks starting from [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), and [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results on SST-2/QQP/QNLI/SQuAD v2.0 also improve slightly when starting from the MNLI fine-tuned models; however, we only report the numbers fine-tuned from the pretrained base models for those 4 tasks.
- - <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, you need to specify **--sharded_ddp**
-
- ```bash
- cd transformers/examples/text-classification/
- export TASK_NAME=mrpc
- python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge \
-   --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 4 \
-   --learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16
- ```
- ### Citation
- If you find DeBERTa useful for your work, please cite the following paper:
- ```latex
- @inproceedings{
- he2021deberta,
- title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
- author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
- booktitle={International Conference on Learning Representations},
- year={2021},
- url={https://openreview.net/forum?id=XPZIaotutsD}
- }
- ```
+ - <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, we recommend using **DeepSpeed**, as it is faster and saves memory.
+
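For context, the new note points readers to DeepSpeed instead of the removed `--sharded_ddp` recipe, but the commit does not include a launch command. Below is a minimal sketch of what such a run could look like with the `deepspeed` launcher; it reuses the hyperparameters from the removed example, and `ds_config.json` is a placeholder for a user-supplied DeepSpeed ZeRO config, so treat it as an illustration rather than this card's official recipe.

```bash
# Sketch only: assumes a DeepSpeed ZeRO config has been written to ds_config.json.
pip install deepspeed

cd transformers/examples/text-classification/
export TASK_NAME=mrpc

# The Trainer picks up DeepSpeed via the --deepspeed flag; --num_gpus and the
# training hyperparameters below mirror the removed --sharded_ddp example.
deepspeed --num_gpus=8 run_glue.py \
  --model_name_or_path microsoft/deberta-v2-xxlarge \
  --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 \
  --per_device_train_batch_size 4 --learning_rate 3e-6 --num_train_epochs 3 \
  --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir \
  --deepspeed ds_config.json --fp16
```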