Pengcheng He committed on
Commit
02a9971
1 Parent(s): 65a695b

Add mDeBERTa base model

Files changed (5)
  1. README.md +82 -0
  2. config.json +22 -0
  3. pytorch_model.bin +3 -0
  4. spm.model +3 -0
  5. tokenizer_config.json +4 -0
README.md ADDED
@@ -0,0 +1,82 @@
---
language: en
tags:
- deberta
- deberta-v3
- mdeberta
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
---

## DeBERTa: Decoding-enhanced BERT with Disentangled Attention

[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. With those two improvements, DeBERTa outperforms RoBERTa on a majority of NLU tasks with 80GB of training data.

Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.

In DeBERTa V3, we replaced the MLM objective with the RTD (Replaced Token Detection) objective introduced by ELECTRA for pre-training, together with some further innovations to be described in our upcoming paper. Compared to DeBERTa-V2, our V3 version significantly improves model performance on downstream tasks. You can find a brief introduction to the model in Appendix A11 of our original [paper](https://arxiv.org/abs/2006.03654), but we will provide more details in a separate write-up.

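To make the RTD objective concrete, here is a toy sketch only, not the actual pre-training code: a few input positions are corrupted and a binary per-token classifier learns to flag the replaced positions. The checkpoint id `microsoft/mdeberta-v3-base` is an assumption.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Toy sketch of RTD (Replaced Token Detection); the repo id is an assumption.
name = "microsoft/mdeberta-v3-base"
tok = AutoTokenizer.from_pretrained(name)
discriminator = AutoModelForTokenClassification.from_pretrained(name, num_labels=2)

enc = tok("mDeBERTa is pre-trained on CC100 multilingual data.", return_tensors="pt")
input_ids = enc["input_ids"].clone()
labels = torch.zeros_like(input_ids)  # 0 = original, 1 = replaced

# Corrupt one interior position with a random vocabulary id
# (a stand-in for a generator's sample) and mark it as replaced.
pos = 3
input_ids[0, pos] = torch.randint(1000, tok.vocab_size - 1, (1,)).item()
labels[0, pos] = 1

out = discriminator(input_ids=input_ids,
                    attention_mask=enc["attention_mask"],
                    labels=labels)
print(out.loss)  # cross-entropy over original-vs-replaced at every position
```

In the real pre-training setup the replacement tokens come from a small generator trained jointly with the discriminator, as in ELECTRA.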
mDeBERTa is the multilingual version of DeBERTa, which uses the same structure as DeBERTa and was trained with CC100 multilingual data.

The mDeBERTa V3 base model comes with 12 layers and a hidden size of 768. Its total parameter count is 280M, since we use a vocabulary containing 250K tokens, which introduces 190M parameters in the embedding layer. This model was trained with the same 2.5TB of CC100 data as XLM-R.

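As a quick sanity check on those numbers, the following minimal sketch loads the checkpoint with HF transformers and counts parameters; the repo id `microsoft/mdeberta-v3-base` and a local PyTorch install are assumptions here.

```python
from transformers import AutoModel, AutoTokenizer

# Assumed repo id for this checkpoint.
name = "microsoft/mdeberta-v3-base"

tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)  # a DebertaV2Model under the hood

# ~250K-token vocabulary x 768 hidden size gives roughly 190M embedding
# parameters, which accounts for most of the ~280M total quoted above.
emb = model.embeddings.word_embeddings.weight.numel()
total = sum(p.numel() for p in model.parameters())
print(f"vocab size:           {len(tokenizer)}")
print(f"embedding parameters: {emb / 1e6:.0f}M")
print(f"total parameters:     {total / 1e6:.0f}M")
```
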
#### Fine-tuning on NLU tasks

We present the dev results on XNLI in the zero-shot cross-lingual transfer setting, i.e., training with English data only and testing on the other languages.

| Model         | en   | fr   | es   | de   | el   | bg   | ru   | tr   | ar   | vi   | th   | zh   | hi   | sw   | ur   | avg          |
|---------------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|--------------|
| XLM-R-base    | 85.8 | 79.7 | 80.7 | 78.7 | 77.5 | 79.6 | 78.1 | 74.2 | 73.8 | 76.5 | 74.6 | 76.7 | 72.4 | 66.5 | 68.3 | 75.6         |
| mDeBERTa-base | 88.2 | 82.6 | 84.4 | 82.7 | 82.3 | 82.4 | 80.8 | 79.5 | 78.5 | 78.1 | 76.4 | 79.5 | 75.9 | 73.9 | 72.4 | 79.8 +/- 0.2 |

#### Fine-tuning with HF transformers

```bash
#!/bin/bash

cd transformers/examples/pytorch/text-classification/

pip install datasets

output_dir="ds_results"

num_gpus=8

batch_size=4

python -m torch.distributed.launch --nproc_per_node=${num_gpus} \
  run_xnli.py \
  --model_name_or_path microsoft/mdeberta-v3-base \
  --do_train \
  --do_eval \
  --train_language en \
  --language en \
  --evaluation_strategy steps \
  --max_seq_length 256 \
  --warmup_steps 3000 \
  --per_device_train_batch_size ${batch_size} \
  --learning_rate 2e-5 \
  --num_train_epochs 6 \
  --output_dir $output_dir \
  --overwrite_output_dir \
  --logging_steps 1000 \
  --logging_dir $output_dir
```
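
Once the script above finishes, the checkpoint saved in `ds_results` can be probed for the zero-shot cross-lingual behaviour reported in the table: the classifier only ever saw English training data but is applied unchanged to other XNLI languages. A minimal sketch, where the local path and the small validation slice are illustrative:

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# "ds_results" is the output_dir produced by the fine-tuning script above.
ckpt = "ds_results"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt)
model.eval()

# Zero-shot cross-lingual transfer: score French premise/hypothesis pairs
# with a model fine-tuned on English XNLI only.
xnli_fr = load_dataset("xnli", "fr", split="validation[:8]")
enc = tokenizer(xnli_fr["premise"], xnli_fr["hypothesis"],
                padding=True, truncation=True, max_length=256,
                return_tensors="pt")
with torch.no_grad():
    preds = model(**enc).logits.argmax(dim=-1)

print(preds.tolist())    # predicted labels (0=entailment, 1=neutral, 2=contradiction)
print(xnli_fr["label"])  # gold labels for comparison
```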

### Citation

If you find DeBERTa useful for your work, please cite the following paper:

```latex
@inproceedings{he2021deberta,
  title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
  author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
  booktitle={International Conference on Learning Representations},
  year={2021},
  url={https://openreview.net/forum?id=XPZIaotutsD}
}
```
config.json ADDED
@@ -0,0 +1,22 @@
{
  "model_type": "deberta-v2",
  "attention_probs_dropout_prob": 0.1,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "max_position_embeddings": 512,
  "relative_attention": true,
  "position_buckets": 256,
  "norm_rel_ebd": "layer_norm",
  "share_att_key": true,
  "pos_att_type": "p2c|c2p",
  "layer_norm_eps": 1e-7,
  "max_relative_positions": -1,
  "position_biased_input": false,
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "type_vocab_size": 0,
  "vocab_size": 251000
}
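
This is a standard `deberta-v2` architecture config. As a small illustrative sketch, it can be inspected with `AutoConfig` once the repo is downloaded; the local path below is a placeholder:

```python
from transformers import AutoConfig

# Placeholder path to a local clone/download of this repo.
config = AutoConfig.from_pretrained("./mdeberta-v3-base")

print(config.model_type)          # deberta-v2
print(config.num_hidden_layers,   # 12 layers
      config.hidden_size,         # hidden size 768
      config.vocab_size)          # 251000 (config value for the 250K-token vocab)
```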
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:05c748186a22f523505099ce137f90dd4e55f875a4035c11350aaa125932230c
size 560166373
spm.model ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:13c8d666d62a7bc4ac8f040aab68e942c861f93303156cc28f5c7e885d86d6e3
size 4305025
tokenizer_config.json ADDED
@@ -0,0 +1,4 @@
{
  "do_lower_case": false,
  "vocab_type": "spm"
}
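
The tokenizer is SentencePiece-based (the `spm.model` file above) and preserves case, per `do_lower_case: false`. A small sketch, again assuming the repo id `microsoft/mdeberta-v3-base` and an installed `sentencepiece` package:

```python
from transformers import AutoTokenizer

# Assumed repo id; loading spm.model requires the sentencepiece package.
tok = AutoTokenizer.from_pretrained("microsoft/mdeberta-v3-base")

# Case is preserved, and the 250K-token vocabulary covers many scripts.
print(tok.tokenize("DeBERTa preserves Case."))
print(tok.tokenize("多语言模型"))
print(len(tok))  # vocabulary size seen by the model
```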