DeBERTa committed
Commit
41efbb1
1 Parent(s): e69fce6

Update README.md

Files changed (1)
README.md +1 -1
README.md CHANGED
@@ -15,7 +15,7 @@ Please check the [official repository](https://github.com/microsoft/DeBERTa) for
 
 In DeBERTa V3, we replaced the MLM objective with the RTD (Replaced Token Detection) objective introduced by ELECTRA for pre-training, along with some innovations to be introduced in our upcoming paper. Compared to DeBERTa-V2, our V3 version significantly improves model performance on downstream tasks. You can find a brief introduction to the model in appendix A11 of our original [paper](https://arxiv.org/abs/2006.03654), and we will provide more details in a separate write-up.
 
-The DeBERTa V3 large model comes with 12 layers and a hidden size of 768. Its total parameter count is 183M, since we use a vocabulary of 128K tokens, which introduces 98M parameters in the Embedding layer. This model was trained using the same 160GB data as DeBERTa V2.
+The DeBERTa V3 base model comes with 12 layers and a hidden size of 768. Its total parameter count is 183M, since we use a vocabulary of 128K tokens, which introduces 98M parameters in the Embedding layer. This model was trained using the same 160GB data as DeBERTa V2.
 
 
 #### Fine-tuning on NLU tasks
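
For reference, the architecture figures quoted above (12 layers, hidden size 768, 128K-token vocabulary, 98M embedding parameters) can be checked directly against the published checkpoint. The sketch below is a minimal example, assuming the checkpoint is available on the Hugging Face Hub as `microsoft/deberta-v3-base`; that Hub ID is an assumption and is not stated in this commit.

```python
# Minimal sketch: check the architecture numbers quoted in the README.
# Assumption: the checkpoint is published as "microsoft/deberta-v3-base".
from transformers import AutoConfig, AutoModel

config = AutoConfig.from_pretrained("microsoft/deberta-v3-base")
print(config.num_hidden_layers, config.hidden_size, config.vocab_size)
# Expect 12 layers, a hidden size of 768, and a ~128K-token vocabulary.

model = AutoModel.from_pretrained("microsoft/deberta-v3-base")
total_params = sum(p.numel() for p in model.parameters())
embedding_params = model.get_input_embeddings().weight.numel()
print(f"total: {total_params / 1e6:.0f}M, embeddings: {embedding_params / 1e6:.0f}M")
# The README attributes 98M of the ~183M total parameters to the embedding layer.
```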