mrm8488 committed
Commit ef4c8e9
1 Parent(s): 964b25b

Update README.md

Files changed (1)
  1. README.md +7 -1
README.md CHANGED
@@ -36,7 +36,13 @@ It achieves the following results on the evaluation set:
 
 ## Model description
 
-More information needed
+[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. With these two improvements, DeBERTa outperforms RoBERTa on a majority of NLU tasks with 80GB of training data.
+
+In [DeBERTa V3](https://arxiv.org/abs/2111.09543), we further improved the efficiency of DeBERTa using ELECTRA-style pre-training with gradient-disentangled embedding sharing. Compared to DeBERTa, the V3 version significantly improves model performance on downstream tasks. You can find more technical details about the new model in our [paper](https://arxiv.org/abs/2111.09543).
+
+Please check the [official repository](https://github.com/microsoft/DeBERTa) for more implementation details and updates.
+
+The DeBERTa V3 large model comes with 24 layers and a hidden size of 1024. It has 304M backbone parameters, and its 128K-token vocabulary introduces another 131M parameters in the embedding layer. This model was trained with the same 160GB of data as DeBERTa V2.
 
 ## Intended uses & limitations
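As a quick sanity check on the figures quoted in the new description, the sketch below loads the backbone and recomputes the layer, hidden-size, and parameter counts. It assumes the `transformers` library and the public `microsoft/deberta-v3-large` base checkpoint; neither is stated in this commit, so treat it as an illustration rather than part of the card.

```python
# Minimal sketch: recompute the architecture numbers from the model
# description. Assumes `transformers` (with PyTorch) is installed and
# that "microsoft/deberta-v3-large" is the relevant base checkpoint.
from transformers import AutoModel

model = AutoModel.from_pretrained("microsoft/deberta-v3-large")

print(model.config.num_hidden_layers)  # 24 layers
print(model.config.hidden_size)        # hidden size 1024

# ~128K-token vocabulary -> ~131M embedding parameters (128100 * 1024)
embedding = model.embeddings.word_embeddings.weight.numel()
total = sum(p.numel() for p in model.parameters())

print(f"embedding params: {embedding / 1e6:.0f}M")            # ~131M
print(f"backbone params:  {(total - embedding) / 1e6:.0f}M")  # ~304M
```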