Commit 280feda by DeBERTa
1 parent: 14809e4

Update README.md
Files changed (1): README.md +1 -1
README.md CHANGED
@@ -15,7 +15,7 @@ In [DeBERTa V3](https://arxiv.org/abs/2111.09543), we further improved the effic
 
 Please check the [official repository](https://github.com/microsoft/DeBERTa) for more implementation details and updates.
 
-The DeBERTa V3 xsmall model comes with 12 layers and a hidden size of 384. Its backbone parameter number is 22M with a vocabulary containing 128K tokens which introduce 48M parameters in the Embedding layer. This model was trained using the 160GB data as DeBERTa V2.
+The DeBERTa V3 xsmall model comes with 12 layers and a hidden size of 384. It has only **22M** backbone parameters, with a vocabulary of 128K tokens that introduces 48M parameters in the embedding layer. This model was trained using the same 160GB data as DeBERTa V2.
 
 
 #### Fine-tuning on NLU tasks
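As a quick sanity check of the parameter figures quoted in the updated paragraph, here is a minimal sketch, assuming the `transformers` library and the public `microsoft/deberta-v3-xsmall` checkpoint on the Hugging Face Hub (128K tokens × 384 hidden dimensions ≈ 49M, which roughly matches the ~48M embedding figure in the card):

```python
# Sketch: split total parameters into embedding vs. backbone counts.
# Assumes the `transformers` library and the public
# `microsoft/deberta-v3-xsmall` checkpoint; figures should land near
# the ~48M embedding / ~22M backbone numbers quoted in the card.
from transformers import AutoModel

model = AutoModel.from_pretrained("microsoft/deberta-v3-xsmall")

# Embedding parameters: vocab_size (~128K) x hidden_size (384).
embedding_params = model.embeddings.word_embeddings.weight.numel()

# Backbone parameters: everything except the word-embedding matrix.
total_params = sum(p.numel() for p in model.parameters())
backbone_params = total_params - embedding_params

print(f"embedding: {embedding_params / 1e6:.1f}M, "
      f"backbone: {backbone_params / 1e6:.1f}M")
```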