Update README.md
README.md CHANGED
@@ -21,11 +21,14 @@ DeBERTina é um modelo [DeBERTa-v3](https://arxiv.org/abs/2111.09543) em portugu
 
 *DeBERTina is a Portuguese [DeBERTa-v3](https://arxiv.org/abs/2111.09543) model trained ELECTRA-style ([ELECTRA](https://arxiv.org/abs/2003.10555)) with Replaced Token Detection (RTD) and gradient-disentangled embedding sharing (GDES).*
 
-| Model | type | Vocabulary | Parameters |
+| Model | type | Vocabulary | Backbone + Embeddings = Total Parameters |
 | :-: | :-: | :-: | :-: |
-| [ult5-pt-small](https://huggingface.co/tgsc/ult5-pt-small) | encoder-decoder | 65k | 82.4M |
-| [sentence-transformer-ult5-pt-small](https://huggingface.co/tgsc/sentence-transformer-ult5-pt-small) | sentence-transformer | 65k | 51M |
-| [DeBERTina-base](https://huggingface.co/tgsc/debertina-base) | encoder | 32k |
+| [ult5-pt-small](https://huggingface.co/tgsc/ult5-pt-small) | encoder-decoder | 65k | 56.6M + 25.8M = 82.4M |
+| [sentence-transformer-ult5-pt-small](https://huggingface.co/tgsc/sentence-transformer-ult5-pt-small) | sentence-transformer | 65k | 25.2M + 25.8M = 51M |
+| [DeBERTina-base](https://huggingface.co/tgsc/debertina-base) | encoder | 32k | 85.5M + 24.6M = 110.0M |
+| [DeBERTina-base-128k-vocab](https://huggingface.co/tgsc/debertina-base-128k-vocab) | encoder | 128k | 85.5M + 98.3M = 183.8M |
+| [DeBERTina-large](https://huggingface.co/tgsc/debertina-large) | encoder | 128k | 348.4M + 98.3M = 433.9M |
+| [DeBERTina-xsmall](https://huggingface.co/tgsc/debertina-xsmall) | encoder | 128k | 21.5M + 49.2M = 70.6M |
 
 - **Developed by:** Thacio Garcia Scandaroli
 - **Model type:** DeBERTa-v3
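Since the updated table reports parameters as a backbone + embeddings split, here is a minimal sketch (not part of the README diff above, and assuming the split is simply "total minus input embeddings") of how those counts can be checked with the `transformers` library. The model ID is taken from the table links; the exact figures depend on the published checkpoints.

```python
# Minimal sketch: reproduce the "Backbone + Embeddings = Total Parameters" column.
# Assumption: "backbone" here means all parameters except the input embedding matrix.
from transformers import AutoModel

model = AutoModel.from_pretrained("tgsc/debertina-base")  # any model ID from the table

# Input embedding matrix has vocab_size * hidden_size parameters.
embedding_params = model.get_input_embeddings().weight.numel()
total_params = sum(p.numel() for p in model.parameters())
backbone_params = total_params - embedding_params

print(f"backbone {backbone_params / 1e6:.1f}M "
      f"+ embeddings {embedding_params / 1e6:.1f}M "
      f"= total {total_params / 1e6:.1f}M")
```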