Commit e0e58ca
Parent(s): 614e448
Update README.md
README.md CHANGED
@@ -34,7 +34,7 @@ widget:
 # Multilingual mDeBERTa-v3-base-mnli-xnli
 ## Model description
 This multilingual model can perform natural language inference (NLI) on 100 languages and is therefore also suitable for multilingual zero-shot classification. The underlying model was pre-trained by Microsoft on the [CC100 multilingual dataset](https://huggingface.co/datasets/cc100). It was then fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli), which contains hypothesis-premise pairs from 15 languages, as well as the English [MNLI dataset](https://huggingface.co/datasets/multi_nli).
-As of December 2021, mDeBERTa-base is the best performing multilingual transformer
+As of December 2021, mDeBERTa-base is the best performing multilingual base-sized transformer model, introduced by Microsoft in [this paper](https://arxiv.org/pdf/2111.09543.pdf).
 
 
 ## Intended uses & limitations
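For context on the model this README describes, here is a minimal sketch of multilingual zero-shot classification with the `transformers` pipeline. The repository id `MoritzLaurer/mDeBERTa-v3-base-mnli-xnli` is assumed from the README heading and may differ from the actual hub path; the input sentence and labels are illustrative.

```python
# Minimal sketch: multilingual zero-shot classification via the NLI head.
# The hub id below is assumed from the README heading, not confirmed by this commit.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="MoritzLaurer/mDeBERTa-v3-base-mnli-xnli",  # assumed hub id
)

# German example input: each candidate label is scored as an NLI hypothesis
# against the sequence, which is what makes the model usable zero-shot.
sequence = "Angela Merkel ist eine Politikerin in Deutschland"
candidate_labels = ["politics", "economy", "entertainment", "environment"]

result = classifier(sequence, candidate_labels, multi_label=False)
print(result["labels"][0], result["scores"][0])  # top label and its score
```

Because the model was fine-tuned on XNLI (15 languages) on top of CC100 pre-training (100 languages), the same call should work for inputs in languages beyond English, which is the point of the multilingual claim in the changed line.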