MoritzLaurer (HF staff) committed
Commit 7420447
1 Parent(s): 574dbac

Update README.md

Files changed (1)
1. README.md +7 -2
README.md CHANGED
@@ -34,9 +34,14 @@ widget:
 ---
 # Multilingual mDeBERTa-v3-base-mnli-xnli
 ## Model description
-This multilingual model can perform natural language inference (NLI) on 100 languages and is therefore also suitable for multilingual zero-shot classification. The underlying model was pre-trained by Microsoft on the [CC100 multilingual dataset](https://huggingface.co/datasets/cc100). It was then fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli), which contains hypothesis-premise pairs from 15 languages, as well as the English [MNLI dataset](https://huggingface.co/datasets/multi_nli).
-As of December 2021, mDeBERTa-base is the best performing multilingual base-sized transformer model, introduced by Microsoft in [this paper](https://arxiv.org/pdf/2111.09543.pdf).
+This multilingual model can perform natural language inference (NLI) on 100 languages and is therefore also suitable for multilingual
+zero-shot classification. The underlying model was pre-trained by Microsoft on the
+[CC100 multilingual dataset](https://huggingface.co/datasets/cc100). It was then fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli), which contains hypothesis-premise pairs from 15 languages, as well as the English [MNLI dataset](https://huggingface.co/datasets/multi_nli).
+As of December 2021, mDeBERTa-base is the best performing multilingual base-sized transformer model,
+introduced by Microsoft in [this paper](https://arxiv.org/pdf/2111.09543.pdf).
 
+If you are looking for a smaller, faster (but less performant) model, you can
+try [multilingual-MiniLMv2-L6-mnli-xnli](https://huggingface.co/MoritzLaurer/multilingual-MiniLMv2-L6-mnli-xnli).
 
 ### How to use the model
 #### Simple zero-shot classification pipeline
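
For context on the section touched by this diff: the "Simple zero-shot classification pipeline" heading refers to the Hugging Face `transformers` zero-shot-classification pipeline. Below is a minimal sketch of how the model described in the README would typically be used, assuming the repository id is `MoritzLaurer/mDeBERTa-v3-base-mnli-xnli` (inferred from the model card title) and that the example sequence and candidate labels are placeholders you would replace with your own.

```python
from transformers import pipeline

# Zero-shot classification via the NLI model described in the README above.
# The repo id is assumed from the model card title; adjust if it differs.
classifier = pipeline(
    "zero-shot-classification",
    model="MoritzLaurer/mDeBERTa-v3-base-mnli-xnli",
)

# Placeholder inputs: any text and any set of candidate labels can be used,
# in any of the ~100 languages the underlying model was pre-trained on.
sequence_to_classify = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
candidate_labels = ["politics", "economy", "entertainment", "environment"]

output = classifier(sequence_to_classify, candidate_labels, multi_label=False)
print(output)  # dict with the input sequence, ranked labels, and their scores
```

Because the model was fine-tuned on NLI data, the pipeline works by treating each candidate label as a hypothesis ("This example is about politics.") and scoring it against the input as the premise, which is what makes zero-shot classification possible without task-specific training.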