Commit ade8c81 (parent: 5cab41d): Update README.md

README.md (changed):
---
language:
- multilingual
- en
- ar
- bg
- de
- el
- es
- fr
- hi
- ru
- sw
- th
- tr
- ur
- vi
- zh
license: mit
tags:
- zero-shot-classification
- text-classification
- nli
- pytorch
metrics:
- accuracy
datasets:
- multi_nli
- xnli
pipeline_tag: zero-shot-classification
widget:
- text: "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
  candidate_labels: "politics, economy, entertainment, environment"
---

# Multilingual XLM-V-base-mnli-xnli
## Model description
This multilingual model can perform natural language inference (NLI) on 116 languages and is therefore also
suitable for multilingual zero-shot classification. The underlying XLM-V-base model was created
by Meta AI and pretrained on the [CC100 multilingual dataset](https://huggingface.co/datasets/cc100).
It was then fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli), which contains hypothesis-premise pairs from 15 languages,
as well as the English [MNLI dataset](https://huggingface.co/datasets/multi_nli).
XLM-V-base was published on 23.01.2023 in [this paper](https://arxiv.org/pdf/2301.10472.pdf).
Its main innovation is a larger vocabulary: previous multilingual models had a vocabulary of 250,000 tokens,
while XLM-V has 1 million tokens. The improved vocabulary allows for better representations of more languages.

For a related multilingual model, see [mDeBERTa-v3](https://arxiv.org/pdf/2111.09543.pdf).
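
As a quick, informal check of the vocabulary claim, the tokenizer's vocabulary size can be inspected directly (a minimal sketch; the exact count may differ slightly from the rounded figure above):

```python
from transformers import AutoTokenizer

# Informal check of the enlarged vocabulary; the exact count may differ slightly
# from the rounded "1 million" figure in the paper.
tokenizer = AutoTokenizer.from_pretrained("MoritzLaurer/xlm-v-base-mnli-xnli")
print(len(tokenizer))  # on the order of one million entries, vs. ~250,000 for earlier multilingual models
```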

### How to use the model
#### Simple zero-shot classification pipeline
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="MoritzLaurer/xlm-v-base-mnli-xnli")

sequence_to_classify = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
candidate_labels = ["politics", "economy", "entertainment", "environment"]
output = classifier(sequence_to_classify, candidate_labels, multi_label=False)
print(output)
```
#### NLI use-case
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

model_name = "MoritzLaurer/xlm-v-base-mnli-xnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)

premise = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
hypothesis = "Emmanuel Macron is the President of France"

# tokenize the premise-hypothesis pair and move the tensors to the same device as the model
inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt").to(device)
output = model(**inputs)
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```

### Training data
This model was trained on the XNLI development dataset and the MNLI train dataset.
The XNLI development set consists of 2,490 professionally translated texts from English
to 14 other languages (37,350 texts in total) (see [this paper](https://arxiv.org/pdf/1809.05053.pdf)).
Note that XNLI also contains a training set of machine-translated versions of the MNLI dataset for 15 languages,
but due to quality issues with these machine translations, this model was only trained on the professional translations
from the XNLI development set and the original English MNLI training set (392,702 texts).
Not using machine-translated texts avoids overfitting the model to the 15 XNLI languages,
avoids catastrophic forgetting of the roughly 101 other languages XLM-V was pre-trained on,
and significantly reduces training costs.
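
For reference, both data sources are available through the `datasets` library. The sketch below only loads them; how the 15 XNLI translations were flattened into training pairs is not specified here and is left out:

```python
from datasets import load_dataset

# XNLI development data: 2,490 rows, each holding the premise/hypothesis in all 15 languages
# (37,350 texts in total when flattened).
xnli_dev = load_dataset("xnli", "all_languages", split="validation")

# Original English MNLI training set: 392,702 premise-hypothesis pairs.
mnli_train = load_dataset("multi_nli", split="train")

print(len(xnli_dev), len(mnli_train))
```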

### Training procedure
xlm-v-base-mnli-xnli was trained using the Hugging Face trainer with the following hyperparameters.
```python
training_args = TrainingArguments(
    num_train_epochs=3,              # total number of training epochs
    learning_rate=2e-05,
    per_device_train_batch_size=32,  # batch size per device during training
    per_device_eval_batch_size=120,  # batch size for evaluation
    warmup_ratio=0.06,               # fraction of training steps used for learning rate warmup
    weight_decay=0.01,               # strength of weight decay
)
```
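
The exact training script is not reproduced here. The following is a minimal, illustrative sketch of how hyperparameters like these plug into the Hugging Face `Trainer`, using only the English MNLI split and the assumed hub id `facebook/xlm-v-base` for the pretrained checkpoint:

```python
# Illustrative sketch only, not the author's training script.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base_model = "facebook/xlm-v-base"  # assumed hub id of the pretrained XLM-V checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=3)

def tokenize(batch):
    # MNLI provides premise/hypothesis strings and an integer label (0/1/2)
    return tokenizer(batch["premise"], batch["hypothesis"], truncation=True)

train_dataset = load_dataset("multi_nli", split="train").map(tokenize, batched=True)

training_args = TrainingArguments(
    output_dir="./xlm-v-base-mnli-xnli",  # added here for completeness
    num_train_epochs=3,
    learning_rate=2e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=120,
    warmup_ratio=0.06,
    weight_decay=0.01,
)

trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset, tokenizer=tokenizer)
trainer.train()
```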
### Eval results
The model was evaluated on the XNLI test set in 15 languages (5,010 texts per language, 75,150 in total).
Note that multilingual NLI models can classify NLI texts without having received NLI training data
in the specific language (cross-lingual transfer). This means the model is also capable of doing NLI on
the roughly 101 other languages XLM-V was trained on, though performance is most likely lower than for the languages available in XNLI.

Also note that if other multilingual models on the model hub claim performance of around 90% on languages other than English,
the authors have most likely made a mistake during testing, since none of the latest papers shows a multilingual average performance
of more than a few points above 80% on XNLI (see [here](https://arxiv.org/pdf/2111.09543.pdf) or [here](https://arxiv.org/pdf/1911.02116.pdf)).

average | ar | bg | de | el | en | es | fr | hi | ru | sw | th | tr | ur | vi | zh
---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------
0.808 | 0.802 | 0.829 | 0.825 | 0.826 | 0.883 | 0.845 | 0.834 | 0.771 | 0.813 | 0.748 | 0.793 | 0.807 | 0.740 | 0.795 | 0.8116

|Datasets|mnli_m|mnli_mm|ar|bg|de|el|en|es|fr|hi|ru|sw|th|tr|ur|vi|zh|
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
|Accuracy|0.852|0.854|0.757|0.808|0.796|0.79|0.856|0.814|0.806|0.751|0.782|0.725|0.757|0.766|0.729|0.784|0.782|
|Speed (text/sec)|2098.0|2170.0|3501.0|3324.0|3438.0|3174.0|3713.0|3500.0|3129.0|3042.0|3419.0|3468.0|3782.0|3772.0|3099.0|3117.0|4217.0|
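
Accuracy on a single XNLI test language can be computed with a simple loop like the one below. This is an illustrative sketch, not the exact evaluation script; German is used only as an example, and the label order 0=entailment, 1=neutral, 2=contradiction is assumed to match XNLI, as the `label_names` above suggest:

```python
# Illustrative sketch: accuracy on the German XNLI test split (5,010 pairs).
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

device = "cuda" if torch.cuda.is_available() else "cpu"
model_name = "MoritzLaurer/xlm-v-base-mnli-xnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device).eval()

dataset = load_dataset("xnli", "de", split="test")

correct = 0
for i in range(0, len(dataset), 32):
    batch = dataset[i : i + 32]  # dict of lists: premise, hypothesis, label
    inputs = tokenizer(batch["premise"], batch["hypothesis"], truncation=True,
                       padding=True, return_tensors="pt").to(device)
    with torch.no_grad():
        logits = model(**inputs).logits
    preds = logits.argmax(dim=-1).tolist()
    correct += sum(int(p == l) for p, l in zip(preds, batch["label"]))

print(f"accuracy: {correct / len(dataset):.3f}")
```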

## Limitations and bias
Please consult the original XLM-V paper and literature on different NLI datasets for potential biases.

## Citation
If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. 'Less Annotating, More Classifying: Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT-NLI'. Preprint, June. Open Science Framework. https://osf.io/74b8k.

## Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl or on [LinkedIn](https://www.linkedin.com/in/moritz-laurer/).