---
language:
- multilingual
- en
- ar
- bg
- de
- el
- es
- fr
- hi
- ru
- sw
- th
- tr
- ur
- vi
- zh
license: apache-2.0
tags:
- zero-shot-classification
- text-classification
- nli
- pytorch
metrics:
- accuracy
datasets:
- multi_nli
- xnli
pipeline_tag: zero-shot-classification
widget:
- text: "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
  candidate_labels: "politics, economy, entertainment, environment"
---
# Multilingual ernie-m-large-mnli-xnli
## Model description
This multilingual model can perform natural language inference (NLI) on 100 languages and is therefore also suitable for multilingual
zero-shot classification. The underlying model was pre-trained by Baidu, based on Meta's RoBERTa (pre-trained on the
[CC100 multilingual dataset](https://huggingface.co/datasets/cc100)). It was then fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli),
which contains hypothesis-premise pairs from 15 languages, as well as the English [MNLI dataset](https://huggingface.co/datasets/multi_nli).
The model was introduced by Baidu in [this paper](https://arxiv.org/pdf/2012.15674.pdf).

If you are looking for a much faster (but less performant) model, you can
try [multilingual-MiniLMv2-L6-mnli-xnli](https://huggingface.co/MoritzLaurer/multilingual-MiniLMv2-L6-mnli-xnli).
If you are looking for a base-sized model with a good mix of performance and speed,
you can try [mDeBERTa-v3-base-mnli-xnli](https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli).

### How to use the model
#### Simple zero-shot classification pipeline
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="MoritzLaurer/ernie-m-large-mnli-xnli")

sequence_to_classify = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
candidate_labels = ["politics", "economy", "entertainment", "environment"]
output = classifier(sequence_to_classify, candidate_labels, multi_label=False)
print(output)
```
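
The pipeline turns each candidate label into an NLI hypothesis behind the scenes (the default template is "This example is {}."). As a minimal, optional illustration, you can also pass a hypothesis template in the language of the input text; whether this helps depends on your data:
```python
# Optional illustration: a German hypothesis template for the German input above.
# This is a hypothetical variation; the default English template also works via cross-lingual transfer.
output = classifier(
    sequence_to_classify,
    candidate_labels,
    hypothesis_template="Dieses Beispiel handelt von {}.",
    multi_label=False,
)
print(output)
```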
#### NLI use-case
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

model_name = "MoritzLaurer/ernie-m-large-mnli-xnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)

premise = "Angela Merkel ist eine Politikerin in Deutschland und Vorsitzende der CDU"
hypothesis = "Emmanuel Macron is the President of France"

inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(inputs["input_ids"].to(device))
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```

### Training data
This model was trained on the XNLI development dataset and the MNLI train dataset.
The XNLI development set consists of 2490 professionally translated texts from English
to 14 other languages (37350 texts in total) (see [this paper](https://arxiv.org/pdf/1809.05053.pdf)).
Note that XNLI also contains a training set of 15 machine-translated versions of the MNLI dataset for 15 languages,
but due to quality issues with these machine translations, this model was only trained
on the professional translations from the XNLI development set and the original English
MNLI training set (392 702 texts). Not using machine-translated texts avoids overfitting the
model to the 15 languages, avoids catastrophic forgetting of the other 85 languages ernie-m
was pre-trained on, and significantly reduces training costs.
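
As a rough sketch (not the exact script used for this model), the training mix described above could be assembled with the `datasets` library roughly as follows; the column selection and shuffling seed are illustrative assumptions:
```python
from datasets import load_dataset, concatenate_datasets

xnli_languages = ["ar", "bg", "de", "el", "en", "es", "fr", "hi",
                  "ru", "sw", "th", "tr", "ur", "vi", "zh"]

# English MNLI train split (~392k premise-hypothesis pairs); keep only the NLI columns
mnli_train = load_dataset("multi_nli", split="train")
mnli_train = mnli_train.remove_columns(
    [c for c in mnli_train.column_names if c not in ("premise", "hypothesis", "label")]
)

# Professionally translated XNLI validation split, 2490 pairs per language
xnli_dev = concatenate_datasets(
    [load_dataset("xnli", lang, split="validation") for lang in xnli_languages]
)

train_dataset = concatenate_datasets([mnli_train, xnli_dev]).shuffle(seed=42)
print(train_dataset)
```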

### Training procedure
ernie-m-large-mnli-xnli was trained using the Hugging Face trainer with the following hyperparameters.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    num_train_epochs=3,              # total number of training epochs
    learning_rate=3e-05,
    per_device_train_batch_size=16,  # batch size per device during training
    gradient_accumulation_steps=2,
    per_device_eval_batch_size=16,   # batch size for evaluation
    warmup_ratio=0.1,                # fraction of training steps used for learning rate warmup
    weight_decay=0.01,               # strength of weight decay
    fp16=True,
)
```
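
A minimal sketch of how these hyperparameters plug into the Hugging Face `Trainer`, assuming a `train_dataset` like the one sketched under "Training data" and the `PaddlePaddle/ernie-m-large` base checkpoint (both are assumptions for illustration, not the exact script used for this model):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer

base_checkpoint = "PaddlePaddle/ernie-m-large"  # assumed base model pre-trained by Baidu
tokenizer = AutoTokenizer.from_pretrained(base_checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(base_checkpoint, num_labels=3)

def tokenize(batch):
    # Encode premise-hypothesis pairs; max_length=256 is an illustrative choice
    return tokenizer(batch["premise"], batch["hypothesis"], truncation=True, max_length=256)

train_tokenized = train_dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=training_args,            # the TrainingArguments shown above
    train_dataset=train_tokenized,
    tokenizer=tokenizer,           # enables dynamic padding when batching
)
trainer.train()
```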
### Eval results
The model was evaluated on the XNLI test set in 15 languages (5010 texts per language, 75150 in total).
Note that multilingual NLI models are capable of classifying NLI texts without receiving NLI training
data in the specific language (cross-lingual transfer). This means that the model is also able to
do NLI on the other 85 languages ernie-m was pre-trained on, but performance is most likely lower
than for those languages available in XNLI.

Also note that if other multilingual models on the model hub claim performance of around 90% on languages
other than English, the authors have most likely made a mistake during testing, since none of the latest papers
shows a multilingual average performance of more than a few points above 80% on XNLI
(see [here](https://arxiv.org/pdf/2111.09543.pdf) or [here](https://arxiv.org/pdf/1911.02116.pdf)).

|Datasets|mnli_m|mnli_mm|ar|bg|de|el|en|es|fr|hi|ru|sw|th|tr|ur|vi|zh|avg_xnli|
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
|Accuracy|0.881|0.878|0.818|0.853|0.84|0.837|0.882|0.855|0.849|0.799|0.83|0.751|0.809|0.818|0.76|0.826|0.799|0.822|
|Inference text/sec (A100, batch=120)|783.0|774.0|1487.0|1396.0|1430.0|1206.0|1623.0|1482.0|1291.0|1302.0|1366.0|1484.0|1500.0|1609.0|1344.0|1403.0|1302.0|1415.0|
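
As a rough illustration (not the exact evaluation script), a single per-language accuracy figure from the table could be reproduced along these lines; the batch size and the lack of GPU handling are simplifications:
```python
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "MoritzLaurer/ernie-m-large-mnli-xnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).eval()

def xnli_accuracy(lang: str, batch_size: int = 32) -> float:
    # XNLI test split: 5010 premise-hypothesis pairs per language;
    # labels are 0=entailment, 1=neutral, 2=contradiction, matching the model's label order
    ds = load_dataset("xnli", lang, split="test")
    correct = 0
    for i in range(0, len(ds), batch_size):
        batch = ds[i:i + batch_size]
        inputs = tokenizer(batch["premise"], batch["hypothesis"],
                           truncation=True, padding=True, return_tensors="pt")
        with torch.no_grad():
            preds = model(**inputs).logits.argmax(dim=-1)
        correct += (preds == torch.tensor(batch["label"])).sum().item()
    return correct / len(ds)

print(xnli_accuracy("de"))  # the table above reports 0.84 for German
```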

## Limitations and bias
Please consult the original ernie-m paper and literature on different NLI datasets for potential biases.

## Citation
If you use this model, please cite: Laurer, Moritz,
Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022.
‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine
Learning with Deep Transfer Learning and BERT - NLI’. Preprint, June. Open Science Framework. https://osf.io/74b8k.

## Ideas for cooperation or questions?
If you have questions or ideas for cooperation, contact me at m{dot}laurer{at}vu{dot}nl
or [LinkedIn](https://www.linkedin.com/in/moritz-laurer/).