---
language: ru
pipeline_tag: zero-shot-classification
tags:
- rubert
- russian
- nli
- rte
- zero-shot-classification
widget:
- text: "Я хочу поехать в Австралию"
  candidate_labels: "спорт,путешествия,музыка,кино,книги,наука,политика"
  hypothesis_template: "Тема текста - {}."
---
# RuBERT for NLI (natural language inference)

This is [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) fine-tuned to predict the logical relationship between two short texts: entailment, contradiction, or neutral.

## Usage
How to run the model for NLI:
```python
# !pip install transformers sentencepiece --quiet
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_checkpoint = 'cointegrated/rubert-base-cased-nli-threeway'
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint)
if torch.cuda.is_available():
    model.cuda()

text1 = 'Сократ - человек, а все люди смертны.'
text2 = 'Сократ никогда не умрёт.'
with torch.inference_mode():
    out = model(**tokenizer(text1, text2, return_tensors='pt').to(model.device))
    proba = torch.softmax(out.logits, -1).cpu().numpy()[0]
print({v: proba[k] for k, v in model.config.id2label.items()})
# {'entailment': 0.009525929, 'contradiction': 0.9332064, 'neutral': 0.05726764}
```

You can also use this model for zero-shot short text classification (using only the label names), e.g. for sentiment analysis:

```python
def predict_zero_shot(text, label_texts, model, tokenizer, label='entailment', normalize=True):
    tokens = tokenizer([text] * len(label_texts), label_texts, truncation=True, return_tensors='pt', padding=True)
    with torch.inference_mode():
        result = torch.softmax(model(**tokens.to(model.device)).logits, -1)
    proba = result[:, model.config.label2id[label]].cpu().numpy()
    if normalize:
        proba /= sum(proba)
    return proba

classes = ['Я доволен', 'Я недоволен']
predict_zero_shot('Какая гадость эта ваша заливная рыба!', classes, model, tokenizer)
# array([0.05609814, 0.9439019 ], dtype=float32)
predict_zero_shot('Какая вкусная эта ваша заливная рыба!', classes, model, tokenizer)
# array([0.9059292 , 0.09407079], dtype=float32)
```

Alternatively, you can use [Hugging Face pipelines](https://huggingface.co/transformers/main_classes/pipelines.html) for inference.

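For example, the `zero-shot-classification` pipeline wraps the same NLI trick as the function above; the candidate labels and hypothesis template below are just illustrative:

```python
from transformers import pipeline

# The pipeline pairs the input text with each label via the hypothesis template
# and ranks the labels by the model's entailment probability.
classifier = pipeline('zero-shot-classification', model='cointegrated/rubert-base-cased-nli-threeway')
result = classifier(
    'Я хочу поехать в Австралию',
    candidate_labels=['спорт', 'путешествия', 'музыка'],
    hypothesis_template='Тема текста - {}.',
)
print(result['labels'][0])  # the highest-scoring label
```

The pipeline returns a dict with `labels` and `scores` sorted from the most to the least probable label.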
## Sources
The model has been trained on a series of NLI datasets automatically translated to Russian from English.

Most datasets were taken [from the repo of Felipe Salvatore](https://github.com/felipessalvatore/NLI_datasets):
[JOCI](https://github.com/sheng-z/JOCI),
[MNLI](https://cims.nyu.edu/~sbowman/multinli/),
[MPE](https://aclanthology.org/I17-1011/),
[SICK](http://www.lrec-conf.org/proceedings/lrec2014/pdf/363_Paper.pdf),
[SNLI](https://nlp.stanford.edu/projects/snli/).

Some datasets were obtained from their original sources:
[ANLI](https://github.com/facebookresearch/anli),
[NLI-style FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md),
[IMPPRES](https://github.com/facebookresearch/Imppres).

## Performance

The table below shows ROC AUC (one class vs rest) for five models on the corresponding *dev* sets:
- [tiny](https://huggingface.co/cointegrated/rubert-tiny-bilingual-nli): a small BERT predicting entailment vs not_entailment
- [twoway](https://huggingface.co/cointegrated/rubert-base-cased-nli-twoway): a base-sized BERT predicting entailment vs not_entailment
- [threeway](https://huggingface.co/cointegrated/rubert-base-cased-nli-threeway) (**this model**): a base-sized BERT predicting entailment vs contradiction vs neutral
- [vicgalle-xlm](https://huggingface.co/vicgalle/xlm-roberta-large-xnli-anli): a large multilingual NLI model
- [facebook-bart](https://huggingface.co/facebook/bart-large-mnli): a large English NLI model, included as a baseline

|model                   |add_one_rte|anli_r1|anli_r2|anli_r3|copa|fever|help|iie  |imppres|joci|mnli |monli|mpe |scitail|sick|snli|terra|total |
|------------------------|-----------|-------|-------|-------|----|-----|----|-----|-------|----|-----|-----|----|-------|----|----|-----|------|
|n_observations          |387        |1000   |1000   |1200   |200 |20474|3355|31232|7661   |939 |19647|269  |1000|2126   |500 |9831|307  |101128|
|tiny/entailment         |0.77       |0.59   |0.52   |0.53   |0.53|0.90 |0.81|0.78 |0.93   |0.81|0.82 |0.91 |0.81|0.78   |0.93|0.95|0.67 |0.77  |
|twoway/entailment       |0.89       |0.73   |0.61   |0.62   |0.58|0.96 |0.92|0.87 |0.99   |0.90|0.90 |0.99 |0.91|0.96   |0.97|0.97|0.87 |0.86  |
|threeway/entailment     |0.91       |0.75   |0.61   |0.61   |0.57|0.96 |0.56|0.61 |0.99   |0.90|0.91 |0.67 |0.92|0.84   |0.98|0.98|0.90 |0.80  |
|vicgalle-xlm/entailment |0.88       |0.79   |0.63   |0.66   |0.57|0.93 |0.56|0.62 |0.77   |0.80|0.90 |0.70 |0.83|0.84   |0.91|0.93|0.93 |0.78  |
|facebook-bart/entailment|0.51       |0.41   |0.43   |0.47   |0.50|0.74 |0.55|0.57 |0.60   |0.63|0.70 |0.52 |0.56|0.68   |0.67|0.72|0.64 |0.58  |
|threeway/contradiction  |           |0.71   |0.64   |0.61   |    |0.97 |    |     |1.00   |0.77|0.92 |     |0.89|       |0.99|0.98|     |0.85  |
|threeway/neutral        |           |0.79   |0.70   |0.62   |    |0.91 |    |     |0.99   |0.68|0.86 |     |0.79|       |0.96|0.96|     |0.83  |

For evaluation (and for training of the [tiny](https://huggingface.co/cointegrated/rubert-tiny-bilingual-nli) and [twoway](https://huggingface.co/cointegrated/rubert-base-cased-nli-twoway) models), some extra datasets were used:
[Add-one RTE](https://cs.brown.edu/people/epavlick/papers/ans.pdf),
[CoPA](https://people.ict.usc.edu/~gordon/copa.html),
[IIE](https://aclanthology.org/I17-1100), and
[SCITAIL](https://allenai.org/data/scitail), taken from [the repo of Felipe Salvatore](https://github.com/felipessalvatore/NLI_datasets) and translated;
[HELP](https://github.com/verypluming/HELP) and [MoNLI](https://github.com/atticusg/MoNLI), taken from the original sources and translated;
and the Russian [TERRa](https://russiansuperglue.com/ru/tasks/task_info/TERRa).
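For reference, the one-vs-rest ROC AUC used in the table above can be computed with a small helper. This is a minimal pure-Python sketch; the toy gold labels and probabilities below are illustrative, not taken from the actual dev sets:

```python
def roc_auc(y_true, scores):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    random positive example is scored above a random negative one
    (ties count as 0.5)."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy predictions: gold labels and per-class probabilities for 4 examples.
classes = ['entailment', 'contradiction', 'neutral']
gold = ['entailment', 'contradiction', 'neutral', 'entailment']
probas = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.2, 0.3, 0.5], [0.2, 0.6, 0.2]]
for i, c in enumerate(classes):
    # One class vs rest: this class is the positive, all others are negative.
    auc = roc_auc([int(g == c) for g in gold], [p[i] for p in probas])
    print(f'{c}: {auc:.2f}')
```

The "total" column in the table is not a sum but an aggregate over all dev sets, so the same helper applied to the pooled predictions would reproduce it.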