---
datasets:
- cjvt/si_nli
- jacinthes/slovene_mnli_snli
language:
- sl
license: cc-by-sa-4.0
---

# CrossEncoder for Slovene NLI
The model was trained using the [SentenceTransformers](https://sbert.net/) [CrossEncoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. <br />
It is based on [SloBerta](https://huggingface.co/EMBEDDIA/sloberta), a monolingual Slovene model.

## Training
This model was trained on the [SI-NLI](https://huggingface.co/datasets/cjvt/si_nli) and the [slovene_mnli_snli](https://huggingface.co/datasets/jacinthes/slovene_mnli_snli) datasets.<br />
More details and the training script are available in the [slovene-nli-benchmark repo](https://github.com/jacinthes/slovene-nli-benchmark).
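
The exact training setup lives in the repo above; as an illustration only, training a three-way CrossEncoder with the SentenceTransformers `fit` API might look roughly like the sketch below (the hyperparameters, example data and label mapping are assumptions, not the authors' settings):
```python
from torch.utils.data import DataLoader
from sentence_transformers import CrossEncoder, InputExample

# Assumed label mapping, matching the int2label convention in the Usage section.
label2int = {'entailment': 0, 'neutral': 1, 'contradiction': 2}

# Placeholder examples; the real data comes from SI-NLI and slovene_mnli_snli.
train_samples = [
    InputExample(texts=['Pojdi z menoj v toplice.', 'Bova lepa bova fit.'],
                 label=label2int['neutral']),
    # ... one InputExample per premise/hypothesis pair ...
]
train_dataloader = DataLoader(train_samples, shuffle=True, batch_size=16)

# Start from the monolingual Slovene SloBerta checkpoint with a 3-class head.
model = CrossEncoder('EMBEDDIA/sloberta', num_labels=3)
model.fit(
    train_dataloader=train_dataloader,
    epochs=3,              # illustrative value
    warmup_steps=100,      # illustrative value
    output_path='cross-encoder-sloberta-si-nli-snli-mnli',
)
```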

## Performance
The model achieves the following metrics:
- Test accuracy: 77.15%
- Dev accuracy: 77.51%

## Usage
The model can be used for inference as follows:
```python
from sentence_transformers import CrossEncoder

model = CrossEncoder('jacinthes/cross-encoder-sloberta-si-nli-snli-mnli')
premise = 'Pojdi z menoj v toplice.'      # "Come with me to the thermal spa."
hypothesis = 'Bova lepa bova fit.'        # "We will be beautiful, we will be fit."
# For a single pair, predict returns the raw scores for the three classes.
prediction = model.predict([premise, hypothesis])
int2label = {0: 'entailment', 1: 'neutral', 2: 'contradiction'}
print(int2label[prediction.argmax()])
```
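
Continuing from the snippet above, `predict` also accepts a list of premise/hypothesis pairs and returns one score vector per pair (standard SentenceTransformers CrossEncoder behaviour; the second pair below is purely illustrative):
```python
pairs = [
    ['Pojdi z menoj v toplice.', 'Bova lepa bova fit.'],
    ['Pojdi z menoj v toplice.', 'Ostani doma.'],  # illustrative pair ("Stay home.")
]
predictions = model.predict(pairs)  # array of shape (num_pairs, 3)
for scores in predictions:
    print(int2label[scores.argmax()])
```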