---
pipeline_tag: zero-shot-classification
language:
- da
- no
- nb
- sv
license: mit
datasets:
- strombergnlp/danfever
- mnli_da
- mnli_sv
- mnli_nb
- cb_da
- cb_sv
- cb_nb
- fever_sv
- anli_sv
model-index:
- name: nb-bert-large-nli-scandi
  results: []
widget:
- example_title: Danish
  text: Mexicansk bokser advarer Messi - 'Du skal bede til gud, om at jeg ikke finder dig'
  candidate_labels: sundhed, politik, sport, religion
- example_title: Norwegian
  text: Folkehelseinstituttets mest optimistiske anslag er at alle voksne er ferdigvaksinert innen midten av september.
  candidate_labels: helse, politikk, sport, religion
- example_title: Swedish
  text: ”Öppna, öppna” - hundratals demonstrerade mot hårda covidrestriktioner i Peking
  candidate_labels: hälsa, politik, sport, religion
---

# ScandiNLI - Natural Language Inference model for Scandinavian Languages

This model is a fine-tuned version of [NbAiLab/nb-bert-large](https://huggingface.co/NbAiLab/nb-bert-large) for Natural Language Inference in Danish, Norwegian Bokmål and Swedish.

It has been fine-tuned on a dataset composed of [DanFEVER](https://aclanthology.org/2021.nodalida-main.pdf#page=439) as well as machine-translated versions of [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) and [CommitmentBank](https://doi.org/10.18148/sub/2019.v23i2.601) into all three languages, and machine-translated versions of [FEVER](https://aclanthology.org/N18-1074/) and [Adversarial NLI](https://aclanthology.org/2020.acl-main.441/) into Swedish.

The three languages are sampled equally during training, and validation is performed on the validation split of [DanFEVER](https://aclanthology.org/2021.nodalida-main.pdf#page=439) together with machine-translated versions of the [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) validation split for Swedish and Norwegian Bokmål, again sampled equally.
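
Under the hood, the zero-shot pipeline treats the input text as an NLI premise and each candidate label, wrapped in the hypothesis template, as a hypothesis; the entailment probability is then used to score the label. Below is a minimal sketch of this direct NLI usage (the premise/hypothesis pair reuses the Danish example from this card, and the class names are read from the model config rather than assumed):

```python
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification
>>> import torch
>>> model_id = "alexandrainst/nb-bert-large-nli-scandi"
>>> tokenizer = AutoTokenizer.from_pretrained(model_id)
>>> model = AutoModelForSequenceClassification.from_pretrained(model_id)
>>> premise = "Mexicansk bokser advarer Messi - 'Du skal bede til gud, om at jeg ikke finder dig'"
>>> hypothesis = "Dette eksempel er sport"
>>> inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> probs = torch.softmax(logits, dim=-1)[0]
>>> # Map each NLI class name (taken from the model config) to its probability
>>> {model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)}
```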


## Quick start

You can use this model in your scripts as follows:

```python
>>> from transformers import pipeline
>>> classifier = pipeline(
...     "zero-shot-classification",
...     model="alexandrainst/nb-bert-large-nli-scandi",
... )
>>> classifier(
...     "Mexicansk bokser advarer Messi - 'Du skal bede til gud, om at jeg ikke finder dig'",
...     candidate_labels=['sundhed', 'politik', 'sport', 'religion'],
...     hypothesis_template="Dette eksempel er {}",
... )
{
    'labels': ['politik', 'sundhed', 'sport', 'religion'],
    'scores': [0.303, 0.300, 0.204, 0.193],
    'sequence': "Mexicansk bokser advarer Messi - 'Du skal bede til gud, om at jeg ikke finder dig'",
}
```
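
The hypothesis template above is Danish. For Norwegian Bokmål or Swedish input, a template in the matching language generally works better; the sketch below reuses the Norwegian and Swedish widget examples from this card (the Norwegian and Swedish template strings are illustrative translations, not taken from the card):

```python
>>> classifier(
...     "Folkehelseinstituttets mest optimistiske anslag er at alle voksne er ferdigvaksinert innen midten av september.",
...     candidate_labels=['helse', 'politikk', 'sport', 'religion'],
...     hypothesis_template="Dette eksempelet er {}",  # illustrative Norwegian template
... )
>>> classifier(
...     "”Öppna, öppna” - hundratals demonstrerade mot hårda covidrestriktioner i Peking",
...     candidate_labels=['hälsa', 'politik', 'sport', 'religion'],
...     hypothesis_template="Detta exempel är {}",  # illustrative Swedish template
... )
```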


## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 4242
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- max_steps: 50,000
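
Expressed with the Transformers `Trainer` API, these hyperparameters correspond roughly to the `TrainingArguments` sketched below (the output directory and anything not listed above are assumptions, not part of the original training setup):

```python
>>> from transformers import TrainingArguments
>>> training_args = TrainingArguments(
...     output_dir="nb-bert-large-nli-scandi",  # assumed output directory
...     learning_rate=2e-5,
...     per_device_train_batch_size=2,
...     per_device_eval_batch_size=2,
...     seed=4242,
...     gradient_accumulation_steps=16,  # effective train batch size: 2 * 16 = 32
...     lr_scheduler_type="linear",
...     warmup_steps=500,
...     max_steps=50_000,
...     adam_beta1=0.9,
...     adam_beta2=0.999,
...     adam_epsilon=1e-8,
... )
```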