justynasikora committed
Commit 110925c
1 Parent(s): 89c579f

Add model and tokenizer

README.md CHANGED
@@ -1 +1,49 @@
- # Megatron-BERT-large Swedish 165k for zero-shot classification
+ ---
+ pipeline_tag: zero-shot-classification
+ tags:
+ - zero-shot-classification
+ - swedish
+ - megatron-bert
+ language:
+ - sv
+ datasets:
+ - KBLab/overlim
+ widget:
+ - example_title: Zero-shot
+   text: Många skjuter upp sina tandläkarbesök
+   candidate_labels: hälsa, politik, sport, religion
+ inference:
+   parameters:
+     hypothesis_template: Detta exempel handlar om {}.
+ ---
+
+ # Megatron-BERT-large Swedish 165k for zero-shot classification
+
+ This model is based on [Megatron-BERT-large-165k](https://huggingface.co/KBLab/megatron-bert-large-swedish-cased-165k). It was first fine-tuned on the QNLI task and then further fine-tuned on the MNLI task.
+ The model can be used with the Hugging Face zero-shot classification pipeline.
+
+ ## Usage
+
+ ```python
+ >>> from transformers import pipeline
+ >>> classifier = pipeline(
+ ...     "zero-shot-classification",
+ ...     model="KBLab/megatron-bert-large-swedish-cased-165k-zero-shot"
+ ... )
+ >>> classifier(
+ ...     "Ruben Östlunds ”Triangle of sadness” nomineras till en Golden Globe i kategorin bästa musikal eller komedi.",
+ ...     candidate_labels=["hälsa", "politik", "sport", "religion", "nöje"],
+ ...     hypothesis_template="Detta exempel handlar om {}.",
+ ... )
+ {'sequence': 'Ruben Östlunds ”Triangle of sadness” nomineras till en Golden Globe i kategorin bästa musikal eller komedi.',
+  'labels': ['nöje', 'sport', 'religion', 'hälsa', 'politik'],
+  'scores': [0.9274595379829407,
+   0.025105971843004227,
+   0.018440095707774162,
+   0.017049923539161682,
+   0.011944468133151531]}
+ ```
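
For candidate labels that are not mutually exclusive, the pipeline also accepts `multi_label=True`, which scores each label independently instead of normalizing across all of them. A minimal sketch reusing the classifier above (output omitted, since the exact scores depend on the model):

```python
>>> classifier(
...     "Många skjuter upp sina tandläkarbesök",
...     candidate_labels=["hälsa", "politik", "sport", "religion"],
...     hypothesis_template="Detta exempel handlar om {}.",
...     multi_label=True,  # each label is scored independently against its own hypothesis
... )
```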
config.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "_name_or_path": "megatron-bert-large-swedish-cased-165k.QM",
+   "architectures": [
+     "MegatronBertForSequenceClassification"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "finetuning_task": "mnli",
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 1024,
+   "id2label": {
+     "0": "entailment",
+     "1": "neutral",
+     "2": "contradiction"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 4096,
+   "label2id": {
+     "contradiction": 2,
+     "entailment": 0,
+     "neutral": 1
+   },
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "megatron-bert",
+   "num_attention_heads": 16,
+   "num_hidden_layers": 24,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "problem_type": "single_label_classification",
+   "tokenizer_type": "BertWordPieceCase",
+   "torch_dtype": "float32",
+   "transformers_version": "4.24.0",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 64128
+ }
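
The `id2label` mapping above is what makes zero-shot classification possible: the head is a three-way NLI classifier over entailment, neutral, and contradiction. As a rough sketch of what the pipeline does with it (not the pipeline's exact implementation; the model id is taken from the README above), each candidate label is substituted into the hypothesis template and scored by its entailment probability:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "KBLab/megatron-bert-large-swedish-cased-165k-zero-shot"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "Många skjuter upp sina tandläkarbesök"
hypothesis = "Detta exempel handlar om hälsa."  # one candidate label filled in

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, 3): entailment, neutral, contradiction

# Drop the neutral logit and softmax entailment against contradiction,
# following the indices in id2label above.
entailment_score = logits[0, [0, 2]].softmax(dim=-1)[0].item()
print(f"P(entailment) for 'hälsa': {entailment_score:.3f}")
```

Repeating this for every candidate label and ranking by score reproduces the ordering the pipeline returns.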
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:44b1af75b8f22314e42c7d49d4b047691c0cc0827ac44e304c495aa48dd9e596
+ size 1478367541
special_tokens_map.json ADDED
@@ -0,0 +1,9 @@
+ {
+   "bos_token": "[CLS]",
+   "cls_token": "[CLS]",
+   "eos_token": "[SEP]",
+   "mask_token": "[MASK]",
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "unk_token": "[UNK]"
+ }
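
These special tokens frame the premise/hypothesis pair that is fed to the NLI head. A small check one could run (the model id is again assumed from the README above):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "KBLab/megatron-bert-large-swedish-cased-165k-zero-shot"
)
# Encoding a sentence pair wraps the two segments with [CLS] and [SEP],
# matching cls_token and sep_token in special_tokens_map.json.
encoded = tokenizer(
    "Många skjuter upp sina tandläkarbesök",  # premise
    "Detta exempel handlar om hälsa.",        # hypothesis
)
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
# ['[CLS]', ..., '[SEP]', ..., '[SEP]']
```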
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,5 @@
+ {
+   "name_or_path": "megatron-bert-large-swedish-cased-165k.QM",
+   "special_tokens_map_file": "pretrained_model_hf_large_165K/special_tokens_map.json",
+   "tokenizer_class": "PreTrainedTokenizerFast"
+ }
+ }