FredrikMoller committed on
Commit
05d8bea
1 Parent(s): b692d45

first release of the fear target model

README.md ADDED
@@ -0,0 +1,52 @@
+ ---
+ language: sv
+ license: mit
+ ---
+
+ ## Swedish BERT models for sentiment analysis, sentiment targets
+ [Recorded Future](https://www.recordedfuture.com/), together with [AI Sweden](https://www.ai.se/en), releases two language models for target/role assignment in Swedish. The two models are based on [KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased) and have been fine-tuned to solve a Named Entity Recognition (NER) token classification task.
+
+ This is a downstream model to be used in conjunction with the [Swedish violence sentiment classifier](https://huggingface.co/RecordedFuture/Swedish-Sentiment-Violence) or the [Swedish fear sentiment classifier](https://huggingface.co/RecordedFuture/Swedish-Sentiment-Fear). The models are trained to tag the parts of sentences that have received a positive classification from the upstream sentiment classifier, i.e. the targets that the upstream model activated on.
+
+ The NER sentiment target models work as standalone models, but their recommended application is downstream from a sentence classification model.
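+
+ As an illustration, here is a minimal sketch of that two-stage flow for the Fear models. The gating condition is an assumption (the upstream model's label mapping is not documented here), so check its config before relying on it, and the input sentence is a placeholder:
+
+     import torch
+     from transformers import BertTokenizerFast, BertForSequenceClassification, BertForTokenClassification
+
+     sentence = "..."  # a Swedish input sentence
+
+     # Upstream: sentence-level fear classifier.
+     clf_tok = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear")
+     clf = BertForSequenceClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear")
+     with torch.no_grad():
+         label_id = clf(**clf_tok(sentence, return_tensors="pt")).logits.argmax(dim=-1).item()
+
+     # Downstream: tag targets only when the upstream classifier activated
+     # (assumption: a non-zero class id means a positive classification).
+     if label_id != 0:
+         tgt_tok = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear-Targets")
+         tagger = BertForTokenClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear-Targets")
+         enc = tgt_tok(sentence, return_tensors="pt")
+         with torch.no_grad():
+             tag_ids = tagger(**enc).logits.argmax(dim=-1)[0].tolist()
+         tokens = tgt_tok.convert_ids_to_tokens(enc["input_ids"][0].tolist())
+         print([(tok, tagger.config.id2label[i]) for tok, i in zip(tokens, tag_ids)])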
+
+ The models are trained only on Swedish data and support inference only for Swedish input texts. The models' inference metrics for non-Swedish inputs are undefined; such inputs are considered out-of-domain data.
+
+ The current models are supported on Transformers version >= 4.3.3 and Torch version 1.8.0; compatibility with older versions is not verified.
+
+ ### Fear targets
+
+ The model can be imported from the transformers library by running
+
+     from transformers import BertForTokenClassification, BertTokenizerFast
+
+     tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear-Targets")
+     classifier_fear_targets = BertForTokenClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear-Targets")
+
+ Once the model and tokenizer are initialized, the model can be used for inference.
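+
+ For example, a quick way to run inference is the Transformers `pipeline` helper; this is a convenience sketch rather than part of the official model card, and the input string is a placeholder:
+
+     from transformers import pipeline
+
+     ner_fear = pipeline("ner", model=classifier_fear_targets, tokenizer=tokenizer)
+     print(ner_fear("..."))  # per-token LABEL_* tags with confidence scores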
+
+ #### Verification metrics
+
+ During training, the Fear target model had the following verification metrics, using "any overlap" as the evaluation metric.
+
+ | F-score | Precision | Recall |
+ |:-------:|:---------:|:------:|
+ | 0.8361  | 0.7903    | 0.8876 |
+
+ ### Violence targets
+ The model can be imported from the transformers library by running
+
+     from transformers import BertForTokenClassification, BertTokenizerFast
+
+     tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence-Targets")
+     classifier_violence_targets = BertForTokenClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence-Targets")
+
+ Once the model and tokenizer are initialized, the model can be used for inference, exactly as in the Fear targets example above.
+
+ #### Verification metrics
+ During training, the Violence target model had the following verification metrics, using "any overlap" as the evaluation metric.
+
+ | F-score | Precision | Recall |
+ |:-------:|:---------:|:------:|
+ | 0.7831  | 0.9155    | 0.8442 |
config.json ADDED
@@ -0,0 +1,36 @@
+ {
+   "_name_or_path": "RecordedFuture/Swedish-Sentiment-Fear-Targets",
+   "architectures": [
+     "BertForTokenClassification"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "finetuning_task": "ner",
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "id2label": {
+     "0": "LABEL_0",
+     "1": "LABEL_1",
+     "2": "LABEL_2"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "label2id": {
+     "LABEL_0": 0,
+     "LABEL_1": 1,
+     "LABEL_2": 2
+   },
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "output_past": true,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "transformers_version": "4.5.1",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 50325
+ }
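
The config above exposes three generic token-classification labels. Their semantics (e.g. which id is the "outside" tag) are not documented in this commit, so treat the mapping as opaque and read it programmatically; a minimal sketch:

    from transformers import AutoConfig

    config = AutoConfig.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear-Targets")
    print(config.num_labels)  # 3
    print(config.id2label)    # {0: 'LABEL_0', 1: 'LABEL_1', 2: 'LABEL_2'}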
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ebe777c682a3a598a057bbb64f0c52b391b86a2b9d2b20e5392a0fdb6a35dc1e
+ size 496497168
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4c97f63693224faa166de0721ed1d2024098fb92579ea3ba7d4f471adbb318ff
+ size 496679008
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": false, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": false, "special_tokens_map_file": "/home/fmoller/.cache/huggingface/transformers/37f2eab7cd9b3716ce0160ea9562138ae9247fb3ea61a2fd0190b16d0970444e.dd8bd9bfd3664b530ea4e645105f557769387b3da9f79bdb55ed556bdd80611d", "name_or_path": "KB/bert-base-swedish-cased", "do_basic_tokenize": true, "never_split": null}
trainer_state.json ADDED
@@ -0,0 +1,133 @@
+ {
+   "best_metric": null,
+   "best_model_checkpoint": null,
+   "epoch": 10.0,
+   "global_step": 320,
+   "is_hyper_param_search": false,
+   "is_local_process_zero": true,
+   "is_world_process_zero": true,
+   "log_history": [
+     {
+       "epoch": 1.0,
+       "eval_accuracy": 0.8748680042238648,
+       "eval_f1": 0.0,
+       "eval_loss": 0.616299569606781,
+       "eval_precision": 0.0,
+       "eval_recall": 0.0,
+       "eval_runtime": 0.1116,
+       "eval_samples_per_second": 815.287,
+       "step": 32
+     },
+     {
+       "epoch": 2.0,
+       "eval_accuracy": 0.8785638859556494,
+       "eval_f1": 0.0,
+       "eval_loss": 0.4105246365070343,
+       "eval_precision": 0.0,
+       "eval_recall": 0.0,
+       "eval_runtime": 0.1118,
+       "eval_samples_per_second": 814.082,
+       "step": 64
+     },
+     {
+       "epoch": 3.0,
+       "eval_accuracy": 0.883843717001056,
+       "eval_f1": 0.3583333333333333,
+       "eval_loss": 0.328654408454895,
+       "eval_precision": 0.4387755102040816,
+       "eval_recall": 0.3028169014084507,
+       "eval_runtime": 0.1118,
+       "eval_samples_per_second": 814.03,
+       "step": 96
+     },
+     {
+       "epoch": 4.0,
+       "eval_accuracy": 0.8912354804646251,
+       "eval_f1": 0.36909871244635195,
+       "eval_loss": 0.31546750664711,
+       "eval_precision": 0.4725274725274725,
+       "eval_recall": 0.3028169014084507,
+       "eval_runtime": 0.1118,
+       "eval_samples_per_second": 813.707,
+       "step": 128
+     },
+     {
+       "epoch": 5.0,
+       "eval_accuracy": 0.8933474128827877,
+       "eval_f1": 0.41538461538461535,
+       "eval_loss": 0.3068830370903015,
+       "eval_precision": 0.4576271186440678,
+       "eval_recall": 0.38028169014084506,
+       "eval_runtime": 0.1247,
+       "eval_samples_per_second": 730.008,
+       "step": 160
+     },
+     {
+       "epoch": 6.0,
+       "eval_accuracy": 0.8912354804646251,
+       "eval_f1": 0.48135593220338985,
+       "eval_loss": 0.330695241689682,
+       "eval_precision": 0.46405228758169936,
+       "eval_recall": 0.5,
+       "eval_runtime": 0.1127,
+       "eval_samples_per_second": 807.78,
+       "step": 192
+     },
+     {
+       "epoch": 7.0,
+       "eval_accuracy": 0.895987328405491,
+       "eval_f1": 0.4470588235294118,
+       "eval_loss": 0.3800097107887268,
+       "eval_precision": 0.504424778761062,
+       "eval_recall": 0.4014084507042254,
+       "eval_runtime": 0.1125,
+       "eval_samples_per_second": 808.798,
+       "step": 224
+     },
+     {
+       "epoch": 8.0,
+       "eval_accuracy": 0.899155227032735,
+       "eval_f1": 0.49295774647887325,
+       "eval_loss": 0.4225572347640991,
+       "eval_precision": 0.49295774647887325,
+       "eval_recall": 0.49295774647887325,
+       "eval_runtime": 0.1126,
+       "eval_samples_per_second": 808.356,
+       "step": 256
+     },
+     {
+       "epoch": 9.0,
+       "eval_accuracy": 0.8922914466737064,
+       "eval_f1": 0.4901960784313726,
+       "eval_loss": 0.4346790611743927,
+       "eval_precision": 0.4573170731707317,
+       "eval_recall": 0.528169014084507,
+       "eval_runtime": 0.1123,
+       "eval_samples_per_second": 810.599,
+       "step": 288
+     },
+     {
+       "epoch": 10.0,
+       "eval_accuracy": 0.8870116156282999,
+       "eval_f1": 0.47647058823529415,
+       "eval_loss": 0.48350322246551514,
+       "eval_precision": 0.4090909090909091,
+       "eval_recall": 0.5704225352112676,
+       "eval_runtime": 0.1129,
+       "eval_samples_per_second": 805.916,
+       "step": 320
+     },
+     {
+       "epoch": 10.0,
+       "step": 320,
+       "total_flos": 201896658999468.0,
+       "train_runtime": 25.0952,
+       "train_samples_per_second": 12.751
+     }
+   ],
+   "max_steps": 320,
+   "num_train_epochs": 10,
+   "total_flos": 201896658999468.0,
+   "trial_name": null,
+   "trial_params": null
+ }
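
The trainer state above logs validation metrics once per epoch. A minimal sketch (assuming a local copy of this trainer_state.json) for pulling out the epoch with the best eval_f1:

    import json

    with open("trainer_state.json") as f:
        state = json.load(f)

    # Keep only evaluation entries; the final entry holds training totals instead.
    evals = [e for e in state["log_history"] if "eval_f1" in e]
    best = max(evals, key=lambda e: e["eval_f1"])
    print(best["epoch"], best["eval_f1"])  # epoch 8.0 has the highest eval_f1 (~0.493)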
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5efba65dce7e58ef6178bdbdcc23181aa0f6093ee3c26463e169b74b62b6ea48
+ size 2351
vocab.txt ADDED
The diff for this file is too large to render. See raw diff