aishanur committed
Commit 1b87f00 · verified · 1 Parent(s): 58bfdd5

aishanur/HV_Roberta_Large_2

README.md ADDED
@@ -0,0 +1,124 @@
+ ---
+ license: mit
+ base_model: FacebookAI/roberta-large
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: roberta_large_hv_3
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # roberta_large_hv_3
+
+ This model is a fine-tuned version of [FacebookAI/roberta-large](https://huggingface.co/FacebookAI/roberta-large) on an unspecified dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.0599
+ - F1-macro subtask 1: 0.2635
+ - F1-micro subtask 1: 0.3292
+ - Roc auc macro subtask 1: 0.6016
+ - F1-macro subtask 2: 0.0502
+ - F1-micro subtask 2: 0.0513
+ - Roc auc macro subtask 2: 0.6128
+ - Self-direction: thought1: 0.0588
+ - Self-direction: action1: 0.1801
+ - Stimulation1: 0.1815
+ - Hedonism1: 0.2645
+ - Achievement1: 0.2612
+ - Power: dominance1: 0.3455
+ - Power: resources1: 0.2642
+ - Face1: 0.1941
+ - Security: personal1: 0.2454
+ - Security: societal1: 0.4119
+ - Tradition1: 0.4361
+ - Conformity: rules1: 0.4578
+ - Conformity: interpersonal1: 0.0700
+ - Humility1: 0.0833
+ - Benevolence: caring1: 0.1117
+ - Benevolence: dependability1: 0.2426
+ - Universalism: concern1: 0.3619
+ - Universalism: nature1: 0.5693
+ - Universalism: tolerance1: 0.2667
+ - Self-direction: thought attained2: 0.0190
+ - Self-direction: thought constrained2: 0.0561
+ - Self-direction: action attained2: 0.0525
+ - Self-direction: action constrained2: 0.0762
+ - Stimulation attained2: 0.0631
+ - Stimulation constrained2: 0.0095
+ - Hedonism attained2: 0.0143
+ - Hedonism constrained2: 0.0071
+ - Achievement attained2: 0.1248
+ - Achievement constrained2: 0.0840
+ - Power: dominance attained2: 0.0701
+ - Power: dominance constrained2: 0.0767
+ - Power: resources attained2: 0.0806
+ - Power: resources constrained2: 0.0807
+ - Face attained2: 0.0239
+ - Face constrained2: 0.0361
+ - Security: personal attained2: 0.0187
+ - Security: personal constrained2: 0.0425
+ - Security: societal attained2: 0.1038
+ - Security: societal constrained2: 0.1802
+ - Tradition attained2: 0.0312
+ - Tradition constrained2: 0.0338
+ - Conformity: rules attained2: 0.1044
+ - Conformity: rules constrained2: 0.0914
+ - Conformity: interpersonal attained2: 0.0162
+ - Conformity: interpersonal constrained2: 0.0294
+ - Humility attained2: 0.0053
+ - Humility constrained2: 0.0024
+ - Benevolence: caring attained2: 0.0438
+ - Benevolence: caring constrained2: 0.0137
+ - Benevolence: dependability attained2: 0.0362
+ - Benevolence: dependability constrained2: 0.0160
+ - Universalism: concern attained2: 0.0852
+ - Universalism: concern constrained2: 0.0583
+ - Universalism: nature attained2: 0.0423
+ - Universalism: nature constrained2: 0.0516
+ - Universalism: tolerance attained2: 0.0142
+ - Universalism: tolerance constrained2: 0.0133
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
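+
+ Until the author fills this in, the accompanying `config.json` (`problem_type: multi_label_classification`, 38 labels) implies the model emits 38 independent sigmoid scores. A minimal inference sketch, assuming this commit's repo id `aishanur/HV_Roberta_Large_2` and an illustrative 0.5 threshold (the repo does not publish one):
+
+ ```python
+ import torch
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer
+
+ repo_id = "aishanur/HV_Roberta_Large_2"  # assumption: taken from this commit header
+ tokenizer = AutoTokenizer.from_pretrained(repo_id)
+ model = AutoModelForSequenceClassification.from_pretrained(repo_id)
+ model.eval()
+
+ inputs = tokenizer("Example text to score.", return_tensors="pt", truncation=True)
+ with torch.no_grad():
+     logits = model(**inputs).logits  # shape (1, 38)
+
+ # Multi-label head: score each label independently with a sigmoid.
+ probs = torch.sigmoid(logits)[0]
+ threshold = 0.5  # assumption: no decision threshold is published with this model
+ print([model.config.id2label[i] for i, p in enumerate(probs.tolist()) if p > threshold])
+ ```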
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
+ - learning_rate: 2e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_ratio: 0.2
+ - num_epochs: 4
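+
+ A minimal sketch of how the list above maps onto `TrainingArguments` (the dataset, model, and metric wiring are omitted because the training data is unspecified; `output_dir` is an assumption):
+
+ ```python
+ from transformers import TrainingArguments
+
+ # Mirrors the hyperparameters listed above. The Adam betas/epsilon in the list
+ # are the default AdamW settings in Transformers 4.37.2, so no override is needed.
+ training_args = TrainingArguments(
+     output_dir="roberta_large_hv_3",  # assumption: named after the model
+     learning_rate=2e-5,
+     per_device_train_batch_size=8,
+     per_device_eval_batch_size=8,
+     seed=42,
+     lr_scheduler_type="linear",
+     warmup_ratio=0.2,
+     num_train_epochs=4,
+ )
+ ```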
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | F1-macro subtask 1 | F1-micro subtask 1 | Roc auc macro subtask 1 | F1-macro subtask 2 | F1-micro subtask 2 | Roc auc macro subtask 2 | Self-direction: thought1 | Self-direction: action1 | Stimulation1 | Hedonism1 | Achievement1 | Power: dominance1 | Power: resources1 | Face1 | Security: personal1 | Security: societal1 | Tradition1 | Conformity: rules1 | Conformity: interpersonal1 | Humility1 | Benevolence: caring1 | Benevolence: dependability1 | Universalism: concern1 | Universalism: nature1 | Universalism: tolerance1 | Self-direction: thought attained2 | Self-direction: thought constrained2 | Self-direction: action attained2 | Self-direction: action constrained2 | Stimulation attained2 | Stimulation constrained2 | Hedonism attained2 | Hedonism constrained2 | Achievement attained2 | Achievement constrained2 | Power: dominance attained2 | Power: dominance constrained2 | Power: resources attained2 | Power: resources constrained2 | Face attained2 | Face constrained2 | Security: personal attained2 | Security: personal constrained2 | Security: societal attained2 | Security: societal constrained2 | Tradition attained2 | Tradition constrained2 | Conformity: rules attained2 | Conformity: rules constrained2 | Conformity: interpersonal attained2 | Conformity: interpersonal constrained2 | Humility attained2 | Humility constrained2 | Benevolence: caring attained2 | Benevolence: caring constrained2 | Benevolence: dependability attained2 | Benevolence: dependability constrained2 | Universalism: concern attained2 | Universalism: concern constrained2 | Universalism: nature attained2 | Universalism: nature constrained2 | Universalism: tolerance attained2 | Universalism: tolerance constrained2 |
+ |:-------------:|:-----:|:-----:|:---------------:|:------------------:|:------------------:|:-----------------------:|:------------------:|:------------------:|:-----------------------:|:------------------------:|:-----------------------:|:------------:|:---------:|:------------:|:-----------------:|:-----------------:|:------:|:-------------------:|:-------------------:|:----------:|:------------------:|:--------------------------:|:---------:|:--------------------:|:---------------------------:|:----------------------:|:---------------------:|:------------------------:|:---------------------------------:|:------------------------------------:|:--------------------------------:|:-----------------------------------:|:---------------------:|:------------------------:|:------------------:|:---------------------:|:---------------------:|:------------------------:|:--------------------------:|:-----------------------------:|:--------------------------:|:-----------------------------:|:--------------:|:-----------------:|:----------------------------:|:-------------------------------:|:----------------------------:|:-------------------------------:|:-------------------:|:----------------------:|:---------------------------:|:------------------------------:|:-----------------------------------:|:--------------------------------------:|:------------------:|:---------------------:|:-----------------------------:|:--------------------------------:|:------------------------------------:|:---------------------------------------:|:-------------------------------:|:----------------------------------:|:------------------------------:|:---------------------------------:|:---------------------------------:|:------------------------------------:|
+ | 0.0603 | 1.0 | 7183 | 0.0604 | 0.1393 | 0.2079 | 0.5512 | 0.0485 | 0.0509 | 0.6252 | 0.0 | 0.0431 | 0.0 | 0.0 | 0.2512 | 0.1886 | 0.0844 | 0.2393 | 0.0 | 0.2646 | 0.2359 | 0.3725 | 0.1083 | 0.0 | 0.0277 | 0.0385 | 0.0819 | 0.5374 | 0.1734 | 0.0188 | 0.0137 | 0.0550 | 0.0418 | 0.0530 | 0.0141 | 0.0119 | 0.0110 | 0.1262 | 0.0807 | 0.0659 | 0.0600 | 0.0806 | 0.0833 | 0.0201 | 0.0576 | 0.0161 | 0.0506 | 0.1231 | 0.1471 | 0.0345 | 0.0188 | 0.1007 | 0.0982 | 0.0157 | 0.0351 | 0.0069 | 0.0018 | 0.0476 | 0.0151 | 0.0480 | 0.0142 | 0.0836 | 0.0540 | 0.0380 | 0.0727 | 0.0120 | 0.0166 |
+ | 0.0411 | 2.0 | 14366 | 0.0607 | 0.2394 | 0.3018 | 0.5948 | 0.0491 | 0.0516 | 0.6270 | 0.0664 | 0.1223 | 0.3371 | 0.3165 | 0.3128 | 0.1460 | 0.1394 | 0.2103 | 0.1218 | 0.4200 | 0.4190 | 0.4399 | 0.0972 | 0.1013 | 0.0874 | 0.1508 | 0.3057 | 0.5369 | 0.2182 | 0.0179 | 0.0375 | 0.0526 | 0.0773 | 0.0589 | 0.0104 | 0.0170 | 0.0059 | 0.1271 | 0.0824 | 0.0694 | 0.0645 | 0.0841 | 0.0744 | 0.0250 | 0.0356 | 0.0180 | 0.0406 | 0.1057 | 0.1696 | 0.0329 | 0.0286 | 0.1114 | 0.0839 | 0.0166 | 0.0295 | 0.0054 | 0.0027 | 0.0439 | 0.0208 | 0.0375 | 0.0203 | 0.0896 | 0.0542 | 0.0477 | 0.0405 | 0.0119 | 0.0158 |
+ | 0.0282 | 3.0 | 21549 | 0.0599 | 0.2635 | 0.3292 | 0.6016 | 0.0502 | 0.0513 | 0.6128 | 0.0588 | 0.1801 | 0.1815 | 0.2645 | 0.2612 | 0.3455 | 0.2642 | 0.1941 | 0.2454 | 0.4119 | 0.4361 | 0.4578 | 0.0700 | 0.0833 | 0.1117 | 0.2426 | 0.3619 | 0.5693 | 0.2667 | 0.0190 | 0.0561 | 0.0525 | 0.0762 | 0.0631 | 0.0095 | 0.0143 | 0.0071 | 0.1248 | 0.0840 | 0.0701 | 0.0767 | 0.0806 | 0.0807 | 0.0239 | 0.0361 | 0.0187 | 0.0425 | 0.1038 | 0.1802 | 0.0312 | 0.0338 | 0.1044 | 0.0914 | 0.0162 | 0.0294 | 0.0053 | 0.0024 | 0.0438 | 0.0137 | 0.0362 | 0.016 | 0.0852 | 0.0583 | 0.0423 | 0.0516 | 0.0142 | 0.0133 |
+ | 0.0272 | 4.0 | 28732 | 0.0630 | 0.2883 | 0.3504 | 0.6141 | 0.0492 | 0.0515 | 0.6185 | 0.0935 | 0.2240 | 0.2606 | 0.2857 | 0.3633 | 0.3567 | 0.2972 | 0.2133 | 0.3140 | 0.4242 | 0.4379 | 0.4547 | 0.0662 | 0.0385 | 0.2308 | 0.2571 | 0.3798 | 0.5229 | 0.2577 | 0.0190 | 0.0619 | 0.0525 | 0.0757 | 0.0651 | 0.0137 | 0.0158 | 0.0073 | 0.1216 | 0.0894 | 0.0744 | 0.035 | 0.0804 | 0.0828 | 0.0240 | 0.0343 | 0.0205 | 0.0392 | 0.1079 | 0.1682 | 0.0329 | 0.0220 | 0.1066 | 0.0931 | 0.0159 | 0.0265 | 0.0049 | 0.0023 | 0.0449 | 0.0135 | 0.0355 | 0.0200 | 0.0830 | 0.0601 | 0.0465 | 0.0437 | 0.0126 | 0.0152 |
+
+ ### Framework versions
+
+ - Transformers 4.37.2
+ - Pytorch 2.3.0
+ - Datasets 2.19.0
+ - Tokenizers 0.15.1
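+
+ One way to verify a runtime matches these pins before loading the model:
+
+ ```python
+ import datasets, tokenizers, torch, transformers
+
+ # Compare installed versions against the ones this model was trained with.
+ expected = {
+     "transformers": "4.37.2",
+     "torch": "2.3.0",
+     "datasets": "2.19.0",
+     "tokenizers": "0.15.1",
+ }
+ installed = {
+     "transformers": transformers.__version__,
+     "torch": torch.__version__,
+     "datasets": datasets.__version__,
+     "tokenizers": tokenizers.__version__,
+ }
+ for name in expected:
+     print(f"{name}: expected {expected[name]}, installed {installed[name]}")
+ ```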
config.json ADDED
@@ -0,0 +1,108 @@
+ {
+   "_name_or_path": "FacebookAI/roberta-large",
+   "architectures": [
+     "RobertaForSequenceClassification"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "bos_token_id": 0,
+   "classifier_dropout": null,
+   "eos_token_id": 2,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 1024,
+   "id2label": {
+     "0": "Self-direction: thought attained",
+     "1": "Self-direction: thought constrained",
+     "2": "Self-direction: action attained",
+     "3": "Self-direction: action constrained",
+     "4": "Stimulation attained",
+     "5": "Stimulation constrained",
+     "6": "Hedonism attained",
+     "7": "Hedonism constrained",
+     "8": "Achievement attained",
+     "9": "Achievement constrained",
+     "10": "Power: dominance attained",
+     "11": "Power: dominance constrained",
+     "12": "Power: resources attained",
+     "13": "Power: resources constrained",
+     "14": "Face attained",
+     "15": "Face constrained",
+     "16": "Security: personal attained",
+     "17": "Security: personal constrained",
+     "18": "Security: societal attained",
+     "19": "Security: societal constrained",
+     "20": "Tradition attained",
+     "21": "Tradition constrained",
+     "22": "Conformity: rules attained",
+     "23": "Conformity: rules constrained",
+     "24": "Conformity: interpersonal attained",
+     "25": "Conformity: interpersonal constrained",
+     "26": "Humility attained",
+     "27": "Humility constrained",
+     "28": "Benevolence: caring attained",
+     "29": "Benevolence: caring constrained",
+     "30": "Benevolence: dependability attained",
+     "31": "Benevolence: dependability constrained",
+     "32": "Universalism: concern attained",
+     "33": "Universalism: concern constrained",
+     "34": "Universalism: nature attained",
+     "35": "Universalism: nature constrained",
+     "36": "Universalism: tolerance attained",
+     "37": "Universalism: tolerance constrained"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 4096,
+   "label2id": {
+     "Achievement attained": 8,
+     "Achievement constrained": 9,
+     "Benevolence: caring attained": 28,
+     "Benevolence: caring constrained": 29,
+     "Benevolence: dependability attained": 30,
+     "Benevolence: dependability constrained": 31,
+     "Conformity: interpersonal attained": 24,
+     "Conformity: interpersonal constrained": 25,
+     "Conformity: rules attained": 22,
+     "Conformity: rules constrained": 23,
+     "Face attained": 14,
+     "Face constrained": 15,
+     "Hedonism attained": 6,
+     "Hedonism constrained": 7,
+     "Humility attained": 26,
+     "Humility constrained": 27,
+     "Power: dominance attained": 10,
+     "Power: dominance constrained": 11,
+     "Power: resources attained": 12,
+     "Power: resources constrained": 13,
+     "Security: personal attained": 16,
+     "Security: personal constrained": 17,
+     "Security: societal attained": 18,
+     "Security: societal constrained": 19,
+     "Self-direction: action attained": 2,
+     "Self-direction: action constrained": 3,
+     "Self-direction: thought attained": 0,
+     "Self-direction: thought constrained": 1,
+     "Stimulation attained": 4,
+     "Stimulation constrained": 5,
+     "Tradition attained": 20,
+     "Tradition constrained": 21,
+     "Universalism: concern attained": 32,
+     "Universalism: concern constrained": 33,
+     "Universalism: nature attained": 34,
+     "Universalism: nature constrained": 35,
+     "Universalism: tolerance attained": 36,
+     "Universalism: tolerance constrained": 37
+   },
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 514,
+   "model_type": "roberta",
+   "num_attention_heads": 16,
+   "num_hidden_layers": 24,
+   "pad_token_id": 1,
+   "position_embedding_type": "absolute",
+   "problem_type": "multi_label_classification",
+   "torch_dtype": "float32",
+   "transformers_version": "4.37.2",
+   "type_vocab_size": 1,
+   "use_cache": true,
+   "vocab_size": 50265
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7d6043bbc95fa388e400cfa930e333fa8436bdc911978acdbc27b122cf99014b
+ size 1421643016
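The three lines above are a Git LFS pointer, not the weights themselves: the ~1.4 GB safetensors file lives in LFS storage and is resolved on checkout or download. A minimal sketch for fetching it directly with `huggingface_hub` (assuming the repo id from this commit header):

```python
from huggingface_hub import hf_hub_download

# Resolves the LFS pointer and downloads the actual weight file to the local cache.
path = hf_hub_download(repo_id="aishanur/HV_Roberta_Large_2", filename="model.safetensors")
print(path)
```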
special_tokens_map.json ADDED
@@ -0,0 +1,15 @@
+ {
+   "bos_token": "<s>",
+   "cls_token": "<s>",
+   "eos_token": "</s>",
+   "mask_token": {
+     "content": "<mask>",
+     "lstrip": true,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": "<pad>",
+   "sep_token": "</s>",
+   "unk_token": "<unk>"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,57 @@
+ {
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<pad>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "3": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "50264": {
+       "content": "<mask>",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "<s>",
+   "eos_token": "</s>",
+   "errors": "replace",
+   "mask_token": "<mask>",
+   "model_max_length": 512,
+   "pad_token": "<pad>",
+   "sep_token": "</s>",
+   "tokenizer_class": "RobertaTokenizer",
+   "trim_offsets": true,
+   "unk_token": "<unk>"
+ }
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a5a366126646727a930dbe015ab53e6516163404cdd90b05937c25c724bfa3ff
+ size 4728
vocab.json ADDED
The diff for this file is too large to render. See raw diff