Jzuluaga committed on
Commit
a9b258a
1 Parent(s): c734cd7

updating the repo with the fine-tuned model

README.md ADDED
@@ -0,0 +1,130 @@
+ ---
+ license: apache-2.0
+ tags:
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ - precision
+ - recall
+ - f1
+ model-index:
+ - name: uwb_atcc
+   results: []
+ ---
+
+ # uwb_atcc
+
+ This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the UWB-ATCC corpus, classifying the speaker role of each utterance.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.6191
+ - Accuracy: 0.9103
+ - Precision: 0.9239
+ - Recall: 0.9161
+ - F1: 0.9200
+
+ Per-class report on the evaluation set:
+
+ |              | precision | recall | f1-score | support |
+ |:-------------|:---------:|:------:|:--------:|:-------:|
+ | 0 (`atco`)   | 0.89      | 0.90   | 0.90     | 463     |
+ | 1 (`pilot`)  | 0.92      | 0.92   | 0.92     | 596     |
+ | accuracy     |           |        | 0.91     | 1059    |
+ | macro avg    | 0.91      | 0.91   | 0.91     | 1059    |
+ | weighted avg | 0.91      | 0.91   | 0.91     | 1059    |
+
+
+ ## Model description
+
+ A `BertForSequenceClassification` model (12 layers, hidden size 768, uncased vocabulary of 30,522 tokens) that assigns one of two speaker-role labels to an input utterance: `atco` (0) or `pilot` (1), following the `id2label` mapping in `config.json`.
+
+ ## Intended uses & limitations
+
+ Intended for speaker-role identification in transcribed air-traffic-control communications. Behavior on out-of-domain text is undocumented.
+
+ ## Training and evaluation data
+
+ Fine-tuned on the UWB-ATCC corpus. The evaluation set contains 1,059 utterances: 463 labeled `atco` and 596 labeled `pilot`.
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 32
+ - eval_batch_size: 16
+ - seed: 42
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 64
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 500
+ - training_steps: 3000
+
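As a rough guide to reproducing this setup, the list above maps onto `transformers.TrainingArguments` as sketched below. No training script is included in this commit, so the output directory and the evaluation/logging cadence are assumptions inferred from the results table, not the authors' actual configuration.

```python
from transformers import TrainingArguments

# Minimal sketch of the reported hyperparameters as TrainingArguments.
# output_dir, evaluation_strategy/eval_steps, and logging_steps are
# assumptions inferred from the results table, not taken from the repo.
training_args = TrainingArguments(
    output_dir="uwb_atcc",            # hypothetical path
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=2,    # effective train batch size: 32 * 2 = 64
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=3000,                   # "training_steps" in the list above
    evaluation_strategy="steps",
    eval_steps=500,                   # evaluations appear every 500 steps
    logging_steps=1000,               # training loss is logged every 1000 steps
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the optimizer default.
```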
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
+ | No log        | 3.36  | 500  | 0.2346          | 0.9207   | 0.9197    | 0.9413 | 0.9303 |
+ | 0.2212        | 6.71  | 1000 | 0.3161          | 0.9046   | 0.9260    | 0.9027 | 0.9142 |
+ | 0.2212        | 10.07 | 1500 | 0.4337          | 0.9065   | 0.9191    | 0.9144 | 0.9167 |
+ | 0.0651        | 13.42 | 2000 | 0.4743          | 0.9178   | 0.9249    | 0.9295 | 0.9272 |
+ | 0.0651        | 16.78 | 2500 | 0.5538          | 0.9103   | 0.9196    | 0.9211 | 0.9204 |
+ | 0.0296        | 20.13 | 3000 | 0.6191          | 0.9103   | 0.9239    | 0.9161 | 0.9200 |
+
+ Per-class classification reports for every evaluation step are preserved verbatim in `trainer_state.json` (see below).
+
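The headline precision, recall, and F1 track the class-1 (`pilot`) rows of the per-class reports, which is consistent with binary-averaged scikit-learn metrics plus `classification_report` for the Report field. The training script is not part of this commit, so the following `compute_metrics` is only a plausible reconstruction:

```python
import numpy as np
from sklearn.metrics import (accuracy_score, classification_report,
                             precision_recall_fscore_support)

# Hypothetical compute_metrics -- a reconstruction, not the authors' code.
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # average="binary" scores the positive class (1 = "pilot"), which matches
    # the headline numbers reported above.
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="binary")
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
        "report": classification_report(labels, preds),
    }
```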
+ ### Framework versions
+
+ - Transformers 4.24.0
+ - Pytorch 1.13.0+cu117
+ - Datasets 2.7.0
+ - Tokenizers 0.13.2
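The card stops short of a usage example. A minimal inference sketch follows; the repo id below is a placeholder, so substitute the actual Hub path of this model:

```python
from transformers import pipeline

# Placeholder repo id -- replace with the actual Hub path of this model.
model_id = "Jzuluaga/uwb_atcc"

clf = pipeline("text-classification", model=model_id)

# Each prediction carries one of the two labels from config.json.
print(clf("lufthansa three two one descend to flight level eight zero"))
# -> a list like [{'label': 'atco' or 'pilot', 'score': ...}]
```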
all_results.json ADDED
@@ -0,0 +1,7 @@
+ {
+     "epoch": 20.13,
+     "train_loss": 0.10527635129292806,
+     "train_runtime": 3964.4436,
+     "train_samples_per_second": 48.431,
+     "train_steps_per_second": 0.757
+ }
config.json ADDED
@@ -0,0 +1,35 @@
+ {
+     "_name_or_path": "experiments/results/spk_id/bert-base-uncased/1234/uwb_atcc//",
+     "architectures": [
+         "BertForSequenceClassification"
+     ],
+     "attention_probs_dropout_prob": 0.1,
+     "classifier_dropout": null,
+     "gradient_checkpointing": false,
+     "hidden_act": "gelu",
+     "hidden_dropout_prob": 0.1,
+     "hidden_size": 768,
+     "id2label": {
+         "0": "atco",
+         "1": "pilot"
+     },
+     "initializer_range": 0.02,
+     "intermediate_size": 3072,
+     "label2id": {
+         "atco": 0,
+         "pilot": 1
+     },
+     "layer_norm_eps": 1e-12,
+     "max_position_embeddings": 512,
+     "model_type": "bert",
+     "num_attention_heads": 12,
+     "num_hidden_layers": 12,
+     "pad_token_id": 0,
+     "position_embedding_type": "absolute",
+     "problem_type": "single_label_classification",
+     "torch_dtype": "float32",
+     "transformers_version": "4.24.0",
+     "type_vocab_size": 2,
+     "use_cache": true,
+     "vocab_size": 30522
+ }
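Since `id2label` is what turns raw logits into the `atco`/`pilot` strings returned at inference time, it is worth a quick sanity check. A small sketch (the repo id is again a placeholder):

```python
from transformers import AutoConfig

# Placeholder repo id -- replace with the actual Hub path of this model.
config = AutoConfig.from_pretrained("Jzuluaga/uwb_atcc")

print(config.id2label)    # {0: 'atco', 1: 'pilot'}
print(config.num_labels)  # 2
```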
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5e1125e744c982b578daf74f01675ccfefb5bc72751764b40982f3934a197a4e
+ size 438005109
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+ {
+     "cls_token": "[CLS]",
+     "mask_token": "[MASK]",
+     "pad_token": "[PAD]",
+     "sep_token": "[SEP]",
+     "unk_token": "[UNK]"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,14 @@
+ {
+     "cls_token": "[CLS]",
+     "do_lower_case": true,
+     "mask_token": "[MASK]",
+     "model_max_length": 512,
+     "name_or_path": "experiments/results/spk_id/bert-base-uncased/1234/uwb_atcc//",
+     "pad_token": "[PAD]",
+     "sep_token": "[SEP]",
+     "special_tokens_map_file": null,
+     "strip_accents": null,
+     "tokenize_chinese_chars": true,
+     "tokenizer_class": "BertTokenizer",
+     "unk_token": "[UNK]"
+ }
train_results.json ADDED
@@ -0,0 +1,7 @@
+ {
+     "epoch": 20.13,
+     "train_loss": 0.10527635129292806,
+     "train_runtime": 3964.4436,
+     "train_samples_per_second": 48.431,
+     "train_steps_per_second": 0.757
+ }
trainer_state.json ADDED
@@ -0,0 +1,121 @@
+ {
+     "best_metric": null,
+     "best_model_checkpoint": null,
+     "epoch": 20.13422818791946,
+     "global_step": 3000,
+     "is_hyper_param_search": false,
+     "is_local_process_zero": true,
+     "is_world_process_zero": true,
+     "log_history": [
+         {
+             "epoch": 3.36,
+             "eval_accuracy": 0.9206798866855525,
+             "eval_f1": 0.9303482587064678,
+             "eval_loss": 0.2345595508813858,
+             "eval_precision": 0.919672131147541,
+             "eval_recall": 0.9412751677852349,
+             "eval_report": " precision recall f1-score support\n\n 0 0.92 0.89 0.91 463\n 1 0.92 0.94 0.93 596\n\n accuracy 0.92 1059\n macro avg 0.92 0.92 0.92 1059\nweighted avg 0.92 0.92 0.92 1059\n",
+             "eval_runtime": 8.0364,
+             "eval_samples_per_second": 131.775,
+             "eval_steps_per_second": 8.337,
+             "step": 500
+         },
+         {
+             "epoch": 6.71,
+             "learning_rate": 4e-05,
+             "loss": 0.2212,
+             "step": 1000
+         },
+         {
+             "epoch": 6.71,
+             "eval_accuracy": 0.9046270066100094,
+             "eval_f1": 0.9141886151231945,
+             "eval_loss": 0.31608325242996216,
+             "eval_precision": 0.9259896729776248,
+             "eval_recall": 0.9026845637583892,
+             "eval_report": " precision recall f1-score support\n\n 0 0.88 0.91 0.89 463\n 1 0.93 0.90 0.91 596\n\n accuracy 0.90 1059\n macro avg 0.90 0.90 0.90 1059\nweighted avg 0.91 0.90 0.90 1059\n",
+             "eval_runtime": 8.0054,
+             "eval_samples_per_second": 132.285,
+             "eval_steps_per_second": 8.369,
+             "step": 1000
+         },
+         {
+             "epoch": 10.07,
+             "eval_accuracy": 0.9065155807365439,
+             "eval_f1": 0.9167367535744324,
+             "eval_loss": 0.43374723196029663,
+             "eval_precision": 0.9190556492411467,
+             "eval_recall": 0.9144295302013423,
+             "eval_report": " precision recall f1-score support\n\n 0 0.89 0.90 0.89 463\n 1 0.92 0.91 0.92 596\n\n accuracy 0.91 1059\n macro avg 0.90 0.91 0.91 1059\nweighted avg 0.91 0.91 0.91 1059\n",
+             "eval_runtime": 8.0154,
+             "eval_samples_per_second": 132.12,
+             "eval_steps_per_second": 8.359,
+             "step": 1500
+         },
+         {
+             "epoch": 13.42,
+             "learning_rate": 2e-05,
+             "loss": 0.0651,
+             "step": 2000
+         },
+         {
+             "epoch": 13.42,
+             "eval_accuracy": 0.9178470254957507,
+             "eval_f1": 0.9271966527196652,
+             "eval_loss": 0.47431105375289917,
+             "eval_precision": 0.9248747913188647,
+             "eval_recall": 0.9295302013422819,
+             "eval_report": " precision recall f1-score support\n\n 0 0.91 0.90 0.91 463\n 1 0.92 0.93 0.93 596\n\n accuracy 0.92 1059\n macro avg 0.92 0.92 0.92 1059\nweighted avg 0.92 0.92 0.92 1059\n",
+             "eval_runtime": 8.0135,
+             "eval_samples_per_second": 132.152,
+             "eval_steps_per_second": 8.361,
+             "step": 2000
+         },
+         {
+             "epoch": 16.78,
+             "eval_accuracy": 0.9102927289896129,
+             "eval_f1": 0.9203688181056161,
+             "eval_loss": 0.5537705421447754,
+             "eval_precision": 0.9195979899497487,
+             "eval_recall": 0.9211409395973155,
+             "eval_report": " precision recall f1-score support\n\n 0 0.90 0.90 0.90 463\n 1 0.92 0.92 0.92 596\n\n accuracy 0.91 1059\n macro avg 0.91 0.91 0.91 1059\nweighted avg 0.91 0.91 0.91 1059\n",
+             "eval_runtime": 8.0263,
+             "eval_samples_per_second": 131.941,
+             "eval_steps_per_second": 8.348,
+             "step": 2500
+         },
+         {
+             "epoch": 20.13,
+             "learning_rate": 0.0,
+             "loss": 0.0296,
+             "step": 3000
+         },
+         {
+             "epoch": 20.13,
+             "eval_accuracy": 0.9102927289896129,
+             "eval_f1": 0.9199663016006739,
+             "eval_loss": 0.6190621256828308,
+             "eval_precision": 0.9238578680203046,
+             "eval_recall": 0.9161073825503355,
+             "eval_report": " precision recall f1-score support\n\n 0 0.89 0.90 0.90 463\n 1 0.92 0.92 0.92 596\n\n accuracy 0.91 1059\n macro avg 0.91 0.91 0.91 1059\nweighted avg 0.91 0.91 0.91 1059\n",
+             "eval_runtime": 8.0249,
+             "eval_samples_per_second": 131.965,
+             "eval_steps_per_second": 8.349,
+             "step": 3000
+         },
+         {
+             "epoch": 20.13,
+             "step": 3000,
+             "total_flos": 5.04436515336192e+16,
+             "train_loss": 0.10527635129292806,
+             "train_runtime": 3964.4436,
+             "train_samples_per_second": 48.431,
+             "train_steps_per_second": 0.757
+         }
+     ],
+     "max_steps": 3000,
+     "num_train_epochs": 21,
+     "total_flos": 5.04436515336192e+16,
+     "trial_name": null,
+     "trial_params": null
+ }
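Because the per-step classification reports live only in this file, it can be handy to pull the evaluation history out programmatically. A small sketch, assuming `trainer_state.json` has been downloaded locally:

```python
import json

# Assumes trainer_state.json sits next to this script.
with open("trainer_state.json") as f:
    state = json.load(f)

# Print loss and F1 for every evaluation entry in the training log.
for entry in state["log_history"]:
    if "eval_f1" in entry:
        print(f"step {entry['step']:>4}: "
              f"eval_loss={entry['eval_loss']:.4f}  "
              f"eval_f1={entry['eval_f1']:.4f}")
```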
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:766182c45bbaddf86724696e93caca89d45786b72cab46c2a9020624460ca63e
+ size 3451
vocab.txt ADDED
The diff for this file is too large to render. See raw diff