GuiTap committed
Commit 68a8152
1 Parent(s): 9c3401e

End of training

README.md ADDED
@@ -0,0 +1,129 @@
+ ---
+ license: apache-2.0
+ base_model: bert-base-cased
+ tags:
+ - generated_from_trainer
+ datasets:
+ - harem
+ metrics:
+ - precision
+ - recall
+ - f1
+ - accuracy
+ model-index:
+ - name: bert-base-cased-finetuned-ner
+   results:
+   - task:
+       name: Token Classification
+       type: token-classification
+     dataset:
+       name: harem
+       type: harem
+       config: default
+       split: validation
+       args: default
+     metrics:
+     - name: Precision
+       type: precision
+       value: 0.3251366120218579
+     - name: Recall
+       type: recall
+       value: 0.34097421203438394
+     - name: F1
+       type: f1
+       value: 0.3328671328671328
+     - name: Accuracy
+       type: accuracy
+       value: 0.8684278684278685
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # bert-base-cased-finetuned-ner
+
+ This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the harem dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.5103
+ - Precision: 0.3251
+ - Recall: 0.3410
+ - F1: 0.3329
+ - Accuracy: 0.8684
+
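+ ## How to use
+
+ A minimal inference sketch. The Hub repo id `GuiTap/bert-base-cased-finetuned-ner` is an assumption inferred from this commit's author and the model name, and predictions come back as the generic `LABEL_0` to `LABEL_20` tags from `config.json`, since no readable label names were saved with the checkpoint:
+
+ ```python
+ from transformers import pipeline
+
+ # Repo id below is hypothetical (commit author + model name).
+ ner = pipeline(
+     "token-classification",
+     model="GuiTap/bert-base-cased-finetuned-ner",
+     aggregation_strategy="simple",  # merge word pieces into entity spans
+ )
+
+ # HAREM is a Portuguese NER corpus, so Portuguese input is the natural fit.
+ print(ner("A Maria trabalha na Universidade de Lisboa."))
+ ```
+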
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
+ - learning_rate: 2e-05
+ - train_batch_size: 32
+ - eval_batch_size: 32
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 40
+
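+ These values map onto `transformers.TrainingArguments` roughly as below. This is a sketch, not the original training script: `output_dir` is hypothetical, and the Adam betas/epsilon and the linear schedule listed above are the library defaults.
+
+ ```python
+ from transformers import TrainingArguments
+
+ # Sketch reproducing the hyperparameters listed above; Adam betas/epsilon
+ # and the linear LR schedule are the transformers defaults.
+ training_args = TrainingArguments(
+     output_dir="bert-base-cased-finetuned-ner",  # hypothetical
+     learning_rate=2e-05,
+     per_device_train_batch_size=32,
+     per_device_eval_batch_size=32,
+     seed=42,
+     lr_scheduler_type="linear",
+     num_train_epochs=40,
+ )
+ ```
+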
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
+ | No log | 1.0 | 4 | 1.1734 | 0.0 | 0.0 | 0.0 | 0.8083 |
+ | No log | 2.0 | 8 | 0.9781 | 0.0 | 0.0 | 0.0 | 0.8086 |
+ | No log | 3.0 | 12 | 0.8915 | 0.0 | 0.0 | 0.0 | 0.8086 |
+ | No log | 4.0 | 16 | 0.7901 | 0.0 | 0.0 | 0.0 | 0.8086 |
+ | No log | 5.0 | 20 | 0.7202 | 0.0 | 0.0 | 0.0 | 0.8086 |
+ | No log | 6.0 | 24 | 0.6846 | 0.4286 | 0.0344 | 0.0637 | 0.8130 |
+ | No log | 7.0 | 28 | 0.6596 | 0.2014 | 0.0802 | 0.1148 | 0.8306 |
+ | No log | 8.0 | 32 | 0.6355 | 0.1615 | 0.0745 | 0.1020 | 0.8324 |
+ | No log | 9.0 | 36 | 0.6193 | 0.1571 | 0.0946 | 0.1181 | 0.8345 |
+ | No log | 10.0 | 40 | 0.6106 | 0.1295 | 0.1032 | 0.1148 | 0.8335 |
+ | No log | 11.0 | 44 | 0.5919 | 0.1680 | 0.1232 | 0.1421 | 0.8350 |
+ | No log | 12.0 | 48 | 0.5789 | 0.2051 | 0.1375 | 0.1647 | 0.8384 |
+ | No log | 13.0 | 52 | 0.5827 | 0.1611 | 0.1375 | 0.1484 | 0.8355 |
+ | No log | 14.0 | 56 | 0.5638 | 0.2281 | 0.1862 | 0.2050 | 0.8433 |
+ | No log | 15.0 | 60 | 0.5576 | 0.1879 | 0.1691 | 0.1780 | 0.8420 |
+ | No log | 16.0 | 64 | 0.5485 | 0.2110 | 0.1862 | 0.1979 | 0.8456 |
+ | No log | 17.0 | 68 | 0.5479 | 0.2401 | 0.2264 | 0.2330 | 0.8500 |
+ | No log | 18.0 | 72 | 0.5460 | 0.2406 | 0.2378 | 0.2392 | 0.8503 |
+ | No log | 19.0 | 76 | 0.5374 | 0.2531 | 0.2350 | 0.2437 | 0.8542 |
+ | No log | 20.0 | 80 | 0.5365 | 0.2364 | 0.2493 | 0.2427 | 0.8539 |
+ | No log | 21.0 | 84 | 0.5284 | 0.2462 | 0.2350 | 0.2405 | 0.8552 |
+ | No log | 22.0 | 88 | 0.5306 | 0.2812 | 0.2837 | 0.2825 | 0.8601 |
+ | No log | 23.0 | 92 | 0.5262 | 0.2722 | 0.2722 | 0.2722 | 0.8573 |
+ | No log | 24.0 | 96 | 0.5306 | 0.2447 | 0.2665 | 0.2551 | 0.8555 |
+ | No log | 25.0 | 100 | 0.5249 | 0.2785 | 0.3009 | 0.2893 | 0.8594 |
+ | No log | 26.0 | 104 | 0.5201 | 0.2801 | 0.2865 | 0.2833 | 0.8586 |
+ | No log | 27.0 | 108 | 0.5213 | 0.2806 | 0.2894 | 0.2849 | 0.8604 |
+ | No log | 28.0 | 112 | 0.5207 | 0.2732 | 0.2951 | 0.2837 | 0.8612 |
+ | No log | 29.0 | 116 | 0.5144 | 0.3027 | 0.3209 | 0.3115 | 0.8630 |
+ | No log | 30.0 | 120 | 0.5135 | 0.3073 | 0.3381 | 0.3220 | 0.8648 |
+ | No log | 31.0 | 124 | 0.5147 | 0.2953 | 0.3266 | 0.3102 | 0.8651 |
+ | No log | 32.0 | 128 | 0.5121 | 0.2937 | 0.3181 | 0.3054 | 0.8645 |
+ | No log | 33.0 | 132 | 0.5092 | 0.3061 | 0.3324 | 0.3187 | 0.8645 |
+ | No log | 34.0 | 136 | 0.5064 | 0.3342 | 0.3696 | 0.3510 | 0.8677 |
+ | No log | 35.0 | 140 | 0.5056 | 0.3191 | 0.3438 | 0.3310 | 0.8674 |
+ | No log | 36.0 | 144 | 0.5091 | 0.3023 | 0.3352 | 0.3179 | 0.8661 |
+ | No log | 37.0 | 148 | 0.5104 | 0.3061 | 0.3324 | 0.3187 | 0.8658 |
+ | No log | 38.0 | 152 | 0.5100 | 0.3152 | 0.3324 | 0.3236 | 0.8677 |
+ | No log | 39.0 | 156 | 0.5102 | 0.3243 | 0.3410 | 0.3324 | 0.8684 |
+ | No log | 40.0 | 160 | 0.5103 | 0.3251 | 0.3410 | 0.3329 | 0.8684 |
+
+
+ ### Framework versions
+
+ - Transformers 4.32.1
+ - Pytorch 2.0.0
+ - Datasets 2.1.0
+ - Tokenizers 0.13.3
config.json ADDED
@@ -0,0 +1,72 @@
+ {
+   "_name_or_path": "bert-base-cased",
+   "architectures": [
+     "BertForTokenClassification"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "id2label": {
+     "0": "LABEL_0",
+     "1": "LABEL_1",
+     "2": "LABEL_2",
+     "3": "LABEL_3",
+     "4": "LABEL_4",
+     "5": "LABEL_5",
+     "6": "LABEL_6",
+     "7": "LABEL_7",
+     "8": "LABEL_8",
+     "9": "LABEL_9",
+     "10": "LABEL_10",
+     "11": "LABEL_11",
+     "12": "LABEL_12",
+     "13": "LABEL_13",
+     "14": "LABEL_14",
+     "15": "LABEL_15",
+     "16": "LABEL_16",
+     "17": "LABEL_17",
+     "18": "LABEL_18",
+     "19": "LABEL_19",
+     "20": "LABEL_20"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "label2id": {
+     "LABEL_0": 0,
+     "LABEL_1": 1,
+     "LABEL_10": 10,
+     "LABEL_11": 11,
+     "LABEL_12": 12,
+     "LABEL_13": 13,
+     "LABEL_14": 14,
+     "LABEL_15": 15,
+     "LABEL_16": 16,
+     "LABEL_17": 17,
+     "LABEL_18": 18,
+     "LABEL_19": 19,
+     "LABEL_2": 2,
+     "LABEL_20": 20,
+     "LABEL_3": 3,
+     "LABEL_4": 4,
+     "LABEL_5": 5,
+     "LABEL_6": 6,
+     "LABEL_7": 7,
+     "LABEL_8": 8,
+     "LABEL_9": 9
+   },
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.32.1",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 28996
+ }
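
The `id2label` map above ships only the generic `LABEL_0` through `LABEL_20` placeholders. Assuming this checkpoint was trained on the harem `default` config (21 BIO tags over ten entity categories), readable names can be restored at load time by reading them from the dataset itself rather than hardcoding an order. A hedged sketch, with a hypothetical repo id:

```python
from datasets import load_dataset
from transformers import AutoModelForTokenClassification

# Read the 21 tag names from the dataset's ClassLabel feature so their order
# matches the training data (assumption: the "default" harem config was used;
# newer datasets releases may also need trust_remote_code=True here).
labels = load_dataset("harem", "default", split="validation").features["ner_tags"].feature.names

model = AutoModelForTokenClassification.from_pretrained(
    "GuiTap/bert-base-cased-finetuned-ner",  # hypothetical repo id
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)
```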
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:13cf2f59428f73b464d100b607d4406cae4623ec883cbd5ab2886013aed37b85
+ size 431011049
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "cls_token": "[CLS]",
+   "mask_token": "[MASK]",
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "unk_token": "[UNK]"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,13 @@
+ {
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_lower_case": false,
+   "mask_token": "[MASK]",
+   "model_max_length": 512,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:78377e6ad389c8a64613f61d87f5e58a5ecd8815c979820e1e7a531e9fa02759
+ size 4091
vocab.txt ADDED
The diff for this file is too large to render. See raw diff