Jsevisal committed on
Commit
f03eddd
1 Parent(s): aac8e9c

update model card README.md

Files changed (1)
  1. README.md +84 -0
README.md ADDED
@@ -0,0 +1,84 @@
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: balanced-augmented-bert-large-gest-pred-seqeval-partialmatch-2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# balanced-augmented-bert-large-gest-pred-seqeval-partialmatch-2

This model is a fine-tuned version of [bert-large-cased](https://huggingface.co/bert-large-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5066
- Precision: 0.9246
- Recall: 0.9232
- F1: 0.9156
- Accuracy: 0.9045

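The card does not include usage instructions, so the following is only a minimal, untested sketch of how the checkpoint could be loaded for token classification with Hugging Face Transformers. The repository id `Jsevisal/balanced-augmented-bert-large-gest-pred-seqeval-partialmatch-2` and the example sentence are assumptions, and the gesture label set is not documented here.

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

# Assumed repository id; adjust if the checkpoint lives under a different namespace.
model_id = "Jsevisal/balanced-augmented-bert-large-gest-pred-seqeval-partialmatch-2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# Group sub-word pieces back into word-level tag spans.
gesture_tagger = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
)

print(gesture_tagger("Could you hand me that book over there?"))
```
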
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20

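As a rough illustration only, the hyperparameters above correspond to a standard Hugging Face `Trainer` setup along the lines of the sketch below. The tokenized datasets, label list, tokenizer, and `compute_metrics` function are placeholders, not the author's actual training code.

```python
from transformers import (
    AutoModelForTokenClassification,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

# Placeholders: label_list, train_ds, eval_ds, tokenizer, and compute_metrics
# must come from your own preprocessing; they are not documented in this card.
model = AutoModelForTokenClassification.from_pretrained(
    "bert-large-cased", num_labels=len(label_list)
)

args = TrainingArguments(
    output_dir="balanced-augmented-bert-large-gest-pred-seqeval-partialmatch-2",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=20,
    seed=42,
    lr_scheduler_type="linear",     # linear decay, as listed above
    evaluation_strategy="epoch",    # the results table reports one evaluation per epoch
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default optimizer setting.
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,
    data_collator=DataCollatorForTokenClassification(tokenizer),
    compute_metrics=compute_metrics,  # see the seqeval sketch after the results table
)
trainer.train()
```
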
### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 2.9285        | 1.0   | 52   | 2.4277          | 0.2289    | 0.1482 | 0.1320 | 0.3077   |
| 1.9821        | 2.0   | 104  | 1.6710          | 0.5805    | 0.4849 | 0.4419 | 0.5576   |
| 1.3372        | 3.0   | 156  | 1.1456          | 0.6591    | 0.6279 | 0.6073 | 0.6703   |
| 0.8553        | 4.0   | 208  | 0.7989          | 0.7950    | 0.7956 | 0.7738 | 0.7864   |
| 0.4964        | 5.0   | 260  | 0.6153          | 0.8422    | 0.8447 | 0.8281 | 0.8378   |
| 0.2985        | 6.0   | 312  | 0.4399          | 0.9124    | 0.8982 | 0.8966 | 0.8814   |
| 0.1825        | 7.0   | 364  | 0.4938          | 0.8936    | 0.9034 | 0.8883 | 0.8829   |
| 0.1178        | 8.0   | 416  | 0.4713          | 0.9087    | 0.9188 | 0.9069 | 0.8912   |
| 0.0812        | 9.0   | 468  | 0.4012          | 0.9108    | 0.9274 | 0.9130 | 0.9045   |
| 0.0579        | 10.0  | 520  | 0.4695          | 0.9120    | 0.9132 | 0.9050 | 0.8942   |
| 0.0345        | 11.0  | 572  | 0.5327          | 0.9196    | 0.9165 | 0.9083 | 0.8976   |
| 0.0309        | 12.0  | 624  | 0.5243          | 0.9273    | 0.9207 | 0.9146 | 0.9025   |
| 0.0234        | 13.0  | 676  | 0.5089          | 0.9271    | 0.9243 | 0.9165 | 0.8996   |
| 0.0175        | 14.0  | 728  | 0.4750          | 0.9284    | 0.9258 | 0.9190 | 0.9059   |
| 0.015         | 15.0  | 780  | 0.4891          | 0.9310    | 0.9277 | 0.9210 | 0.9079   |
| 0.0109        | 16.0  | 832  | 0.5126          | 0.9240    | 0.9222 | 0.9153 | 0.9045   |
| 0.0085        | 17.0  | 884  | 0.4512          | 0.9320    | 0.9315 | 0.9246 | 0.9123   |
| 0.0077        | 18.0  | 936  | 0.5363          | 0.9241    | 0.9226 | 0.9149 | 0.9035   |
| 0.0058        | 19.0  | 988  | 0.5033          | 0.9246    | 0.9232 | 0.9156 | 0.9045   |
| 0.0062        | 20.0  | 1040 | 0.5066          | 0.9246    | 0.9232 | 0.9156 | 0.9045   |

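For context, the Precision, Recall, F1, and Accuracy columns in cards generated this way are typically the `overall_*` scores returned by seqeval. The sketch below shows one common `compute_metrics` implementation under that assumption; the partial-match evaluation hinted at by the model name is not reproduced here, and `label_list` is a placeholder for the undocumented gesture tag set.

```python
import numpy as np
import evaluate

seqeval = evaluate.load("seqeval")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # Drop positions labeled -100 (padding and special tokens) before scoring.
    true_predictions = [
        [label_list[p] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    results = seqeval.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```
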
### Framework versions

- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2