IngeniousArtist committed
Commit 5f2e867
1 Parent(s): a64a51b

update model card README.md

Files changed (1)
  1. README.md +39 -12
README.md CHANGED
@@ -7,7 +7,6 @@ datasets:
 - financial_phrasebank
 metrics:
 - accuracy
-- f1
 model-index:
 - name: distilbert-finance
   results:
@@ -23,10 +22,7 @@ model-index:
     metrics:
     - name: Accuracy
      type: accuracy
-      value: 0.753099173553719
-    - name: F1
-      type: f1
-      value: 0.6955188246097337
+      value: 0.6993801652892562
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -36,9 +32,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the financial_phrasebank dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.4149
-- Accuracy: 0.7531
-- F1: 0.6955
+- Loss: 1.5012
+- Accuracy: 0.6994
 
 ## Model description
 
@@ -57,9 +52,9 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 2e-05
-- train_batch_size: 16
-- eval_batch_size: 16
+- learning_rate: 0.0002
+- train_batch_size: 64
+- eval_batch_size: 64
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
@@ -67,11 +62,43 @@ The following hyperparameters were used during training:
 
 ### Training results
 
+| Training Loss | Epoch | Step | Validation Loss | Accuracy |
+|:-------------:|:-----:|:----:|:---------------:|:--------:|
+| 0.6623 | 0.33 | 20 | 1.4571 | 0.3709 |
+| 0.424 | 0.66 | 40 | 1.0223 | 0.6126 |
+| 0.4282 | 0.98 | 60 | 1.0824 | 0.6343 |
+| 0.3126 | 1.31 | 80 | 0.9320 | 0.6612 |
+| 0.2464 | 1.64 | 100 | 0.8817 | 0.6963 |
+| 0.2677 | 1.97 | 120 | 0.9278 | 0.6994 |
+| 0.1221 | 2.3 | 140 | 1.7929 | 0.6322 |
+| 0.1392 | 2.62 | 160 | 1.0517 | 0.7004 |
+| 0.1982 | 2.95 | 180 | 1.0295 | 0.6684 |
+| 0.1055 | 3.28 | 200 | 0.9028 | 0.7252 |
+| 0.0704 | 3.61 | 220 | 1.6708 | 0.6188 |
+| 0.0962 | 3.93 | 240 | 1.1233 | 0.7200 |
+| 0.0356 | 4.26 | 260 | 1.1614 | 0.7603 |
+| 0.0553 | 4.59 | 280 | 1.1362 | 0.7149 |
+| 0.0665 | 4.92 | 300 | 1.2905 | 0.6529 |
+| 0.0441 | 5.25 | 320 | 1.5180 | 0.6932 |
+| 0.0425 | 5.57 | 340 | 1.3501 | 0.6808 |
+| 0.0267 | 5.9 | 360 | 1.2062 | 0.7159 |
+| 0.0198 | 6.23 | 380 | 1.3289 | 0.7262 |
+| 0.0228 | 6.56 | 400 | 1.6142 | 0.6694 |
+| 0.0209 | 6.89 | 420 | 1.8779 | 0.6147 |
+| 0.0295 | 7.21 | 440 | 1.1260 | 0.6994 |
+| 0.0138 | 7.54 | 460 | 1.4690 | 0.6756 |
+| 0.0091 | 7.87 | 480 | 1.2774 | 0.7035 |
+| 0.0094 | 8.2 | 500 | 1.8177 | 0.6384 |
+| 0.0075 | 8.52 | 520 | 1.3794 | 0.7004 |
+| 0.0079 | 8.85 | 540 | 1.4167 | 0.6994 |
+| 0.0039 | 9.18 | 560 | 1.4824 | 0.6921 |
+| 0.0023 | 9.51 | 580 | 1.5161 | 0.6932 |
+| 0.005 | 9.84 | 600 | 1.5012 | 0.6994 |
 
 
 ### Framework versions
 
 - Transformers 4.31.0
 - Pytorch 2.0.1+cu117
-- Datasets 2.14.1
+- Datasets 2.14.4
 - Tokenizers 0.13.3
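
For reference, the updated hyperparameters above fit a standard `transformers` Trainer run. Below is a minimal sketch of such a run; only the learning rate, batch sizes, seed, Adam settings, and linear scheduler come from the card, while the dataset config (`sentences_allagree` is a placeholder), the tokenization column, the train/eval split, the epoch count (inferred from the final table row at epoch 9.84), the evaluation interval (the table logs an eval every 20 steps), and the output directory are illustrative assumptions.

```python
# Minimal sketch: maps the hyperparameters listed in the card onto the standard
# transformers Trainer API. Everything marked "assumption" is illustrative and
# not stated in the card.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Assumption: the financial_phrasebank config and the 80/20 split are placeholders;
# the card only names the dataset.
raw = load_dataset("financial_phrasebank", "sentences_allagree")

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    # financial_phrasebank examples carry a "sentence" text column and a "label" class id
    return tokenizer(batch["sentence"], truncation=True)

tokenized = raw["train"].map(tokenize, batched=True)
splits = tokenized.train_test_split(test_size=0.2, seed=42)  # assumption: eval split not stated

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=3,  # financial_phrasebank labels: negative / neutral / positive
)

args = TrainingArguments(
    output_dir="distilbert-finance",   # assumption: output path not stated
    learning_rate=2e-4,                # card: learning_rate 0.0002
    per_device_train_batch_size=64,    # card: train_batch_size 64
    per_device_eval_batch_size=64,     # card: eval_batch_size 64
    seed=42,                           # card: seed 42
    adam_beta1=0.9,                    # card: Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                 # card: epsilon=1e-08
    lr_scheduler_type="linear",        # card: linear scheduler
    num_train_epochs=10,               # inferred: results table ends at epoch 9.84
    evaluation_strategy="steps",       # inferred: table shows an eval every 20 steps
    eval_steps=20,
    logging_steps=20,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=splits["train"],
    eval_dataset=splits["test"],
    tokenizer=tokenizer,  # enables dynamic padding via the default data collator
)
trainer.train()
```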
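And a short usage sketch for the resulting checkpoint. The repo id `IngeniousArtist/distilbert-finance` is an assumption pieced together from the author and the model name in the card, the example sentence is arbitrary, and whether the output labels read as negative/neutral/positive depends on the checkpoint's `id2label` mapping.

```python
# Inference sketch; the repo id below is an assumption (author + model name),
# not something the diff itself states.
from transformers import pipeline

classifier = pipeline("text-classification", model="IngeniousArtist/distilbert-finance")

print(classifier("Operating profit rose to EUR 13.1 mn from EUR 8.7 mn."))
# -> [{'label': ..., 'score': ...}]; labels map to the financial_phrasebank
#    sentiment classes only if the checkpoint's config sets id2label accordingly.
```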