SoulPerforms committed on
Commit
d7e9ba9
1 Parent(s): 248626f

End of training

README.md CHANGED

@@ -22,7 +22,7 @@ model-index:
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 0.50625
+      value: 0.51875
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,8 +32,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.3062
-- Accuracy: 0.5062
+- Loss: 1.2429
+- Accuracy: 0.5188
 
 ## Model description
 
@@ -52,74 +52,58 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 3e-05
+- learning_rate: 1e-05
 - train_batch_size: 8
 - eval_batch_size: 8
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 35
+- num_epochs: 50
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 2.0442 | 0.62 | 50 | 1.9856 | 0.2875 |
-| 1.8729 | 1.25 | 100 | 1.7608 | 0.3937 |
-| 1.6329 | 1.88 | 150 | 1.6241 | 0.35 |
-| 1.5149 | 2.5 | 200 | 1.5310 | 0.4688 |
-| 1.411 | 3.12 | 250 | 1.4633 | 0.4562 |
-| 1.338 | 3.75 | 300 | 1.4239 | 0.4875 |
-| 1.2318 | 4.38 | 350 | 1.3942 | 0.475 |
-| 1.1852 | 5.0 | 400 | 1.3556 | 0.5125 |
-| 1.1113 | 5.62 | 450 | 1.3052 | 0.5188 |
-| 1.0281 | 6.25 | 500 | 1.2552 | 0.5875 |
-| 0.9395 | 6.88 | 550 | 1.2999 | 0.5188 |
-| 0.9066 | 7.5 | 600 | 1.2596 | 0.5687 |
-| 0.8555 | 8.12 | 650 | 1.2447 | 0.6 |
-| 0.7663 | 8.75 | 700 | 1.2201 | 0.5687 |
-| 0.6585 | 9.38 | 750 | 1.2271 | 0.5563 |
-| 0.6177 | 10.0 | 800 | 1.1780 | 0.5375 |
-| 0.5816 | 10.62 | 850 | 1.2330 | 0.5687 |
-| 0.5314 | 11.25 | 900 | 1.2980 | 0.525 |
-| 0.5697 | 11.88 | 950 | 1.2925 | 0.5625 |
-| 0.5052 | 12.5 | 1000 | 1.2276 | 0.5625 |
-| 0.4633 | 13.12 | 1050 | 1.3142 | 0.5437 |
-| 0.3861 | 13.75 | 1100 | 1.2919 | 0.5687 |
-| 0.3902 | 14.38 | 1150 | 1.2494 | 0.6 |
-| 0.4023 | 15.0 | 1200 | 1.2697 | 0.575 |
-| 0.3407 | 15.62 | 1250 | 1.2314 | 0.6125 |
-| 0.311 | 16.25 | 1300 | 1.2066 | 0.6188 |
-| 0.3045 | 16.88 | 1350 | 1.3879 | 0.55 |
-| 0.3384 | 17.5 | 1400 | 1.2529 | 0.6125 |
-| 0.2802 | 18.12 | 1450 | 1.3187 | 0.575 |
-| 0.2752 | 18.75 | 1500 | 1.3504 | 0.5625 |
-| 0.2279 | 19.38 | 1550 | 1.3941 | 0.5813 |
-| 0.3122 | 20.0 | 1600 | 1.4036 | 0.575 |
-| 0.2336 | 20.62 | 1650 | 1.3033 | 0.6 |
-| 0.2381 | 21.25 | 1700 | 1.2618 | 0.6062 |
-| 0.1913 | 21.88 | 1750 | 1.2826 | 0.6188 |
-| 0.2335 | 22.5 | 1800 | 1.4944 | 0.5563 |
-| 0.2416 | 23.12 | 1850 | 1.3653 | 0.575 |
-| 0.2656 | 23.75 | 1900 | 1.5085 | 0.5437 |
-| 0.1911 | 24.38 | 1950 | 1.4233 | 0.5625 |
-| 0.2286 | 25.0 | 2000 | 1.5085 | 0.5687 |
-| 0.1898 | 25.62 | 2050 | 1.3486 | 0.5813 |
-| 0.225 | 26.25 | 2100 | 1.3076 | 0.6 |
-| 0.2104 | 26.88 | 2150 | 1.3709 | 0.5813 |
-| 0.1926 | 27.5 | 2200 | 1.4087 | 0.5875 |
-| 0.2011 | 28.12 | 2250 | 1.4502 | 0.5875 |
-| 0.1952 | 28.75 | 2300 | 1.2916 | 0.6125 |
-| 0.2423 | 29.38 | 2350 | 1.2257 | 0.65 |
-| 0.17 | 30.0 | 2400 | 1.4231 | 0.5813 |
-| 0.1432 | 30.62 | 2450 | 1.4254 | 0.6062 |
-| 0.1901 | 31.25 | 2500 | 1.4495 | 0.5875 |
-| 0.1782 | 31.88 | 2550 | 1.4284 | 0.5687 |
-| 0.1817 | 32.5 | 2600 | 1.4890 | 0.5687 |
-| 0.2076 | 33.12 | 2650 | 1.4242 | 0.5938 |
-| 0.1198 | 33.75 | 2700 | 1.4578 | 0.5875 |
-| 0.174 | 34.38 | 2750 | 1.3860 | 0.5938 |
-| 0.179 | 35.0 | 2800 | 1.4317 | 0.5813 |
+| 2.026 | 1.25 | 100 | 2.0071 | 0.275 |
+| 1.8882 | 2.5 | 200 | 1.8921 | 0.3625 |
+| 1.7186 | 3.75 | 300 | 1.7326 | 0.4188 |
+| 1.5892 | 5.0 | 400 | 1.6242 | 0.475 |
+| 1.4942 | 6.25 | 500 | 1.5443 | 0.5125 |
+| 1.3825 | 7.5 | 600 | 1.4763 | 0.5062 |
+| 1.3084 | 8.75 | 700 | 1.4554 | 0.4938 |
+| 1.2388 | 10.0 | 800 | 1.4057 | 0.525 |
+| 1.1519 | 11.25 | 900 | 1.3756 | 0.4938 |
+| 1.1054 | 12.5 | 1000 | 1.3604 | 0.4875 |
+| 1.0605 | 13.75 | 1100 | 1.3597 | 0.4938 |
+| 1.016 | 15.0 | 1200 | 1.3370 | 0.4938 |
+| 0.9601 | 16.25 | 1300 | 1.2981 | 0.4938 |
+| 0.8445 | 17.5 | 1400 | 1.2420 | 0.5563 |
+| 0.8514 | 18.75 | 1500 | 1.2485 | 0.5625 |
+| 0.7899 | 20.0 | 1600 | 1.2861 | 0.4875 |
+| 0.7459 | 21.25 | 1700 | 1.2860 | 0.4875 |
+| 0.6917 | 22.5 | 1800 | 1.2335 | 0.5813 |
+| 0.6864 | 23.75 | 1900 | 1.2726 | 0.5437 |
+| 0.6414 | 25.0 | 2000 | 1.2215 | 0.5375 |
+| 0.5583 | 26.25 | 2100 | 1.2756 | 0.5312 |
+| 0.597 | 27.5 | 2200 | 1.2314 | 0.5375 |
+| 0.5654 | 28.75 | 2300 | 1.3791 | 0.5125 |
+| 0.5798 | 30.0 | 2400 | 1.1890 | 0.5687 |
+| 0.5247 | 31.25 | 2500 | 1.2440 | 0.5687 |
+| 0.5099 | 32.5 | 2600 | 1.2787 | 0.5625 |
+| 0.496 | 33.75 | 2700 | 1.2628 | 0.55 |
+| 0.479 | 35.0 | 2800 | 1.3420 | 0.4875 |
+| 0.4685 | 36.25 | 2900 | 1.2817 | 0.5563 |
+| 0.4375 | 37.5 | 3000 | 1.3122 | 0.525 |
+| 0.4314 | 38.75 | 3100 | 1.1791 | 0.5563 |
+| 0.4174 | 40.0 | 3200 | 1.2322 | 0.55 |
+| 0.4019 | 41.25 | 3300 | 1.3871 | 0.5125 |
+| 0.3738 | 42.5 | 3400 | 1.2854 | 0.5312 |
+| 0.3938 | 43.75 | 3500 | 1.3057 | 0.5375 |
+| 0.369 | 45.0 | 3600 | 1.2792 | 0.5437 |
+| 0.3768 | 46.25 | 3700 | 1.2761 | 0.5625 |
+| 0.3202 | 47.5 | 3800 | 1.2704 | 0.5375 |
+| 0.3859 | 48.75 | 3900 | 1.2746 | 0.5312 |
+| 0.3689 | 50.0 | 4000 | 1.3306 | 0.5563 |
 
 
 ### Framework versions
@@ -127,4 +111,4 @@ The following hyperparameters were used during training:
 - Transformers 4.35.2
 - Pytorch 2.1.0+cu121
 - Datasets 2.17.0
-- Tokenizers 0.15.1
+- Tokenizers 0.15.2
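
The diff above records the updated hyperparameters but not the script that produced them. As a rough sketch only, the following shows how these values would typically be wired into `TrainingArguments`/`Trainer` with the Transformers 4.35-era API; the dataset path, split names, and `output_dir` are placeholders and are not part of this commit.

```python
import numpy as np
import torch
from datasets import load_dataset
from transformers import (
    AutoImageProcessor,
    AutoModelForImageClassification,
    Trainer,
    TrainingArguments,
)

base = "google/vit-base-patch16-224-in21k"
dataset = load_dataset("imagefolder", data_dir="data")  # hypothetical local folder
processor = AutoImageProcessor.from_pretrained(base)
labels = dataset["train"].features["label"].names

model = AutoModelForImageClassification.from_pretrained(
    base,
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={name: i for i, name in enumerate(labels)},
)

def transform(batch):
    # Resize and normalize PIL images to ViT's 224x224 input format.
    inputs = processor([img.convert("RGB") for img in batch["image"]], return_tensors="pt")
    inputs["labels"] = batch["label"]
    return inputs

dataset = dataset.with_transform(transform)

def collate_fn(examples):
    # Stack per-example tensors into a batch for the model.
    return {
        "pixel_values": torch.stack([ex["pixel_values"] for ex in examples]),
        "labels": torch.tensor([ex["labels"] for ex in examples]),
    }

def compute_metrics(eval_pred):
    preds = np.argmax(eval_pred.predictions, axis=-1)
    return {"accuracy": float((preds == eval_pred.label_ids).mean())}

args = TrainingArguments(
    output_dir="vit-finetune",       # hypothetical output directory
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    evaluation_strategy="steps",
    eval_steps=100,                  # matches the 100-step cadence in the results table
    logging_steps=100,
    remove_unused_columns=False,     # keep the raw "image" column for the transform
    # Trainer's default AdamW already uses betas=(0.9, 0.999) and eps=1e-08,
    # matching the optimizer line in the card.
)

trainer = Trainer(
    model=model,
    args=args,
    data_collator=collate_fn,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],  # assumes a "validation" split exists
    compute_metrics=compute_metrics,
)
trainer.train()
```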
model.safetensors CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:cfff156b1a3ebdd5c41e4e7c156470c4277ed73031cf9f642bd9f3674a282eae
+oid sha256:4e83111089b0424cf5f01f392da46738d4e908d6ec2926333988e8301b726ba9
 size 343242432
runs/Feb16_02-44-04_e2d94e137fbf/events.out.tfevents.1708052852.e2d94e137fbf.2207.1 ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fd76211c4e2be433b34880e258f5de1d76c849eb467ed8eab491abf417e072fe
+size 411
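
The `model.safetensors` entry above is a Git LFS pointer (sha256 and size), not the weights themselves; the updated file holds the fine-tuned ViT checkpoint. A minimal way to try it, assuming a local clone of this commit or a Hub repo id (the identifier below is a placeholder, not named anywhere in the diff):

```python
from PIL import Image
from transformers import pipeline

# Placeholder model id/path: point this at the repo containing this commit.
classifier = pipeline("image-classification", model="SoulPerforms/vit-finetune")

image = Image.open("example.jpg")   # any RGB image
print(classifier(image, top_k=3))   # [{'label': ..., 'score': ...}, ...]
```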