kisa-misa committed
Commit 8f3b3fb · verified · 1 Parent(s): 4468b1c

Model save

Files changed (2):
  1. README.md +35 -71
  2. model.safetensors +1 -1
README.md CHANGED
@@ -22,7 +22,7 @@ model-index:
  metrics:
  - name: Accuracy
    type: accuracy
- value: 0.8829787234042553
+ value: 0.8557692307692307
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,8 +32,8 @@ should probably proofread and complete it, then remove this comment. -->

  This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.4558
- - Accuracy: 0.8830
+ - Loss: 0.4043
+ - Accuracy: 0.8558

  ## Model description

@@ -61,81 +61,45 @@ The following hyperparameters were used during training:
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_ratio: 0.1
- - num_epochs: 500
+ - num_epochs: 30

  ### Training results

  | Training Loss | Epoch | Step | Validation Loss | Accuracy |
  |:-------------:|:-------:|:----:|:---------------:|:--------:|
- | No log | 0.8889 | 6 | 0.4592 | 0.8191 |
- | 0.2992 | 1.9259 | 13 | 0.4546 | 0.8298 |
- | 0.3381 | 2.9630 | 20 | 0.4429 | 0.8191 |
- | 0.3381 | 4.0 | 27 | 0.4367 | 0.8191 |
- | 0.2989 | 4.8889 | 33 | 0.4390 | 0.8191 |
- | 0.3198 | 5.9259 | 40 | 0.4461 | 0.8191 |
- | 0.3198 | 6.9630 | 47 | 0.4534 | 0.8191 |
- | 0.2724 | 8.0 | 54 | 0.4635 | 0.7979 |
- | 0.2775 | 8.8889 | 60 | 0.4414 | 0.8191 |
- | 0.2775 | 9.9259 | 67 | 0.4499 | 0.7979 |
- | 0.2783 | 10.9630 | 74 | 0.4453 | 0.8191 |
- | 0.308 | 12.0 | 81 | 0.4444 | 0.8085 |
- | 0.308 | 12.8889 | 87 | 0.4619 | 0.8085 |
- | 0.3513 | 13.9259 | 94 | 0.4468 | 0.8191 |
- | 0.3091 | 14.9630 | 101 | 0.4597 | 0.8085 |
- | 0.3091 | 16.0 | 108 | 0.4513 | 0.8191 |
- | 0.3391 | 16.8889 | 114 | 0.4395 | 0.8298 |
- | 0.3135 | 17.9259 | 121 | 0.4687 | 0.8191 |
- | 0.3135 | 18.9630 | 128 | 0.4215 | 0.8191 |
- | 0.3278 | 20.0 | 135 | 0.4365 | 0.8085 |
- | 0.3189 | 20.8889 | 141 | 0.4137 | 0.8511 |
- | 0.3189 | 21.9259 | 148 | 0.4653 | 0.8191 |
- | 0.2914 | 22.9630 | 155 | 0.4888 | 0.8085 |
- | 0.3217 | 24.0 | 162 | 0.4933 | 0.7766 |
- | 0.3217 | 24.8889 | 168 | 0.4051 | 0.8191 |
- | 0.3018 | 25.9259 | 175 | 0.5732 | 0.7766 |
- | 0.3078 | 26.9630 | 182 | 0.4401 | 0.8191 |
- | 0.3078 | 28.0 | 189 | 0.4048 | 0.8617 |
- | 0.2848 | 28.8889 | 195 | 0.5970 | 0.7553 |
- | 0.281 | 29.9259 | 202 | 0.4422 | 0.8191 |
- | 0.281 | 30.9630 | 209 | 0.4091 | 0.8404 |
- | 0.2608 | 32.0 | 216 | 0.4415 | 0.8298 |
- | 0.2373 | 32.8889 | 222 | 0.3496 | 0.8617 |
- | 0.2373 | 33.9259 | 229 | 0.4510 | 0.8298 |
- | 0.2712 | 34.9630 | 236 | 0.4498 | 0.8404 |
- | 0.2374 | 36.0 | 243 | 0.5103 | 0.8298 |
- | 0.2374 | 36.8889 | 249 | 0.4311 | 0.8404 |
- | 0.2471 | 37.9259 | 256 | 0.5993 | 0.8085 |
- | 0.2419 | 38.9630 | 263 | 0.5649 | 0.8404 |
- | 0.2511 | 40.0 | 270 | 0.5319 | 0.8298 |
- | 0.2511 | 40.8889 | 276 | 0.5782 | 0.8191 |
- | 0.2184 | 41.9259 | 283 | 0.5105 | 0.8404 |
- | 0.2272 | 42.9630 | 290 | 0.5509 | 0.8404 |
- | 0.2272 | 44.0 | 297 | 0.4216 | 0.8830 |
- | 0.2162 | 44.8889 | 303 | 0.7166 | 0.7872 |
- | 0.2201 | 45.9259 | 310 | 0.6365 | 0.8404 |
- | 0.2201 | 46.9630 | 317 | 0.5059 | 0.8723 |
- | 0.2272 | 48.0 | 324 | 0.4986 | 0.8298 |
- | 0.2561 | 48.8889 | 330 | 0.5835 | 0.8617 |
- | 0.2561 | 49.9259 | 337 | 0.6940 | 0.8191 |
- | 0.2151 | 50.9630 | 344 | 0.5961 | 0.8723 |
- | 0.2024 | 52.0 | 351 | 0.6294 | 0.8404 |
- | 0.2024 | 52.8889 | 357 | 0.6847 | 0.8723 |
- | 0.1881 | 53.9259 | 364 | 0.5811 | 0.8617 |
- | 0.1764 | 54.9630 | 371 | 0.7194 | 0.8404 |
- | 0.1764 | 56.0 | 378 | 0.4744 | 0.8723 |
- | 0.1885 | 56.8889 | 384 | 0.5920 | 0.8404 |
- | 0.1898 | 57.9259 | 391 | 0.4570 | 0.8830 |
- | 0.1898 | 58.9630 | 398 | 0.4141 | 0.8617 |
- | 0.1723 | 60.0 | 405 | 0.5443 | 0.8617 |
- | 0.1816 | 60.8889 | 411 | 0.6426 | 0.8085 |
- | 0.1816 | 61.9259 | 418 | 0.4732 | 0.8511 |
- | 0.1688 | 62.9630 | 425 | 0.5275 | 0.8511 |
- | 0.1498 | 64.0 | 432 | 0.4558 | 0.8830 |
+ | No log | 0.9333 | 7 | 0.6743 | 0.5673 |
+ | 0.6763 | 2.0 | 15 | 0.6166 | 0.6923 |
+ | 0.635 | 2.9333 | 22 | 0.5646 | 0.7404 |
+ | 0.5724 | 4.0 | 30 | 0.5074 | 0.7308 |
+ | 0.5724 | 4.9333 | 37 | 0.4809 | 0.7692 |
+ | 0.527 | 6.0 | 45 | 0.4597 | 0.7692 |
+ | 0.5304 | 6.9333 | 52 | 0.4758 | 0.7596 |
+ | 0.4597 | 8.0 | 60 | 0.4343 | 0.7885 |
+ | 0.4597 | 8.9333 | 67 | 0.4249 | 0.7981 |
+ | 0.4606 | 10.0 | 75 | 0.4236 | 0.7981 |
+ | 0.4286 | 10.9333 | 82 | 0.4055 | 0.8462 |
+ | 0.3857 | 12.0 | 90 | 0.4144 | 0.8269 |
+ | 0.3857 | 12.9333 | 97 | 0.4294 | 0.7981 |
+ | 0.3801 | 14.0 | 105 | 0.4081 | 0.8462 |
+ | 0.3538 | 14.9333 | 112 | 0.4195 | 0.8462 |
+ | 0.3585 | 16.0 | 120 | 0.4069 | 0.8558 |
+ | 0.3585 | 16.9333 | 127 | 0.3971 | 0.8558 |
+ | 0.3258 | 18.0 | 135 | 0.3938 | 0.8654 |
+ | 0.3288 | 18.9333 | 142 | 0.3964 | 0.8462 |
+ | 0.3276 | 20.0 | 150 | 0.4423 | 0.8558 |
+ | 0.3276 | 20.9333 | 157 | 0.4067 | 0.8365 |
+ | 0.317 | 22.0 | 165 | 0.4179 | 0.8654 |
+ | 0.288 | 22.9333 | 172 | 0.3882 | 0.8558 |
+ | 0.2735 | 24.0 | 180 | 0.4215 | 0.8558 |
+ | 0.2735 | 24.9333 | 187 | 0.3972 | 0.8462 |
+ | 0.2805 | 26.0 | 195 | 0.3943 | 0.8558 |
+ | 0.2961 | 26.9333 | 202 | 0.3999 | 0.8558 |
+ | 0.2832 | 28.0 | 210 | 0.4043 | 0.8558 |


  ### Framework versions

- - Transformers 4.40.2
- - Pytorch 2.2.1+cu121
- - Datasets 2.19.1
+ - Transformers 4.41.2
+ - Pytorch 2.3.0+cu121
+ - Datasets 2.19.2
  - Tokenizers 0.19.1
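
The hyperparameters in the updated card map onto a standard `transformers` `Trainer` setup. The sketch below is a hedged reconstruction, not the author's script: only `num_epochs: 30`, the linear scheduler, the 0.1 warmup ratio, and the Adam settings come from the card; the image-folder path, learning rate, batch size, split names, and output directory are illustrative assumptions.

```python
# Hedged reconstruction of a fine-tuning run matching the card's hyperparameters.
# Assumptions (not in the diff): data_dir, learning_rate, batch size, split names.
import numpy as np
from datasets import load_dataset
from transformers import (
    AutoImageProcessor,
    AutoModelForImageClassification,
    Trainer,
    TrainingArguments,
)

checkpoint = "microsoft/swin-tiny-patch4-window7-224"
dataset = load_dataset("imagefolder", data_dir="path/to/images")  # placeholder path
labels = dataset["train"].features["label"].names

processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForImageClassification.from_pretrained(
    checkpoint,
    num_labels=len(labels),
    ignore_mismatched_sizes=True,  # swap the ImageNet head for this task's classes
)

def preprocess(batch):
    # Resize/normalize the PIL images into the pixel_values tensor Swin expects.
    batch["pixel_values"] = processor(
        [img.convert("RGB") for img in batch["image"]], return_tensors="pt"
    )["pixel_values"]
    del batch["image"]  # leave only tensors/labels for the default collator
    return batch

def compute_metrics(eval_pred):
    # Accuracy, the metric reported in the card.
    preds = np.argmax(eval_pred.predictions, axis=1)
    return {"accuracy": (preds == eval_pred.label_ids).mean()}

args = TrainingArguments(
    output_dir="swin-tiny-finetuned",  # placeholder
    num_train_epochs=30,               # from the updated card
    lr_scheduler_type="linear",        # from the card
    warmup_ratio=0.1,                  # from the card
    learning_rate=5e-5,                # assumption: not shown in the diff
    per_device_train_batch_size=16,    # assumption: not shown in the diff
    eval_strategy="epoch",             # Transformers >= 4.41 argument name
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
    remove_unused_columns=False,       # keep the raw "image" column for preprocess()
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"].with_transform(preprocess),
    eval_dataset=dataset["validation"].with_transform(preprocess),  # split name assumed
    compute_metrics=compute_metrics,
)
trainer.train()
```

With per-epoch evaluation the Trainer produces roughly one validation loss/accuracy row per epoch, which is the shape of the table in the card; the exact numbers depend on the values the diff does not show.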
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:818ec182a430863587edfedfb2cd1a009689e1f2db00d624b42575b5ea00ea64
+ oid sha256:ab8f13c7b584a8e93af571b13b85ab1c9441a15b929d87670b5e54bbc1d32395
  size 110342832
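
The model.safetensors entry is a Git LFS pointer: the repository stores only the SHA-256 digest and byte size of the weights, while the blob itself lives in LFS storage. A minimal sketch (the local file path is an assumption) for checking a downloaded copy against the new pointer:

```python
# Verify a downloaded model.safetensors against the LFS pointer in this commit.
import hashlib

EXPECTED_SHA256 = "ab8f13c7b584a8e93af571b13b85ab1c9441a15b929d87670b5e54bbc1d32395"
EXPECTED_SIZE = 110342832  # bytes, from the pointer file

def matches_pointer(path: str) -> bool:
    digest = hashlib.sha256()
    size = 0
    with open(path, "rb") as f:
        # Hash in 1 MiB chunks so the checkpoint never has to fit in memory at once.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
            size += len(chunk)
    return size == EXPECTED_SIZE and digest.hexdigest() == EXPECTED_SHA256

print(matches_pointer("model.safetensors"))  # True for the weights saved in 8f3b3fb
```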