q2-jlbar committed
Commit
87867e8
1 Parent(s): 4be80ca

update model card README.md

Files changed (1)
  1. README.md +9 -9
README.md CHANGED
```diff
@@ -19,7 +19,7 @@ model-index:
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 0.9755555555555555
+      value: 0.9618518518518518
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -29,8 +29,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the image_folder dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.0706
-- Accuracy: 0.9756
+- Loss: 0.1199
+- Accuracy: 0.9619
 
 ## Model description
 
@@ -50,11 +50,11 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 5e-05
-- train_batch_size: 32
-- eval_batch_size: 32
+- train_batch_size: 128
+- eval_batch_size: 128
 - seed: 42
 - gradient_accumulation_steps: 4
-- total_train_batch_size: 128
+- total_train_batch_size: 512
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
@@ -64,9 +64,9 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 0.2292        | 1.0   | 190  | 0.1030          | 0.9704   |
-| 0.1582        | 2.0   | 380  | 0.0802          | 0.9744   |
-| 0.1463        | 3.0   | 570  | 0.0706          | 0.9756   |
+| 0.3627        | 0.99  | 47   | 0.1988          | 0.9389   |
+| 0.2202        | 1.99  | 94   | 0.1280          | 0.9604   |
+| 0.1948        | 2.99  | 141  | 0.1199          | 0.9619   |
 
 
 ### Framework versions
```
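
For reference, the updated hyperparameters imply an effective training batch size of 128 × 4 = 512 (per-device batch size times gradient accumulation steps), which is what the new `total_train_batch_size` reports. Below is a minimal sketch of how these values would map onto `transformers.TrainingArguments`; the output directory is a hypothetical placeholder, and this is not the author's actual training script.

```python
# Sketch only: mirrors the hyperparameters listed in the updated model card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="swin-tiny-finetuned",   # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=128,    # was 32 before this commit
    per_device_eval_batch_size=128,     # was 32 before this commit
    gradient_accumulation_steps=4,      # 128 * 4 = 512 effective train batch size
    num_train_epochs=3,                 # the results table covers 3 epochs
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the TrainingArguments
    # default optimizer configuration, so it needs no explicit arguments.
)
```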