MazenAmria committed
Commit 511dc51
1 Parent(s): d5e8927

Update README.md

Files changed (1): README.md (+34, -3)
README.md CHANGED
@@ -1,11 +1,25 @@
 ---
+license: apache-2.0
 tags:
 - generated_from_trainer
 datasets:
 - cifar100
+metrics:
+- accuracy
 model-index:
 - name: swin-tiny-finetuned-cifar100
-  results: []
+  results:
+  - task:
+      name: Image Classification
+      type: image-classification
+    dataset:
+      name: cifar100
+      type: cifar100
+      args: cifar100
+    metrics:
+    - name: Accuracy
+      type: accuracy
+      value: 0.8735
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -13,7 +27,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # swin-tiny-finetuned-cifar100
 
-This model was trained from scratch on the cifar100 dataset.
+This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the cifar100 dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.4223
+- Accuracy: 0.8735
 
 ## Model description
 
@@ -41,7 +58,21 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
-- num_epochs: 20
+- num_epochs: 20 (with early stopping)
+
+### Training results
+
+| Training Loss | Epoch | Step | Accuracy | Validation Loss |
+|:-------------:|:-----:|:----:|:--------:|:---------------:|
+| 0.6439        | 1.0   | 781  | 0.8138   | 0.6126          |
+| 0.6222        | 2.0   | 1562 | 0.8393   | 0.5094          |
+| 0.2912        | 3.0   | 2343 | 0.8610   | 0.4452          |
+| 0.2234        | 4.0   | 3124 | 0.8679   | 0.4330          |
+| 0.1210        | 5.0   | 3905 | 0.8735   | 0.4223          |
+| 0.2589        | 6.0   | 4686 | 0.8622   | 0.4775          |
+| 0.1419        | 7.0   | 5467 | 0.8642   | 0.4900          |
+| 0.1513        | 8.0   | 6248 | 0.8667   | 0.4956          |
+
 
 ### Framework versions
 
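
For readers of the updated card, here is a minimal inference sketch. The repo id `MazenAmria/swin-tiny-finetuned-cifar100` is an assumption inferred from the commit author and the model name in the card, not something stated in the diff; adjust it to wherever the checkpoint is actually hosted.

```python
from transformers import pipeline

# NOTE: the repo id below is an assumption (commit author + model name);
# point it at the actual location of the fine-tuned checkpoint.
classifier = pipeline(
    "image-classification",
    model="MazenAmria/swin-tiny-finetuned-cifar100",
)

# "example.png" is a placeholder path to any local image.
for pred in classifier("example.png", top_k=5):
    print(f"{pred['label']}: {pred['score']:.4f}")
```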
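Below is a hedged sketch of a `Trainer` setup consistent with the hyperparameters and early stopping listed in the diff. The learning rate, batch sizes, early-stopping patience, and the exact preprocessing are not part of this commit, so those values are placeholders rather than the author's actual configuration.

```python
import numpy as np
import torch
from datasets import load_dataset
from transformers import (
    AutoImageProcessor,
    AutoModelForImageClassification,
    EarlyStoppingCallback,
    Trainer,
    TrainingArguments,
)

base = "microsoft/swin-tiny-patch4-window7-224"
processor = AutoImageProcessor.from_pretrained(base)

# CIFAR-100 from the Hub: columns are "img", "fine_label", "coarse_label".
dataset = load_dataset("cifar100")
labels = dataset["train"].features["fine_label"].names

def transform(batch):
    # Resize/normalize the 32x32 images to the 224x224 input the Swin backbone expects.
    batch["pixel_values"] = [
        processor(img.convert("RGB"), return_tensors="pt")["pixel_values"][0]
        for img in batch["img"]
    ]
    batch["labels"] = batch["fine_label"]
    return batch

dataset = dataset.with_transform(transform)

model = AutoModelForImageClassification.from_pretrained(
    base,
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={name: i for i, name in enumerate(labels)},
    ignore_mismatched_sizes=True,  # swap the 1000-class ImageNet head for a 100-class head
)

def collate_fn(examples):
    return {
        "pixel_values": torch.stack([e["pixel_values"] for e in examples]),
        "labels": torch.tensor([e["labels"] for e in examples]),
    }

def compute_metrics(eval_pred):
    preds = np.argmax(eval_pred.predictions, axis=1)
    return {"accuracy": float((preds == eval_pred.label_ids).mean())}

args = TrainingArguments(
    output_dir="swin-tiny-finetuned-cifar100",
    num_train_epochs=20,              # upper bound; early stopping can end training sooner
    learning_rate=5e-5,               # placeholder: not stated in this diff
    per_device_train_batch_size=64,   # placeholder: not stated in this diff
    per_device_eval_batch_size=64,    # placeholder
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",      # called "eval_strategy" in newer transformers releases
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
    remove_unused_columns=False,      # keep "img" so the on-the-fly transform can see it
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    data_collator=collate_fn,
    compute_metrics=compute_metrics,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],  # patience is an assumption
)

trainer.train()
```

The early-stopping callback requires `load_best_model_at_end=True` and a `metric_for_best_model`, which is why both appear in the training arguments; with per-epoch evaluation this mirrors the behavior implied by the results table, where the best accuracy (0.8735) is reached before the final logged epoch.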