alkzar90 committed on
Commit 227e915
1 Parent(s): fad67aa

update model card README.md

Files changed (1)
  1. README.md +13 -31
README.md CHANGED
@@ -1,10 +1,9 @@
 ---
 license: apache-2.0
 tags:
-- image-classification
 - generated_from_trainer
 datasets:
-- alkzar90/croupier-mtg-dataset
+- imagefolder
 metrics:
 - accuracy
 model-index:
@@ -14,7 +13,7 @@ model-index:
       name: Image Classification
       type: image-classification
     dataset:
-      name: croupier-mtg-dataset
+      name: imagefolder
       type: imagefolder
       config: alkzar90--croupier-mtg-dataset
       split: train
@@ -22,16 +21,7 @@ model-index:
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 0.8058823529411765
-widget:
-- src: https://huggingface.co/alkzar90/croupier-creature-classifier/resolve/main/examples/crusader_peco_peco.png
-  example_title: Crusader-Rangarok-Online
-- src: https://huggingface.co/alkzar90/croupier-creature-classifier/resolve/main/examples/goblin_wow.png
-  example_title: Goblin-WoW
-- src: https://huggingface.co/alkzar90/croupier-creature-classifier/resolve/main/examples/dobby_harry_potter.jpg
-  example_title: Dobby-Harry-Potter
-- src: https://huggingface.co/alkzar90/croupier-creature-classifier/resolve/main/examples/resident_evil_nemesis.jpeg
-  example_title: Nemesis-Resident-Evil
+      value: 0.7058823529411765
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -39,10 +29,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # croupier-creature-classifier
 
-This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the croupier-mtg-dataset dataset.
+This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.6480
-- Accuracy: 0.8059
+- Loss: 0.7184
+- Accuracy: 0.7059
 
 ## Model description
 
@@ -67,31 +57,23 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 15
+- num_epochs: 6
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 0.1967        | 1.1   | 100  | 0.6480          | 0.8059   |
-| 0.1047        | 2.2   | 200  | 0.8703          | 0.7529   |
-| 0.2249        | 3.3   | 300  | 0.9539          | 0.7588   |
-| 0.0984        | 4.4   | 400  | 0.9319          | 0.7529   |
-| 0.086         | 5.49  | 500  | 0.9061          | 0.7706   |
-| 0.1164        | 6.59  | 600  | 0.7493          | 0.8176   |
-| 0.0518        | 7.69  | 700  | 0.8781          | 0.7765   |
-| 0.0458        | 8.79  | 800  | 0.8851          | 0.7824   |
-| 0.0521        | 9.89  | 900  | 0.9448          | 0.7882   |
-| 0.0576        | 10.99 | 1000 | 0.8884          | 0.7824   |
-| 0.0442        | 12.09 | 1100 | 0.8965          | 0.7882   |
-| 0.0254        | 13.19 | 1200 | 0.9140          | 0.7882   |
-| 0.0426        | 14.29 | 1300 | 0.9274          | 0.7882   |
+| 0.8932        | 1.1   | 100  | 0.9914          | 0.6059   |
+| 0.6608        | 2.2   | 200  | 0.8645          | 0.6588   |
+| 0.6084        | 3.3   | 300  | 0.7326          | 0.7294   |
+| 0.5261        | 4.4   | 400  | 0.7684          | 0.6941   |
+| 0.2511        | 5.49  | 500  | 0.7184          | 0.7059   |
 
 
 ### Framework versions
 
-- Transformers 4.21.0
+- Transformers 4.21.1
 - Pytorch 1.12.0+cu113
 - Datasets 2.4.0
 - Tokenizers 0.12.1
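
The updated card describes an image classifier fine-tuned from google/vit-base-patch16-224-in21k. Below is a minimal inference sketch, not part of the commit: it assumes the checkpoint is published as alkzar90/croupier-creature-classifier (the repository the removed widget URLs point to) and that the Transformers and PyTorch versions listed above are installed.

```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned ViT classifier from the Hub.
# Repository id taken from the widget URLs in the previous card revision.
classifier = pipeline(
    task="image-classification",
    model="alkzar90/croupier-creature-classifier",
)

# Any local path or image URL works; this URL is one of the removed widget examples.
url = "https://huggingface.co/alkzar90/croupier-creature-classifier/resolve/main/examples/goblin_wow.png"
for pred in classifier(url):
    print(f"{pred['label']}: {pred['score']:.4f}")
```

The pipeline returns a list of label/score dictionaries, so the loop above prints the predicted creature classes with their confidences.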
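The hyperparameters listed in the card (seed 42, Adam with betas=(0.9,0.999) and epsilon=1e-08, linear scheduler, 6 epochs, native AMP) map onto `transformers.TrainingArguments` roughly as sketched below. This is an illustration rather than the author's training script: the learning rate and batch sizes fall outside the context shown in this diff and are omitted, and `output_dir` is a placeholder.

```python
from transformers import TrainingArguments

# Sketch of the training configuration described in the card.
# learning_rate and batch sizes are omitted because they are not visible in this diff.
training_args = TrainingArguments(
    output_dir="./croupier-creature-classifier",  # placeholder path
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=6,
    fp16=True,  # "Native AMP" mixed-precision training
)
```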