agustin228 committed on
Commit cdd628e
1 Parent(s): 2b90ee4

End of training

Files changed (1): README.md (+16 −18)
README.md CHANGED
@@ -1,10 +1,10 @@
 ---
 license: apache-2.0
-base_model: google/vit-base-patch16-224-in21k
+base_model: google/vit-base-patch16-224
 tags:
 - generated_from_trainer
 datasets:
-- imagefolder
+- pokemon-classification
 metrics:
 - accuracy
 model-index:
@@ -14,15 +14,15 @@ model-index:
       name: Image Classification
       type: image-classification
     dataset:
-      name: imagefolder
-      type: imagefolder
-      config: default
-      split: train
-      args: default
+      name: pokemon-classification
+      type: pokemon-classification
+      config: full
+      split: train[:4800]
+      args: full
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 0.48125
+      value: 0.8854166666666666
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -30,10 +30,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # image_classification
 
-This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
+This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the pokemon-classification dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.4287
-- Accuracy: 0.4813
+- Loss: 0.8072
+- Accuracy: 0.8854
 
 ## Model description
 
@@ -58,22 +58,20 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 5
+- num_epochs: 3
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| No log        | 1.0   | 40   | 1.7991          | 0.3      |
-| No log        | 2.0   | 80   | 1.5897          | 0.425    |
-| No log        | 3.0   | 120  | 1.5036          | 0.4562   |
-| No log        | 4.0   | 160  | 1.4381          | 0.5125   |
-| No log        | 5.0   | 200  | 1.4394          | 0.4813   |
+| No log        | 1.0   | 240  | 2.0511          | 0.7427   |
+| No log        | 2.0   | 480  | 0.9657          | 0.8792   |
+| 2.3005        | 3.0   | 720  | 0.8118          | 0.8833   |
 
 
 ### Framework versions
 
-- Transformers 4.33.1
+- Transformers 4.33.3
 - Pytorch 2.0.1+cu118
 - Datasets 2.14.5
 - Tokenizers 0.13.3