bansilp committed on
Commit
6d24d88
1 Parent(s): 895e818

Model save

README.md CHANGED
@@ -2,7 +2,6 @@
 license: apache-2.0
 base_model: google/vit-base-patch16-224-in21k
 tags:
-- image-classification
 - generated_from_trainer
 datasets:
 - imagefolder
@@ -10,7 +9,20 @@ metrics:
 - accuracy
 model-index:
 - name: xyz
-  results: []
+  results:
+  - task:
+      name: Image Classification
+      type: image-classification
+    dataset:
+      name: imagefolder
+      type: imagefolder
+      config: default
+      split: train
+      args: default
+    metrics:
+    - name: Accuracy
+      type: accuracy
+      value: 0.8972222222222223
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -18,10 +30,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # xyz
 
-This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the mclr dataset.
+This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.3957
-- Accuracy: 0.9157
+- Loss: 0.5568
+- Accuracy: 0.8972
 
 ## Model description
 
@@ -46,16 +58,19 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 10
+- num_epochs: 20
 - mixed_precision_training: Native AMP
 
 ### Training results
 
+| Training Loss | Epoch | Step | Validation Loss | Accuracy |
+|:-------------:|:-----:|:----:|:---------------:|:--------:|
+| 0.0087        | 11.11 | 3000 | 0.5568          | 0.8972   |
 
 
 ### Framework versions
 
 - Transformers 4.35.2
-- Pytorch 2.1.0+cu118
+- Pytorch 2.1.0+cu121
 - Datasets 2.15.0
 - Tokenizers 0.15.0
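The `model-index` entry added to the README's YAML front matter follows the Hugging Face model-card metadata convention (one `task`/`dataset`/`metrics` group per result). As a quick sanity check, the same structure can be mirrored in plain Python; this is only a sketch of the data layout (the dict keys mirror the YAML keys above, nothing here is library-specific):

```python
# Plain-Python mirror of the model-index entry added to README.md's
# YAML front matter in this commit.
model_index = [
    {
        "name": "xyz",
        "results": [
            {
                "task": {"name": "Image Classification",
                         "type": "image-classification"},
                "dataset": {"name": "imagefolder", "type": "imagefolder",
                            "config": "default", "split": "train",
                            "args": "default"},
                "metrics": [
                    {"name": "Accuracy", "type": "accuracy",
                     "value": 0.8972222222222223},
                ],
            }
        ],
    }
]

# The full-precision metric rounds to the 0.8972 reported in the card body.
accuracy = model_index[0]["results"][0]["metrics"][0]["value"]
print(round(accuracy, 4))
```

Keeping the full-precision value in the metadata while rounding in the prose is the usual trade-off: machine consumers of the card get the exact number, human readers get the readable one.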
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c439d599a6750250ae08db3689c1ec36ecbc505a082810334a926b3ca690c5f3
+oid sha256:d6c70879575399cae9bbcd25ab50adfd0eab9c419dabc4533b1d26accaaf65a3
 size 343245508
runs/Dec15_01-32-57_d2a76fcee09b/events.out.tfevents.1702603988.d2a76fcee09b.2614.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:83bf13d8f6df512e5d43fd34b18ccfae0cb68d02313183d59bee8166565db948
+size 89849
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e94dabd68efd57a821c3f90f56e3aff133ef1e54b5fa5915546c77572135079d
+oid sha256:713ddd4e1f2be2cc30159e4d3378961e122b348387f38fa6745a6f389a7adb72
 size 4536
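The `model.safetensors`, TensorBoard event, and `training_args.bin` diffs above are not the binaries themselves but Git LFS pointer files: each records a spec `version`, a `sha256` object id, and the real file `size`, which is why a 343 MB weights update shows as a three-line text change. A minimal sketch of reading such a pointer (`parse_lfs_pointer` is a hypothetical helper for illustration, not part of the git-lfs tooling):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "<key> <value>", space-separated once.
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The updated training_args.bin pointer from this commit.
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:713ddd4e1f2be2cc30159e4d3378961e122b348387f38fa6745a6f389a7adb72\n"
    "size 4536\n"
)

info = parse_lfs_pointer(pointer)
print(info["size"])  # size in bytes of the actual object, not the pointer
```

Note that the `size` stays 343245508 for `model.safetensors` across the commit while the `oid` changes: same tensor shapes, new weights.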