ongkn committed
Commit 362e1bb
1 Parent(s): 9f3618a

Training in progress, step 15

Files changed (5)
  1. .gitattributes +0 -1
  2. README.md +82 -19
  3. model.safetensors +1 -1
  4. pytorch_model.bin +3 -0
  5. training_args.bin +3 -0
.gitattributes CHANGED
@@ -33,4 +33,3 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
- variables/variables.data-00000-of-00001 filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,7 +1,40 @@
  ---
- library_name: keras
+ license: apache-2.0
+ base_model: google/vit-base-patch16-224-in21k
+ tags:
+ - generated_from_trainer
+ datasets:
+ - imagefolder
+ metrics:
+ - accuracy
+ model-index:
+ - name: attraction-classifier
+   results:
+   - task:
+       name: Image Classification
+       type: image-classification
+     dataset:
+       name: imagefolder
+       type: imagefolder
+       config: default
+       split: train
+       args: default
+     metrics:
+     - name: Accuracy
+       type: accuracy
+       value: 0.7955974842767296
  ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # attraction-classifier
+
+ This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.4691
+ - Accuracy: 0.7956
+
  ## Model description

  More information needed
@@ -19,23 +52,53 @@ More information needed
  ### Training hyperparameters

  The following hyperparameters were used during training:
-
- | Hyperparameters | Value |
- | :-- | :-- |
- | name | Adam |
- | weight_decay | None |
- | clipnorm | None |
- | global_clipnorm | None |
- | clipvalue | None |
- | use_ema | False |
- | ema_momentum | 0.99 |
- | ema_overwrite_frequency | None |
- | jit_compile | True |
- | is_legacy_optimizer | False |
- | learning_rate | 0.0010000000474974513 |
- | beta_1 | 0.9 |
- | beta_2 | 0.999 |
- | epsilon | 1e-07 |
- | amsgrad | False |
- | training_precision | float32 |
+ - learning_rate: 5e-05
+ - train_batch_size: 16
+ - eval_batch_size: 16
+ - seed: 69
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 64
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.15
+ - num_epochs: 15
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|
+ | 0.6703 | 0.34 | 15 | 0.6354 | 0.7327 |
+ | 0.5449 | 0.67 | 30 | 0.5836 | 0.7421 |
+ | 0.5407 | 1.01 | 45 | 0.5594 | 0.7421 |
+ | 0.5255 | 1.34 | 60 | 0.5294 | 0.7547 |
+ | 0.5586 | 1.68 | 75 | 0.5171 | 0.7642 |
+ | 0.5438 | 2.01 | 90 | 0.5212 | 0.7704 |
+ | 0.4807 | 2.35 | 105 | 0.5181 | 0.7390 |
+ | 0.6202 | 2.68 | 120 | 0.4972 | 0.7704 |
+ | 0.5021 | 3.02 | 135 | 0.4566 | 0.7987 |
+ | 0.4313 | 3.35 | 150 | 0.4852 | 0.7925 |
+ | 0.3532 | 3.69 | 165 | 0.4378 | 0.8113 |
+ | 0.3577 | 4.02 | 180 | 0.4515 | 0.8019 |
+ | 0.4736 | 4.36 | 195 | 0.4498 | 0.7893 |
+ | 0.3516 | 4.69 | 210 | 0.4408 | 0.8239 |
+ | 0.4437 | 5.03 | 225 | 0.4611 | 0.7799 |
+ | 0.3543 | 5.36 | 240 | 0.4294 | 0.8208 |
+ | 0.4029 | 5.7 | 255 | 0.4155 | 0.8428 |
+ | 0.3808 | 6.03 | 270 | 0.4116 | 0.8302 |
+ | 0.3211 | 6.37 | 285 | 0.4009 | 0.8302 |
+ | 0.2949 | 6.7 | 300 | 0.4321 | 0.8176 |
+ | 0.2663 | 7.04 | 315 | 0.4229 | 0.8396 |
+ | 0.3049 | 7.37 | 330 | 0.4110 | 0.8365 |
+ | 0.1303 | 7.71 | 345 | 0.4288 | 0.8333 |
+ | 0.2079 | 8.04 | 360 | 0.4218 | 0.8208 |
+ | 0.208 | 8.38 | 375 | 0.3908 | 0.8365 |
+ | 0.2067 | 8.72 | 390 | 0.5191 | 0.7862 |
+ | 0.1635 | 9.05 | 405 | 0.4691 | 0.7956 |
+
+ ### Framework versions
+
+ - Transformers 4.35.2
+ - Pytorch 2.0.1+cu117
+ - Datasets 2.15.0
+ - Tokenizers 0.15.0
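The updated card specifies `lr_scheduler_type: cosine` with `lr_scheduler_warmup_ratio: 0.15`: the learning rate ramps up linearly over the first 15% of steps, then decays along a cosine curve. A minimal plain-Python sketch of that schedule (the function name and the 400-step total are illustrative assumptions, not values from this run; `transformers` computes the real schedule internally):

```python
import math

def cosine_lr_with_warmup(step, total_steps, base_lr=5e-05, warmup_ratio=0.15):
    """Sketch of a cosine LR schedule with linear warmup (illustrative only)."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        # Linear warmup: 0 -> base_lr over the first warmup_ratio of training.
        return base_lr * step / max(1, warmup_steps)
    # Cosine decay: base_lr -> 0 over the remaining steps.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

The peak learning rate is hit exactly when warmup ends, which is why early evaluation rows in the table above can look noisier than the later ones.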
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:4709531b391ee40c72112595e4ede3f6e1e4d0c95cd056b7e8268708e44a2062
+ oid sha256:59e4f30337a49b985186331920984dac31c03f861a5c6a844393e5a26b46626a
  size 343223968
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:351d27808aa08147aecde76008b689ed4a92b8db98029a0a5c64c38ccd90b810
+ size 343268717
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:60ace28fa66aaba1ade9846845ad59356f4ece11d0dd685b4c5227f93cd96824
+ size 4155
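The binary files in this commit are stored as Git LFS pointers rather than raw bytes: each tracked file in the repository is just a short text stub with `version`, `oid`, and `size` lines, and the actual weights live in LFS storage. A minimal sketch of reading one such pointer (not the official `git-lfs` implementation), using the `training_args.bin` pointer above as input:

```python
def parse_lfs_pointer(text):
    """Split a Git LFS pointer file into a dict of its key/value lines."""
    fields = {}
    for line in text.strip().splitlines():
        # Each line is "<key> <value>", e.g. "size 4155".
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:60ace28fa66aaba1ade9846845ad59356f4ece11d0dd685b4c5227f93cd96824
size 4155
"""
info = parse_lfs_pointer(pointer)
```

This is why the `model.safetensors` diff above touches only the `oid` line: the new checkpoint has the same byte size, so only the content hash changed.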