---
license: apache-2.0
base_model: google/vit-large-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: Adam_ViTL-16_224-2e-4-batch_16_epoch_4_classes_24
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9683908045977011
---

# Adam_ViTL-16_224-2e-4-batch_16_epoch_4_classes_24

This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1211
- Accuracy: 0.9684

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6934 | 0.07 | 100 | 0.4138 | 0.8477 |
| 0.4815 | 0.14 | 200 | 0.7227 | 0.8103 |
| 0.3952 | 0.21 | 300 | 0.5867 | 0.8491 |
| 0.6095 | 0.28 | 400 | 0.5975 | 0.8448 |
| 0.3448 | 0.35 | 500 | 0.4000 | 0.8721 |
| 0.2604 | 0.42 | 600 | 0.3335 | 0.9080 |
| 0.3734 | 0.49 | 700 | 0.4264 | 0.8750 |
| 0.3074 | 0.56 | 800 | 0.3634 | 0.8908 |
| 0.3120 | 0.63 | 900 | 0.4347 | 0.8750 |
| 0.1076 | 0.70 | 1000 | 0.3203 | 0.9052 |
| 0.2001 | 0.77 | 1100 | 0.2668 | 0.9224 |
| 0.0507 | 0.84 | 1200 | 0.2265 | 0.9353 |
| 0.0767 | 0.91 | 1300 | 0.2797 | 0.9239 |
| 0.3201 | 0.97 | 1400 | 0.2977 | 0.9109 |
| 0.0293 | 1.04 | 1500 | 0.2849 | 0.9239 |
| 0.0353 | 1.11 | 1600 | 0.2918 | 0.9339 |
| 0.0787 | 1.18 | 1700 | 0.3012 | 0.9253 |
| 0.0749 | 1.25 | 1800 | 0.2383 | 0.9454 |
| 0.0233 | 1.32 | 1900 | 0.3272 | 0.9152 |
| 0.1635 | 1.39 | 2000 | 0.2857 | 0.9124 |
| 0.0586 | 1.46 | 2100 | 0.3785 | 0.9109 |
| 0.0103 | 1.53 | 2200 | 0.2032 | 0.9468 |
| 0.0082 | 1.60 | 2300 | 0.2091 | 0.9397 |
| 0.0695 | 1.67 | 2400 | 0.1739 | 0.9526 |
| 0.0253 | 1.74 | 2500 | 0.2056 | 0.9511 |
| 0.0648 | 1.81 | 2600 | 0.1803 | 0.9526 |
| 0.0286 | 1.88 | 2700 | 0.2018 | 0.9440 |
| 0.0057 | 1.95 | 2800 | 0.2332 | 0.9483 |
| 0.0056 | 2.02 | 2900 | 0.3459 | 0.9267 |
| 0.0111 | 2.09 | 3000 | 0.1954 | 0.9540 |
| 0.0001 | 2.16 | 3100 | 0.1586 | 0.9626 |
| 0.0059 | 2.23 | 3200 | 0.1716 | 0.9526 |
| 0.0063 | 2.30 | 3300 | 0.1548 | 0.9612 |
| 0.0003 | 2.37 | 3400 | 0.1813 | 0.9569 |
| 0.0006 | 2.44 | 3500 | 0.1339 | 0.9626 |
| 0.0004 | 2.51 | 3600 | 0.1492 | 0.9583 |
| 0.0004 | 2.58 | 3700 | 0.1238 | 0.9698 |
| 0.0001 | 2.65 | 3800 | 0.1156 | 0.9713 |
| 0.0001 | 2.72 | 3900 | 0.1272 | 0.9684 |
| 0.0000 | 2.79 | 4000 | 0.1303 | 0.9698 |
| 0.0001 | 2.86 | 4100 | 0.1269 | 0.9684 |
| 0.0001 | 2.92 | 4200 | 0.1209 | 0.9684 |
| 0.0000 | 2.99 | 4300 | 0.1211 | 0.9684 |

### Framework versions

- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
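
## Inference example

A minimal sketch of how a ViT image-classification checkpoint like this one is typically loaded with `transformers`. The repository id and image path below are placeholders, since the hub location of this model is not stated in the card.

```python
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Placeholder repo id: substitute the actual hub id of this checkpoint.
model_id = "your-username/Adam_ViTL-16_224-2e-4-batch_16_epoch_4_classes_24"

processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("example.jpg").convert("RGB")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")

logits = model(**inputs).logits
predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])
```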
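
## Reproducing the training setup

A sketch of a `Trainer` configuration matching the hyperparameters listed above, under assumptions the card does not document: the data directory and validation split are hypothetical, the optimizer relies on the `Trainer` default (whose betas and epsilon match the listed values), and `eval_steps=100` is inferred from the evaluation cadence in the results table.

```python
import numpy as np
import torch
from datasets import load_dataset
from transformers import (
    AutoImageProcessor,
    AutoModelForImageClassification,
    Trainer,
    TrainingArguments,
)

# Assumed layout: one sub-folder per class under data_dir (24 classes here).
dataset = load_dataset("imagefolder", data_dir="path/to/images")
labels = dataset["train"].features["label"].names

processor = AutoImageProcessor.from_pretrained("google/vit-large-patch16-224")
model = AutoModelForImageClassification.from_pretrained(
    "google/vit-large-patch16-224",
    num_labels=len(labels),
    ignore_mismatched_sizes=True,  # swap the 1000-class head for a fresh one
)

def transform(batch):
    # Resize and normalize images to the 224x224 input the checkpoint expects.
    inputs = processor([img.convert("RGB") for img in batch["image"]], return_tensors="pt")
    inputs["label"] = batch["label"]
    return inputs

dataset = dataset.with_transform(transform)

def collate_fn(examples):
    return {
        "pixel_values": torch.stack([ex["pixel_values"] for ex in examples]),
        "labels": torch.tensor([ex["label"] for ex in examples]),
    }

def compute_metrics(eval_pred):
    predictions = np.argmax(eval_pred.predictions, axis=1)
    return {"accuracy": float((predictions == eval_pred.label_ids).mean())}

args = TrainingArguments(
    output_dir="Adam_ViTL-16_224-2e-4-batch_16_epoch_4_classes_24",
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    seed=42,
    lr_scheduler_type="linear",
    fp16=True,  # "Native AMP" mixed precision
    evaluation_strategy="steps",
    eval_steps=100,  # inferred from the 100-step cadence in the results table
    remove_unused_columns=False,
)

trainer = Trainer(
    model=model,
    args=args,
    data_collator=collate_fn,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],  # assumes a validation split exists
    compute_metrics=compute_metrics,
)
trainer.train()
```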