---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ViTForImageClassification
  results: []
---

# ViTForImageClassification

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the [CIFAR10](https://huggingface.co/datasets/Andron00e/CIFAR10-custom) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1199
- Accuracy: 0.9678

## Model description

[A detailed description of the model architecture can be found here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/vit/modeling_vit.py#L756)

## Training and evaluation data

[CIFAR10](https://huggingface.co/datasets/Andron00e/CIFAR10-custom)

## Training procedure

Straightforward full fine-tuning of all of the model's parameters (no layers were frozen). A sketch of how the hyperparameters below map onto `TrainingArguments` is given at the end of this card.

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 128
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2995        | 0.27  | 100  | 0.3419          | 0.9108   |
| 0.2289        | 0.53  | 200  | 0.2482          | 0.9288   |
| 0.1811        | 0.8   | 300  | 0.2139          | 0.9357   |
| 0.0797        | 1.07  | 400  | 0.1813          | 0.9460   |
| 0.1128        | 1.33  | 500  | 0.1741          | 0.9452   |
| 0.086         | 1.6   | 600  | 0.1659          | 0.9513   |
| 0.0815        | 1.87  | 700  | 0.1468          | 0.9547   |
| 0.048         | 2.13  | 800  | 0.1393          | 0.9592   |
| 0.021         | 2.4   | 900  | 0.1399          | 0.9603   |
| 0.0271        | 2.67  | 1000 | 0.1334          | 0.9642   |
| 0.0231        | 2.93  | 1100 | 0.1228          | 0.9658   |
| 0.0101        | 3.2   | 1200 | 0.1229          | 0.9673   |
| 0.0041        | 3.47  | 1300 | 0.1189          | 0.9675   |
| 0.0043        | 3.73  | 1400 | 0.1165          | 0.9683   |
| 0.0067        | 4.0   | 1500 | 0.1145          | 0.9697   |

### Framework versions

- Transformers 4.34.1
- PyTorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.14.1
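
## How to use

This card does not ship a usage snippet, so below is a minimal sketch of how one might run inference with the fine-tuned checkpoint. The repo id `Andron00e/ViTForImageClassification` and the `img` column name are assumptions (the id is inferred from the dataset owner and the model name above); substitute the actual values for this model.

```python
import torch
from datasets import load_dataset
from transformers import ViTForImageClassification, ViTImageProcessor

# Assumed Hub repo id -- replace with the actual id of this checkpoint.
model_id = "Andron00e/ViTForImageClassification"

processor = ViTImageProcessor.from_pretrained(model_id)
model = ViTForImageClassification.from_pretrained(model_id)
model.eval()

# One test image from the dataset referenced above; the "img" column name
# follows the standard CIFAR10 layout and may differ in the custom version.
image = load_dataset("Andron00e/CIFAR10-custom", split="test")[0]["img"]

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])
```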
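
## Reproducing the fine-tuning setup

The hyperparameters listed above correspond to a standard `Trainer` run. The sketch below shows one way to wire them up; it is not the original training script, and the `img`/`label` column names and `test` split are assumptions about the custom CIFAR10 dataset. The Adam betas and epsilon listed above are the `Trainer` defaults, so they need no explicit flags.

```python
import numpy as np
import torch
from datasets import load_dataset
from transformers import (Trainer, TrainingArguments,
                          ViTForImageClassification, ViTImageProcessor)

base = "google/vit-base-patch16-224-in21k"
processor = ViTImageProcessor.from_pretrained(base)
dataset = load_dataset("Andron00e/CIFAR10-custom")

# "img"/"label" column names are assumptions based on the standard CIFAR10 layout.
labels = dataset["train"].features["label"].names

def transform(batch):
    # Resize/normalize images on the fly to the 224x224 inputs ViT expects.
    inputs = processor(images=batch["img"], return_tensors="pt")
    inputs["label"] = batch["label"]
    return inputs

prepared = dataset.with_transform(transform)

def collate_fn(examples):
    return {
        "pixel_values": torch.stack([e["pixel_values"] for e in examples]),
        "labels": torch.tensor([e["label"] for e in examples]),
    }

def compute_metrics(eval_pred):
    preds = np.argmax(eval_pred.predictions, axis=1)
    return {"accuracy": float((preds == eval_pred.label_ids).mean())}

model = ViTForImageClassification.from_pretrained(
    base,
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={name: i for i, name in enumerate(labels)},
)

args = TrainingArguments(
    output_dir="vit-cifar10",
    learning_rate=2e-4,               # learning_rate: 0.0002
    per_device_train_batch_size=128,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=4,
    evaluation_strategy="steps",
    eval_steps=100,                   # matches the 100-step eval cadence in the table
    remove_unused_columns=False,      # keep "img" so the on-the-fly transform still works
)

trainer = Trainer(
    model=model,
    args=args,
    data_collator=collate_fn,
    train_dataset=prepared["train"],
    eval_dataset=prepared["test"],
    compute_metrics=compute_metrics,
)
trainer.train()
```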