---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- food101
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-in21k-food101-24-12
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: food101
      type: food101
      config: default
      split: validation
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9068514851485149
---

# vit-base-patch16-224-in21k-food101-24-12

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3533
- Accuracy: 0.9069

## Model description

The base model is a Vision Transformer (ViT-Base, 16x16 patches, 224x224 input resolution) pre-trained on ImageNet-21k. This checkpoint fine-tunes it for food image classification over the 101 categories of the food101 dataset.

## Intended uses & limitations

The model is intended for classifying food photos into the 101 food101 categories; see the usage sketch at the end of this card. Behavior on non-food images or on food categories outside food101 has not been evaluated.

## Training and evaluation data

The model was trained on the food101 `train` split (75,750 images) and evaluated on the `validation` split (25,250 images); the dataset covers 101 classes with 750 training and 250 validation images each.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 12

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.7927        | 1.0   | 789  | 2.5629          | 0.7693   |
| 1.256         | 2.0   | 1578 | 0.9637          | 0.8583   |
| 0.94          | 3.0   | 2367 | 0.5866          | 0.8816   |
| 0.6693        | 4.0   | 3157 | 0.4752          | 0.8888   |
| 0.6337        | 5.0   | 3946 | 0.4282          | 0.8941   |
| 0.5811        | 6.0   | 4735 | 0.4110          | 0.8949   |
| 0.4661        | 7.0   | 5524 | 0.3875          | 0.8990   |
| 0.4188        | 8.0   | 6314 | 0.3776          | 0.9010   |
| 0.5045        | 9.0   | 7103 | 0.3633          | 0.9049   |
| 0.3437        | 10.0  | 7892 | 0.3611          | 0.9058   |
| 0.3494        | 11.0  | 8681 | 0.3568          | 0.9060   |
| 0.3381        | 12.0  | 9468 | 0.3533          | 0.9069   |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
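
### Reproducing the training configuration

The `generated_from_trainer` tag indicates the model was trained with the `transformers` `Trainer`. The hyperparameters above map onto a `TrainingArguments` configuration roughly as sketched below; this is a hedged reconstruction, not the original training script, and the `output_dir` and per-epoch evaluation strategy are assumptions.

```python
from datasets import load_dataset
from transformers import TrainingArguments

# food101 as published on the Hugging Face Hub (train/validation splits).
dataset = load_dataset("food101")

# Sketch of the configuration implied by the hyperparameters listed above.
# Adam betas=(0.9, 0.999) and epsilon=1e-8 are the Trainer defaults, so they
# need no explicit arguments here.
training_args = TrainingArguments(
    output_dir="vit-base-patch16-224-in21k-food101-24-12",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    gradient_accumulation_steps=4,  # effective train batch size: 24 * 4 = 96
    num_train_epochs=12,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    evaluation_strategy="epoch",  # assumption: the table reports one eval per epoch
)
```

As a sanity check, the per-epoch step counts in the results table (roughly 789 optimizer steps per epoch) are consistent with 75,750 training images at an effective batch size of 96.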
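
## How to use

A minimal inference sketch using the `transformers` Auto classes. The repo id below is a placeholder for wherever this checkpoint is hosted, and the image URL is only an example; any RGB food photo works.

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Placeholder: replace with the actual Hub repo id of this checkpoint.
model_id = "your-username/vit-base-patch16-224-in21k-food101-24-12"

processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

# Example image; substitute any food photo.
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png"
image = Image.open(requests.get(url, stream=True).raw)

# Preprocess to the 224x224 input the ViT expects, then classify.
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```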