---
license: apache-2.0
tags:
- image-classification
- vision
- generated_from_trainer
datasets:
- AI-Lab-Makerere/beans
metrics:
- accuracy
base_model: google/vit-base-patch16-224-in21k
model-index:
- name: vit-base-beans
  results:
  - task:
      type: image-classification
      name: Image Classification
    dataset:
      name: beans
      type: beans
      config: default
      split: validation
      args: default
    metrics:
    - type: accuracy
      value: 0.9849624060150376
      name: Accuracy
---

# THIS IS A TEST REPO FOR DEBUGGING!

This repo exists as a result of playing with and debugging training scripts and push-to-hub features. As such, the TensorFlow and PyTorch models will be out of sync, and different weights may be pushed at any time, including models with very low performance.

# vit-base-beans

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0630
- Accuracy: 0.9850

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3038        | 1.0   | 130  | 0.2396          | 0.9624   |
| 0.1609        | 2.0   | 260  | 0.1130          | 0.9774   |
| 0.2313        | 3.0   | 390  | 0.0809          | 0.9850   |
| 0.1436        | 4.0   | 520  | 0.0738          | 0.9850   |
| 0.1086        | 5.0   | 650  | 0.0630          | 0.9850   |

### Framework versions

- Transformers 4.27.0.dev0
- PyTorch 1.14.0.dev20221118
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
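For reference, the hyperparameters listed above can be expressed as `transformers.TrainingArguments`. This is a minimal illustrative sketch, not the original training invocation; the output directory is a placeholder.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters from this card mapped onto TrainingArguments.
# "./vit-base-beans" is a placeholder output path.
training_args = TrainingArguments(
    output_dir="./vit-base-beans",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=1337,
    num_train_epochs=5.0,
    lr_scheduler_type="linear",  # linear decay, as listed above
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```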
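The snippet below is a minimal inference sketch, assuming the checkpoint is loaded through the standard `transformers` auto classes. The repo id is a placeholder; replace it with the actual Hub path of this model.

```python
import torch
from datasets import load_dataset
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "<namespace>/vit-base-beans"  # placeholder repo id

# Load one image from the beans validation split used for evaluation above
dataset = load_dataset("AI-Lab-Makerere/beans", split="validation")
image = dataset[0]["image"]

# The image processor applies the same resizing/normalization used for ViT
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_label = logits.argmax(-1).item()
print(model.config.id2label[predicted_label])
```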