---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-dmae-va-U5-42
  results: []
---

# vit-base-patch16-224-dmae-va-U5-42

This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7345
- Accuracy: 0.8333

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 42

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 0.9   | 7    | 1.2858          | 0.5      |
| 1.3455        | 1.94  | 15   | 1.1091          | 0.4833   |
| 1.3455        | 2.97  | 23   | 0.8518          | 0.5833   |
| 1.0067        | 4.0   | 31   | 0.7317          | 0.7167   |
| 0.6085        | 4.9   | 38   | 0.6949          | 0.75     |
| 0.6085        | 5.94  | 46   | 0.6633          | 0.75     |
| 0.3389        | 6.97  | 54   | 0.6791          | 0.7667   |
| 0.1977        | 8.0   | 62   | 0.7010          | 0.7333   |
| 0.1977        | 8.9   | 69   | 0.6970          | 0.75     |
| 0.1496        | 9.94  | 77   | 0.6984          | 0.8      |
| 0.1194        | 10.97 | 85   | 0.9061          | 0.7333   |
| 0.1194        | 12.0  | 93   | 0.8720          | 0.75     |
| 0.109         | 12.9  | 100  | 0.8439          | 0.7833   |
| 0.0902        | 13.94 | 108  | 0.7345          | 0.8333   |
| 0.0902        | 14.97 | 116  | 0.8420          | 0.7833   |
| 0.0938        | 16.0  | 124  | 0.7994          | 0.75     |
| 0.0938        | 16.9  | 131  | 0.8341          | 0.8      |
| 0.0862        | 17.94 | 139  | 0.7239          | 0.8      |
| 0.0864        | 18.97 | 147  | 0.8485          | 0.7833   |
| 0.0864        | 20.0  | 155  | 0.8948          | 0.8      |
| 0.065         | 20.9  | 162  | 0.8681          | 0.8167   |
| 0.0793        | 21.94 | 170  | 0.8226          | 0.8167   |
| 0.0793        | 22.97 | 178  | 0.7495          | 0.8333   |
| 0.0629        | 24.0  | 186  | 0.8814          | 0.7667   |
| 0.0666        | 24.9  | 193  | 0.7739          | 0.8167   |
| 0.0666        | 25.94 | 201  | 0.9246          | 0.7833   |
| 0.0571        | 26.97 | 209  | 0.8077          | 0.8333   |
| 0.0519        | 28.0  | 217  | 0.8975          | 0.7833   |
| 0.0519        | 28.9  | 224  | 0.9199          | 0.7833   |
| 0.0523        | 29.94 | 232  | 0.8512          | 0.8      |
| 0.0548        | 30.97 | 240  | 0.9377          | 0.8167   |
| 0.0548        | 32.0  | 248  | 0.8213          | 0.8167   |
| 0.0576        | 32.9  | 255  | 0.8384          | 0.8167   |
| 0.0576        | 33.94 | 263  | 0.8664          | 0.8      |
| 0.0381        | 34.97 | 271  | 0.8818          | 0.8      |
| 0.0338        | 36.0  | 279  | 0.9106          | 0.7833   |
| 0.0338        | 36.9  | 286  | 0.9057          | 0.7833   |
| 0.0443        | 37.94 | 294  | 0.9012          | 0.7833   |

### Framework versions

- Transformers 4.38.2
- PyTorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
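
## How to use

The card does not yet include a usage snippet, so here is a minimal inference sketch. It assumes the checkpoint is published under a Hub repo id of the same name (replace `<user>` with the actual hosting namespace) and that it exposes a standard image-classification head, which the accuracy metric above suggests:

```python
from transformers import pipeline
from PIL import Image

# Hypothetical repo id: swap <user> for the namespace that hosts this checkpoint.
classifier = pipeline(
    "image-classification",
    model="<user>/vit-base-patch16-224-dmae-va-U5-42",
)

image = Image.open("example.jpg")  # any RGB image; "example.jpg" is a placeholder path
predictions = classifier(image)
print(predictions)  # list of {"label": ..., "score": ...} dicts, highest score first
```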
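
## Reproducing the training configuration

The hyperparameters listed above map onto `transformers.TrainingArguments` roughly as sketched below. The Adam betas and epsilon match the library defaults, so they are not set explicitly; `output_dir` and the evaluation/checkpointing flags are assumptions, since the training script itself is not provided:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-base-patch16-224-dmae-va-U5-42",  # assumed output directory name
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,   # 32 * 4 = 128 effective train batch size
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=42,
    evaluation_strategy="epoch",     # assumption: the results table logs one eval per epoch
    load_best_model_at_end=True,     # assumption: the reported results match the best checkpoint (step 108)
    metric_for_best_model="accuracy",
)
```

Note that the results table stops at epoch ~38 rather than the configured 42, which may indicate early stopping, but the card does not document this.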