
vit-base-patch16-224-Trial007-YEL_STEM1

This model is a fine-tuned version of google/vit-base-patch16-224 on a custom image dataset loaded with the Hugging Face Datasets imagefolder loader. It achieves the following results on the evaluation set:

  • Loss: 0.0269
  • Accuracy: 1.0
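The snippet below is a minimal inference sketch, not taken from the card; the checkpoint identifier and the input image path are assumptions and should be replaced with the actual Hub repo id (or a local path) and a real image.

```python
# Minimal inference sketch (checkpoint id and image path are assumed; adjust as needed).
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

checkpoint = "vit-base-patch16-224-Trial007-YEL_STEM1"  # hypothetical repo id or local path
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForImageClassification.from_pretrained(checkpoint)

image = Image.open("example.jpg")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])
```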

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
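Since the card only indicates that the imagefolder loader was used, the following is a hedged sketch of how such a dataset is typically loaded; the data_dir layout shown is an assumption, not something documented here.

```python
# Hedged sketch of loading an imagefolder-style dataset (directory layout is assumed).
from datasets import load_dataset

# Expects a layout like data/train/<class_name>/*.jpg and data/validation/<class_name>/*.jpg
dataset = load_dataset("imagefolder", data_dir="data")
print(dataset)
print(dataset["train"].features["label"].names)  # class names inferred from folder names
```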

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 60
  • eval_batch_size: 60
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 240
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
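
A minimal sketch of reproducing these hyperparameters with transformers.TrainingArguments, assuming the standard Trainer workflow; dataset, image processor, and metric wiring are omitted, and output_dir is a placeholder.

```python
# Sketch only: mirrors the hyperparameters listed above; output_dir is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-base-patch16-224-Trial007-YEL_STEM1",
    learning_rate=5e-05,
    per_device_train_batch_size=60,
    per_device_eval_batch_size=60,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch size: 60 * 4 = 240
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
)
```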

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8443 | 0.89 | 2 | 0.7813 | 0.3148 |
| 0.7501 | 1.78 | 4 | 0.7087 | 0.5556 |
| 0.6312 | 2.67 | 6 | 0.5306 | 0.9074 |
| 0.4329 | 4.0 | 9 | 0.3618 | 0.9074 |
| 0.4438 | 4.89 | 11 | 0.2699 | 0.9444 |
| 0.3858 | 5.78 | 13 | 0.3650 | 0.7963 |
| 0.339 | 6.67 | 15 | 0.1911 | 0.9630 |
| 0.2852 | 8.0 | 18 | 0.1611 | 0.9630 |
| 0.1866 | 8.89 | 20 | 0.1516 | 0.9444 |
| 0.1748 | 9.78 | 22 | 0.1333 | 0.9630 |
| 0.1996 | 10.67 | 24 | 0.1188 | 0.9630 |
| 0.1604 | 12.0 | 27 | 0.1169 | 0.9444 |
| 0.1319 | 12.89 | 29 | 0.0835 | 0.9815 |
| 0.141 | 13.78 | 31 | 0.0704 | 0.9815 |
| 0.123 | 14.67 | 33 | 0.0574 | 0.9815 |
| 0.0678 | 16.0 | 36 | 0.0604 | 0.9815 |
| 0.1208 | 16.89 | 38 | 0.0385 | 0.9815 |
| 0.0942 | 17.78 | 40 | 0.0269 | 1.0 |
| 0.0822 | 18.67 | 42 | 0.0169 | 1.0 |
| 0.0578 | 20.0 | 45 | 0.0175 | 1.0 |
| 0.0611 | 20.89 | 47 | 0.0220 | 1.0 |
| 0.1053 | 21.78 | 49 | 0.0098 | 1.0 |
| 0.1713 | 22.67 | 51 | 0.0156 | 1.0 |
| 0.0515 | 24.0 | 54 | 0.0111 | 1.0 |
| 0.1227 | 24.89 | 56 | 0.0166 | 1.0 |
| 0.0891 | 25.78 | 58 | 0.0093 | 1.0 |
| 0.0768 | 26.67 | 60 | 0.0090 | 1.0 |
| 0.0755 | 28.0 | 63 | 0.0108 | 1.0 |
| 0.0798 | 28.89 | 65 | 0.0201 | 1.0 |
| 0.1005 | 29.78 | 67 | 0.0118 | 1.0 |
| 0.1113 | 30.67 | 69 | 0.0131 | 1.0 |
| 0.1034 | 32.0 | 72 | 0.0171 | 1.0 |
| 0.0857 | 32.89 | 74 | 0.0158 | 1.0 |
| 0.0864 | 33.78 | 76 | 0.0141 | 1.0 |
| 0.1241 | 34.67 | 78 | 0.0127 | 1.0 |
| 0.0868 | 36.0 | 81 | 0.0118 | 1.0 |
| 0.0704 | 36.89 | 83 | 0.0113 | 1.0 |
| 0.0938 | 37.78 | 85 | 0.0109 | 1.0 |
| 0.1181 | 38.67 | 87 | 0.0120 | 1.0 |
| 0.0509 | 40.0 | 90 | 0.0149 | 1.0 |
| 0.0684 | 40.89 | 92 | 0.0155 | 1.0 |
| 0.0625 | 41.78 | 94 | 0.0151 | 1.0 |
| 0.0746 | 42.67 | 96 | 0.0143 | 1.0 |
| 0.1062 | 44.0 | 99 | 0.0133 | 1.0 |
| 0.0579 | 44.44 | 100 | 0.0132 | 1.0 |

Framework versions

  • Transformers 4.30.0.dev0
  • Pytorch 1.12.1
  • Datasets 2.12.0
  • Tokenizers 0.13.1
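
As a quick, optional sanity check (not part of the original card), the installed versions can be compared against the list above:

```python
# Optional check that the local environment roughly matches the versions listed above.
from importlib.metadata import version

for pkg in ("transformers", "torch", "datasets", "tokenizers"):
    print(pkg, version(pkg))
```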