vit-base-patch16-224-in21k

This model was trained from scratch on an image dataset loaded with the `imagefolder` builder. It achieves the following results on the evaluation set:

  • Loss: 1.6306
  • Accuracy: 0.5375

Model description

More information needed

Intended uses & limitations

More information needed
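
For illustration only, since the card does not state a repository id or label set, a ViT image classifier fine-tuned like this one is typically loaded and used as sketched below (the model id and image path are placeholders, not taken from this card):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "your-username/vit-base-patch16-224-in21k-finetuned"  # placeholder repo id

processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("example.jpg")  # any RGB image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit back to its class name
predicted_label = model.config.id2label[logits.argmax(-1).item()]
print(predicted_label)
```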

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 50
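
As a minimal sketch, these values map onto a Hugging Face `TrainingArguments` configuration roughly as follows; the output directory and evaluation strategy are assumptions, not taken from the original run:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-base-patch16-224-in21k-finetuned",  # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,        # optimizer betas and epsilon as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    evaluation_strategy="epoch",  # validation metrics are reported once per epoch below
)
```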

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 40   | 1.2472          | 0.5312   |
| No log        | 2.0   | 80   | 1.2878          | 0.5188   |
| No log        | 3.0   | 120  | 1.3116          | 0.525    |
| No log        | 4.0   | 160  | 1.2578          | 0.55     |
| No log        | 5.0   | 200  | 1.2186          | 0.5563   |
| No log        | 6.0   | 240  | 1.2680          | 0.5563   |
| No log        | 7.0   | 280  | 1.3674          | 0.5      |
| No log        | 8.0   | 320  | 1.3814          | 0.525    |
| No log        | 9.0   | 360  | 1.4394          | 0.5      |
| No log        | 10.0  | 400  | 1.3710          | 0.5437   |
| No log        | 11.0  | 440  | 1.3721          | 0.5437   |
| No log        | 12.0  | 480  | 1.4309          | 0.5563   |
| 0.4861        | 13.0  | 520  | 1.3424          | 0.575    |
| 0.4861        | 14.0  | 560  | 1.4617          | 0.525    |
| 0.4861        | 15.0  | 600  | 1.3964          | 0.5813   |
| 0.4861        | 16.0  | 640  | 1.4751          | 0.5687   |
| 0.4861        | 17.0  | 680  | 1.5296          | 0.55     |
| 0.4861        | 18.0  | 720  | 1.5887          | 0.5188   |
| 0.4861        | 19.0  | 760  | 1.5784          | 0.5312   |
| 0.4861        | 20.0  | 800  | 1.7036          | 0.5375   |
| 0.4861        | 21.0  | 840  | 1.6988          | 0.5188   |
| 0.4861        | 22.0  | 880  | 1.6070          | 0.5687   |
| 0.4861        | 23.0  | 920  | 1.7111          | 0.55     |
| 0.4861        | 24.0  | 960  | 1.6730          | 0.55     |
| 0.2042        | 25.0  | 1000 | 1.6559          | 0.55     |
| 0.2042        | 26.0  | 1040 | 1.7221          | 0.5563   |
| 0.2042        | 27.0  | 1080 | 1.6637          | 0.5813   |
| 0.2042        | 28.0  | 1120 | 1.6806          | 0.5625   |
| 0.2042        | 29.0  | 1160 | 1.5743          | 0.5938   |
| 0.2042        | 30.0  | 1200 | 1.7899          | 0.4938   |
| 0.2042        | 31.0  | 1240 | 1.7422          | 0.5312   |
| 0.2042        | 32.0  | 1280 | 1.7712          | 0.55     |
| 0.2042        | 33.0  | 1320 | 1.7480          | 0.5188   |
| 0.2042        | 34.0  | 1360 | 1.7964          | 0.5375   |
| 0.2042        | 35.0  | 1400 | 1.9687          | 0.5188   |
| 0.2042        | 36.0  | 1440 | 1.7412          | 0.5813   |
| 0.2042        | 37.0  | 1480 | 1.9312          | 0.4875   |
| 0.1342        | 38.0  | 1520 | 1.7944          | 0.525    |
| 0.1342        | 39.0  | 1560 | 1.8180          | 0.55     |
| 0.1342        | 40.0  | 1600 | 1.7720          | 0.5563   |
| 0.1342        | 41.0  | 1640 | 1.9014          | 0.5312   |
| 0.1342        | 42.0  | 1680 | 1.7519          | 0.55     |
| 0.1342        | 43.0  | 1720 | 1.9793          | 0.5      |
| 0.1342        | 44.0  | 1760 | 1.8642          | 0.55     |
| 0.1342        | 45.0  | 1800 | 1.7573          | 0.5875   |
| 0.1342        | 46.0  | 1840 | 1.8508          | 0.5125   |
| 0.1342        | 47.0  | 1880 | 1.9741          | 0.5625   |
| 0.1342        | 48.0  | 1920 | 1.9012          | 0.525    |
| 0.1342        | 49.0  | 1960 | 1.8771          | 0.5625   |
| 0.0926        | 50.0  | 2000 | 1.8728          | 0.5125   |
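
The validation accuracy tracked above is the kind of metric typically produced by a `compute_metrics` callback passed to the `Trainer`. A minimal sketch using the `evaluate` library is shown below; this is an assumption for illustration, not the original training script:

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # eval_pred is a (logits, labels) pair supplied by the Trainer at evaluation time
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```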

Framework versions

  • Transformers 4.33.2
  • Pytorch 2.0.1+cu118
  • Datasets 2.14.5
  • Tokenizers 0.13.3