# gockle_v2

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set:

- Loss: 0.9618
- Accuracy: 0.7844
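
A minimal usage sketch with the `transformers` image-classification pipeline, assuming the checkpoint is hosted on the Hub under the repo id `Martin-Michael/gockle_v2` (taken from this card's page); the image path `example.jpg` is a placeholder:

```python
from transformers import pipeline

# Load the fine-tuned ViT checkpoint with the image-classification pipeline.
# The repo id is an assumption based on this card's page.
classifier = pipeline("image-classification", model="Martin-Michael/gockle_v2")

# Classify a local image; "example.jpg" is a placeholder path.
for prediction in classifier("example.jpg"):
    print(f"{prediction['label']}: {prediction['score']:.4f}")
```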

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 2e-06
- train_batch_size: 32
- eval_batch_size: 8
- seed: 11
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
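
A minimal sketch of the corresponding `TrainingArguments`, assuming the standard `Trainer` API; `output_dir` and the evaluation/logging cadence are assumptions (the results table below suggests evaluation every 100 steps), not values stated on this card:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameter list above. The Adam betas/epsilon listed
# there are the optimizer defaults, so they need no explicit arguments.
training_args = TrainingArguments(
    output_dir="gockle_v2",          # assumption: any writable directory
    learning_rate=2e-6,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=11,
    lr_scheduler_type="linear",
    num_train_epochs=100,
    evaluation_strategy="steps",     # assumption: eval every 100 steps,
    eval_steps=100,                  # as the results table suggests
    logging_steps=100,
)
```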

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.7231 | 0.64 | 100 | 2.6467 | 0.2279 |
| 2.3217 | 1.28 | 200 | 2.4386 | 0.2288 |
| 2.0819 | 1.92 | 300 | 2.2887 | 0.2815 |
| 1.9583 | 2.56 | 400 | 2.1686 | 0.4501 |
| 1.8098 | 3.21 | 500 | 2.0731 | 0.5085 |
| 1.7511 | 3.85 | 600 | 1.9978 | 0.5320 |
| 1.6581 | 4.49 | 700 | 1.9233 | 0.5584 |
| 1.6094 | 5.13 | 800 | 1.8703 | 0.5706 |
| 1.5241 | 5.77 | 900 | 1.8192 | 0.6017 |
| 1.501 | 6.41 | 1000 | 1.7757 | 0.6111 |
| 1.4308 | 7.05 | 1100 | 1.7415 | 0.6281 |
| 1.3985 | 7.69 | 1200 | 1.7015 | 0.6375 |
| 1.3559 | 8.33 | 1300 | 1.6652 | 0.6403 |
| 1.3092 | 8.97 | 1400 | 1.6290 | 0.6488 |
| 1.3059 | 9.62 | 1500 | 1.6142 | 0.6620 |
| 1.2597 | 10.26 | 1600 | 1.5771 | 0.6704 |
| 1.2147 | 10.9 | 1700 | 1.5501 | 0.6902 |
| 1.1942 | 11.54 | 1800 | 1.5288 | 0.6911 |
| 1.1668 | 12.18 | 1900 | 1.5081 | 0.6902 |
| 1.1371 | 12.82 | 2000 | 1.4883 | 0.6949 |
| 1.1256 | 13.46 | 2100 | 1.4770 | 0.6930 |
| 1.0922 | 14.1 | 2200 | 1.4500 | 0.7081 |
| 1.0559 | 14.74 | 2300 | 1.4369 | 0.7072 |
| 1.054 | 15.38 | 2400 | 1.4157 | 0.7128 |
| 1.0465 | 16.03 | 2500 | 1.3899 | 0.7279 |
| 0.9965 | 16.67 | 2600 | 1.3734 | 0.7194 |
| 0.9876 | 17.31 | 2700 | 1.3603 | 0.7298 |
| 0.9791 | 17.95 | 2800 | 1.3422 | 0.7298 |
| 0.9551 | 18.59 | 2900 | 1.3309 | 0.7373 |
| 0.9313 | 19.23 | 3000 | 1.3223 | 0.7335 |
| 0.9211 | 19.87 | 3100 | 1.3052 | 0.7345 |
| 0.9071 | 20.51 | 3200 | 1.2897 | 0.7420 |
| 0.875 | 21.15 | 3300 | 1.2762 | 0.7561 |
| 0.8676 | 21.79 | 3400 | 1.2657 | 0.7542 |
| 0.8498 | 22.44 | 3500 | 1.2575 | 0.7580 |
| 0.8529 | 23.08 | 3600 | 1.2435 | 0.7542 |
| 0.8341 | 23.72 | 3700 | 1.2369 | 0.7561 |
| 0.8056 | 24.36 | 3800 | 1.2306 | 0.7533 |
| 0.8038 | 25.0 | 3900 | 1.2181 | 0.7665 |
| 0.7733 | 25.64 | 4000 | 1.2031 | 0.7655 |
| 0.7834 | 26.28 | 4100 | 1.2015 | 0.7637 |
| 0.7697 | 26.92 | 4200 | 1.1887 | 0.7637 |
| 0.7438 | 27.56 | 4300 | 1.1788 | 0.7674 |
| 0.733 | 28.21 | 4400 | 1.1740 | 0.7637 |
| 0.7244 | 28.85 | 4500 | 1.1671 | 0.7674 |
| 0.7091 | 29.49 | 4600 | 1.1563 | 0.7693 |
| 0.7138 | 30.13 | 4700 | 1.1543 | 0.7665 |
| 0.693 | 30.77 | 4800 | 1.1445 | 0.7665 |
| 0.6837 | 31.41 | 4900 | 1.1348 | 0.7731 |
| 0.6706 | 32.05 | 5000 | 1.1282 | 0.7702 |
| 0.6514 | 32.69 | 5100 | 1.1222 | 0.7712 |
| 0.6513 | 33.33 | 5200 | 1.1323 | 0.7665 |
| 0.6517 | 33.97 | 5300 | 1.1138 | 0.7693 |
| 0.637 | 34.62 | 5400 | 1.1014 | 0.7712 |
| 0.6277 | 35.26 | 5500 | 1.0949 | 0.7759 |
| 0.6103 | 35.9 | 5600 | 1.0882 | 0.7759 |
| 0.5916 | 36.54 | 5700 | 1.0888 | 0.7693 |
| 0.6101 | 37.18 | 5800 | 1.0890 | 0.7721 |
| 0.6042 | 37.82 | 5900 | 1.0779 | 0.7750 |
| 0.5618 | 38.46 | 6000 | 1.0769 | 0.7750 |
| 0.5878 | 39.1 | 6100 | 1.0638 | 0.7787 |
| 0.5522 | 39.74 | 6200 | 1.0611 | 0.7731 |
| 0.557 | 40.38 | 6300 | 1.0639 | 0.7768 |
| 0.5665 | 41.03 | 6400 | 1.0668 | 0.7740 |
| 0.5269 | 41.67 | 6500 | 1.0531 | 0.7759 |
| 0.5672 | 42.31 | 6600 | 1.0493 | 0.7759 |
| 0.5197 | 42.95 | 6700 | 1.0469 | 0.7759 |
| 0.5273 | 43.59 | 6800 | 1.0481 | 0.7740 |
| 0.5149 | 44.23 | 6900 | 1.0434 | 0.7712 |
| 0.5146 | 44.87 | 7000 | 1.0462 | 0.7787 |
| 0.5033 | 45.51 | 7100 | 1.0358 | 0.7759 |
| 0.5073 | 46.15 | 7200 | 1.0322 | 0.7806 |
| 0.4964 | 46.79 | 7300 | 1.0313 | 0.7815 |
| 0.4832 | 47.44 | 7400 | 1.0238 | 0.7797 |
| 0.484 | 48.08 | 7500 | 1.0355 | 0.7768 |
| 0.4856 | 48.72 | 7600 | 1.0263 | 0.7834 |
| 0.4688 | 49.36 | 7700 | 1.0178 | 0.7815 |
| 0.4628 | 50.0 | 7800 | 1.0161 | 0.7787 |
| 0.457 | 50.64 | 7900 | 1.0195 | 0.7768 |
| 0.4547 | 51.28 | 8000 | 1.0064 | 0.7825 |
| 0.4551 | 51.92 | 8100 | 1.0108 | 0.7806 |
| 0.4408 | 52.56 | 8200 | 1.0136 | 0.7768 |
| 0.4471 | 53.21 | 8300 | 1.0016 | 0.7834 |
| 0.4431 | 53.85 | 8400 | 1.0038 | 0.7863 |
| 0.4393 | 54.49 | 8500 | 1.0057 | 0.7815 |
| 0.4246 | 55.13 | 8600 | 0.9961 | 0.7797 |
| 0.4237 | 55.77 | 8700 | 1.0019 | 0.7806 |
| 0.4128 | 56.41 | 8800 | 0.9941 | 0.7806 |
| 0.4285 | 57.05 | 8900 | 0.9946 | 0.7815 |
| 0.4121 | 57.69 | 9000 | 0.9932 | 0.7806 |
| 0.4167 | 58.33 | 9100 | 0.9916 | 0.7825 |
| 0.4001 | 58.97 | 9200 | 0.9915 | 0.7825 |
| 0.4053 | 59.62 | 9300 | 0.9886 | 0.7815 |
| 0.3993 | 60.26 | 9400 | 0.9910 | 0.7844 |
| 0.3881 | 60.9 | 9500 | 0.9856 | 0.7863 |
| 0.3846 | 61.54 | 9600 | 0.9917 | 0.7806 |
| 0.3913 | 62.18 | 9700 | 0.9820 | 0.7834 |
| 0.3897 | 62.82 | 9800 | 0.9806 | 0.7844 |
| 0.3821 | 63.46 | 9900 | 0.9804 | 0.7825 |
| 0.3742 | 64.1 | 10000 | 0.9873 | 0.7844 |
| 0.3835 | 64.74 | 10100 | 0.9807 | 0.7834 |
| 0.3571 | 65.38 | 10200 | 0.9792 | 0.7844 |
| 0.38 | 66.03 | 10300 | 0.9786 | 0.7844 |
| 0.3612 | 66.67 | 10400 | 0.9769 | 0.7844 |
| 0.3628 | 67.31 | 10500 | 0.9991 | 0.7740 |
| 0.3655 | 67.95 | 10600 | 0.9737 | 0.7806 |
| 0.3489 | 68.59 | 10700 | 0.9745 | 0.7853 |
| 0.371 | 69.23 | 10800 | 0.9853 | 0.7787 |
| 0.3454 | 69.87 | 10900 | 0.9676 | 0.7825 |
| 0.3457 | 70.51 | 11000 | 0.9708 | 0.7853 |
| 0.3559 | 71.15 | 11100 | 0.9691 | 0.7863 |
| 0.3523 | 71.79 | 11200 | 0.9690 | 0.7872 |
| 0.3357 | 72.44 | 11300 | 0.9707 | 0.7815 |
| 0.344 | 73.08 | 11400 | 0.9690 | 0.7863 |
| 0.3527 | 73.72 | 11500 | 0.9788 | 0.7825 |
| 0.327 | 74.36 | 11600 | 0.9703 | 0.7825 |
| 0.3376 | 75.0 | 11700 | 0.9770 | 0.7787 |
| 0.3518 | 75.64 | 11800 | 0.9718 | 0.7834 |
| 0.3031 | 76.28 | 11900 | 0.9736 | 0.7863 |
| 0.3404 | 76.92 | 12000 | 0.9661 | 0.7825 |
| 0.3243 | 77.56 | 12100 | 0.9731 | 0.7853 |
| 0.3381 | 78.21 | 12200 | 0.9685 | 0.7900 |
| 0.3258 | 78.85 | 12300 | 0.9691 | 0.7844 |
| 0.3149 | 79.49 | 12400 | 0.9615 | 0.7844 |
| 0.3234 | 80.13 | 12500 | 0.9661 | 0.7853 |
| 0.3296 | 80.77 | 12600 | 0.9722 | 0.7815 |
| 0.3215 | 81.41 | 12700 | 0.9672 | 0.7834 |
| 0.3121 | 82.05 | 12800 | 0.9641 | 0.7834 |
| 0.3163 | 82.69 | 12900 | 0.9636 | 0.7834 |
| 0.3225 | 83.33 | 13000 | 0.9649 | 0.7853 |
| 0.3136 | 83.97 | 13100 | 0.9652 | 0.7825 |
| 0.3172 | 84.62 | 13200 | 0.9639 | 0.7853 |
| 0.3098 | 85.26 | 13300 | 0.9671 | 0.7834 |
| 0.3081 | 85.9 | 13400 | 0.9627 | 0.7806 |
| 0.3099 | 86.54 | 13500 | 0.9626 | 0.7815 |
| 0.3144 | 87.18 | 13600 | 0.9612 | 0.7815 |
| 0.2952 | 87.82 | 13700 | 0.9645 | 0.7863 |
| 0.3092 | 88.46 | 13800 | 0.9604 | 0.7853 |
| 0.3193 | 89.1 | 13900 | 0.9630 | 0.7844 |
| 0.3005 | 89.74 | 14000 | 0.9667 | 0.7815 |
| 0.2928 | 90.38 | 14100 | 0.9638 | 0.7844 |
| 0.315 | 91.03 | 14200 | 0.9644 | 0.7844 |
| 0.3095 | 91.67 | 14300 | 0.9637 | 0.7834 |
| 0.3036 | 92.31 | 14400 | 0.9615 | 0.7834 |
| 0.298 | 92.95 | 14500 | 0.9617 | 0.7844 |
| 0.2944 | 93.59 | 14600 | 0.9658 | 0.7834 |
| 0.3065 | 94.23 | 14700 | 0.9625 | 0.7834 |
| 0.2983 | 94.87 | 14800 | 0.9622 | 0.7844 |
| 0.2953 | 95.51 | 14900 | 0.9626 | 0.7834 |
| 0.3063 | 96.15 | 15000 | 0.9608 | 0.7853 |
| 0.3058 | 96.79 | 15100 | 0.9631 | 0.7853 |
| 0.2974 | 97.44 | 15200 | 0.9614 | 0.7844 |
| 0.3004 | 98.08 | 15300 | 0.9608 | 0.7844 |
| 0.3001 | 98.72 | 15400 | 0.9613 | 0.7853 |
| 0.2968 | 99.36 | 15500 | 0.9623 | 0.7853 |
| 0.2985 | 100.0 | 15600 | 0.9618 | 0.7844 |

### Framework versions

- Transformers 4.34.1
- Pytorch 2.0.1+cu117
- Datasets 2.14.6
- Tokenizers 0.14.1