
meat_calssify_fresh_crop_fixed_epoch100_V_0_2

This model is a fine-tuned version of google/vit-base-patch16-224-in21k on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7219
  • Accuracy: 0.7975
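
A minimal usage sketch with the Transformers image-classification pipeline. The repository id below is a placeholder derived from the model name on this card, the input file name is purely illustrative, and the label set depends on the training data, which the card does not describe.

```python
from PIL import Image
from transformers import pipeline

# The repository id is assumed from the model name on this card;
# replace "<username>" with the namespace the model is published under.
classifier = pipeline(
    "image-classification",
    model="<username>/meat_calssify_fresh_crop_fixed_epoch100_V_0_2",
)

image = Image.open("meat_sample.jpg")   # hypothetical input image
print(classifier(image))                # list of {"label": ..., "score": ...} dicts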

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 64
  • eval_batch_size: 1
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 100
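
As a rough sketch, these settings map onto Hugging Face `TrainingArguments` as shown below; the output directory is a placeholder, and the original training script is not part of this card.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above (the Adam betas and epsilon are the
# library defaults); the output directory is a placeholder, not the author's path.
training_args = TrainingArguments(
    output_dir="./vit-meat-freshness",
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=100,
    evaluation_strategy="epoch",  # matches the per-epoch validation results below
)
```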

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0968 | 1.0 | 10 | 1.0907 | 0.3797 |
| 1.0804 | 2.0 | 20 | 1.0759 | 0.3924 |
| 1.0578 | 3.0 | 30 | 1.0750 | 0.4241 |
| 1.0273 | 4.0 | 40 | 1.0443 | 0.4684 |
| 0.9866 | 5.0 | 50 | 1.0325 | 0.4747 |
| 0.9234 | 6.0 | 60 | 0.9837 | 0.5886 |
| 0.8597 | 7.0 | 70 | 0.9564 | 0.5443 |
| 0.8042 | 8.0 | 80 | 0.9315 | 0.5633 |
| 0.8463 | 9.0 | 90 | 0.9334 | 0.5380 |
| 0.7795 | 10.0 | 100 | 0.9305 | 0.5443 |
| 0.7375 | 11.0 | 110 | 0.8787 | 0.6076 |
| 0.6489 | 12.0 | 120 | 0.8685 | 0.6392 |
| 0.5958 | 13.0 | 130 | 0.8133 | 0.6582 |
| 0.5308 | 14.0 | 140 | 0.8563 | 0.6519 |
| 0.5206 | 15.0 | 150 | 0.7902 | 0.6709 |
| 0.4617 | 16.0 | 160 | 0.8114 | 0.6456 |
| 0.4338 | 17.0 | 170 | 0.8134 | 0.6646 |
| 0.454 | 18.0 | 180 | 0.7283 | 0.6772 |
| 0.5094 | 19.0 | 190 | 0.7035 | 0.6962 |
| 0.4133 | 20.0 | 200 | 0.7652 | 0.6835 |
| 0.3504 | 21.0 | 210 | 0.7225 | 0.7089 |
| 0.3602 | 22.0 | 220 | 0.8140 | 0.6582 |
| 0.32 | 23.0 | 230 | 0.7057 | 0.7278 |
| 0.2849 | 24.0 | 240 | 0.7051 | 0.6899 |
| 0.3051 | 25.0 | 250 | 0.7805 | 0.7025 |
| 0.3099 | 26.0 | 260 | 0.7456 | 0.6772 |
| 0.3305 | 27.0 | 270 | 0.7802 | 0.6646 |
| 0.2508 | 28.0 | 280 | 0.7222 | 0.7152 |
| 0.2842 | 29.0 | 290 | 0.6745 | 0.7278 |
| 0.2584 | 30.0 | 300 | 0.6029 | 0.7658 |
| 0.2324 | 31.0 | 310 | 0.6066 | 0.7911 |
| 0.3014 | 32.0 | 320 | 0.7253 | 0.7215 |
| 0.2279 | 33.0 | 330 | 0.7050 | 0.7089 |
| 0.2363 | 34.0 | 340 | 0.7361 | 0.7785 |
| 0.2085 | 35.0 | 350 | 0.6596 | 0.7658 |
| 0.1808 | 36.0 | 360 | 0.7104 | 0.7532 |
| 0.2051 | 37.0 | 370 | 0.7471 | 0.7152 |
| 0.1911 | 38.0 | 380 | 0.8262 | 0.7025 |
| 0.2027 | 39.0 | 390 | 0.7785 | 0.7532 |
| 0.1944 | 40.0 | 400 | 0.8136 | 0.6835 |
| 0.1627 | 41.0 | 410 | 0.8254 | 0.7152 |
| 0.1619 | 42.0 | 420 | 0.8766 | 0.6772 |
| 0.1619 | 43.0 | 430 | 0.6940 | 0.7405 |
| 0.1635 | 44.0 | 440 | 0.8477 | 0.7215 |
| 0.1323 | 45.0 | 450 | 0.6644 | 0.7848 |
| 0.1253 | 46.0 | 460 | 0.7747 | 0.7468 |
| 0.1254 | 47.0 | 470 | 0.9075 | 0.6962 |
| 0.1494 | 48.0 | 480 | 0.8104 | 0.7405 |
| 0.1702 | 49.0 | 490 | 0.7167 | 0.7532 |
| 0.1591 | 50.0 | 500 | 0.8214 | 0.6962 |
| 0.1105 | 51.0 | 510 | 0.9359 | 0.7152 |
| 0.1354 | 52.0 | 520 | 0.7214 | 0.7342 |
| 0.119 | 53.0 | 530 | 0.7825 | 0.7342 |
| 0.0841 | 54.0 | 540 | 0.7528 | 0.7595 |
| 0.12 | 55.0 | 550 | 0.7002 | 0.7658 |
| 0.1096 | 56.0 | 560 | 0.7747 | 0.7785 |
| 0.1192 | 57.0 | 570 | 0.7368 | 0.7532 |
| 0.1268 | 58.0 | 580 | 0.7098 | 0.7722 |
| 0.1351 | 59.0 | 590 | 0.6097 | 0.7848 |
| 0.1248 | 60.0 | 600 | 0.8102 | 0.7215 |
| 0.1378 | 61.0 | 610 | 0.6786 | 0.7405 |
| 0.1208 | 62.0 | 620 | 0.5467 | 0.8101 |
| 0.0786 | 63.0 | 630 | 0.7059 | 0.7785 |
| 0.1048 | 64.0 | 640 | 0.7945 | 0.7278 |
| 0.0954 | 65.0 | 650 | 0.8258 | 0.7278 |
| 0.121 | 66.0 | 660 | 0.7267 | 0.7532 |
| 0.0921 | 67.0 | 670 | 0.5914 | 0.7911 |
| 0.092 | 68.0 | 680 | 0.6923 | 0.7722 |
| 0.1153 | 69.0 | 690 | 0.6655 | 0.8038 |
| 0.0987 | 70.0 | 700 | 0.6774 | 0.7722 |
| 0.0797 | 71.0 | 710 | 0.6143 | 0.7975 |
| 0.0842 | 72.0 | 720 | 0.7301 | 0.7595 |
| 0.0707 | 73.0 | 730 | 0.7614 | 0.7405 |
| 0.0848 | 74.0 | 740 | 0.7578 | 0.7785 |
| 0.0853 | 75.0 | 750 | 0.7785 | 0.7405 |
| 0.0761 | 76.0 | 760 | 0.8719 | 0.7532 |
| 0.1019 | 77.0 | 770 | 0.5698 | 0.8165 |
| 0.0747 | 78.0 | 780 | 0.7956 | 0.7278 |
| 0.0657 | 79.0 | 790 | 0.5792 | 0.7975 |
| 0.0969 | 80.0 | 800 | 0.5721 | 0.8101 |
| 0.0597 | 81.0 | 810 | 0.7171 | 0.7785 |
| 0.0787 | 82.0 | 820 | 0.7493 | 0.7595 |
| 0.0823 | 83.0 | 830 | 0.6758 | 0.8038 |
| 0.0828 | 84.0 | 840 | 0.8082 | 0.7722 |
| 0.0693 | 85.0 | 850 | 0.7310 | 0.7911 |
| 0.074 | 86.0 | 860 | 0.6492 | 0.8228 |
| 0.0736 | 87.0 | 870 | 0.7373 | 0.7785 |
| 0.0763 | 88.0 | 880 | 0.7254 | 0.7848 |
| 0.0823 | 89.0 | 890 | 0.8261 | 0.7785 |
| 0.0614 | 90.0 | 900 | 0.6919 | 0.7911 |
| 0.0916 | 91.0 | 910 | 0.5884 | 0.7975 |
| 0.0539 | 92.0 | 920 | 0.6960 | 0.7658 |
| 0.0604 | 93.0 | 930 | 0.6502 | 0.7975 |
| 0.0596 | 94.0 | 940 | 0.6058 | 0.7975 |
| 0.0599 | 95.0 | 950 | 0.7166 | 0.7785 |
| 0.0452 | 96.0 | 960 | 0.8093 | 0.7658 |
| 0.0556 | 97.0 | 970 | 0.6589 | 0.8354 |
| 0.0675 | 98.0 | 980 | 0.7471 | 0.8101 |
| 0.0581 | 99.0 | 990 | 0.6568 | 0.8038 |
| 0.0515 | 100.0 | 1000 | 0.7219 | 0.7975 |
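
The accuracy column would typically be produced by a compute_metrics callback passed to the Trainer; a minimal sketch, assuming the evaluate library's accuracy metric (the actual evaluation code is not shown on this card):

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # The Trainer passes (logits, labels); take the argmax class per image.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```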

Framework versions

  • Transformers 4.41.2
  • Pytorch 2.3.0
  • Datasets 2.19.2
  • Tokenizers 0.19.1