
meat_calssify_fresh_crop_fixed_epoch100_V_0_9

This model is a fine-tuned version of google/vit-base-patch16-224-in21k on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5517
  • Accuracy: 0.8038
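
As a usage illustration, the snippet below is a minimal inference sketch using the transformers image-classification pipeline. The repository id and the image path are placeholders (assumptions), since the card does not state where the checkpoint is hosted.

```python
# Minimal inference sketch; the repo id and image path below are placeholders,
# not confirmed by this model card.
from transformers import pipeline
from PIL import Image

# Transformers 4.41.2 / PyTorch 2.3.1 are the versions listed under "Framework versions".
model_id = "your-username/meat_calssify_fresh_crop_fixed_epoch100_V_0_9"  # placeholder repo id

classifier = pipeline("image-classification", model=model_id)

image = Image.open("example_meat_crop.jpg")  # any RGB image crop
predictions = classifier(image)
print(predictions)  # e.g. [{"label": "...", "score": 0.97}, ...]
```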

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 64
  • eval_batch_size: 1
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 100
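
For reproducibility, here is a rough sketch of how these hyperparameters map onto transformers TrainingArguments. Only the numeric values come from the list above; the output directory, evaluation strategy, and the commented Trainer wiring are assumptions.

```python
# Sketch of the training setup implied by the hyperparameters above; output_dir,
# the datasets, and the evaluation strategy are assumptions, only the numbers are from the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="meat_calssify_fresh_crop_fixed_epoch100_V_0_9",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=100,
    evaluation_strategy="epoch",  # assumption: the results table reports one validation row per epoch
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-8 matches the default optimizer settings in transformers.
# trainer = Trainer(model=model, args=training_args, train_dataset=..., eval_dataset=..., compute_metrics=...)
```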

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1101 | 1.0 | 10 | 1.1019 | 0.3038 |
| 1.087 | 2.0 | 20 | 1.0748 | 0.4557 |
| 1.0593 | 3.0 | 30 | 1.0543 | 0.5 |
| 1.0235 | 4.0 | 40 | 1.0289 | 0.4873 |
| 0.9755 | 5.0 | 50 | 1.0048 | 0.4873 |
| 0.9116 | 6.0 | 60 | 0.9857 | 0.5 |
| 0.9154 | 7.0 | 70 | 0.9614 | 0.4937 |
| 0.8318 | 8.0 | 80 | 0.9839 | 0.5506 |
| 0.795 | 9.0 | 90 | 0.9393 | 0.5570 |
| 0.7544 | 10.0 | 100 | 0.9061 | 0.5886 |
| 0.6596 | 11.0 | 110 | 0.8780 | 0.6392 |
| 0.6111 | 12.0 | 120 | 0.8170 | 0.6392 |
| 0.5791 | 13.0 | 130 | 0.8801 | 0.6329 |
| 0.5287 | 14.0 | 140 | 0.8099 | 0.6709 |
| 0.4966 | 15.0 | 150 | 0.7827 | 0.6582 |
| 0.4842 | 16.0 | 160 | 0.8007 | 0.6709 |
| 0.4059 | 17.0 | 170 | 0.7380 | 0.6582 |
| 0.3905 | 18.0 | 180 | 0.6695 | 0.7278 |
| 0.3681 | 19.0 | 190 | 0.7496 | 0.6899 |
| 0.3618 | 20.0 | 200 | 0.7554 | 0.7089 |
| 0.3446 | 21.0 | 210 | 0.7603 | 0.6962 |
| 0.3625 | 22.0 | 220 | 0.7402 | 0.6772 |
| 0.3356 | 23.0 | 230 | 0.7598 | 0.6582 |
| 0.2758 | 24.0 | 240 | 0.7952 | 0.6899 |
| 0.287 | 25.0 | 250 | 0.8296 | 0.6772 |
| 0.3334 | 26.0 | 260 | 0.9352 | 0.6456 |
| 0.2925 | 27.0 | 270 | 0.8240 | 0.6772 |
| 0.2732 | 28.0 | 280 | 0.7479 | 0.7278 |
| 0.2816 | 29.0 | 290 | 0.7068 | 0.7152 |
| 0.2349 | 30.0 | 300 | 0.6218 | 0.7658 |
| 0.2282 | 31.0 | 310 | 0.6681 | 0.7342 |
| 0.2297 | 32.0 | 320 | 0.9084 | 0.6709 |
| 0.2316 | 33.0 | 330 | 0.8716 | 0.6772 |
| 0.2182 | 34.0 | 340 | 0.7289 | 0.7342 |
| 0.2159 | 35.0 | 350 | 0.6567 | 0.7405 |
| 0.2329 | 36.0 | 360 | 0.6947 | 0.7468 |
| 0.155 | 37.0 | 370 | 0.6736 | 0.7532 |
| 0.1901 | 38.0 | 380 | 0.8000 | 0.7025 |
| 0.1767 | 39.0 | 390 | 0.7780 | 0.7342 |
| 0.1718 | 40.0 | 400 | 0.6616 | 0.7595 |
| 0.1558 | 41.0 | 410 | 0.7514 | 0.7025 |
| 0.1564 | 42.0 | 420 | 0.7801 | 0.7278 |
| 0.2172 | 43.0 | 430 | 0.7421 | 0.7342 |
| 0.1703 | 44.0 | 440 | 0.7043 | 0.7595 |
| 0.1475 | 45.0 | 450 | 0.6865 | 0.7658 |
| 0.1174 | 46.0 | 460 | 0.5958 | 0.7975 |
| 0.1586 | 47.0 | 470 | 0.6927 | 0.7785 |
| 0.1515 | 48.0 | 480 | 0.8407 | 0.7089 |
| 0.1593 | 49.0 | 490 | 0.6465 | 0.7658 |
| 0.1777 | 50.0 | 500 | 0.7899 | 0.7215 |
| 0.1205 | 51.0 | 510 | 0.5897 | 0.7722 |
| 0.1375 | 52.0 | 520 | 0.6837 | 0.7785 |
| 0.1564 | 53.0 | 530 | 0.7868 | 0.7152 |
| 0.1481 | 54.0 | 540 | 0.7252 | 0.7722 |
| 0.1073 | 55.0 | 550 | 0.6796 | 0.7658 |
| 0.1549 | 56.0 | 560 | 0.7610 | 0.7152 |
| 0.1351 | 57.0 | 570 | 0.7985 | 0.7342 |
| 0.1235 | 58.0 | 580 | 0.6534 | 0.7595 |
| 0.1306 | 59.0 | 590 | 0.7046 | 0.7975 |
| 0.1464 | 60.0 | 600 | 0.7280 | 0.7595 |
| 0.1724 | 61.0 | 610 | 0.7066 | 0.7848 |
| 0.115 | 62.0 | 620 | 0.7080 | 0.7532 |
| 0.0842 | 63.0 | 630 | 0.6463 | 0.7848 |
| 0.0883 | 64.0 | 640 | 0.8290 | 0.7342 |
| 0.0901 | 65.0 | 650 | 0.7097 | 0.7595 |
| 0.1174 | 66.0 | 660 | 0.6627 | 0.7658 |
| 0.1167 | 67.0 | 670 | 0.7519 | 0.7722 |
| 0.0795 | 68.0 | 680 | 0.6104 | 0.7975 |
| 0.0583 | 69.0 | 690 | 0.7621 | 0.7848 |
| 0.0973 | 70.0 | 700 | 0.7309 | 0.7658 |
| 0.0909 | 71.0 | 710 | 0.9068 | 0.7215 |
| 0.0931 | 72.0 | 720 | 0.7453 | 0.7658 |
| 0.1101 | 73.0 | 730 | 0.8395 | 0.7089 |
| 0.0867 | 74.0 | 740 | 0.6816 | 0.7722 |
| 0.1154 | 75.0 | 750 | 0.7723 | 0.7405 |
| 0.1016 | 76.0 | 760 | 0.7334 | 0.7785 |
| 0.0821 | 77.0 | 770 | 0.7354 | 0.7722 |
| 0.0624 | 78.0 | 780 | 0.5303 | 0.8544 |
| 0.0698 | 79.0 | 790 | 0.7409 | 0.7658 |
| 0.086 | 80.0 | 800 | 0.6524 | 0.8038 |
| 0.072 | 81.0 | 810 | 0.7530 | 0.7848 |
| 0.0656 | 82.0 | 820 | 0.7409 | 0.7785 |
| 0.0909 | 83.0 | 830 | 0.7190 | 0.7848 |
| 0.0821 | 84.0 | 840 | 0.7085 | 0.7848 |
| 0.0618 | 85.0 | 850 | 0.6801 | 0.7658 |
| 0.0943 | 86.0 | 860 | 0.6859 | 0.7595 |
| 0.0787 | 87.0 | 870 | 0.6259 | 0.7975 |
| 0.0691 | 88.0 | 880 | 0.7148 | 0.7911 |
| 0.0494 | 89.0 | 890 | 0.7675 | 0.7785 |
| 0.0767 | 90.0 | 900 | 0.7293 | 0.7911 |
| 0.0861 | 91.0 | 910 | 0.6653 | 0.7975 |
| 0.0535 | 92.0 | 920 | 0.6421 | 0.8038 |
| 0.0574 | 93.0 | 930 | 0.7444 | 0.7911 |
| 0.0567 | 94.0 | 940 | 0.4409 | 0.8671 |
| 0.0759 | 95.0 | 950 | 0.5884 | 0.7975 |
| 0.0407 | 96.0 | 960 | 0.6606 | 0.7848 |
| 0.0624 | 97.0 | 970 | 0.5409 | 0.8354 |
| 0.0586 | 98.0 | 980 | 0.5585 | 0.7975 |
| 0.0413 | 99.0 | 990 | 0.6347 | 0.7911 |
| 0.0597 | 100.0 | 1000 | 0.5517 | 0.8038 |
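
The Accuracy column is the evaluation metric reported each epoch. Below is a minimal sketch of how such a metric is commonly computed for a Trainer run with the evaluate library; the exact metric code used for this model is not shown on the card.

```python
# Typical accuracy computation passed to Trainer via compute_metrics; a sketch,
# not this card's confirmed code.
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```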

Framework versions

  • Transformers 4.41.2
  • Pytorch 2.3.1
  • Datasets 2.20.0
  • Tokenizers 0.19.1