
plant-seedlings-model-beit-free-0-6

This model is a fine-tuned version of microsoft/beit-base-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7557
  • Accuracy: 0.7475
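
The checkpoint can be loaded like any Transformers image-classification model. Below is a minimal usage sketch; the repository id is a placeholder (the card does not state the uploader's namespace) and the image path is hypothetical.

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual namespace of this checkpoint.
classifier = pipeline(
    "image-classification",
    model="<user>/plant-seedlings-model-beit-free-0-6",
)

# Classify a local seedling image (path is a placeholder).
predictions = classifier("path/to/seedling.jpg")
print(predictions)  # list of {"label": ..., "score": ...} dicts
```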

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0002
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
  • mixed_precision_training: Native AMP
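
A minimal sketch of expressing the hyperparameters above as Transformers TrainingArguments; the output directory is an assumption, and any option not listed in the card is left at its default.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="plant-seedlings-model-beit-free-0-6",  # assumed, not stated in the card
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    fp16=True,  # "Native AMP" mixed-precision training
)
```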

Training results

Training Loss Epoch Step Validation Loss Accuracy
2.4892 0.2 100 2.4909 0.0751
2.4906 0.39 200 2.4886 0.0756
2.3925 0.59 300 2.3344 0.1537
2.31 0.79 400 2.3306 0.1464
2.2355 0.98 500 2.2335 0.1778
2.2642 1.18 600 2.1889 0.1807
2.0806 1.38 700 2.3229 0.1680
2.1013 1.57 800 2.1519 0.2004
2.0094 1.77 900 2.0611 0.2146
2.0387 1.96 1000 2.0413 0.2210
2.0032 2.16 1100 1.9758 0.2618
1.986 2.36 1200 1.9238 0.2638
2.0885 2.55 1300 1.8944 0.2942
1.8808 2.75 1400 1.9330 0.2868
1.915 2.95 1500 1.8919 0.2814
1.958 3.14 1600 1.8762 0.3114
1.9001 3.34 1700 1.8389 0.3232
1.8572 3.54 1800 1.7978 0.3487
1.9969 3.73 1900 1.9371 0.3089
1.9186 3.93 2000 1.8055 0.3502
1.7591 4.13 2100 1.7695 0.3428
1.8368 4.32 2200 1.7498 0.3502
1.9842 4.52 2300 1.8049 0.3193
1.7606 4.72 2400 1.6730 0.3954
1.7787 4.91 2500 1.7104 0.3777
1.6377 5.11 2600 1.6647 0.3870
1.8834 5.3 2700 1.6325 0.3973
1.6149 5.5 2800 1.6722 0.3787
1.7038 5.7 2900 1.6425 0.3973
1.682 5.89 3000 1.5927 0.4180
1.6326 6.09 3100 1.4982 0.4622
1.5687 6.29 3200 1.4440 0.4774
1.3637 6.48 3300 1.4477 0.4877
1.4079 6.68 3400 1.3827 0.5020
1.3721 6.88 3500 1.4069 0.5010
1.5675 7.07 3600 1.3595 0.5083
1.5725 7.27 3700 1.3790 0.4956
1.4522 7.47 3800 1.3116 0.5378
1.4692 7.66 3900 1.3729 0.4980
1.5073 7.86 4000 1.3799 0.5216
1.2529 8.06 4100 1.2706 0.5486
1.3727 8.25 4200 1.2519 0.5535
1.2451 8.45 4300 1.2595 0.5648
1.339 8.64 4400 1.3614 0.5172
1.2858 8.84 4500 1.3028 0.5393
1.1039 9.04 4600 1.2309 0.5771
1.0351 9.23 4700 1.2678 0.5609
1.1125 9.43 4800 1.2786 0.5624
1.1667 9.63 4900 1.2131 0.5840
1.1386 9.82 5000 1.1359 0.6154
1.1888 10.02 5100 1.1309 0.6041
1.1777 10.22 5200 1.1288 0.6287
1.3693 10.41 5300 1.3827 0.5182
1.1016 10.61 5400 1.2255 0.5594
1.1527 10.81 5500 1.0772 0.6434
1.1039 11.0 5600 1.1032 0.6100
1.2502 11.2 5700 1.1230 0.6169
1.0818 11.39 5800 1.0750 0.6302
1.0872 11.59 5900 1.0397 0.6331
1.0425 11.79 6000 1.0231 0.6483
1.0791 11.98 6100 1.0250 0.6636
0.9736 12.18 6200 1.0879 0.6267
0.9788 12.38 6300 1.1334 0.5968
0.8982 12.57 6400 0.9934 0.6528
1.077 12.77 6500 0.9698 0.6812
1.0347 12.97 6600 1.0265 0.6513
0.9159 13.16 6700 0.9442 0.6788
1.1187 13.36 6800 0.9738 0.6685
0.9624 13.56 6900 1.0008 0.6699
0.922 13.75 7000 0.9502 0.6906
0.9317 13.95 7100 0.9687 0.6758
0.9979 14.15 7200 0.9869 0.6768
0.8362 14.34 7300 0.9220 0.6994
0.8449 14.54 7400 0.9181 0.6861
0.9678 14.73 7500 0.9789 0.6729
0.9119 14.93 7600 0.8879 0.7009
0.9517 15.13 7700 0.8816 0.6994
0.9688 15.32 7800 0.8803 0.7117
0.8625 15.52 7900 0.8782 0.7038
0.9121 15.72 8000 0.8225 0.7191
0.9035 15.91 8100 0.8649 0.7087
0.8762 16.11 8200 0.8427 0.7102
0.7708 16.31 8300 0.8685 0.7117
0.8893 16.5 8400 0.8178 0.7264
0.9584 16.7 8500 0.8709 0.7092
0.757 16.9 8600 0.8244 0.7254
0.8184 17.09 8700 0.8128 0.7240
0.8858 17.29 8800 0.8360 0.7156
0.7116 17.49 8900 0.7952 0.7279
0.9579 17.68 9000 0.8263 0.7274
0.7037 17.88 9100 0.7884 0.7348
1.0359 18.07 9200 0.8118 0.7402
1.067 18.27 9300 0.8203 0.7186
0.8503 18.47 9400 0.7918 0.7362
0.8552 18.66 9500 0.7972 0.7382
0.7498 18.86 9600 0.8038 0.7343
0.8542 19.06 9700 0.7799 0.7333
0.9539 19.25 9800 0.7795 0.7333
0.7369 19.45 9900 0.8103 0.7269
0.6637 19.65 10000 0.7597 0.7441
0.6712 19.84 10100 0.7557 0.7475

Framework versions

  • Transformers 4.28.1
  • Pytorch 2.0.0+cu118
  • Datasets 2.11.0
  • Tokenizers 0.13.3