---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: plant-seedlings-model-ConvNet-all-train
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9429097605893186
---

# plant-seedlings-model-ConvNet-all-train

This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2056
- Accuracy: 0.9429

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 14
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4109        | 0.25  | 100  | 0.5246          | 0.8195   |
| 0.248         | 0.49  | 200  | 0.4594          | 0.8459   |
| 0.3389        | 0.74  | 300  | 0.4443          | 0.8551   |
| 0.4217        | 0.98  | 400  | 0.4500          | 0.8490   |
| 0.2815        | 1.23  | 500  | 0.3939          | 0.8588   |
| 0.3077        | 1.47  | 600  | 0.3813          | 0.8643   |
| 0.5098        | 1.72  | 700  | 0.4276          | 0.8576   |
| 0.3191        | 1.97  | 800  | 0.4218          | 0.8570   |
| 0.2761        | 2.21  | 900  | 0.3404          | 0.8883   |
| 0.2184        | 2.46  | 1000 | 0.3226          | 0.8889   |
| 0.3106        | 2.7   | 1100 | 0.3621          | 0.8729   |
| 0.3118        | 2.95  | 1200 | 0.3656          | 0.8797   |
| 0.2857        | 3.19  | 1300 | 0.3123          | 0.9012   |
| 0.2193        | 3.44  | 1400 | 0.2907          | 0.9048   |
| 0.2959        | 3.69  | 1500 | 0.3544          | 0.8840   |
| 0.3176        | 3.93  | 1600 | 0.3389          | 0.8877   |
| 0.2927        | 4.18  | 1700 | 0.3418          | 0.8864   |
| 0.2719        | 4.42  | 1800 | 0.3558          | 0.8821   |
| 0.2176        | 4.67  | 1900 | 0.3374          | 0.8981   |
| 0.1912        | 4.91  | 2000 | 0.3092          | 0.8999   |
| 0.2272        | 5.16  | 2100 | 0.2902          | 0.9128   |
| 0.175         | 5.41  | 2200 | 0.3002          | 0.9134   |
| 0.1513        | 5.65  | 2300 | 0.3356          | 0.8999   |
| 0.1439        | 5.9   | 2400 | 0.2954          | 0.9061   |
| 0.2341        | 6.14  | 2500 | 0.3343          | 0.8993   |
| 0.2178        | 6.39  | 2600 | 0.2891          | 0.9122   |
| 0.1731        | 6.63  | 2700 | 0.3235          | 0.9030   |
| 0.19          | 6.88  | 2800 | 0.2938          | 0.9042   |
| 0.1168        | 7.13  | 2900 | 0.2937          | 0.9110   |
| 0.1528        | 7.37  | 3000 | 0.2963          | 0.9104   |
| 0.1374        | 7.62  | 3100 | 0.2929          | 0.9085   |
| 0.2204        | 7.86  | 3200 | 0.3257          | 0.9048   |
| 0.1519        | 8.11  | 3300 | 0.2683          | 0.9171   |
| 0.0711        | 8.35  | 3400 | 0.2609          | 0.9251   |
| 0.1019        | 8.6   | 3500 | 0.2523          | 0.9251   |
| 0.1764        | 8.85  | 3600 | 0.2769          | 0.9202   |
| 0.0849        | 9.09  | 3700 | 0.2668          | 0.9214   |
| 0.2077        | 9.34  | 3800 | 0.2914          | 0.9165   |
| 0.2543        | 9.58  | 3900 | 0.2507          | 0.9251   |
| 0.0347        | 9.83  | 4000 | 0.2333          | 0.9269   |
| 0.0731        | 10.07 | 4100 | 0.2598          | 0.9269   |
| 0.238         | 10.32 | 4200 | 0.2675          | 0.9294   |
| 0.1114        | 10.57 | 4300 | 0.2317          | 0.9269   |
| 0.0836        | 10.81 | 4400 | 0.2344          | 0.9288   |
| 0.0598        | 11.06 | 4500 | 0.2499          | 0.9276   |
| 0.0488        | 11.3  | 4600 | 0.2361          | 0.9288   |
| 0.1437        | 11.55 | 4700 | 0.2551          | 0.9282   |
| 0.0773        | 11.79 | 4800 | 0.2276          | 0.9294   |
| 0.1013        | 12.04 | 4900 | 0.2537          | 0.9288   |
| 0.0943        | 12.29 | 5000 | 0.2368          | 0.9331   |
| 0.0538        | 12.53 | 5100 | 0.2157          | 0.9349   |
| 0.0425        | 12.78 | 5200 | 0.2330          | 0.9411   |
| 0.1301        | 13.02 | 5300 | 0.2564          | 0.9331   |
| 0.062         | 13.27 | 5400 | 0.2193          | 0.9417   |
| 0.1012        | 13.51 | 5500 | 0.1873          | 0.9466   |
| 0.1643        | 13.76 | 5600 | 0.2056          | 0.9429   |

### Framework versions

- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
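### Reproducing the training configuration

For anyone re-running the fine-tune, the hyperparameters listed above map onto `transformers.TrainingArguments` keyword names roughly as follows. This is a sketch, not the exact training script; the dict is plain Python so it can be inspected without any deep-learning dependencies installed.

```python
# Hyperparameters from the "Training hyperparameters" section, expressed as
# keyword arguments in the style of transformers.TrainingArguments.
training_kwargs = {
    "learning_rate": 2e-4,               # 0.0002
    "per_device_train_batch_size": 16,   # train_batch_size
    "per_device_eval_batch_size": 8,     # eval_batch_size
    "seed": 42,
    "lr_scheduler_type": "linear",
    "num_train_epochs": 14,
    "fp16": True,                        # mixed_precision_training: Native AMP
}
```

These can be passed as `TrainingArguments(output_dir="...", **training_kwargs)`. The Adam settings (betas=(0.9, 0.999), epsilon=1e-08) are the Trainer defaults, so they need no explicit arguments.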
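### How to use

A minimal inference sketch using the `transformers` image-classification pipeline. The `model_id` below is a placeholder based on this card's name; substitute the full Hub path (`<user>/plant-seedlings-model-ConvNet-all-train`) or a local checkpoint directory, and note that `classify_seedling` is a hypothetical helper, not part of any library.

```python
def classify_seedling(image, model_id="plant-seedlings-model-ConvNet-all-train"):
    """Classify a plant-seedling image with the fine-tuned ConvNeXt checkpoint.

    `image` may be a local file path, a URL, or a PIL.Image. Returns a list of
    {"label": ..., "score": ...} dicts, best guess first.
    """
    # Imported lazily so the module can be loaded without transformers installed.
    from transformers import pipeline

    classifier = pipeline("image-classification", model=model_id)
    return classifier(image)
```

Building the pipeline downloads the checkpoint on first use; for repeated calls, construct the pipeline once and reuse it rather than calling this helper per image.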