---
license: other
base_model: nvidia/mit-b0
tags:
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-segments-toolwear
  results: []
---

# segformer-b0-finetuned-segments-toolwear

This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0522
- Mean Iou: 0.3485
- Mean Accuracy: 0.6969
- Overall Accuracy: 0.6969
- Accuracy Unlabeled: nan
- Accuracy Mass: 0.6969
- Iou Unlabeled: 0.0
- Iou Mass: 0.6969

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 45

### Training results

| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Mass | Iou Unlabeled | Iou Mass |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:-------------:|:-------------:|:--------:|
| 0.4117        | 1.25  | 20   | 0.4147          | 0.0579   | 0.1158        | 0.1158           | nan                | 0.1158        | 0.0           | 0.1158   |
| 0.2756        | 2.5   | 40   | 0.2454          | 0.1520   | 0.3040        | 0.3040           | nan                | 0.3040        | 0.0           | 0.3040   |
| 0.2029        | 3.75  | 60   | 0.1873          | 0.3150   | 0.6301        | 0.6301           | nan                | 0.6301        | 0.0           | 0.6301   |
| 0.1506        | 5.0   | 80   | 0.1403          | 0.3616   | 0.7232        | 0.7232           | nan                | 0.7232        | 0.0           | 0.7232   |
| 0.1177        | 6.25  | 100  | 0.1077          | 0.1634   | 0.3269        | 0.3269           | nan                | 0.3269        | 0.0           | 0.3269   |
| 0.088         | 7.5   | 120  | 0.0972          | 0.2268   | 0.4536        | 0.4536           | nan                | 0.4536        | 0.0           | 0.4536   |
| 0.0796        | 8.75  | 140  | 0.0895          | 0.3776   | 0.7551        | 0.7551           | nan                | 0.7551        | 0.0           | 0.7551   |
| 0.0702        | 10.0  | 160  | 0.0754          | 0.3785   | 0.7570        | 0.7570           | nan                | 0.7570        | 0.0           | 0.7570   |
| 0.0643        | 11.25 | 180  | 0.0654          | 0.3207   | 0.6414        | 0.6414           | nan                | 0.6414        | 0.0           | 0.6414   |
| 0.0566        | 12.5  | 200  | 0.0635          | 0.3408   | 0.6815        | 0.6815           | nan                | 0.6815        | 0.0           | 0.6815   |
| 0.0467        | 13.75 | 220  | 0.0684          | 0.3971   | 0.7942        | 0.7942           | nan                | 0.7942        | 0.0           | 0.7942   |
| 0.0481        | 15.0  | 240  | 0.0599          | 0.3713   | 0.7425        | 0.7425           | nan                | 0.7425        | 0.0           | 0.7425   |
| 0.0465        | 16.25 | 260  | 0.0603          | 0.3121   | 0.6241        | 0.6241           | nan                | 0.6241        | 0.0           | 0.6241   |
| 0.0409        | 17.5  | 280  | 0.0569          | 0.3441   | 0.6882        | 0.6882           | nan                | 0.6882        | 0.0           | 0.6882   |
| 0.0392        | 18.75 | 300  | 0.0565          | 0.3568   | 0.7135        | 0.7135           | nan                | 0.7135        | 0.0           | 0.7135   |
| 0.0287        | 20.0  | 320  | 0.0571          | 0.3237   | 0.6474        | 0.6474           | nan                | 0.6474        | 0.0           | 0.6474   |
| 0.032         | 21.25 | 340  | 0.0574          | 0.3209   | 0.6419        | 0.6419           | nan                | 0.6419        | 0.0           | 0.6419   |
| 0.0308        | 22.5  | 360  | 0.0551          | 0.3371   | 0.6742        | 0.6742           | nan                | 0.6742        | 0.0           | 0.6742   |
| 0.0274        | 23.75 | 380  | 0.0546          | 0.3561   | 0.7122        | 0.7122           | nan                | 0.7122        | 0.0           | 0.7122   |
| 0.0246        | 25.0  | 400  | 0.0534          | 0.3491   | 0.6981        | 0.6981           | nan                | 0.6981        | 0.0           | 0.6981   |
| 0.0252        | 26.25 | 420  | 0.0533          | 0.3661   | 0.7322        | 0.7322           | nan                | 0.7322        | 0.0           | 0.7322   |
| 0.0251        | 27.5  | 440  | 0.0542          | 0.3507   | 0.7014        | 0.7014           | nan                | 0.7014        | 0.0           | 0.7014   |
| 0.027         | 28.75 | 460  | 0.0527          | 0.3531   | 0.7062        | 0.7062           | nan                | 0.7062        | 0.0           | 0.7062   |
| 0.0259        | 30.0  | 480  | 0.0539          | 0.3757   | 0.7514        | 0.7514           | nan                | 0.7514        | 0.0           | 0.7514   |
| 0.0212        | 31.25 | 500  | 0.0537          | 0.3283   | 0.6565        | 0.6565           | nan                | 0.6565        | 0.0           | 0.6565   |
| 0.0223        | 32.5  | 520  | 0.0517          | 0.3511   | 0.7022        | 0.7022           | nan                | 0.7022        | 0.0           | 0.7022   |
| 0.027         | 33.75 | 540  | 0.0504          | 0.3552   | 0.7103        | 0.7103           | nan                | 0.7103        | 0.0           | 0.7103   |
| 0.026         | 35.0  | 560  | 0.0516          | 0.3596   | 0.7192        | 0.7192           | nan                | 0.7192        | 0.0           | 0.7192   |
| 0.0239        | 36.25 | 580  | 0.0525          | 0.3559   | 0.7119        | 0.7119           | nan                | 0.7119        | 0.0           | 0.7119   |
| 0.0218        | 37.5  | 600  | 0.0532          | 0.3374   | 0.6748        | 0.6748           | nan                | 0.6748        | 0.0           | 0.6748   |
| 0.0214        | 38.75 | 620  | 0.0513          | 0.3591   | 0.7183        | 0.7183           | nan                | 0.7183        | 0.0           | 0.7183   |
| 0.0187        | 40.0  | 640  | 0.0517          | 0.3660   | 0.7320        | 0.7320           | nan                | 0.7320        | 0.0           | 0.7320   |
| 0.0201        | 41.25 | 660  | 0.0521          | 0.3647   | 0.7295        | 0.7295           | nan                | 0.7295        | 0.0           | 0.7295   |
| 0.024         | 42.5  | 680  | 0.0520          | 0.3485   | 0.6970        | 0.6970           | nan                | 0.6970        | 0.0           | 0.6970   |
| 0.0198        | 43.75 | 700  | 0.0516          | 0.3623   | 0.7247        | 0.7247           | nan                | 0.7247        | 0.0           | 0.7247   |
| 0.0236        | 45.0  | 720  | 0.0522          | 0.3485   | 0.6969        | 0.6969           | nan                | 0.6969        | 0.0           | 0.6969   |

### Framework versions

- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
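
## How to use (illustrative sketch)

The card does not ship a usage example, so the following is a hedged sketch of how a SegFormer checkpoint like this one is typically run for inference with the Transformers API. A randomly initialised MiT-b0-sized model (the default `SegformerConfig`) stands in for the fine-tuned weights so the snippet runs offline; in practice you would load this repository's checkpoint via `SegformerForSemanticSegmentation.from_pretrained(...)` with its Hub id, and preprocess real images with `SegformerImageProcessor`.

```python
import torch
from transformers import SegformerConfig, SegformerForSemanticSegmentation

# The default SegformerConfig corresponds to the MiT-b0 backbone size.
# Randomly initialised weights are used here only so the sketch runs offline.
config = SegformerConfig(num_labels=2)  # two classes: unlabeled, mass
model = SegformerForSemanticSegmentation(config)
model.eval()

# A stand-in for a preprocessed (normalised) RGB batch.
pixel_values = torch.randn(1, 3, 128, 128)
with torch.no_grad():
    logits = model(pixel_values=pixel_values).logits  # (1, num_labels, H/4, W/4)

# SegFormer predicts at 1/4 input resolution; upsample before the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=pixel_values.shape[-2:], mode="bilinear", align_corners=False
)
mask = upsampled.argmax(dim=1)  # (1, 128, 128) integer class map
```

The same post-processing (bilinear upsampling followed by `argmax`) applies unchanged once the real fine-tuned weights are loaded.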
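
## Note on the reported metrics

The metric pattern in the table — `Accuracy Unlabeled: nan`, `Iou Unlabeled: 0.0`, Mean Accuracy equal to Overall Accuracy, and Mean Iou at roughly half of Mean Accuracy — is what mean-IoU-style evaluation produces when the ground truth contains only the "mass" class: the unlabeled class has no ground-truth pixels (accuracy is undefined, hence nan) but is still sometimes predicted (so its IoU is 0, which drags the two-class mean down to about Iou Mass / 2). The sketch below, written as a plain NumPy illustration rather than the exact evaluation code used here, reproduces that pattern on a toy example.

```python
import numpy as np

def per_class_iou_and_accuracy(pred, target, num_classes):
    """Per-class IoU and accuracy from flat integer label arrays.

    A class absent from the ground truth gets accuracy nan, which
    np.nanmean then skips - matching the nan entries in the table above.
    """
    ious, accs = [], []
    for c in range(num_classes):
        pred_c, target_c = pred == c, target == c
        inter = np.logical_and(pred_c, target_c).sum()
        union = np.logical_or(pred_c, target_c).sum()
        ious.append(inter / union if union else float("nan"))
        accs.append(inter / target_c.sum() if target_c.sum() else float("nan"))
    return ious, accs

# Toy case mirroring the table: no "unlabeled" pixels in the ground truth,
# but the model predicts some.
pred = np.array([0, 0, 1, 1])    # two pixels wrongly predicted as unlabeled
target = np.array([1, 1, 1, 1])  # ground truth is all "mass"
ious, accs = per_class_iou_and_accuracy(pred, target, num_classes=2)
mean_iou = np.nanmean(ious)  # IoU unlabeled = 0.0, so mean is half the mass IoU
mean_acc = np.nanmean(accs)  # accuracy unlabeled = nan is skipped
```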