# segformer-finetuned-4ss1st3r_s3gs3m_24Jan_gris-10k-steps
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the blzncz/4ss1st3r_s3gs3m_24Jan_gris dataset. It achieves the following results on the evaluation set (an inference sketch follows the metrics):
- Loss: 0.2539
- Mean Iou: 0.5001
- Mean Accuracy: 0.7682
- Overall Accuracy: 0.9671
- Accuracy Bg: nan
- Accuracy Fallo cohesivo: 0.9929
- Accuracy Fallo malla: 0.5837
- Accuracy Fallo adhesivo: 0.8806
- Accuracy Fallo burbuja: 0.6154
- Iou Bg: 0.0
- Iou Fallo cohesivo: 0.9663
- Iou Fallo malla: 0.5505
- Iou Fallo adhesivo: 0.4321
- Iou Fallo burbuja: 0.5515
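A note on reading these numbers: the class names are the dataset's Spanish labels (fallo cohesivo = cohesive failure, fallo malla = mesh failure, fallo adhesivo = adhesive failure, fallo burbuja = bubble failure). Accuracy Bg is nan, which typically indicates the Bg label does not occur in the evaluation masks, so Mean Accuracy averages only the four defined class accuracies ((0.9929 + 0.5837 + 0.8806 + 0.6154) / 4 ≈ 0.7682), while Mean Iou averages all five IoUs including the 0.0 for Bg ((0.0 + 0.9663 + 0.5505 + 0.4321 + 0.5515) / 5 ≈ 0.5001).

Since the card does not include a usage section, here is a minimal inference sketch using the standard Transformers SegFormer classes. The repo id and input filename are placeholders, not confirmed by this card:

```python
from PIL import Image
import torch
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

# Hypothetical repo id for illustration; replace with the actual checkpoint location.
checkpoint = "blzncz/segformer-finetuned-4ss1st3r_s3gs3m_24Jan_gris-10k-steps"

processor = SegformerImageProcessor.from_pretrained(checkpoint)
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)
model.eval()

image = Image.open("sample.png").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, H/4, W/4)

# SegFormer predicts at 1/4 of the input resolution; upsample before the argmax.
upsampled = torch.nn.functional.interpolate(
    logits,
    size=image.size[::-1],  # PIL size is (W, H); interpolate expects (H, W)
    mode="bilinear",
    align_corners=False,
)
pred_map = upsampled.argmax(dim=1)[0]  # (H, W) tensor of class indices
```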
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- training_steps: 10000
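These values map directly onto `transformers.TrainingArguments`. A minimal sketch, assuming the standard `Trainer` setup; `output_dir` and anything not listed above are assumptions, and the full training script (model, data loading, metrics) is not documented in this card:

```python
from transformers import TrainingArguments

# Sketch of the listed hyperparameters; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="segformer-finetuned-4ss1st3r_s3gs3m_24Jan_gris-10k-steps",
    learning_rate=6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=1337,
    lr_scheduler_type="polynomial",
    max_steps=10_000,
    adam_beta1=0.9,    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the
    adam_beta2=0.999,  # Trainer defaults, spelled out here for clarity
    adam_epsilon=1e-8,
)
```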
### Training results
Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Bg | Accuracy Fallo cohesivo | Accuracy Fallo malla | Accuracy Fallo adhesivo | Accuracy Fallo burbuja | Iou Bg | Iou Fallo cohesivo | Iou Fallo malla | Iou Fallo adhesivo | Iou Fallo burbuja |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0.1186 | 1.0 | 259 | 0.1688 | 0.5045 | 0.6750 | 0.9611 | nan | 0.9940 | 0.5027 | 0.8539 | 0.3494 | 0.0 | 0.9600 | 0.4717 | 0.7563 | 0.3344 |
0.0669 | 2.0 | 518 | 0.1603 | 0.4270 | 0.7755 | 0.9501 | nan | 0.9685 | 0.7091 | 0.8964 | 0.5282 | 0.0 | 0.9490 | 0.5466 | 0.1627 | 0.4767 |
0.0608 | 3.0 | 777 | 0.1863 | 0.4142 | 0.7612 | 0.9458 | nan | 0.9703 | 0.5906 | 0.9321 | 0.5517 | 0.0 | 0.9467 | 0.5366 | 0.0891 | 0.4985 |
0.0551 | 4.0 | 1036 | 0.1654 | 0.4515 | 0.7496 | 0.9620 | nan | 0.9879 | 0.5881 | 0.8763 | 0.5462 | 0.0 | 0.9620 | 0.5560 | 0.2349 | 0.5043 |
0.0462 | 5.0 | 1295 | 0.2067 | 0.4267 | 0.7598 | 0.9487 | nan | 0.9752 | 0.5450 | 0.8796 | 0.6392 | 0.0 | 0.9502 | 0.5377 | 0.0838 | 0.5620 |
0.0445 | 6.0 | 1554 | 0.1565 | 0.4557 | 0.7685 | 0.9627 | nan | 0.9873 | 0.5954 | 0.8571 | 0.6343 | 0.0 | 0.9636 | 0.5689 | 0.1837 | 0.5623 |
0.039 | 7.0 | 1813 | 0.1523 | 0.4576 | 0.8005 | 0.9609 | nan | 0.9817 | 0.6535 | 0.9036 | 0.6630 | 0.0 | 0.9612 | 0.5885 | 0.1643 | 0.5738 |
0.0367 | 8.0 | 2072 | 0.1954 | 0.4573 | 0.7462 | 0.9614 | nan | 0.9917 | 0.4963 | 0.8762 | 0.6206 | 0.0 | 0.9612 | 0.4850 | 0.2790 | 0.5612 |
0.0352 | 9.0 | 2331 | 0.2244 | 0.4757 | 0.7542 | 0.9636 | nan | 0.9932 | 0.5098 | 0.8867 | 0.6269 | 0.0 | 0.9629 | 0.5013 | 0.3466 | 0.5674 |
0.0357 | 10.0 | 2590 | 0.2119 | 0.4687 | 0.7394 | 0.9645 | nan | 0.9934 | 0.5378 | 0.8710 | 0.5552 | 0.0 | 0.9641 | 0.5209 | 0.3377 | 0.5207 |
0.0352 | 11.0 | 2849 | 0.1957 | 0.4469 | 0.7903 | 0.9584 | nan | 0.9791 | 0.6656 | 0.9237 | 0.5927 | 0.0 | 0.9591 | 0.5829 | 0.1459 | 0.5465 |
0.032 | 12.0 | 3108 | 0.1811 | 0.4521 | 0.8058 | 0.9594 | nan | 0.9797 | 0.6634 | 0.9338 | 0.6464 | 0.0 | 0.9608 | 0.5929 | 0.1397 | 0.5671 |
0.0299 | 13.0 | 3367 | 0.2403 | 0.4298 | 0.7596 | 0.9557 | nan | 0.9827 | 0.5553 | 0.9271 | 0.5733 | 0.0 | 0.9572 | 0.5336 | 0.1272 | 0.5311 |
0.0292 | 14.0 | 3626 | 0.2233 | 0.4667 | 0.7638 | 0.9642 | nan | 0.9900 | 0.5759 | 0.8508 | 0.6385 | 0.0 | 0.9638 | 0.5475 | 0.2511 | 0.5709 |
0.0264 | 15.0 | 3885 | 0.2382 | 0.4431 | 0.7690 | 0.9594 | nan | 0.9865 | 0.5492 | 0.9139 | 0.6267 | 0.0 | 0.9602 | 0.5326 | 0.1568 | 0.5658 |
0.0273 | 16.0 | 4144 | 0.2339 | 0.4382 | 0.7751 | 0.9570 | nan | 0.9818 | 0.5876 | 0.9193 | 0.6115 | 0.0 | 0.9584 | 0.5419 | 0.1352 | 0.5554 |
0.0249 | 17.0 | 4403 | 0.2078 | 0.4950 | 0.7846 | 0.9669 | nan | 0.9925 | 0.5784 | 0.9197 | 0.6477 | 0.0 | 0.9663 | 0.5508 | 0.3921 | 0.5658 |
0.0242 | 18.0 | 4662 | 0.2495 | 0.4809 | 0.7706 | 0.9645 | nan | 0.9922 | 0.5392 | 0.9007 | 0.6503 | 0.0 | 0.9640 | 0.5147 | 0.3577 | 0.5682 |
0.0241 | 19.0 | 4921 | 0.2117 | 0.4491 | 0.7954 | 0.9589 | nan | 0.9815 | 0.6243 | 0.9423 | 0.6336 | 0.0 | 0.9597 | 0.5703 | 0.1508 | 0.5647 |
0.0243 | 20.0 | 5180 | 0.1989 | 0.4754 | 0.8013 | 0.9656 | nan | 0.9875 | 0.6416 | 0.9194 | 0.6568 | 0.0 | 0.9658 | 0.5879 | 0.2482 | 0.5751 |
0.0251 | 21.0 | 5439 | 0.2095 | 0.4607 | 0.7962 | 0.9629 | nan | 0.9853 | 0.6324 | 0.9337 | 0.6334 | 0.0 | 0.9634 | 0.5732 | 0.2073 | 0.5598 |
0.0238 | 22.0 | 5698 | 0.2063 | 0.4747 | 0.7927 | 0.9650 | nan | 0.9873 | 0.6383 | 0.9158 | 0.6293 | 0.0 | 0.9645 | 0.5779 | 0.2744 | 0.5569 |
0.0225 | 23.0 | 5957 | 0.2260 | 0.4656 | 0.7915 | 0.9640 | nan | 0.9880 | 0.6003 | 0.9106 | 0.6672 | 0.0 | 0.9642 | 0.5647 | 0.2241 | 0.5752 |
0.0231 | 24.0 | 6216 | 0.2454 | 0.4688 | 0.7783 | 0.9645 | nan | 0.9891 | 0.6019 | 0.9197 | 0.6024 | 0.0 | 0.9643 | 0.5591 | 0.2766 | 0.5442 |
0.0218 | 25.0 | 6475 | 0.2482 | 0.5143 | 0.7752 | 0.9665 | nan | 0.9919 | 0.5896 | 0.9136 | 0.6057 | 0.0 | 0.9655 | 0.5433 | 0.5236 | 0.5390 |
0.0223 | 26.0 | 6734 | 0.2474 | 0.4648 | 0.7660 | 0.9642 | nan | 0.9903 | 0.5784 | 0.9054 | 0.5898 | 0.0 | 0.9639 | 0.5502 | 0.2768 | 0.5334 |
0.0238 | 27.0 | 6993 | 0.2475 | 0.4717 | 0.7669 | 0.9651 | nan | 0.9920 | 0.5597 | 0.9019 | 0.6138 | 0.0 | 0.9648 | 0.5379 | 0.3087 | 0.5470 |
0.021 | 28.0 | 7252 | 0.2490 | 0.4740 | 0.7708 | 0.9649 | nan | 0.9919 | 0.5573 | 0.9116 | 0.6222 | 0.0 | 0.9645 | 0.5362 | 0.3142 | 0.5553 |
0.0208 | 29.0 | 7511 | 0.2369 | 0.4633 | 0.7669 | 0.9653 | nan | 0.9896 | 0.6134 | 0.8846 | 0.5799 | 0.0 | 0.9652 | 0.5762 | 0.2422 | 0.5327 |
0.0202 | 30.0 | 7770 | 0.2498 | 0.4863 | 0.7654 | 0.9655 | nan | 0.9930 | 0.5488 | 0.8931 | 0.6267 | 0.0 | 0.9647 | 0.5273 | 0.3811 | 0.5582 |
0.021 | 31.0 | 8029 | 0.2534 | 0.4799 | 0.7729 | 0.9657 | nan | 0.9915 | 0.5794 | 0.9043 | 0.6164 | 0.0 | 0.9652 | 0.5474 | 0.3368 | 0.5502 |
0.0202 | 32.0 | 8288 | 0.2626 | 0.4771 | 0.7627 | 0.9653 | nan | 0.9930 | 0.5464 | 0.9014 | 0.6098 | 0.0 | 0.9647 | 0.5272 | 0.3464 | 0.5474 |
0.0201 | 33.0 | 8547 | 0.2710 | 0.4903 | 0.7673 | 0.9659 | nan | 0.9936 | 0.5431 | 0.8994 | 0.6329 | 0.0 | 0.9653 | 0.5221 | 0.3997 | 0.5645 |
0.0195 | 34.0 | 8806 | 0.2589 | 0.4915 | 0.7662 | 0.9663 | nan | 0.9930 | 0.5644 | 0.8895 | 0.6177 | 0.0 | 0.9656 | 0.5368 | 0.4014 | 0.5537 |
0.0194 | 35.0 | 9065 | 0.2304 | 0.5092 | 0.7801 | 0.9675 | nan | 0.9919 | 0.6048 | 0.8941 | 0.6295 | 0.0 | 0.9667 | 0.5615 | 0.4576 | 0.5603 |
0.0188 | 36.0 | 9324 | 0.2674 | 0.5022 | 0.7629 | 0.9670 | nan | 0.9933 | 0.5783 | 0.8819 | 0.5982 | 0.0 | 0.9662 | 0.5461 | 0.4567 | 0.5418 |
0.0188 | 37.0 | 9583 | 0.2580 | 0.4897 | 0.7702 | 0.9665 | nan | 0.9925 | 0.5791 | 0.8884 | 0.6207 | 0.0 | 0.9660 | 0.5485 | 0.3793 | 0.5548 |
0.0192 | 38.0 | 9842 | 0.2556 | 0.5065 | 0.7656 | 0.9673 | nan | 0.9933 | 0.5823 | 0.8739 | 0.6130 | 0.0 | 0.9665 | 0.5494 | 0.4667 | 0.5500 |
0.019 | 38.61 | 10000 | 0.2539 | 0.5001 | 0.7682 | 0.9671 | nan | 0.9929 | 0.5837 | 0.8806 | 0.6154 | 0.0 | 0.9663 | 0.5505 | 0.4321 | 0.5515 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cpu
- Datasets 2.13.1
- Tokenizers 0.13.3
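Approximating this environment with `pip install transformers==4.31.0 torch==2.0.1 datasets==2.13.1 tokenizers==0.13.3` should come close; note that Transformers 4.31.0.dev0 was a development build (an exact match would require a source install) and the PyTorch build used here was CPU-only (`2.0.1+cpu`).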