
SegFormer_Mixed_Set2_788images_mit-b5_RGB

This model is a fine-tuned version of nvidia/mit-b5 on the Hasano20/Mixed_Set2_788images dataset. It achieves the following results on the evaluation set:

  • Train Loss: 0.0099
  • Loss: 0.0150
  • Mean Iou: 0.9788
  • Mean Accuracy: 0.9887
  • Overall Accuracy: 0.9948
  • Accuracy Background: 0.9958
  • Accuracy Melt: 0.9735
  • Accuracy Substrate: 0.9969
  • Iou Background: 0.9926
  • Iou Melt: 0.9509
  • Iou Substrate: 0.9927
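
The checkpoint can be loaded with the transformers library for inference. The sketch below is a minimal example rather than an official usage snippet: the repo id is inferred from the card title and `example.png` is a placeholder input.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

# Repo id inferred from the card title -- an assumption, verify on the Hub.
repo_id = "Hasano20/SegFormer_Mixed_Set2_788images_mit-b5_RGB"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = SegformerForSemanticSegmentation.from_pretrained(repo_id)
model.eval()

# Segment an RGB image (the path is a placeholder).
image = Image.open("example.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)

# Upsample the logits to the input resolution and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred_mask = upsampled.argmax(dim=1)[0]  # (H, W) tensor of class ids
```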

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 50
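
As a rough illustration, these settings map onto the transformers Trainer API as sketched below. This is a reconstruction under stated assumptions, not the actual training script: the output directory and dataset wiring are placeholders, and the number of labels is taken from the three classes in the metrics above.

```python
from transformers import (
    SegformerForSemanticSegmentation,
    Trainer,
    TrainingArguments,
)

# Hyperparameters from the list above; the Adam betas/epsilon match the
# Trainer's optimizer defaults, so they are not set explicitly here.
training_args = TrainingArguments(
    output_dir="segformer-mixed-set2",  # placeholder path
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    num_train_epochs=50,
)

model = SegformerForSemanticSegmentation.from_pretrained(
    "nvidia/mit-b5",
    num_labels=3,  # Background, Melt, Substrate
)

# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=..., eval_dataset=...)  # dataset wiring omitted
# trainer.train()
```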

Training results

| Training Loss | Epoch   | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Background | Accuracy Melt | Accuracy Substrate | Iou Background | Iou Melt | Iou Substrate |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------------:|:-------------:|:------------------:|:--------------:|:--------:|:-------------:|
| 0.1619        | 0.7042  | 50   | 0.1799          | 0.7782   | 0.8306        | 0.9444           | 0.9902              | 0.5371        | 0.9645             | 0.9436         | 0.4720   | 0.9192        |
| 0.062         | 1.4085  | 100  | 0.1065          | 0.8361   | 0.8630        | 0.9638           | 0.9833              | 0.6084        | 0.9972             | 0.9720         | 0.5922   | 0.9441        |
| 0.1757        | 2.1127  | 150  | 0.1157          | 0.8551   | 0.8896        | 0.9617           | 0.9803              | 0.7065        | 0.9820             | 0.9484         | 0.6731   | 0.9438        |
| 0.0872        | 2.8169  | 200  | 0.0446          | 0.9302   | 0.9539        | 0.9844           | 0.9938              | 0.8760        | 0.9920             | 0.9846         | 0.8282   | 0.9777        |
| 0.0336        | 3.5211  | 250  | 0.0338          | 0.9469   | 0.9751        | 0.9877           | 0.9913              | 0.9431        | 0.9910             | 0.9857         | 0.8719   | 0.9831        |
| 0.0417        | 4.2254  | 300  | 0.0488          | 0.9281   | 0.9820        | 0.9830           | 0.9941              | 0.9765        | 0.9753             | 0.9877         | 0.8233   | 0.9732        |
| 0.0273        | 4.9296  | 350  | 0.0295          | 0.9516   | 0.9628        | 0.9892           | 0.9952              | 0.8960        | 0.9973             | 0.9895         | 0.8819   | 0.9835        |
| 0.0249        | 5.6338  | 400  | 0.0228          | 0.9627   | 0.9807        | 0.9913           | 0.9916              | 0.9544        | 0.9960             | 0.9890         | 0.9112   | 0.9879        |
| 0.0247        | 6.3380  | 450  | 0.0234          | 0.9642   | 0.9886        | 0.9915           | 0.9919              | 0.9814        | 0.9925             | 0.9894         | 0.9151   | 0.9881        |
| 0.0219        | 7.0423  | 500  | 0.0220          | 0.9656   | 0.9768        | 0.9920           | 0.9943              | 0.9386        | 0.9975             | 0.9908         | 0.9178   | 0.9882        |
| 0.0172        | 7.7465  | 550  | 0.0206          | 0.9672   | 0.9888        | 0.9923           | 0.9951              | 0.9792        | 0.9919             | 0.9913         | 0.9215   | 0.9888        |
| 0.018         | 8.4507  | 600  | 0.0169          | 0.9747   | 0.9859        | 0.9937           | 0.9944              | 0.9665        | 0.9969             | 0.9910         | 0.9420   | 0.9911        |
| 0.0152        | 9.1549  | 650  | 0.0180          | 0.9726   | 0.9856        | 0.9932           | 0.9968              | 0.9659        | 0.9942             | 0.9909         | 0.9366   | 0.9902        |
| 0.016         | 9.8592  | 700  | 0.0180          | 0.9729   | 0.9877        | 0.9936           | 0.9955              | 0.9726        | 0.9949             | 0.9917         | 0.9360   | 0.9909        |
| 0.0132        | 10.5634 | 750  | 0.0169          | 0.9746   | 0.9872        | 0.9938           | 0.9944              | 0.9708        | 0.9965             | 0.9914         | 0.9410   | 0.9913        |
| 0.0115        | 11.2676 | 800  | 0.0156          | 0.9761   | 0.9898        | 0.9941           | 0.9952              | 0.9789        | 0.9954             | 0.9920         | 0.9446   | 0.9917        |
| 0.0143        | 11.9718 | 850  | 0.0155          | 0.9765   | 0.9895        | 0.9943           | 0.9962              | 0.9772        | 0.9952             | 0.9923         | 0.9452   | 0.9920        |
| 0.0106        | 12.6761 | 900  | 0.0146          | 0.9778   | 0.9898        | 0.9946           | 0.9959              | 0.9777        | 0.9959             | 0.9924         | 0.9485   | 0.9925        |
| 0.0106        | 13.3803 | 950  | 0.0146          | 0.9780   | 0.9888        | 0.9947           | 0.9967              | 0.9736        | 0.9959             | 0.9923         | 0.9490   | 0.9928        |
| 0.0068        | 14.0845 | 1000 | 0.0147          | 0.9784   | 0.9883        | 0.9947           | 0.9966              | 0.9718        | 0.9964             | 0.9924         | 0.9501   | 0.9928        |
| 0.0115        | 14.7887 | 1050 | 0.0163          | 0.9759   | 0.9901        | 0.9942           | 0.9958              | 0.9795        | 0.9950             | 0.9925         | 0.9436   | 0.9917        |
| 0.0099        | 15.4930 | 1100 | 0.0150          | 0.9788   | 0.9887        | 0.9948           | 0.9958              | 0.9735        | 0.9969             | 0.9926         | 0.9509   | 0.9927        |
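
The IoU and accuracy columns are standard semantic-segmentation metrics. Below is a minimal sketch of how such numbers can be computed with the evaluate library; the label ids (0=Background, 1=Melt, 2=Substrate) are assumptions, and the masks are toy data.

```python
import evaluate
import numpy as np

# mean_iou returns mean/overall accuracy and per-category IoU, matching
# the columns reported in the table above.
metric = evaluate.load("mean_iou")

# Toy masks with assumed label ids: 0=Background, 1=Melt, 2=Substrate.
pred = np.array([[0, 0, 1], [2, 2, 1]])
ref = np.array([[0, 0, 1], [2, 1, 1]])

results = metric.compute(
    predictions=[pred],
    references=[ref],
    num_labels=3,
    ignore_index=255,  # value used to mask out unlabeled pixels
)
print(results["mean_iou"], results["per_category_iou"])
```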

Framework versions

  • Transformers 4.41.2
  • Pytorch 2.0.1+cu117
  • Datasets 2.19.2
  • Tokenizers 0.19.1

Model size

  • 84.6M parameters (F32, Safetensors)