
SegFormer_mit-b5_Clean-Set3-Grayscale_Augmented_Medium_16

This model is a fine-tuned version of nvidia/mit-b5 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0121
  • Mean Iou: 0.9823
  • Mean Accuracy: 0.9920
  • Overall Accuracy: 0.9954
  • Accuracy Background: 0.9974
  • Accuracy Melt: 0.9828
  • Accuracy Substrate: 0.9958
  • Iou Background: 0.9943
  • Iou Melt: 0.9594
  • Iou Substrate: 0.9932
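The aggregate numbers above are per-class averages: mean IoU 0.9823 ≈ (0.9943 + 0.9594 + 0.9932) / 3, and mean accuracy 0.9920 ≈ (0.9974 + 0.9828 + 0.9958) / 3. A minimal NumPy sketch of how such per-class metrics are computed from flat predicted and ground-truth label maps (the helper name is illustrative, not from the training code):

```python
import numpy as np

def segmentation_metrics(pred, target, num_classes):
    """Per-class IoU and accuracy from flat integer label arrays."""
    # confusion[i, j] = number of pixels with true class i predicted as class j
    confusion = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(confusion, (target, pred), 1)
    tp = np.diag(confusion).astype(float)
    per_class_acc = tp / confusion.sum(axis=1)                    # recall per class
    union = confusion.sum(axis=1) + confusion.sum(axis=0) - tp    # pred ∪ target
    per_class_iou = tp / union
    return per_class_iou, per_class_acc

# Toy 3-class example (0 = background, 1 = melt, 2 = substrate)
target = np.array([0, 0, 1, 1, 2, 2])
pred   = np.array([0, 0, 1, 2, 2, 2])
iou, acc = segmentation_metrics(pred, target, 3)
# iou ≈ [1.0, 0.5, 0.667]; acc ≈ [1.0, 0.5, 1.0]
```

Mean IoU is then simply `iou.mean()`, matching how the card's headline number relates to the three class IoUs.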

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0002
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 20
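With a cosine scheduler and 100 warmup steps, the learning rate ramps linearly to 2e-4 and then decays toward zero over training. A self-contained sketch of that schedule (the total of ~1260 steps is an assumption inferred from the training-results table, where 50 steps ≈ 0.79 epochs, over 20 epochs):

```python
import math

def cosine_lr_with_warmup(step, base_lr=2e-4, warmup_steps=100, total_steps=1260):
    """Linear warmup followed by cosine decay to zero."""
    if step < warmup_steps:
        # Linear ramp from 0 to base_lr over the warmup phase
        return base_lr * step / warmup_steps
    # Cosine decay from base_lr (progress=0) to 0 (progress=1)
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

At step 100 the rate peaks at 2e-4, halves at the schedule midpoint, and reaches zero at the final step.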

Training results

| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Background | Accuracy Melt | Accuracy Substrate | Iou Background | Iou Melt | Iou Substrate |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.1196 | 0.7937 | 50 | 0.1076 | 0.8582 | 0.8965 | 0.9626 | 0.9674 | 0.7265 | 0.9955 | 0.9625 | 0.6696 | 0.9424 |
| 0.2728 | 1.5873 | 100 | 0.0878 | 0.8762 | 0.9239 | 0.9665 | 0.9622 | 0.8161 | 0.9935 | 0.9611 | 0.7150 | 0.9525 |
| 0.2668 | 2.3810 | 150 | 0.1131 | 0.8710 | 0.9238 | 0.9639 | 0.9971 | 0.8140 | 0.9602 | 0.9620 | 0.7076 | 0.9432 |
| 0.0337 | 3.1746 | 200 | 0.0610 | 0.9173 | 0.9613 | 0.9778 | 0.9709 | 0.9208 | 0.9923 | 0.9685 | 0.8110 | 0.9723 |
| 0.0443 | 3.9683 | 250 | 0.0295 | 0.9527 | 0.9665 | 0.9885 | 0.9924 | 0.9095 | 0.9977 | 0.9902 | 0.8867 | 0.9812 |
| 0.0283 | 4.7619 | 300 | 0.0220 | 0.9652 | 0.9781 | 0.9915 | 0.9965 | 0.9429 | 0.9950 | 0.9910 | 0.9175 | 0.9871 |
| 0.0166 | 5.5556 | 350 | 0.0193 | 0.9683 | 0.9837 | 0.9922 | 0.9972 | 0.9609 | 0.9929 | 0.9925 | 0.9249 | 0.9876 |
| 0.0218 | 6.3492 | 400 | 0.0190 | 0.9691 | 0.9871 | 0.9922 | 0.9975 | 0.9730 | 0.9909 | 0.9919 | 0.9277 | 0.9879 |
| 0.0178 | 7.1429 | 450 | 0.0157 | 0.9752 | 0.9853 | 0.9938 | 0.9981 | 0.9626 | 0.9951 | 0.9925 | 0.9424 | 0.9909 |
| 0.0165 | 7.9365 | 500 | 0.0151 | 0.9771 | 0.9878 | 0.9941 | 0.9966 | 0.9711 | 0.9957 | 0.9931 | 0.9470 | 0.9911 |
| 0.0136 | 8.7302 | 550 | 0.0137 | 0.9785 | 0.9902 | 0.9945 | 0.9955 | 0.9792 | 0.9959 | 0.9930 | 0.9508 | 0.9918 |
| 0.0127 | 9.5238 | 600 | 0.0128 | 0.9798 | 0.9896 | 0.9948 | 0.9977 | 0.9758 | 0.9955 | 0.9937 | 0.9536 | 0.9923 |
| 0.0117 | 10.3175 | 650 | 0.0123 | 0.9809 | 0.9895 | 0.9951 | 0.9974 | 0.9747 | 0.9964 | 0.9939 | 0.9561 | 0.9927 |
| 0.0110 | 11.1111 | 700 | 0.0125 | 0.9805 | 0.9923 | 0.9950 | 0.9974 | 0.9848 | 0.9946 | 0.9938 | 0.9552 | 0.9925 |
| 0.0108 | 11.9048 | 750 | 0.0123 | 0.9809 | 0.9915 | 0.9951 | 0.9975 | 0.9818 | 0.9952 | 0.9940 | 0.9561 | 0.9926 |
| 0.0135 | 12.6984 | 800 | 0.0126 | 0.9808 | 0.9920 | 0.9950 | 0.9979 | 0.9834 | 0.9946 | 0.9941 | 0.9558 | 0.9924 |
| 0.0089 | 13.4921 | 850 | 0.0123 | 0.9814 | 0.9923 | 0.9952 | 0.9968 | 0.9844 | 0.9957 | 0.9940 | 0.9574 | 0.9929 |
| 0.0077 | 14.2857 | 900 | 0.0119 | 0.9819 | 0.9911 | 0.9953 | 0.9976 | 0.9797 | 0.9959 | 0.9942 | 0.9586 | 0.9930 |
| 0.0069 | 15.0794 | 950 | 0.0122 | 0.9822 | 0.9914 | 0.9954 | 0.9973 | 0.9807 | 0.9961 | 0.9943 | 0.9591 | 0.9931 |
| 0.0069 | 15.8730 | 1000 | 0.0120 | 0.9822 | 0.9920 | 0.9954 | 0.9975 | 0.9828 | 0.9957 | 0.9944 | 0.9592 | 0.9931 |
| 0.0089 | 16.6667 | 1050 | 0.0120 | 0.9824 | 0.9914 | 0.9955 | 0.9976 | 0.9807 | 0.9961 | 0.9943 | 0.9595 | 0.9932 |
| 0.0072 | 17.4603 | 1100 | 0.0121 | 0.9823 | 0.9920 | 0.9954 | 0.9974 | 0.9828 | 0.9958 | 0.9943 | 0.9594 | 0.9932 |

Framework versions

  • Transformers 4.41.2
  • Pytorch 2.0.1+cu117
  • Datasets 2.19.2
  • Tokenizers 0.19.1
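The card has no usage snippet; a hedged inference sketch follows. Loading the fine-tuned checkpoint would normally use `SegformerForSemanticSegmentation.from_pretrained("Hasano20/SegFormer_mit-b5_Clean-Set3-Grayscale_Augmented_Medium_16")`; the randomly initialized config below is only a stand-in with the same 3-class head so the example runs without downloading the 84.6M-parameter weights:

```python
import torch
from transformers import SegformerConfig, SegformerForSemanticSegmentation

# Stand-in model: default (small) SegFormer encoder, not mit-b5, with a
# 3-class head matching this card (background / melt / substrate).
config = SegformerConfig(num_labels=3)
model = SegformerForSemanticSegmentation(config).eval()

# One dummy 64x64 image; a grayscale input would be replicated to 3 channels.
pixel_values = torch.randn(1, 3, 64, 64)
with torch.no_grad():
    logits = model(pixel_values=pixel_values).logits  # (1, 3, H/4, W/4)

# SegFormer predicts at 1/4 resolution: upsample, then take per-pixel argmax
mask = torch.nn.functional.interpolate(
    logits, size=(64, 64), mode="bilinear", align_corners=False
).argmax(dim=1)
```

The resulting `mask` holds a class index (0–2) per pixel at the input resolution.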
Model size

  • 84.6M parameters (F32, Safetensors)
