
segformer-b0-scene-parse-150-lr-4-e-15

This model is a fine-tuned version of DiTo97/binarization-segformer-b3 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1773
  • Mean Iou: 0.5116
  • Mean Accuracy: 0.5539
  • Overall Accuracy: 0.9486
  • Per Category Iou: [0.07467818861526594, 0.9484318643687625]
  • Per Category Accuracy: [0.13278359055139496, 0.9749314802690082]
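The summary metrics above are plain averages of the two per-category values (the base checkpoint is a binarization model, so the two categories are presumably foreground and background; the card itself does not label them). A quick check of that relationship:

```python
# Per-category values copied from the evaluation results above.
per_category_iou = [0.07467818861526594, 0.9484318643687625]
per_category_acc = [0.13278359055139496, 0.9749314802690082]

# The reported Mean Iou / Mean Accuracy are the unweighted means
# over the two categories.
mean_iou = sum(per_category_iou) / len(per_category_iou)
mean_acc = sum(per_category_acc) / len(per_category_acc)

print(round(mean_iou, 4))  # 0.5116
print(round(mean_acc, 4))  # 0.5539
```

Note the gap between the two categories: the second (majority) class dominates, which is why Overall Accuracy is high while Mean IoU stays near 0.5.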

Model description

More information needed

Intended uses & limitations

More information needed
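Pending a fuller description, a hedged usage sketch: the checkpoint is a SegFormer semantic-segmentation model with two labels. Below, a randomly initialized 2-label SegFormer stands in for the real weights; in practice you would load this repository's checkpoint with `SegformerForSemanticSegmentation.from_pretrained(...)` instead of building from a config.

```python
import torch
from transformers import SegformerConfig, SegformerForSemanticSegmentation

# Stand-in model: random weights, 2 labels, default (b0-like) config.
# Replace with from_pretrained(<this repo id>) for real predictions.
config = SegformerConfig(num_labels=2)
model = SegformerForSemanticSegmentation(config)
model.eval()

pixel_values = torch.zeros(1, 3, 512, 512)  # dummy RGB input batch
with torch.no_grad():
    logits = model(pixel_values=pixel_values).logits  # (1, 2, 128, 128)

# SegFormer predicts at 1/4 input resolution; argmax gives per-pixel
# class ids (0 or 1), which can be upsampled back to the input size.
mask = logits.argmax(dim=1)
```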

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 15
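The linear scheduler with no warmup decays the learning rate from 1e-4 to 0 over the 1680 total training steps shown in the results table. A minimal PyTorch sketch of the optimizer and schedule (the single dummy parameter is a stand-in for the model's parameters):

```python
import torch

# Stand-in parameter; in training this would be model.parameters().
param = torch.nn.Parameter(torch.zeros(1))

# Adam with the betas/epsilon listed above.
optimizer = torch.optim.Adam([param], lr=1e-4, betas=(0.9, 0.999), eps=1e-8)

# Linear decay to 0 over 1680 steps (15 epochs x 112 steps/epoch).
total_steps = 1680
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda step: max(0.0, 1 - step / total_steps)
)
```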

Training results

| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|---|---|---|---|---|---|---|---|---|
| No log | 1.0 | 112 | 0.3321 | 0.4844 | 0.5000 | 0.9686 | [5.913750483660308e-05, 0.968644931717587] | [5.9410243004868244e-05, 0.9998514102409571] |
| No log | 2.0 | 224 | 0.1448 | 0.4844 | 0.5 | 0.9688 | [0.0, 0.9687870873345269] | [0.0, 1.0] |
| No log | 3.0 | 336 | 0.1467 | 0.4855 | 0.5011 | 0.9687 | [0.0024028604839131528, 0.9686745745655791] | [0.002417148172540925, 0.9998084247604243] |
| No log | 4.0 | 448 | 0.1597 | 0.4974 | 0.5136 | 0.9673 | [0.02761431295696444, 0.9672534071470754] | [0.029766229180953417, 0.9974892869900998] |
| 0.4196 | 5.0 | 560 | 0.1483 | 0.4945 | 0.5101 | 0.9683 | [0.02072799899238894, 0.9682597471616551] | [0.021509902838791155, 0.9987846484301768] |
| 0.4196 | 6.0 | 672 | 0.1300 | 0.4973 | 0.5131 | 0.9682 | [0.026546808517533143, 0.9681413453315052] | [0.02781078346833604, 0.9984659761718246] |
| 0.4196 | 7.0 | 784 | 0.1407 | 0.5063 | 0.5244 | 0.9659 | [0.04665771796171021, 0.9658509666995633] | [0.05345563922026602, 0.995305832396877] |
| 0.4196 | 8.0 | 896 | 0.1377 | 0.5014 | 0.5186 | 0.9662 | [0.036728661127978124, 0.9661516368135028] | [0.041295211194926705, 0.995994201663374] |
| 0.174 | 9.0 | 1008 | 0.1632 | 0.5096 | 0.5382 | 0.9570 | [0.06234910880338227, 0.9568704542992275] | [0.09161908189107895, 0.984874907876537] |
| 0.174 | 10.0 | 1120 | 0.1424 | 0.5102 | 0.5323 | 0.9627 | [0.05773026579725805, 0.9625824124413115] | [0.07327829115771892, 0.9913228393342741] |
| 0.174 | 11.0 | 1232 | 0.1553 | 0.5035 | 0.5223 | 0.9644 | [0.04268206669259935, 0.9643468862627879] | [0.05084668083459509, 0.9938369430563793] |
| 0.174 | 12.0 | 1344 | 0.1607 | 0.5086 | 0.5330 | 0.9600 | [0.057171934641356385, 0.95994904570909] | [0.07762033120361757, 0.9884765551939039] |
| 0.174 | 13.0 | 1456 | 0.1619 | 0.5095 | 0.5358 | 0.9589 | [0.060308850859297915, 0.958769171435925] | [0.08455435528004292, 0.9870474246884537] |
| 0.1457 | 14.0 | 1568 | 0.1625 | 0.5123 | 0.5476 | 0.9534 | [0.07133326653200926, 0.9531840662639103] | [0.11479756384054969, 0.9803688154229716] |
| 0.1457 | 15.0 | 1680 | 0.1773 | 0.5116 | 0.5539 | 0.9486 | [0.07467818861526594, 0.9484318643687625] | [0.13278359055139496, 0.9749314802690082] |

Framework versions

  • Transformers 4.37.0
  • Pytorch 2.1.0+cu121
  • Datasets 2.16.1
  • Tokenizers 0.15.0
  • Model size: 47.2M params (Safetensors, F32)
