
segformer-b0-scene-parse-150-lr-4-e-30-new-9img

This model is a fine-tuned version of DiTo97/binarization-segformer-b3 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1623
  • Mean Iou: 0.5077
  • Mean Accuracy: 0.5342
  • Overall Accuracy: 0.9575
  • Per Category Iou: [0.05804449685867197, 0.9574223586207944]
  • Per Category Accuracy: [0.0820083385013113, 0.9863919593485183]
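The headline Mean Iou and Mean Accuracy are the unweighted averages of the two per-category values above (one entry per class in this two-class task). A minimal sketch of that relationship, using the numbers reported on this card:

```python
# The reported Mean Iou / Mean Accuracy are plain averages of the
# two per-category values listed above.
per_category_iou = [0.05804449685867197, 0.9574223586207944]
per_category_acc = [0.0820083385013113, 0.9863919593485183]

mean_iou = sum(per_category_iou) / len(per_category_iou)
mean_acc = sum(per_category_acc) / len(per_category_acc)

print(round(mean_iou, 4))  # 0.5077
print(round(mean_acc, 4))  # 0.5342
```

The large gap between the two per-category values indicates the model performs well on the majority class but poorly on the minority class, which the unweighted mean makes visible while Overall Accuracy hides it.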

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30

Training results

| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|---|---|---|---|---|---|---|---|---|
| No log | 1.0 | 132 | 0.2043 | 0.4839 | 0.4998 | 0.9662 | [0.001549543069893383, 0.9662390544491327] | [0.001641922637237834, 0.9980349550926039] |
| No log | 2.0 | 264 | 0.1582 | 0.4840 | 0.5000 | 0.9680 | [6.710408589322995e-05, 0.9680253773280134] | [6.724597831690788e-05, 0.999930303583307] |
| No log | 3.0 | 396 | 0.1382 | 0.4841 | 0.5001 | 0.9681 | [0.0001139354228965373, 0.9680917294976413] | [0.00011394457437031613, 0.9999973525212741] |
| 0.3872 | 4.0 | 528 | 0.1699 | 0.4951 | 0.5117 | 0.9660 | [0.024248402101243276, 0.9659487595053663] | [0.02649678340070384, 0.9969437628725957] |
| 0.3872 | 5.0 | 660 | 0.1275 | 0.4860 | 0.5019 | 0.9680 | [0.003997176715331179, 0.9680342203165365] | [0.004019815148277382, 0.9998133219651828] |
| 0.3872 | 6.0 | 792 | 0.1469 | 0.4942 | 0.5106 | 0.9665 | [0.021996863056051138, 0.9664511446344629] | [0.023629489603024575, 0.9975536065186591] |
| 0.3872 | 7.0 | 924 | 0.1472 | 0.5057 | 0.5257 | 0.9633 | [0.048193367074049226, 0.9632320238098082] | [0.058235017222442224, 0.9931321939077451] |
| 0.1503 | 8.0 | 1056 | 0.1385 | 0.4982 | 0.5147 | 0.9668 | [0.02966123083403392, 0.9667378174750562] | [0.0318316310138452, 0.997588146880642] |
| 0.1503 | 9.0 | 1188 | 0.1401 | 0.4950 | 0.5115 | 0.9663 | [0.023698989853648766, 0.9662662110495017] | [0.025641265121005404, 0.9972986481604209] |
| 0.1503 | 10.0 | 1320 | 0.1440 | 0.4968 | 0.5141 | 0.9649 | [0.02877413033473597, 0.9648685554910891] | [0.03258441238222614, 0.9956352926892399] |
| 0.1503 | 11.0 | 1452 | 0.1532 | 0.5117 | 0.5385 | 0.9588 | [0.06479279776445032, 0.9586699704883279] | [0.08948011386985662, 0.9874412228938199] |
| 0.1334 | 12.0 | 1584 | 0.1425 | 0.5083 | 0.5310 | 0.9612 | [0.05548374916494815, 0.9611461513466955] | [0.0713647944888185, 0.9905656172060997] |
| 0.1334 | 13.0 | 1716 | 0.1457 | 0.5052 | 0.5266 | 0.9615 | [0.04888450573004655, 0.9614673387061856] | [0.06194288574908284, 0.9911952245886988] |
| 0.1334 | 14.0 | 1848 | 0.1510 | 0.5087 | 0.5322 | 0.9605 | [0.05697498023630964, 0.960444678467155] | [0.07471588574161106, 0.9897365869492352] |
| 0.1334 | 15.0 | 1980 | 0.1445 | 0.5063 | 0.5262 | 0.9635 | [0.049058901751704166, 0.9634450432339506] | [0.0589896665346653, 0.993327861056376] |
| 0.1218 | 16.0 | 2112 | 0.1488 | 0.5037 | 0.5242 | 0.9621 | [0.04539026025348957, 0.9619926039074217] | [0.05653332038225603, 0.9919082583358022] |
| 0.1218 | 17.0 | 2244 | 0.1554 | 0.5125 | 0.5458 | 0.9542 | [0.07088184464528238, 0.9540476640217492] | [0.1094801885876103, 0.98205126405411] |
| 0.1218 | 18.0 | 2376 | 0.1484 | 0.5048 | 0.5257 | 0.9621 | [0.04769031825474748, 0.9620025903307378] | [0.059520162585832016, 0.9918238468629347] |
| 0.1136 | 19.0 | 2508 | 0.1478 | 0.5065 | 0.5278 | 0.9621 | [0.05100673744263626, 0.9620083632427995] | [0.063863132018799, 0.9916920886193632] |
| 0.1136 | 20.0 | 2640 | 0.1482 | 0.5102 | 0.5366 | 0.9585 | [0.06195357859711988, 0.9583808144125242] | [0.08590673729984982, 0.9872562687986381] |
| 0.1136 | 21.0 | 2772 | 0.1479 | 0.5043 | 0.5255 | 0.9615 | [0.04721944072757992, 0.9614743147367695] | [0.059721900520782745, 0.9912728018722969] |
| 0.1136 | 22.0 | 2904 | 0.1528 | 0.5084 | 0.5334 | 0.9590 | [0.057761084324792794, 0.958942151209778] | [0.07867966257462436, 0.9880629494095138] |
| 0.1053 | 23.0 | 3036 | 0.1642 | 0.5118 | 0.5549 | 0.9476 | [0.07620148345702486, 0.9474144999535162] | [0.13534934285735634, 0.9744155659927787] |
| 0.1053 | 24.0 | 3168 | 0.1583 | 0.5085 | 0.5367 | 0.9564 | [0.060741142425379466, 0.9563097838863447] | [0.08829583747394218, 0.9850475327099075] |
| 0.1053 | 25.0 | 3300 | 0.1627 | 0.5101 | 0.5380 | 0.9573 | [0.0631439155292623, 0.9571557211026566] | [0.09023663112592183, 0.9858576612000541] |
| 0.1053 | 26.0 | 3432 | 0.1646 | 0.5101 | 0.5441 | 0.9527 | [0.06773853824581035, 0.9525613225447881] | [0.10765147156615884, 0.9805787117590169] |
| 0.1024 | 27.0 | 3564 | 0.1616 | 0.5089 | 0.5403 | 0.9540 | [0.06385837245316543, 0.9538794921181668] | [0.09828560114168727, 0.9822301227912947] |
| 0.1024 | 28.0 | 3696 | 0.1614 | 0.5077 | 0.5352 | 0.9566 | [0.058942135207107095, 0.9565120386869382] | [0.0851296726615211, 0.9853556869197558] |
| 0.1024 | 29.0 | 3828 | 0.1628 | 0.5076 | 0.5353 | 0.9564 | [0.05886607703633537, 0.9562951940049768] | [0.08543601545163146, 0.9851226472225978] |
| 0.1024 | 30.0 | 3960 | 0.1623 | 0.5077 | 0.5342 | 0.9575 | [0.05804449685867197, 0.9574223586207944] | [0.0820083385013113, 0.9863919593485183] |
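For reference, the per-category values in the table are the standard per-class segmentation metrics: IoU = TP / (TP + FP + FN) and accuracy = TP / (TP + FN), computed per class over all pixels. A minimal pure-Python sketch (the toy masks and the class semantics below are illustrative assumptions, not the model's actual evaluation code):

```python
def per_class_metrics(preds, labels, num_classes=2):
    """Per-class IoU and accuracy from flat integer label sequences."""
    ious, accs = [], []
    for c in range(num_classes):
        tp = sum(1 for p, l in zip(preds, labels) if p == c and l == c)
        fp = sum(1 for p, l in zip(preds, labels) if p == c and l != c)
        fn = sum(1 for p, l in zip(preds, labels) if p != c and l == c)
        union = tp + fp + fn
        ious.append(tp / union if union else float("nan"))
        accs.append(tp / (tp + fn) if (tp + fn) else float("nan"))
    return ious, accs

# Toy 8-pixel example with two classes.
preds  = [0, 0, 1, 1, 1, 1, 1, 1]
labels = [0, 1, 1, 1, 1, 1, 0, 1]
ious, accs = per_class_metrics(preds, labels)
print(ious)  # [0.3333..., 0.7142...] -- class 0: 1/3, class 1: 5/7
print(accs)  # [0.5, 0.8333...]      -- class 0: 1/2, class 1: 5/6
```

This also shows why Overall Accuracy stays near 0.96 throughout training while the first class's IoU never exceeds ~0.08: with a heavy class imbalance, the dominant class drives the pixel-wise total.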

Framework versions

  • Transformers 4.39.3
  • Pytorch 2.2.1+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.2
Model size

  • 47.2M params (F32, Safetensors)