
segformer-b0-scene-parse-150-lr-3-e-15

This model is a fine-tuned version of DiTo97/binarization-segformer-b3 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1523
  • Mean Iou: 0.5014
  • Mean Accuracy: 0.5220
  • Overall Accuracy: 0.9615
  • Per Category Iou: [0.04132646470292031, 0.9614038983247747]
  • Per Category Accuracy: [0.053216300812732126, 0.9907305584765508]
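The two entries in each "Per Category" list correspond to the two segmentation classes (for a binarization model, foreground ink vs. background). As a minimal sketch of how such per-class IoU and accuracy values can be computed from predicted and ground-truth masks (the helper name and NumPy implementation are illustrative, not the card's actual evaluation code):

```python
import numpy as np

def per_category_metrics(pred, label, num_classes=2):
    """Per-class IoU and accuracy, plus overall pixel accuracy.

    Hypothetical helper: IoU is intersection/union of the class masks,
    per-class accuracy is the recall of that class's ground-truth pixels.
    """
    ious, accs = [], []
    for c in range(num_classes):
        pred_c = pred == c
        label_c = label == c
        inter = np.logical_and(pred_c, label_c).sum()
        union = np.logical_or(pred_c, label_c).sum()
        ious.append(float(inter / union) if union else 0.0)
        accs.append(float(inter / label_c.sum()) if label_c.sum() else 0.0)
    overall = float((pred == label).mean())
    return ious, accs, overall

# Tiny 2x2 example with one mispredicted pixel
pred = np.array([[0, 0], [1, 1]])
label = np.array([[0, 1], [1, 1]])
ious, accs, overall = per_category_metrics(pred, label)
```

Mean IoU and Mean Accuracy in the table are then just the averages of the per-category lists, which is why a high Overall Accuracy can coexist with a low Mean IoU when the minority class is poorly segmented.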

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.001
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 15
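With a batch size of 1 and the 112 steps per epoch shown in the results table below, training runs for 112 × 15 = 1680 optimizer steps. A linear scheduler with no warmup decays the learning rate from its initial value to zero over those steps; a minimal sketch of that schedule (function name hypothetical, total step count taken from the table):

```python
def linear_lr(step, total_steps=1680, base_lr=1e-3):
    # Linear decay with zero warmup: base_lr at step 0, 0.0 at the final step.
    # total_steps = 112 steps/epoch * 15 epochs, per the results table.
    return base_lr * max(0.0, 1.0 - step / total_steps)
```

For example, the learning rate is 0.001 at step 0, 0.0005 halfway through (step 840), and 0.0 at step 1680.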

Training results

| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:--------------|:------|:-----|:----------------|:---------|:--------------|:-----------------|:-----------------|:----------------------|
| No log | 1.0 | 112 | 0.1629 | 0.4844 | 0.5 | 0.9688 | [0.0, 0.9687870873345269] | [0.0, 1.0] |
| No log | 2.0 | 224 | 0.1437 | 0.4844 | 0.5000 | 0.9688 | [2.03629353850122e-05, 0.968778060560053] | [2.0369226173097684e-05, 0.9999900466190115] |
| No log | 3.0 | 336 | 0.1551 | 0.4844 | 0.5 | 0.9688 | [0.0, 0.9687870873345269] | [0.0, 1.0] |
| No log | 4.0 | 448 | 0.1536 | 0.4873 | 0.5029 | 0.9674 | [0.0072237010873418455, 0.967349403560223] | [0.0076096034111664095, 0.998278830733678] |
| 0.254 | 5.0 | 560 | 0.1730 | 0.4844 | 0.5000 | 0.9688 | [1.697363485298286e-06, 0.9687858141149847] | [1.697435514424807e-06, 0.9999986327773367] |
| 0.254 | 6.0 | 672 | 0.1726 | 0.4844 | 0.5000 | 0.9688 | [0.0, 0.9687868224249946] | [0.0, 0.9999997265554673] |
| 0.254 | 7.0 | 784 | 0.1418 | 0.4886 | 0.5042 | 0.9679 | [0.009270700532836455, 0.9678754695078028] | [0.009627854237817505, 0.998758780577388] |
| 0.254 | 8.0 | 896 | 0.1618 | 0.4844 | 0.5 | 0.9688 | [0.0, 0.9687870873345269] | [0.0, 1.0] |
| 0.2012 | 9.0 | 1008 | 0.1350 | 0.4868 | 0.5023 | 0.9685 | [0.005035086692148778, 0.9684816005292253] | [0.005109280898418669, 0.9995252456024103] |
| 0.2012 | 10.0 | 1120 | 0.1429 | 0.4975 | 0.5137 | 0.9673 | [0.027791805303191197, 0.967227089869692] | [0.02998689579782864, 0.997455270490238] |
| 0.2012 | 11.0 | 1232 | 0.1419 | 0.4852 | 0.5008 | 0.9688 | [0.0015964088435281823, 0.9688182225729328] | [0.0015972868190737434, 0.9999822807942842] |
| 0.2012 | 12.0 | 1344 | 0.1339 | 0.4872 | 0.5028 | 0.9686 | [0.00582435621561196, 0.968612834428971] | [0.00589010123505408, 0.9996363187715734] |
| 0.2012 | 13.0 | 1456 | 0.1422 | 0.4990 | 0.5165 | 0.9652 | [0.03289244256624029, 0.9651360857253766] | [0.03794447348945214, 0.9950514742926044] |
| 0.1837 | 14.0 | 1568 | 0.1423 | 0.4928 | 0.5087 | 0.9673 | [0.01828545458590366, 0.9672482875211772] | [0.019532390464486255, 0.9978029278690511] |
| 0.1837 | 15.0 | 1680 | 0.1523 | 0.5014 | 0.5220 | 0.9615 | [0.04132646470292031, 0.9614038983247747] | [0.053216300812732126, 0.9907305584765508] |

Framework versions

  • Transformers 4.37.0
  • Pytorch 2.1.0+cu121
  • Datasets 2.16.1
  • Tokenizers 0.15.0
Model size: 47.2M params (Safetensors, F32)