
segformer-b5-finetuned-segments-instryde-foot-test

This model is a fine-tuned version of nvidia/mit-b5 on the inStryde/inStrydeSegmentationFoot dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0149
  • Mean Iou: 0.4800
  • Mean Accuracy: 0.9599
  • Overall Accuracy: 0.9599
  • Per Category Iou: [0.0, 0.9599216842864238]
  • Per Category Accuracy: [nan, 0.9599216842864238]
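
This checkpoint uses the standard SegFormer architecture, so it can be loaded with the Transformers semantic-segmentation classes. Below is a minimal inference sketch; the repo id (inStryde/segformer-b5-finetuned-segments-instryde-foot-test) and the image path are illustrative assumptions, not values confirmed by this card.

```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

# Assumed repo id; adjust to wherever this checkpoint is actually hosted.
checkpoint = "inStryde/segformer-b5-finetuned-segments-instryde-foot-test"

processor = SegformerImageProcessor.from_pretrained(checkpoint)
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)
model.eval()

image = Image.open("example_foot.jpg")  # placeholder image path

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, H/4, W/4)

# Upsample logits to the original image size and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
mask = upsampled.argmax(dim=1)[0]  # (H, W) tensor of predicted label ids
```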

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
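
Only the dataset name is given in this card. As a hedged sketch, it would typically be loaded with the datasets library as shown below; inStryde/inStrydeSegmentationFoot may be private, and the image/label column names are assumptions.

```python
from datasets import load_dataset

# Assumed to be hosted on the Hugging Face Hub; may require authentication if private.
ds = load_dataset("inStryde/inStrydeSegmentationFoot")

print(ds)                 # splits and number of examples
example = ds["train"][0]  # assumed "train" split
print(example.keys())     # expected keys such as "image" and "label" (assumption)
```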

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 6e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 50
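
These values map directly onto the standard TrainingArguments/Trainer setup for SegFormer fine-tuning. The sketch below is an outline under that assumption, not the exact training script: the output directory, num_labels=2 (background plus foot, inferred from the two-element per-category metrics), and the omitted data/metric plumbing are all assumptions.

```python
from transformers import SegformerForSemanticSegmentation, TrainingArguments, Trainer

# Base checkpoint named in this card; the decode head is freshly initialized.
model = SegformerForSemanticSegmentation.from_pretrained(
    "nvidia/mit-b5",
    num_labels=2,  # assumption: background + foot
)

# Hyperparameters listed above; the Adam betas/epsilon and the linear schedule
# are the Transformers defaults, so they need no explicit arguments.
training_args = TrainingArguments(
    output_dir="segformer-b5-finetuned-segments-instryde-foot-test",
    learning_rate=6e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
)

# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_ds, eval_dataset=eval_ds,
#                   compute_metrics=compute_metrics)  # datasets/metrics not shown here
# trainer.train()
```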

Training results

| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|---|---|---|---|---|---|---|---|---|
| 0.1024 | 0.27 | 20 | 0.2085 | 0.4534 | 0.9067 | 0.9067 | [0.0, 0.9067344993758137] | [nan, 0.9067344993758137] |
| 0.0431 | 0.53 | 40 | 0.0487 | 0.4604 | 0.9207 | 0.9207 | [0.0, 0.9207331455341442] | [nan, 0.9207331455341442] |
| 0.0354 | 0.8 | 60 | 0.0319 | 0.4577 | 0.9155 | 0.9155 | [0.0, 0.9154662028576415] | [nan, 0.9154662028576415] |
| 0.0389 | 1.07 | 80 | 0.0276 | 0.4629 | 0.9257 | 0.9257 | [0.0, 0.9257162800419576] | [nan, 0.9257162800419576] |
| 0.0208 | 1.33 | 100 | 0.0244 | 0.4702 | 0.9404 | 0.9404 | [0.0, 0.9403945317069335] | [nan, 0.9403945317069335] |
| 0.0241 | 1.6 | 120 | 0.0212 | 0.4703 | 0.9406 | 0.9406 | [0.0, 0.9406131407017349] | [nan, 0.9406131407017349] |
| 0.0167 | 1.87 | 140 | 0.0208 | 0.4761 | 0.9521 | 0.9521 | [0.0, 0.9521215619420916] | [nan, 0.9521215619420916] |
| 0.0156 | 2.13 | 160 | 0.0205 | 0.4612 | 0.9224 | 0.9224 | [0.0, 0.9224359945462809] | [nan, 0.9224359945462809] |
| 0.0156 | 2.4 | 180 | 0.0208 | 0.4734 | 0.9468 | 0.9468 | [0.0, 0.9467575875538612] | [nan, 0.9467575875538612] |
| 0.0167 | 2.67 | 200 | 0.0182 | 0.4833 | 0.9667 | 0.9667 | [0.0, 0.9666659635383208] | [nan, 0.9666659635383208] |
| 0.0145 | 2.93 | 220 | 0.0243 | 0.4351 | 0.8702 | 0.8702 | [0.0, 0.8702122233110058] | [nan, 0.8702122233110058] |
| 0.0114 | 3.2 | 240 | 0.0176 | 0.4686 | 0.9373 | 0.9373 | [0.0, 0.93726765603217] | [nan, 0.93726765603217] |
| 0.0155 | 3.47 | 260 | 0.0161 | 0.4770 | 0.9541 | 0.9541 | [0.0, 0.9540767701096305] | [nan, 0.9540767701096305] |
| 0.0158 | 3.73 | 280 | 0.0169 | 0.4684 | 0.9368 | 0.9368 | [0.0, 0.9368239181251786] | [nan, 0.9368239181251786] |
| 0.0114 | 4.0 | 300 | 0.0162 | 0.4777 | 0.9554 | 0.9554 | [0.0, 0.9554348305492647] | [nan, 0.9554348305492647] |
| 0.0112 | 4.27 | 320 | 0.0159 | 0.4839 | 0.9678 | 0.9678 | [0.0, 0.9677532556440432] | [nan, 0.9677532556440432] |
| 0.0131 | 4.53 | 340 | 0.0154 | 0.4811 | 0.9622 | 0.9622 | [0.0, 0.9622032718479555] | [nan, 0.9622032718479555] |
| 0.0101 | 4.8 | 360 | 0.0156 | 0.4683 | 0.9367 | 0.9367 | [0.0, 0.9366846987126999] | [nan, 0.9366846987126999] |
| 0.0102 | 5.07 | 380 | 0.0152 | 0.4758 | 0.9517 | 0.9517 | [0.0, 0.9516509773164403] | [nan, 0.9516509773164403] |
| 0.0101 | 5.33 | 400 | 0.0169 | 0.4884 | 0.9768 | 0.9768 | [0.0, 0.9768393358121804] | [nan, 0.9768393358121804] |
| 0.0082 | 5.6 | 420 | 0.0150 | 0.4761 | 0.9522 | 0.9522 | [0.0, 0.9522462074215836] | [nan, 0.9522462074215836] |
| 0.01 | 5.87 | 440 | 0.0152 | 0.4788 | 0.9576 | 0.9576 | [0.0, 0.9575745140264517] | [nan, 0.9575745140264517] |
| 0.0098 | 6.13 | 460 | 0.0148 | 0.4783 | 0.9565 | 0.9565 | [0.0, 0.9565489693736469] | [nan, 0.9565489693736469] |
| 0.0088 | 6.4 | 480 | 0.0153 | 0.4795 | 0.9591 | 0.9591 | [0.0, 0.959051850601846] | [nan, 0.959051850601846] |
| 0.0091 | 6.67 | 500 | 0.0152 | 0.4828 | 0.9656 | 0.9656 | [0.0, 0.965590177169167] | [nan, 0.965590177169167] |
| 0.0102 | 6.93 | 520 | 0.0149 | 0.4800 | 0.9599 | 0.9599 | [0.0, 0.9599216842864238] | [nan, 0.9599216842864238] |

Framework versions

  • Transformers 4.37.2
  • Pytorch 2.0.1
  • Datasets 2.16.1
  • Tokenizers 0.15.1