segformer-b0-finetuned-raw_img_ready2train_patches

This model is a fine-tuned version of nvidia/mit-b0 on the raw_img_ready2train_patches dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6829
  • Mean Iou: 0.4110
  • Mean Accuracy: 0.7629
  • Overall Accuracy: 0.7631
  • Accuracy Unlabeled: nan
  • Accuracy Eczema: 0.7673
  • Accuracy Background: 0.7585
  • Iou Unlabeled: 0.0
  • Iou Eczema: 0.6284
  • Iou Background: 0.6047
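
The card does not yet include a usage snippet, so here is a minimal, hedged inference sketch using the standard Transformers SegFormer API. The repository id and the input image path are placeholders for illustration and are not part of the original card.

```python
# Minimal inference sketch (assumptions: repo id and image path are placeholders).
import torch
from PIL import Image
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

checkpoint = "segformer-b0-finetuned-raw_img_ready2train_patches"  # assumed repo id
processor = SegformerImageProcessor.from_pretrained(checkpoint)
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)
model.eval()

image = Image.open("skin_patch.png").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, H/4, W/4)

# Upsample the low-resolution logits back to the input size and take the argmax
# to obtain a per-pixel label map (unlabeled / eczema / background).
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred_mask = upsampled.argmax(dim=1)[0]
```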

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 1
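
As a rough guide, the hyperparameters above can be expressed with the Hugging Face Trainer as sketched below. The output directory, dataset variables, and metric function are hypothetical placeholders, not values taken from this card.

```python
# Hedged sketch: mapping the listed hyperparameters onto TrainingArguments.
# Only the numeric values come from the card; everything else is an assumption.
from transformers import TrainingArguments, Trainer

training_args = TrainingArguments(
    output_dir="segformer-b0-finetuned-raw_img_ready2train_patches",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    # The reported Adam settings (betas=(0.9, 0.999), epsilon=1e-08) match the
    # Trainer's default optimizer configuration, so no extra arguments are needed.
)

# trainer = Trainer(
#     model=model,                      # SegformerForSemanticSegmentation
#     args=training_args,
#     train_dataset=train_ds,           # hypothetical dataset objects
#     eval_dataset=eval_ds,
#     compute_metrics=compute_metrics,  # e.g. mean IoU via the `evaluate` library
# )
# trainer.train()
```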

Training results

| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Eczema | Accuracy Background | Iou Unlabeled | Iou Eczema | Iou Background |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:---------------:|:-------------------:|:-------------:|:----------:|:--------------:|
| 1.0753 | 0.0312 | 5 | 1.0925 | 0.2358 | 0.4682 | 0.4698 | nan | 0.5042 | 0.4322 | 0.0 | 0.3705 | 0.3367 |
| 0.9863 | 0.0625 | 10 | 1.0697 | 0.2994 | 0.6182 | 0.6306 | nan | 0.8979 | 0.3385 | 0.0 | 0.5784 | 0.3198 |
| 1.0056 | 0.0938 | 15 | 1.0377 | 0.3303 | 0.6678 | 0.6792 | nan | 0.9236 | 0.4121 | 0.0 | 0.6064 | 0.3844 |
| 1.0133 | 0.125 | 20 | 1.0006 | 0.3478 | 0.6869 | 0.6950 | nan | 0.8710 | 0.5027 | 0.0 | 0.6008 | 0.4425 |
| 0.9748 | 0.1562 | 25 | 0.9689 | 0.3543 | 0.6947 | 0.7022 | nan | 0.8647 | 0.5246 | 0.0 | 0.6043 | 0.4586 |
| 0.9367 | 0.1875 | 30 | 0.9417 | 0.3566 | 0.6950 | 0.6965 | nan | 0.7290 | 0.6610 | 0.0 | 0.5583 | 0.5114 |
| 0.8363 | 0.2188 | 35 | 0.9118 | 0.3557 | 0.6940 | 0.6959 | nan | 0.7366 | 0.6514 | 0.0 | 0.5600 | 0.5069 |
| 1.1431 | 0.25 | 40 | 0.8830 | 0.3575 | 0.6963 | 0.6989 | nan | 0.7556 | 0.6370 | 0.0 | 0.5686 | 0.5039 |
| 0.7312 | 0.2812 | 45 | 0.8592 | 0.3680 | 0.7098 | 0.7133 | nan | 0.7888 | 0.6307 | 0.0 | 0.5907 | 0.5133 |
| 0.8135 | 0.3125 | 50 | 0.8268 | 0.3559 | 0.6994 | 0.7083 | nan | 0.8992 | 0.4997 | 0.0 | 0.6173 | 0.4505 |
| 0.7528 | 0.3438 | 55 | 0.8110 | 0.3525 | 0.6960 | 0.7053 | nan | 0.9055 | 0.4866 | 0.0 | 0.6162 | 0.4412 |
| 0.8405 | 0.375 | 60 | 0.7967 | 0.3518 | 0.6950 | 0.7041 | nan | 0.9008 | 0.4893 | 0.0 | 0.6140 | 0.4415 |
| 0.7865 | 0.4062 | 65 | 0.7791 | 0.3561 | 0.6992 | 0.7075 | nan | 0.8869 | 0.5116 | 0.0 | 0.6130 | 0.4553 |
| 0.8309 | 0.4375 | 70 | 0.7650 | 0.3652 | 0.7083 | 0.7147 | nan | 0.8512 | 0.5655 | 0.0 | 0.6090 | 0.4864 |
| 0.6775 | 0.4688 | 75 | 0.7615 | 0.3613 | 0.7044 | 0.7115 | nan | 0.8651 | 0.5437 | 0.0 | 0.6102 | 0.4738 |
| 0.7033 | 0.5 | 80 | 0.7498 | 0.3737 | 0.7179 | 0.7227 | nan | 0.8260 | 0.6099 | 0.0 | 0.6087 | 0.5125 |
| 0.8377 | 0.5312 | 85 | 0.7443 | 0.3790 | 0.7243 | 0.7290 | nan | 0.8303 | 0.6184 | 0.0 | 0.6154 | 0.5217 |
| 0.825 | 0.5625 | 90 | 0.7547 | 0.3676 | 0.7125 | 0.7201 | nan | 0.8840 | 0.5411 | 0.0 | 0.6225 | 0.4802 |
| 0.7408 | 0.5938 | 95 | 0.7415 | 0.3767 | 0.7228 | 0.7295 | nan | 0.8747 | 0.5708 | 0.0 | 0.6281 | 0.5021 |
| 0.8087 | 0.625 | 100 | 0.7201 | 0.3926 | 0.7404 | 0.7445 | nan | 0.8318 | 0.6491 | 0.0 | 0.6296 | 0.5483 |
| 0.7146 | 0.6562 | 105 | 0.7096 | 0.4002 | 0.7493 | 0.7520 | nan | 0.8109 | 0.6877 | 0.0 | 0.6307 | 0.5699 |
| 0.6875 | 0.6875 | 110 | 0.7047 | 0.4010 | 0.7502 | 0.7541 | nan | 0.8398 | 0.6606 | 0.0 | 0.6407 | 0.5621 |
| 0.6382 | 0.7188 | 115 | 0.7031 | 0.3982 | 0.7471 | 0.7519 | nan | 0.8543 | 0.6400 | 0.0 | 0.6426 | 0.5521 |
| 0.6551 | 0.75 | 120 | 0.6953 | 0.4018 | 0.7512 | 0.7553 | nan | 0.8450 | 0.6573 | 0.0 | 0.6433 | 0.5621 |
| 0.7074 | 0.7812 | 125 | 0.6912 | 0.4054 | 0.7553 | 0.7583 | nan | 0.8236 | 0.6871 | 0.0 | 0.6402 | 0.5760 |
| 0.768 | 0.8125 | 130 | 0.6866 | 0.4048 | 0.7546 | 0.7579 | nan | 0.8278 | 0.6814 | 0.0 | 0.6410 | 0.5736 |
| 0.7543 | 0.8438 | 135 | 0.6851 | 0.4031 | 0.7526 | 0.7564 | nan | 0.8374 | 0.6679 | 0.0 | 0.6422 | 0.5671 |
| 0.7107 | 0.875 | 140 | 0.6803 | 0.6122 | 0.7586 | 0.7608 | nan | 0.8071 | 0.7101 | nan | 0.6379 | 0.5865 |
| 0.7054 | 0.9062 | 145 | 0.6799 | 0.4098 | 0.7608 | 0.7622 | nan | 0.7924 | 0.7292 | 0.0 | 0.6350 | 0.5943 |
| 1.1302 | 0.9375 | 150 | 0.6801 | 0.4103 | 0.7616 | 0.7626 | nan | 0.7840 | 0.7393 | 0.0 | 0.6330 | 0.5981 |
| 0.6037 | 0.9688 | 155 | 0.6827 | 0.4111 | 0.7628 | 0.7632 | nan | 0.7721 | 0.7534 | 0.0 | 0.6300 | 0.6032 |
| 0.8577 | 1.0 | 160 | 0.6829 | 0.4110 | 0.7629 | 0.7631 | nan | 0.7673 | 0.7585 | 0.0 | 0.6284 | 0.6047 |

Framework versions

  • Transformers 4.40.1
  • Pytorch 2.3.0
  • Datasets 2.19.0
  • Tokenizers 0.19.1