
safety-utcustom-train-SF30-RGBD-b5

This model is a fine-tuned version of nvidia/mit-b5 on the sam1120/safety-utcustom-TRAIN-30 dataset. It achieves the following results on the evaluation set (a minimal inference sketch follows the metrics):

  • Loss: 0.1952
  • Mean Iou: 0.6486
  • Mean Accuracy: 0.7199
  • Overall Accuracy: 0.9704
  • Accuracy Unlabeled: nan
  • Accuracy Safe: 0.4523
  • Accuracy Unsafe: 0.9874
  • Iou Unlabeled: nan
  • Iou Safe: 0.3271
  • Iou Unsafe: 0.9700
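
The card does not include usage code, so the snippet below is only a minimal sketch of loading the checkpoint and running semantic segmentation with transformers. The repository id sam1120/safety-utcustom-train-SF30-RGBD-b5 is inferred from the card title, and the plain RGB preprocessing is an assumption: the "RGBD" in the model name suggests training may have used an extra depth channel, which would require custom input handling.

```python
# Minimal sketch, not taken from the model card: load the fine-tuned SegFormer
# checkpoint and segment a single RGB image. The repo id below is assumed.
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

repo_id = "sam1120/safety-utcustom-train-SF30-RGBD-b5"  # assumed repository id

# If the repo has no processor config, the base nvidia/mit-b5 processor can be used instead.
processor = SegformerImageProcessor.from_pretrained(repo_id)
model = SegformerForSemanticSegmentation.from_pretrained(repo_id)
model.eval()

image = Image.open("example.jpg").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, H/4, W/4)

# Upsample logits to the input resolution and take the per-pixel argmax
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred = upsampled.argmax(dim=1)[0]  # per-pixel class ids (e.g. safe / unsafe)
```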

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch with these settings follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.05
  • num_epochs: 100
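
For reference, these settings map onto a Hugging Face TrainingArguments configuration roughly as sketched below. This is reconstructed from the list above, not the original training script; the output_dir and the evaluation schedule are assumptions.

```python
# Sketch only: TrainingArguments mirroring the hyperparameters listed above.
# The output_dir and evaluation schedule are placeholders/assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="safety-utcustom-train-SF30-RGBD-b5",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08, as listed above
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    num_train_epochs=100,
    evaluation_strategy="steps",  # assumption: the results table reports eval every 10 steps
    eval_steps=10,
)
```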

Training results

| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Safe | Accuracy Unsafe | Iou Unlabeled | Iou Safe | Iou Unsafe |
|---------------|-------|------|-----------------|----------|---------------|------------------|--------------------|---------------|-----------------|---------------|----------|------------|
| 0.8758        | 5.0   | 10   | 0.9831          | 0.3415   | 0.6100        | 0.9154           | nan                | 0.2839        | 0.9362          | 0.0           | 0.1099   | 0.9147     |
| 0.7637        | 10.0  | 20   | 0.7236          | 0.3771   | 0.6275        | 0.9582           | nan                | 0.2745        | 0.9806          | 0.0           | 0.1735   | 0.9578     |
| 0.6698        | 15.0  | 30   | 0.5510          | 0.3789   | 0.6286        | 0.9593           | nan                | 0.2755        | 0.9818          | 0.0           | 0.1776   | 0.9590     |
| 0.5935        | 20.0  | 40   | 0.4632          | 0.3822   | 0.6388        | 0.9591           | nan                | 0.2967        | 0.9809          | 0.0           | 0.1877   | 0.9588     |
| 0.5108        | 25.0  | 50   | 0.4239          | 0.3814   | 0.6492        | 0.9560           | nan                | 0.3214        | 0.9769          | 0.0           | 0.1887   | 0.9556     |
| 0.4597        | 30.0  | 60   | 0.4134          | 0.3845   | 0.6422        | 0.9596           | nan                | 0.3034        | 0.9811          | 0.0           | 0.1943   | 0.9592     |
| 0.4307        | 35.0  | 70   | 0.3918          | 0.3900   | 0.6516        | 0.9594           | nan                | 0.3229        | 0.9803          | 0.0           | 0.2111   | 0.9590     |
| 0.367         | 40.0  | 80   | 0.3578          | 0.3885   | 0.6600        | 0.9582           | nan                | 0.3415        | 0.9784          | 0.0           | 0.2077   | 0.9577     |
| 0.3249        | 45.0  | 90   | 0.3395          | 0.3921   | 0.6587        | 0.9607           | nan                | 0.3360        | 0.9813          | 0.0           | 0.2161   | 0.9603     |
| 0.292         | 50.0  | 100  | 0.3124          | 0.3969   | 0.6622        | 0.9633           | nan                | 0.3408        | 0.9837          | 0.0           | 0.2280   | 0.9629     |
| 0.2766        | 55.0  | 110  | 0.2820          | 0.4078   | 0.6878        | 0.9644           | nan                | 0.3925        | 0.9831          | 0.0           | 0.2594   | 0.9639     |
| 0.2347        | 60.0  | 120  | 0.2673          | 0.6169   | 0.7000        | 0.9641           | nan                | 0.4181        | 0.9820          | nan           | 0.2701   | 0.9636     |
| 0.226         | 65.0  | 130  | 0.2350          | 0.6280   | 0.6854        | 0.9698           | nan                | 0.3818        | 0.9891          | nan           | 0.2865   | 0.9694     |
| 0.3262        | 70.0  | 140  | 0.2354          | 0.6338   | 0.7125        | 0.9674           | nan                | 0.4402        | 0.9848          | nan           | 0.3006   | 0.9670     |
| 0.1991        | 75.0  | 150  | 0.2231          | 0.6363   | 0.7169        | 0.9676           | nan                | 0.4492        | 0.9846          | nan           | 0.3056   | 0.9671     |
| 0.2106        | 80.0  | 160  | 0.2089          | 0.6399   | 0.7152        | 0.9688           | nan                | 0.4444        | 0.9860          | nan           | 0.3114   | 0.9683     |
| 0.1995        | 85.0  | 170  | 0.1969          | 0.6493   | 0.7179        | 0.9709           | nan                | 0.4478        | 0.9880          | nan           | 0.3281   | 0.9704     |
| 0.1981        | 90.0  | 180  | 0.1909          | 0.6503   | 0.7136        | 0.9716           | nan                | 0.4381        | 0.9892          | nan           | 0.3293   | 0.9712     |
| 0.1875        | 95.0  | 190  | 0.1965          | 0.6473   | 0.7231        | 0.9697           | nan                | 0.4598        | 0.9864          | nan           | 0.3254   | 0.9692     |
| 0.2088        | 100.0 | 200  | 0.1952          | 0.6486   | 0.7199        | 0.9704           | nan                | 0.4523        | 0.9874          | nan           | 0.3271   | 0.9700     |
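
The metrics reported above (Mean Iou, per-class accuracy and IoU) are the standard semantic-segmentation metrics. A hedged sketch of computing them with the evaluate library's mean_iou metric is shown below; the exact evaluation script is not given in the card, and the label ids for unlabeled/safe/unsafe are assumptions.

```python
# Sketch: compute mean IoU and per-class accuracy in the form the table reports,
# using the `evaluate` library's mean_iou metric. Assumed label ids:
# 0 = unlabeled (ignored), 1 = safe, 2 = unsafe.
import numpy as np
import evaluate

mean_iou = evaluate.load("mean_iou")

# predictions / references: lists of (H, W) integer label maps
predictions = [np.ones((64, 64), dtype=np.int64)]  # placeholder prediction
references = [np.ones((64, 64), dtype=np.int64)]   # placeholder ground truth

results = mean_iou.compute(
    predictions=predictions,
    references=references,
    num_labels=3,
    ignore_index=0,      # assumption: "unlabeled" pixels are ignored
    reduce_labels=False,
)
print(results["mean_iou"], results["mean_accuracy"], results["overall_accuracy"])
print(results["per_category_iou"], results["per_category_accuracy"])
```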

Framework versions

  • Transformers 4.30.2
  • Pytorch 2.0.1+cu117
  • Datasets 2.13.1
  • Tokenizers 0.13.3