safety-utcustom-train-SF30-RGB-b0

This model is a fine-tuned version of nvidia/mit-b0 on the sam1120/safety-utcustom-TRAIN-30 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7492
  • Mean Iou: 0.3878
  • Mean Accuracy: 0.8431
  • Overall Accuracy: 0.9233
  • Accuracy Unlabeled: nan
  • Accuracy Safe: 0.7575
  • Accuracy Unsafe: 0.9287
  • Iou Unlabeled: 0.0
  • Iou Safe: 0.2418
  • Iou Unsafe: 0.9214
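
This checkpoint is a standard SegFormer semantic-segmentation model with three labels (unlabeled, safe, unsafe). Below is a minimal inference sketch; the Hub repo id is an assumption based on the model name above, and the label order is assumed to match the per-class metrics listed here.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

# Assumed repo id (derived from the model name above) -- substitute the actual Hub path.
checkpoint = "sam1120/safety-utcustom-train-SF30-RGB-b0"

processor = AutoImageProcessor.from_pretrained(checkpoint)
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)
model.eval()

image = Image.open("example.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, H/4, W/4)

# Upsample logits to the input resolution and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred_map = upsampled.argmax(dim=1)[0]  # per-pixel class ids (assumed: 0=unlabeled, 1=safe, 2=unsafe)
```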

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 9e-06
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.05
  • num_epochs: 120
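
For reference, the settings above map onto transformers `TrainingArguments` roughly as follows. This is a hedged reconstruction rather than the original training script; `output_dir` and any options not listed above are placeholders.

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters; output_dir and unlisted options are placeholders.
training_args = TrainingArguments(
    output_dir="safety-utcustom-train-SF30-RGB-b0",
    learning_rate=9e-6,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    num_train_epochs=120,
    adam_beta1=0.9,     # Adam betas/epsilon as listed (the transformers defaults)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```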

Training results

| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Safe | Accuracy Unsafe | Iou Unlabeled | Iou Safe | Iou Unsafe |
|---------------|-------|------|-----------------|----------|---------------|------------------|--------------------|---------------|-----------------|---------------|----------|------------|
| 1.1527 | 5.0 | 10 | 1.1085 | 0.0590 | 0.4585 | 0.1664 | nan | 0.7704 | 0.1465 | 0.0 | 0.0307 | 0.1464 |
| 1.1326 | 10.0 | 20 | 1.1091 | 0.0963 | 0.6082 | 0.2699 | nan | 0.9695 | 0.2470 | 0.0 | 0.0419 | 0.2470 |
| 1.0981 | 15.0 | 30 | 1.0980 | 0.1530 | 0.6989 | 0.4242 | nan | 0.9922 | 0.4055 | 0.0 | 0.0535 | 0.4055 |
| 1.086 | 20.0 | 40 | 1.0822 | 0.1916 | 0.7515 | 0.5256 | nan | 0.9927 | 0.5103 | 0.0 | 0.0644 | 0.5103 |
| 1.0466 | 25.0 | 50 | 1.0541 | 0.2226 | 0.7909 | 0.6043 | nan | 0.9902 | 0.5917 | 0.0 | 0.0761 | 0.5917 |
| 1.0533 | 30.0 | 60 | 1.0249 | 0.2444 | 0.8167 | 0.6580 | nan | 0.9863 | 0.6472 | 0.0 | 0.0861 | 0.6471 |
| 0.9779 | 35.0 | 70 | 1.0010 | 0.2607 | 0.8322 | 0.6966 | nan | 0.9771 | 0.6874 | 0.0 | 0.0951 | 0.6871 |
| 0.9161 | 40.0 | 80 | 0.9695 | 0.2808 | 0.8487 | 0.7412 | nan | 0.9635 | 0.7339 | 0.0 | 0.1091 | 0.7334 |
| 0.9843 | 45.0 | 90 | 0.9403 | 0.3004 | 0.8631 | 0.7823 | nan | 0.9494 | 0.7768 | 0.0 | 0.1254 | 0.7759 |
| 0.9568 | 50.0 | 100 | 0.9071 | 0.3176 | 0.8663 | 0.8169 | nan | 0.9191 | 0.8135 | 0.0 | 0.1412 | 0.8117 |
| 0.8443 | 55.0 | 110 | 0.8627 | 0.3403 | 0.8656 | 0.8576 | nan | 0.8742 | 0.8570 | 0.0 | 0.1672 | 0.8537 |
| 0.8765 | 60.0 | 120 | 0.8488 | 0.3450 | 0.8625 | 0.8657 | nan | 0.8591 | 0.8659 | 0.0 | 0.1729 | 0.8620 |
| 0.899 | 65.0 | 130 | 0.8429 | 0.3481 | 0.8629 | 0.8705 | nan | 0.8548 | 0.8710 | 0.0 | 0.1772 | 0.8669 |
| 0.7713 | 70.0 | 140 | 0.8085 | 0.3632 | 0.8497 | 0.8939 | nan | 0.8026 | 0.8969 | 0.0 | 0.1983 | 0.8912 |
| 0.8505 | 75.0 | 150 | 0.7821 | 0.3762 | 0.8465 | 0.9102 | nan | 0.7786 | 0.9145 | 0.0 | 0.2208 | 0.9079 |
| 0.7352 | 80.0 | 160 | 0.7841 | 0.3819 | 0.8392 | 0.9173 | nan | 0.7557 | 0.9226 | 0.0 | 0.2304 | 0.9153 |
| 0.7205 | 85.0 | 170 | 0.7502 | 0.3974 | 0.8400 | 0.9325 | nan | 0.7413 | 0.9388 | 0.0 | 0.2613 | 0.9309 |
| 0.711 | 90.0 | 180 | 0.7417 | 0.3962 | 0.8428 | 0.9313 | nan | 0.7484 | 0.9373 | 0.0 | 0.2591 | 0.9296 |
| 0.7855 | 95.0 | 190 | 0.7281 | 0.4003 | 0.8439 | 0.9343 | nan | 0.7473 | 0.9404 | 0.0 | 0.2683 | 0.9327 |
| 0.7632 | 100.0 | 200 | 0.7494 | 0.3883 | 0.8419 | 0.9237 | nan | 0.7545 | 0.9293 | 0.0 | 0.2430 | 0.9219 |
| 0.8145 | 105.0 | 210 | 0.7495 | 0.3862 | 0.8412 | 0.9219 | nan | 0.7551 | 0.9274 | 0.0 | 0.2387 | 0.9201 |
| 0.8217 | 110.0 | 220 | 0.7355 | 0.3933 | 0.8422 | 0.9282 | nan | 0.7502 | 0.9341 | 0.0 | 0.2533 | 0.9265 |
| 0.7784 | 115.0 | 230 | 0.7258 | 0.4088 | 0.8411 | 0.9413 | nan | 0.7340 | 0.9481 | 0.0 | 0.2864 | 0.9400 |
| 0.8349 | 120.0 | 240 | 0.7492 | 0.3878 | 0.8431 | 0.9233 | nan | 0.7575 | 0.9287 | 0.0 | 0.2418 | 0.9214 |
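
The columns above (Mean Iou, Overall Accuracy, and the per-class Accuracy/Iou values) correspond to what the `mean_iou` metric from the `evaluate` library reports for a 3-label segmentation task. The exact evaluation script is not included in this card; the sketch below only illustrates the metric call with dummy label maps.

```python
import numpy as np
import evaluate

metric = evaluate.load("mean_iou")

# Dummy 3-class segmentation maps standing in for model predictions and ground truth.
prediction = np.random.randint(0, 3, size=(512, 512))
reference = np.random.randint(0, 3, size=(512, 512))

results = metric.compute(
    predictions=[prediction],
    references=[reference],
    num_labels=3,        # unlabeled, safe, unsafe
    ignore_index=255,
    reduce_labels=False,
)
print(results["mean_iou"], results["mean_accuracy"], results["per_category_iou"])
```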

Framework versions

  • Transformers 4.30.2
  • Pytorch 2.0.1+cu117
  • Datasets 2.13.1
  • Tokenizers 0.13.3