
securebert-finetuned-autoisac

This model is a fine-tuned version of ehsanaghaei/SecureBERT on an unspecified dataset. It achieves the following result on the evaluation set:

  • Loss: 1.5774
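
If this loss is the usual masked-LM cross-entropy (an assumption; the card does not state the training objective), it corresponds to a pseudo-perplexity of roughly exp(1.5774) ≈ 4.84:

```python
import math

# Pseudo-perplexity from the final evaluation loss, assuming it is
# mean token-level cross-entropy in nats (not stated on the card).
eval_loss = 1.5774
print(math.exp(eval_loss))  # ~4.84
```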

Model description

More information needed

Intended uses & limitations

More information needed
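
Pending fuller documentation, a minimal fill-mask sketch follows. It assumes this checkpoint keeps SecureBERT's RoBERTa masked-LM head; the model id is a placeholder for this repo's actual Hub path, and the example sentence is illustrative only:

```python
from transformers import pipeline

# Placeholder id -- replace with this model's actual Hub path.
model_id = "your-username/securebert-finetuned-autoisac"

# SecureBERT is RoBERTa-based, so the mask token is <mask>.
fill = pipeline("fill-mask", model=model_id)
for pred in fill("Attackers exploited a <mask> vulnerability in the ECU firmware."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```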

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch reproducing them follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 100
  • mixed_precision_training: Native AMP
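
As referenced above, a minimal sketch of how these settings map onto `transformers.TrainingArguments`. Only the listed values come from this card; `output_dir` and the per-epoch evaluation strategy are assumptions, and the Adam betas/epsilon match the Trainer defaults:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="securebert-finetuned-autoisac",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=100,
    fp16=True,                    # "Native AMP" mixed-precision training
    evaluation_strategy="epoch",  # assumption: matches the per-epoch eval log
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults.
)
```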

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4541 | 1.0 | 2 | 2.1295 |
| 2.3899 | 2.0 | 4 | 3.1051 |
| 2.384 | 3.0 | 6 | 2.3916 |
| 2.461 | 4.0 | 8 | 2.5481 |
| 2.3104 | 5.0 | 10 | 1.9451 |
| 2.3225 | 6.0 | 12 | 2.4900 |
| 2.1623 | 7.0 | 14 | 2.1504 |
| 2.2753 | 8.0 | 16 | 2.2117 |
| 2.1934 | 9.0 | 18 | 2.2114 |
| 2.2003 | 10.0 | 20 | 2.5221 |
| 2.1598 | 11.0 | 22 | 2.0404 |
| 2.1319 | 12.0 | 24 | 1.9068 |
| 2.1139 | 13.0 | 26 | 1.8526 |
| 1.9242 | 14.0 | 28 | 1.6899 |
| 1.8706 | 15.0 | 30 | 2.2340 |
| 1.9503 | 16.0 | 32 | 2.1700 |
| 1.939 | 17.0 | 34 | 1.7180 |
| 1.998 | 18.0 | 36 | 1.9487 |
| 1.9129 | 19.0 | 38 | 2.3239 |
| 1.8028 | 20.0 | 40 | 2.4939 |
| 2.0098 | 21.0 | 42 | 2.1276 |
| 1.8822 | 22.0 | 44 | 1.5615 |
| 1.8569 | 23.0 | 46 | 2.2414 |
| 1.7875 | 24.0 | 48 | 1.7774 |
| 1.8278 | 25.0 | 50 | 2.5106 |
| 1.8141 | 26.0 | 52 | 1.9493 |
| 1.8379 | 27.0 | 54 | 1.9589 |
| 1.8965 | 28.0 | 56 | 2.2619 |
| 1.8251 | 29.0 | 58 | 1.7368 |
| 1.6857 | 30.0 | 60 | 1.7609 |
| 1.7867 | 31.0 | 62 | 2.1918 |
| 1.7636 | 32.0 | 64 | 2.2292 |
| 1.632 | 33.0 | 66 | 1.9211 |
| 1.6702 | 34.0 | 68 | 2.3036 |
| 1.6825 | 35.0 | 70 | 2.3332 |
| 1.6613 | 36.0 | 72 | 1.9210 |
| 1.5195 | 37.0 | 74 | 1.7967 |
| 1.6362 | 38.0 | 76 | 1.8938 |
| 1.652 | 39.0 | 78 | 1.8180 |
| 1.7578 | 40.0 | 80 | 2.0958 |
| 1.7971 | 41.0 | 82 | 2.3873 |
| 1.5767 | 42.0 | 84 | 1.4808 |
| 1.6922 | 43.0 | 86 | 2.1077 |
| 1.5517 | 44.0 | 88 | 1.6335 |
| 1.6198 | 45.0 | 90 | 1.7669 |
| 1.5966 | 46.0 | 92 | 2.0056 |
| 1.588 | 47.0 | 94 | 1.8835 |
| 1.5696 | 48.0 | 96 | 2.1344 |
| 1.5497 | 49.0 | 98 | 1.9380 |
| 1.5754 | 50.0 | 100 | 1.9710 |
| 1.5357 | 51.0 | 102 | 1.9916 |
| 1.5488 | 52.0 | 104 | 1.9536 |
| 1.5625 | 53.0 | 106 | 2.0705 |
| 1.5039 | 54.0 | 108 | 2.0675 |
| 1.5423 | 55.0 | 110 | 2.0393 |
| 1.5478 | 56.0 | 112 | 1.9174 |
| 1.571 | 57.0 | 114 | 1.6184 |
| 1.506 | 58.0 | 116 | 2.0959 |
| 1.4856 | 59.0 | 118 | 2.2757 |
| 1.5077 | 60.0 | 120 | 2.2091 |
| 1.607 | 61.0 | 122 | 2.1535 |
| 1.558 | 62.0 | 124 | 1.7893 |
| 1.5304 | 63.0 | 126 | 2.4471 |
| 1.533 | 64.0 | 128 | 1.7384 |
| 1.424 | 65.0 | 130 | 1.7157 |
| 1.5778 | 66.0 | 132 | 1.9103 |
| 1.4301 | 67.0 | 134 | 1.6906 |
| 1.5053 | 68.0 | 136 | 1.6810 |
| 1.4954 | 69.0 | 138 | 1.8924 |
| 1.5213 | 70.0 | 140 | 1.5374 |
| 1.4771 | 71.0 | 142 | 1.6301 |
| 1.3914 | 72.0 | 144 | 1.9411 |
| 1.466 | 73.0 | 146 | 1.6775 |
| 1.4342 | 74.0 | 148 | 1.5887 |
| 1.4158 | 75.0 | 150 | 1.9451 |
| 1.4845 | 76.0 | 152 | 1.7925 |
| 1.447 | 77.0 | 154 | 1.6508 |
| 1.3285 | 78.0 | 156 | 2.3469 |
| 1.4416 | 79.0 | 158 | 1.9387 |
| 1.3357 | 80.0 | 160 | 1.9829 |
| 1.4197 | 81.0 | 162 | 2.1912 |
| 1.4183 | 82.0 | 164 | 1.7065 |
| 1.5176 | 83.0 | 166 | 1.8547 |
| 1.4922 | 84.0 | 168 | 1.7672 |
| 1.4131 | 85.0 | 170 | 1.8707 |
| 1.4281 | 86.0 | 172 | 1.9953 |
| 1.439 | 87.0 | 174 | 1.7536 |
| 1.4848 | 88.0 | 176 | 1.9255 |
| 1.4845 | 89.0 | 178 | 1.5462 |
| 1.4587 | 90.0 | 180 | 1.3696 |
| 1.366 | 91.0 | 182 | 2.1685 |
| 1.5134 | 92.0 | 184 | 2.1314 |
| 1.4547 | 93.0 | 186 | 2.1088 |
| 1.3936 | 94.0 | 188 | 1.8491 |
| 1.4802 | 95.0 | 190 | 1.8716 |
| 1.3974 | 96.0 | 192 | 2.1149 |
| 1.4762 | 97.0 | 194 | 1.9697 |
| 1.4287 | 98.0 | 196 | 1.6517 |
| 1.5177 | 99.0 | 198 | 2.0683 |
| 1.3889 | 100.0 | 200 | 1.5774 |

Framework versions

  • Transformers 4.28.0
  • Pytorch 2.0.1+cu117
  • Datasets 2.12.0
  • Tokenizers 0.13.3
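
A quick runtime check against these pins (a convenience sketch, not an official requirements file):

```python
import datasets
import tokenizers
import torch
import transformers

# Versions this model was trained with, per the card; the torch build
# was compiled against CUDA 11.7 ("+cu117").
for module, expected in [
    (transformers, "4.28.0"),
    (torch, "2.0.1"),
    (datasets, "2.12.0"),
    (tokenizers, "0.13.3"),
]:
    print(f"{module.__name__}: installed {module.__version__}, trained with {expected}")
```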