
bert_baseline_prompt_adherence_task4_fold1

This model is a fine-tuned version of google-bert/bert-base-cased on an unspecified dataset. It achieves the following results on the evaluation set (a brief loading sketch follows the results):

  • Loss: 0.3376
  • Qwk (quadratic weighted kappa): 0.7136
  • Mse (mean squared error): 0.3347
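
The card does not include usage code; the snippet below is a minimal inference sketch, assuming the checkpoint is published on the Hugging Face Hub as salbatarni/bert_baseline_prompt_adherence_task4_fold1 and that, as the Mse/Qwk metrics suggest, it exposes a single-output regression head for scoring prompt adherence.

```python
# Minimal inference sketch -- the Hub ID and the single-logit regression head
# are assumptions based on this card, not documented usage.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "salbatarni/bert_baseline_prompt_adherence_task4_fold1"  # assumed Hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "Example response to be scored for prompt adherence."
inputs = tokenizer(text, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels)

# Assumption: one output score; round or clip to the rubric scale as needed.
score = logits.squeeze().item()
print(f"Predicted adherence score: {score:.3f}")
```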

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a matching Trainer sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
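
These values map onto a standard transformers Trainer configuration. The sketch below reconstructs only the documented hyperparameters; the dataset, tokenization, and metric function are not described in this card and are left out, and the single-output regression head is an assumption.

```python
# Hyperparameter sketch only -- the training/evaluation data are not documented
# in this card, so the Trainer wiring is indicated but not completed.
from transformers import AutoModelForSequenceClassification, TrainingArguments

model = AutoModelForSequenceClassification.from_pretrained(
    "google-bert/bert-base-cased",
    num_labels=1,  # assumption: a single-output regression head (see Loss/Mse above)
)

args = TrainingArguments(
    output_dir="bert_baseline_prompt_adherence_task4_fold1",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    # Adam with betas=(0.9, 0.999) and epsilon=1e-8 matches the Trainer's default optimizer.
)

# Trainer(model=model, args=args, train_dataset=..., eval_dataset=..., compute_metrics=...)
# would complete the setup once a tokenized dataset and a metric function are supplied.
```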

Training results

| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| No log | 0.0299 | 2 | 1.2352 | 0.0 | 1.2327 |
| No log | 0.0597 | 4 | 0.9657 | 0.0 | 0.9636 |
| No log | 0.0896 | 6 | 0.8614 | 0.2376 | 0.8596 |
| No log | 0.1194 | 8 | 0.8177 | 0.3472 | 0.8159 |
| No log | 0.1493 | 10 | 0.7773 | 0.3551 | 0.7756 |
| No log | 0.1791 | 12 | 0.7212 | 0.3577 | 0.7196 |
| No log | 0.2090 | 14 | 0.7108 | 0.3493 | 0.7094 |
| No log | 0.2388 | 16 | 0.6580 | 0.3495 | 0.6568 |
| No log | 0.2687 | 18 | 0.6153 | 0.3520 | 0.6140 |
| No log | 0.2985 | 20 | 0.5757 | 0.3622 | 0.5740 |
| No log | 0.3284 | 22 | 0.5514 | 0.3998 | 0.5492 |
| No log | 0.3582 | 24 | 0.5541 | 0.3725 | 0.5516 |
| No log | 0.3881 | 26 | 0.5877 | 0.3478 | 0.5850 |
| No log | 0.4179 | 28 | 0.5115 | 0.3637 | 0.5088 |
| No log | 0.4478 | 30 | 0.4795 | 0.3784 | 0.4767 |
| No log | 0.4776 | 32 | 0.4555 | 0.4686 | 0.4527 |
| No log | 0.5075 | 34 | 0.4509 | 0.6119 | 0.4480 |
| No log | 0.5373 | 36 | 0.4494 | 0.5913 | 0.4465 |
| No log | 0.5672 | 38 | 0.4459 | 0.5273 | 0.4431 |
| No log | 0.5970 | 40 | 0.4446 | 0.4820 | 0.4421 |
| No log | 0.6269 | 42 | 0.4337 | 0.5212 | 0.4310 |
| No log | 0.6567 | 44 | 0.4555 | 0.4669 | 0.4523 |
| No log | 0.6866 | 46 | 0.5375 | 0.3796 | 0.5338 |
| No log | 0.7164 | 48 | 0.5162 | 0.4192 | 0.5125 |
| No log | 0.7463 | 50 | 0.4909 | 0.4316 | 0.4873 |
| No log | 0.7761 | 52 | 0.4538 | 0.4563 | 0.4505 |
| No log | 0.8060 | 54 | 0.3986 | 0.6088 | 0.3957 |
| No log | 0.8358 | 56 | 0.3970 | 0.6362 | 0.3945 |
| No log | 0.8657 | 58 | 0.4051 | 0.6160 | 0.4029 |
| No log | 0.8955 | 60 | 0.4402 | 0.6915 | 0.4385 |
| No log | 0.9254 | 62 | 0.4589 | 0.7131 | 0.4574 |
| No log | 0.9552 | 64 | 0.4155 | 0.6957 | 0.4136 |
| No log | 0.9851 | 66 | 0.3925 | 0.6686 | 0.3902 |
| No log | 1.0149 | 68 | 0.3803 | 0.6230 | 0.3779 |
| No log | 1.0448 | 70 | 0.3849 | 0.5866 | 0.3823 |
| No log | 1.0746 | 72 | 0.3933 | 0.5621 | 0.3907 |
| No log | 1.1045 | 74 | 0.3767 | 0.6134 | 0.3742 |
| No log | 1.1343 | 76 | 0.3974 | 0.6952 | 0.3954 |
| No log | 1.1642 | 78 | 0.4197 | 0.7059 | 0.4180 |
| No log | 1.1940 | 80 | 0.4126 | 0.7019 | 0.4106 |
| No log | 1.2239 | 82 | 0.3825 | 0.6743 | 0.3801 |
| No log | 1.2537 | 84 | 0.3870 | 0.5898 | 0.3841 |
| No log | 1.2836 | 86 | 0.4077 | 0.5308 | 0.4046 |
| No log | 1.3134 | 88 | 0.3785 | 0.6068 | 0.3754 |
| No log | 1.3433 | 90 | 0.3659 | 0.6831 | 0.3634 |
| No log | 1.3731 | 92 | 0.4110 | 0.7143 | 0.4094 |
| No log | 1.4030 | 94 | 0.4019 | 0.7165 | 0.4003 |
| No log | 1.4328 | 96 | 0.3639 | 0.6585 | 0.3616 |
| No log | 1.4627 | 98 | 0.3611 | 0.5696 | 0.3580 |
| No log | 1.4925 | 100 | 0.4004 | 0.4979 | 0.3969 |
| No log | 1.5224 | 102 | 0.4052 | 0.4842 | 0.4016 |
| No log | 1.5522 | 104 | 0.3603 | 0.5786 | 0.3571 |
| No log | 1.5821 | 106 | 0.3606 | 0.7021 | 0.3581 |
| No log | 1.6119 | 108 | 0.3934 | 0.7166 | 0.3912 |
| No log | 1.6418 | 110 | 0.3772 | 0.7016 | 0.3746 |
| No log | 1.6716 | 112 | 0.3629 | 0.6278 | 0.3595 |
| No log | 1.7015 | 114 | 0.4135 | 0.4969 | 0.4097 |
| No log | 1.7313 | 116 | 0.4136 | 0.5040 | 0.4096 |
| No log | 1.7612 | 118 | 0.3709 | 0.6049 | 0.3670 |
| No log | 1.7910 | 120 | 0.3694 | 0.6999 | 0.3659 |
| No log | 1.8209 | 122 | 0.3852 | 0.7076 | 0.3820 |
| No log | 1.8507 | 124 | 0.4129 | 0.7210 | 0.4100 |
| No log | 1.8806 | 126 | 0.4104 | 0.7182 | 0.4071 |
| No log | 1.9104 | 128 | 0.3831 | 0.7150 | 0.3795 |
| No log | 1.9403 | 130 | 0.3645 | 0.7005 | 0.3605 |
| No log | 1.9701 | 132 | 0.3586 | 0.6678 | 0.3545 |
| No log | 2.0 | 134 | 0.3508 | 0.6547 | 0.3467 |
| No log | 2.0299 | 136 | 0.3474 | 0.6867 | 0.3438 |
| No log | 2.0597 | 138 | 0.3686 | 0.7233 | 0.3658 |
| No log | 2.0896 | 140 | 0.3793 | 0.7348 | 0.3771 |
| No log | 2.1194 | 142 | 0.3694 | 0.7303 | 0.3672 |
| No log | 2.1493 | 144 | 0.3552 | 0.7217 | 0.3529 |
| No log | 2.1791 | 146 | 0.3436 | 0.6527 | 0.3411 |
| No log | 2.2090 | 148 | 0.3436 | 0.6231 | 0.3408 |
| No log | 2.2388 | 150 | 0.3448 | 0.6403 | 0.3422 |
| No log | 2.2687 | 152 | 0.3592 | 0.6930 | 0.3570 |
| No log | 2.2985 | 154 | 0.3549 | 0.6682 | 0.3526 |
| No log | 2.3284 | 156 | 0.3531 | 0.6565 | 0.3509 |
| No log | 2.3582 | 158 | 0.3577 | 0.5698 | 0.3554 |
| No log | 2.3881 | 160 | 0.3661 | 0.5155 | 0.3635 |
| No log | 2.4179 | 162 | 0.3571 | 0.5463 | 0.3543 |
| No log | 2.4478 | 164 | 0.3388 | 0.6163 | 0.3358 |
| No log | 2.4776 | 166 | 0.3428 | 0.6843 | 0.3400 |
| No log | 2.5075 | 168 | 0.3889 | 0.7374 | 0.3864 |
| No log | 2.5373 | 170 | 0.4104 | 0.7415 | 0.4078 |
| No log | 2.5672 | 172 | 0.3920 | 0.7343 | 0.3890 |
| No log | 2.5970 | 174 | 0.3589 | 0.7071 | 0.3553 |
| No log | 2.6269 | 176 | 0.3418 | 0.6973 | 0.3379 |
| No log | 2.6567 | 178 | 0.3379 | 0.6729 | 0.3338 |
| No log | 2.6866 | 180 | 0.3366 | 0.6531 | 0.3325 |
| No log | 2.7164 | 182 | 0.3335 | 0.6938 | 0.3297 |
| No log | 2.7463 | 184 | 0.3515 | 0.7081 | 0.3485 |
| No log | 2.7761 | 186 | 0.3844 | 0.7387 | 0.3820 |
| No log | 2.8060 | 188 | 0.3874 | 0.7441 | 0.3852 |
| No log | 2.8358 | 190 | 0.3495 | 0.7224 | 0.3468 |
| No log | 2.8657 | 192 | 0.3242 | 0.6783 | 0.3208 |
| No log | 2.8955 | 194 | 0.3322 | 0.6336 | 0.3286 |
| No log | 2.9254 | 196 | 0.3354 | 0.6577 | 0.3316 |
| No log | 2.9552 | 198 | 0.3368 | 0.6913 | 0.3332 |
| No log | 2.9851 | 200 | 0.3676 | 0.7275 | 0.3647 |
| No log | 3.0149 | 202 | 0.4124 | 0.7294 | 0.4101 |
| No log | 3.0448 | 204 | 0.4165 | 0.7388 | 0.4142 |
| No log | 3.0746 | 206 | 0.3928 | 0.7287 | 0.3903 |
| No log | 3.1045 | 208 | 0.3558 | 0.7149 | 0.3527 |
| No log | 3.1343 | 210 | 0.3375 | 0.6970 | 0.3341 |
| No log | 3.1642 | 212 | 0.3306 | 0.6716 | 0.3272 |
| No log | 3.1940 | 214 | 0.3266 | 0.6814 | 0.3234 |
| No log | 3.2239 | 216 | 0.3316 | 0.7124 | 0.3287 |
| No log | 3.2537 | 218 | 0.3329 | 0.7182 | 0.3301 |
| No log | 3.2836 | 220 | 0.3393 | 0.7243 | 0.3367 |
| No log | 3.3134 | 222 | 0.3680 | 0.7327 | 0.3658 |
| No log | 3.3433 | 224 | 0.3882 | 0.7483 | 0.3862 |
| No log | 3.3731 | 226 | 0.3790 | 0.7355 | 0.3770 |
| No log | 3.4030 | 228 | 0.3458 | 0.7312 | 0.3432 |
| No log | 3.4328 | 230 | 0.3360 | 0.7308 | 0.3333 |
| No log | 3.4627 | 232 | 0.3497 | 0.7328 | 0.3473 |
| No log | 3.4925 | 234 | 0.3563 | 0.7328 | 0.3539 |
| No log | 3.5224 | 236 | 0.3456 | 0.7253 | 0.3428 |
| No log | 3.5522 | 238 | 0.3342 | 0.7016 | 0.3307 |
| No log | 3.5821 | 240 | 0.3457 | 0.6459 | 0.3418 |
| No log | 3.6119 | 242 | 0.3569 | 0.6425 | 0.3529 |
| No log | 3.6418 | 244 | 0.3518 | 0.6463 | 0.3478 |
| No log | 3.6716 | 246 | 0.3437 | 0.6589 | 0.3400 |
| No log | 3.7015 | 248 | 0.3393 | 0.6991 | 0.3360 |
| No log | 3.7313 | 250 | 0.3454 | 0.7089 | 0.3423 |
| No log | 3.7612 | 252 | 0.3429 | 0.7128 | 0.3398 |
| No log | 3.7910 | 254 | 0.3331 | 0.7014 | 0.3299 |
| No log | 3.8209 | 256 | 0.3270 | 0.6810 | 0.3237 |
| No log | 3.8507 | 258 | 0.3264 | 0.6792 | 0.3231 |
| No log | 3.8806 | 260 | 0.3264 | 0.6870 | 0.3233 |
| No log | 3.9104 | 262 | 0.3303 | 0.6999 | 0.3274 |
| No log | 3.9403 | 264 | 0.3387 | 0.7057 | 0.3359 |
| No log | 3.9701 | 266 | 0.3403 | 0.7095 | 0.3376 |
| No log | 4.0 | 268 | 0.3329 | 0.7027 | 0.3301 |
| No log | 4.0299 | 270 | 0.3325 | 0.6986 | 0.3296 |
| No log | 4.0597 | 272 | 0.3292 | 0.6847 | 0.3262 |
| No log | 4.0896 | 274 | 0.3278 | 0.6556 | 0.3247 |
| No log | 4.1194 | 276 | 0.3274 | 0.6594 | 0.3243 |
| No log | 4.1493 | 278 | 0.3292 | 0.7012 | 0.3262 |
| No log | 4.1791 | 280 | 0.3352 | 0.7056 | 0.3323 |
| No log | 4.2090 | 282 | 0.3480 | 0.7221 | 0.3453 |
| No log | 4.2388 | 284 | 0.3556 | 0.7342 | 0.3530 |
| No log | 4.2687 | 286 | 0.3560 | 0.7362 | 0.3534 |
| No log | 4.2985 | 288 | 0.3612 | 0.7380 | 0.3586 |
| No log | 4.3284 | 290 | 0.3578 | 0.7358 | 0.3552 |
| No log | 4.3582 | 292 | 0.3490 | 0.7266 | 0.3463 |
| No log | 4.3881 | 294 | 0.3390 | 0.7071 | 0.3360 |
| No log | 4.4179 | 296 | 0.3362 | 0.6972 | 0.3331 |
| No log | 4.4478 | 298 | 0.3333 | 0.7030 | 0.3301 |
| No log | 4.4776 | 300 | 0.3326 | 0.6942 | 0.3293 |
| No log | 4.5075 | 302 | 0.3322 | 0.6888 | 0.3288 |
| No log | 4.5373 | 304 | 0.3335 | 0.6910 | 0.3301 |
| No log | 4.5672 | 306 | 0.3350 | 0.6892 | 0.3316 |
| No log | 4.5970 | 308 | 0.3375 | 0.7055 | 0.3343 |
| No log | 4.6269 | 310 | 0.3425 | 0.7088 | 0.3395 |
| No log | 4.6567 | 312 | 0.3475 | 0.7169 | 0.3447 |
| No log | 4.6866 | 314 | 0.3502 | 0.7266 | 0.3474 |
| No log | 4.7164 | 316 | 0.3509 | 0.7285 | 0.3482 |
| No log | 4.7463 | 318 | 0.3496 | 0.7266 | 0.3469 |
| No log | 4.7761 | 320 | 0.3480 | 0.7266 | 0.3452 |
| No log | 4.8060 | 322 | 0.3461 | 0.7179 | 0.3433 |
| No log | 4.8358 | 324 | 0.3440 | 0.7159 | 0.3412 |
| No log | 4.8657 | 326 | 0.3411 | 0.7152 | 0.3383 |
| No log | 4.8955 | 328 | 0.3397 | 0.7165 | 0.3368 |
| No log | 4.9254 | 330 | 0.3389 | 0.7177 | 0.3360 |
| No log | 4.9552 | 332 | 0.3382 | 0.7134 | 0.3353 |
| No log | 4.9851 | 334 | 0.3376 | 0.7136 | 0.3347 |
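
In the table, Qwk is quadratic weighted kappa and Mse is mean squared error on the validation split. The card does not show how these were computed; the following is a hedged sketch of a typical implementation using scikit-learn, with made-up example scores.

```python
# Sketch of the two reported metrics -- quadratic weighted kappa (Qwk) and
# mean squared error (Mse) -- computed from continuous model outputs.
# The gold scores and predictions below are illustrative only.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = np.array([1, 2, 3, 2, 0, 3])               # example gold adherence scores
y_pred = np.array([1.2, 1.8, 2.6, 2.1, 0.4, 3.0])   # example raw model outputs

mse = mean_squared_error(y_true, y_pred)

# QWK needs discrete labels, so continuous predictions are rounded first.
qwk = cohen_kappa_score(y_true, np.rint(y_pred).astype(int), weights="quadratic")

print(f"Qwk: {qwk:.4f}  Mse: {mse:.4f}")
```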

Framework versions

  • Transformers 4.42.3
  • Pytorch 2.1.2
  • Datasets 2.20.0
  • Tokenizers 0.19.1
