
jobdescription

This model is a fine-tuned version of bert-base-uncased on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4789
  • F1: 0.5701
  • Roc Auc: 0.7465
  • Accuracy: 0.2801
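
Accuracy is far below F1 here because, in multi-label classification, accuracy is usually exact-match (subset) accuracy: a sample counts as correct only when every one of its labels is predicted correctly. A minimal scikit-learn sketch of how such metrics are commonly computed (illustrative data, not this card's actual evaluation code):

```python
# Hedged sketch: common way to compute multi-label F1 / ROC AUC / accuracy
# with scikit-learn. The label matrices below are illustrative only.
from sklearn.metrics import f1_score, roc_auc_score, accuracy_score

# Binary indicator matrices of shape (n_samples, n_labels).
y_true = [[1, 0, 1], [0, 1, 0], [1, 1, 0], [0, 0, 1]]
y_prob = [[0.9, 0.2, 0.8], [0.1, 0.7, 0.4], [0.8, 0.3, 0.2], [0.3, 0.2, 0.9]]
y_pred = [[1 if p >= 0.5 else 0 for p in row] for row in y_prob]

f1 = f1_score(y_true, y_pred, average="micro")        # micro-averaged F1
roc = roc_auc_score(y_true, y_prob, average="micro")  # micro ROC AUC on scores
# Exact-match accuracy: only fully correct label sets count, so it is the
# strictest of the three metrics (row 3 misses one label and fails entirely).
acc = accuracy_score(y_true, y_pred)
```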

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 50
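
The linear scheduler decays the learning rate from the base value toward zero over the full run. A minimal sketch of that schedule, assuming no warmup steps (step counts are illustrative, based on the roughly 30,000 steps in the results table):

```python
# Sketch of a warmup-free linear learning-rate decay schedule
# (assumption: the run used no warmup steps).
def linear_lr(step: int, total_steps: int, base_lr: float = 1e-4) -> float:
    """Learning rate after `step` optimizer steps under linear decay."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

lr_start = linear_lr(0, 30000)    # base rate at the first step
lr_mid = linear_lr(15000, 30000)  # half the base rate at the midpoint
```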

Training results

| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|---|---|---|---|---|---|---|
| 0.2983 | 0.82 | 500 | 0.2704 | 0.2291 | 0.5655 | 0.0767 |
| 0.2502 | 1.65 | 1000 | 0.2516 | 0.3179 | 0.5999 | 0.1206 |
| 0.2354 | 2.47 | 1500 | 0.2390 | 0.3442 | 0.6093 | 0.1651 |
| 0.2169 | 3.29 | 2000 | 0.2327 | 0.4040 | 0.6366 | 0.2022 |
| 0.1988 | 4.12 | 2500 | 0.2310 | 0.4561 | 0.6669 | 0.2127 |
| 0.1809 | 4.94 | 3000 | 0.2332 | 0.4599 | 0.6655 | 0.2226 |
| 0.1637 | 5.77 | 3500 | 0.2331 | 0.5096 | 0.7112 | 0.2226 |
| 0.1499 | 6.59 | 4000 | 0.2331 | 0.5159 | 0.7101 | 0.2239 |
| 0.1384 | 7.41 | 4500 | 0.2404 | 0.5121 | 0.6987 | 0.2319 |
| 0.1253 | 8.24 | 5000 | 0.2443 | 0.5177 | 0.7048 | 0.2288 |
| 0.1108 | 9.06 | 5500 | 0.2509 | 0.5352 | 0.7272 | 0.2319 |
| 0.0974 | 9.88 | 6000 | 0.2669 | 0.5309 | 0.7214 | 0.2375 |
| 0.0844 | 10.71 | 6500 | 0.2650 | 0.5420 | 0.7334 | 0.2393 |
| 0.076 | 11.53 | 7000 | 0.2793 | 0.5263 | 0.7158 | 0.2344 |
| 0.0672 | 12.36 | 7500 | 0.2904 | 0.5453 | 0.7340 | 0.2369 |
| 0.0607 | 13.18 | 8000 | 0.3024 | 0.5424 | 0.7270 | 0.2529 |
| 0.0549 | 14.0 | 8500 | 0.3026 | 0.5524 | 0.7311 | 0.2684 |
| 0.0464 | 14.83 | 9000 | 0.3211 | 0.5538 | 0.7386 | 0.2505 |
| 0.0411 | 15.65 | 9500 | 0.3292 | 0.5591 | 0.7408 | 0.2672 |
| 0.0356 | 16.47 | 10000 | 0.3417 | 0.5633 | 0.7537 | 0.2492 |
| 0.0335 | 17.3 | 10500 | 0.3447 | 0.5601 | 0.7463 | 0.2536 |
| 0.0295 | 18.12 | 11000 | 0.3447 | 0.5678 | 0.7465 | 0.2715 |
| 0.0262 | 18.95 | 11500 | 0.3539 | 0.5642 | 0.7437 | 0.2653 |
| 0.0237 | 19.77 | 12000 | 0.3709 | 0.5631 | 0.7393 | 0.2801 |
| 0.0206 | 20.59 | 12500 | 0.3715 | 0.5617 | 0.7443 | 0.2783 |
| 0.0181 | 21.42 | 13000 | 0.3783 | 0.5672 | 0.7513 | 0.2641 |
| 0.0192 | 22.24 | 13500 | 0.3931 | 0.5622 | 0.7402 | 0.2672 |
| 0.0173 | 23.06 | 14000 | 0.3902 | 0.5665 | 0.7471 | 0.2709 |
| 0.0166 | 23.89 | 14500 | 0.4031 | 0.5649 | 0.7452 | 0.2740 |
| 0.0141 | 24.71 | 15000 | 0.4120 | 0.5632 | 0.7421 | 0.2764 |
| 0.0131 | 25.54 | 15500 | 0.4071 | 0.5644 | 0.7428 | 0.2845 |
| 0.013 | 26.36 | 16000 | 0.4122 | 0.5668 | 0.7412 | 0.2857 |
| 0.0121 | 27.18 | 16500 | 0.4253 | 0.5714 | 0.7505 | 0.2771 |
| 0.0109 | 28.01 | 17000 | 0.4323 | 0.5687 | 0.7462 | 0.2764 |
| 0.0112 | 28.83 | 17500 | 0.4433 | 0.5600 | 0.7401 | 0.2839 |
| 0.0099 | 29.65 | 18000 | 0.4374 | 0.5670 | 0.7446 | 0.2814 |
| 0.0106 | 30.48 | 18500 | 0.4395 | 0.5644 | 0.7488 | 0.2690 |
| 0.0104 | 31.3 | 19000 | 0.4369 | 0.5724 | 0.7498 | 0.2752 |
| 0.0085 | 32.13 | 19500 | 0.4469 | 0.5660 | 0.7430 | 0.2777 |
| 0.0093 | 32.95 | 20000 | 0.4483 | 0.5698 | 0.7463 | 0.2808 |
| 0.0085 | 33.77 | 20500 | 0.4549 | 0.5704 | 0.7580 | 0.2653 |
| 0.0093 | 34.6 | 21000 | 0.4579 | 0.5664 | 0.7420 | 0.2863 |
| 0.009 | 35.42 | 21500 | 0.4560 | 0.5726 | 0.7486 | 0.2808 |
| 0.0075 | 36.24 | 22000 | 0.4650 | 0.5635 | 0.7502 | 0.2715 |
| 0.0081 | 37.07 | 22500 | 0.4647 | 0.5659 | 0.7502 | 0.2715 |
| 0.0074 | 37.89 | 23000 | 0.4662 | 0.5674 | 0.7503 | 0.2758 |
| 0.0077 | 38.71 | 23500 | 0.4710 | 0.5676 | 0.7460 | 0.2771 |
| 0.0065 | 39.54 | 24000 | 0.4701 | 0.5659 | 0.7461 | 0.2801 |
| 0.0076 | 40.36 | 24500 | 0.4673 | 0.5687 | 0.7452 | 0.2777 |
| 0.0075 | 41.19 | 25000 | 0.4692 | 0.5643 | 0.7430 | 0.2715 |
| 0.0071 | 42.01 | 25500 | 0.4743 | 0.5697 | 0.7490 | 0.2771 |
| 0.0071 | 42.83 | 26000 | 0.4705 | 0.5678 | 0.7459 | 0.2703 |
| 0.0063 | 43.66 | 26500 | 0.4711 | 0.5682 | 0.7448 | 0.2777 |
| 0.0071 | 44.48 | 27000 | 0.4722 | 0.5671 | 0.7442 | 0.2715 |
| 0.0061 | 45.3 | 27500 | 0.4714 | 0.5680 | 0.7441 | 0.2789 |
| 0.0065 | 46.13 | 28000 | 0.4781 | 0.5712 | 0.7487 | 0.2764 |
| 0.0067 | 46.95 | 28500 | 0.4770 | 0.5699 | 0.7439 | 0.2764 |
| 0.0065 | 47.78 | 29000 | 0.4790 | 0.5697 | 0.7463 | 0.2789 |
| 0.006 | 48.6 | 29500 | 0.4782 | 0.5698 | 0.7463 | 0.2801 |
| 0.0058 | 49.42 | 30000 | 0.4789 | 0.5701 | 0.7465 | 0.2801 |
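
Validation loss bottoms out around epoch 4 (0.2310 at step 2500) and climbs steadily afterwards while training loss keeps shrinking, a typical overfitting pattern, and the later F1/accuracy gains are small. A generic patience-based early-stopping check (a sketch, not the script that produced this table) would have halted the run much earlier:

```python
# Generic patience-based early stopping on validation loss (a sketch;
# not the training script used to produce the results above).
def early_stop_step(val_losses, patience=3):
    """Return the evaluation index at which training would stop, or None."""
    best = float("inf")
    bad = 0
    for i, loss in enumerate(val_losses):
        if loss < best:
            best, bad = loss, 0  # new best: reset the patience counter
        else:
            bad += 1
            if bad >= patience:
                return i
    return None

# First eight validation losses from the table (steps 500..4000):
val = [0.2704, 0.2516, 0.2390, 0.2327, 0.2310, 0.2332, 0.2331, 0.2331]
stop_at = early_stop_step(val)  # stops at the eighth evaluation (step 4000)
```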

Framework versions

  • Transformers 4.35.2
  • Pytorch 2.1.0+cu121
  • Datasets 2.16.1
  • Tokenizers 0.15.0
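
To reproduce this environment, the listed versions can be pinned at install time. A sketch, assuming a CUDA 12.1 setup (the `cu121` wheel index matches the `2.1.0+cu121` build above):

```shell
pip install transformers==4.35.2 datasets==2.16.1 tokenizers==0.15.0
pip install torch==2.1.0 --index-url https://download.pytorch.org/whl/cu121
```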