
predict-perception-bert-blame-assassin

This model is a fine-tuned version of dbmdz/bert-base-italian-xxl-cased on an unknown dataset. It achieves the following results on the evaluation set (a minimal inference sketch follows the metrics):

  • Loss: 0.5128
  • Rmse: 1.0287
  • Rmse Blame::a L'assassino: 1.0287
  • Mae: 0.8883
  • Mae Blame::a L'assassino: 0.8883
  • R2: 0.5883
  • R2 Blame::a L'assassino: 0.5883
  • Cos: 0.6522
  • Pair: 0.0
  • Rank: 0.5
  • Neighbors: 0.5795
  • Rsa: nan
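
The metrics above (RMSE, MAE, R²) describe a single-output regression head that scores one perception dimension per text, here the blame attributed to the assassin. Below is a minimal inference sketch, assuming the model is published on the Hugging Face Hub and loadable as a sequence-classification model with a regression head; the repository id and the example sentence are placeholders, not taken from the card.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

repo_id = "predict-perception-bert-blame-assassin"  # placeholder: substitute the actual Hub repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)  # regression head (num_labels=1)

text = "L'assassino ha sparato alla vittima senza alcun motivo."  # hypothetical Italian input
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    blame_score = model(**inputs).logits.squeeze().item()  # predicted perceived-blame score

print(blame_score)
```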

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 20
  • eval_batch_size: 8
  • seed: 1996
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
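
For reference, these settings roughly correspond to the following transformers.TrainingArguments configuration. This is a sketch only: the output directory is a placeholder, and the dataset loading and Trainer wiring are omitted.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameter list above; the Adam betas/epsilon are the reported optimizer settings.
training_args = TrainingArguments(
    output_dir="predict-perception-bert-blame-assassin",  # placeholder output directory
    learning_rate=1e-5,
    per_device_train_batch_size=20,
    per_device_eval_batch_size=8,
    seed=1996,
    num_train_epochs=30,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```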

Training results

| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Blame::a L'assassino | Mae | Mae Blame::a L'assassino | R2 | R2 Blame::a L'assassino | Cos | Pair | Rank | Neighbors | Rsa |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1.0184 | 1.0 | 15 | 1.2219 | 1.5879 | 1.5879 | 1.4308 | 1.4308 | 0.0191 | 0.0191 | 0.3913 | 0.0 | 0.5 | 0.3781 | nan |
| 0.9214 | 2.0 | 30 | 1.0927 | 1.5017 | 1.5017 | 1.3634 | 1.3634 | 0.1227 | 0.1227 | 0.5652 | 0.0 | 0.5 | 0.4512 | nan |
| 0.7809 | 3.0 | 45 | 0.8206 | 1.3013 | 1.3013 | 1.1808 | 1.1808 | 0.3412 | 0.3412 | 0.4783 | 0.0 | 0.5 | 0.3819 | nan |
| 0.6593 | 4.0 | 60 | 0.5894 | 1.1029 | 1.1029 | 1.0145 | 1.0145 | 0.5268 | 0.5268 | 0.7391 | 0.0 | 0.5 | 0.6408 | nan |
| 0.4672 | 5.0 | 75 | 0.4759 | 0.9910 | 0.9910 | 0.8868 | 0.8868 | 0.6180 | 0.6180 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.3356 | 6.0 | 90 | 0.4220 | 0.9332 | 0.9332 | 0.8083 | 0.8083 | 0.6612 | 0.6612 | 0.6522 | 0.0 | 0.5 | 0.4249 | nan |
| 0.2782 | 7.0 | 105 | 0.4477 | 0.9612 | 0.9612 | 0.8046 | 0.8046 | 0.6406 | 0.6406 | 0.6522 | 0.0 | 0.5 | 0.6101 | nan |
| 0.2075 | 8.0 | 120 | 0.4389 | 0.9518 | 0.9518 | 0.8050 | 0.8050 | 0.6476 | 0.6476 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan |
| 0.1725 | 9.0 | 135 | 0.4832 | 0.9985 | 0.9985 | 0.8356 | 0.8356 | 0.6121 | 0.6121 | 0.7391 | 0.0 | 0.5 | 0.6616 | nan |
| 0.1642 | 10.0 | 150 | 0.4368 | 0.9494 | 0.9494 | 0.8060 | 0.8060 | 0.6493 | 0.6493 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan |
| 0.1172 | 11.0 | 165 | 0.4538 | 0.9677 | 0.9677 | 0.8174 | 0.8174 | 0.6357 | 0.6357 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.104 | 12.0 | 180 | 0.4672 | 0.9819 | 0.9819 | 0.8384 | 0.8384 | 0.6249 | 0.6249 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.0822 | 13.0 | 195 | 0.4401 | 0.9530 | 0.9530 | 0.8107 | 0.8107 | 0.6467 | 0.6467 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.0755 | 14.0 | 210 | 0.4464 | 0.9598 | 0.9598 | 0.8251 | 0.8251 | 0.6416 | 0.6416 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.0801 | 15.0 | 225 | 0.4834 | 0.9988 | 0.9988 | 0.8604 | 0.8604 | 0.6119 | 0.6119 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.053 | 16.0 | 240 | 0.4846 | 1.0001 | 1.0001 | 0.8651 | 0.8651 | 0.6109 | 0.6109 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.0573 | 17.0 | 255 | 0.4970 | 1.0128 | 1.0128 | 0.8743 | 0.8743 | 0.6010 | 0.6010 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.0571 | 18.0 | 270 | 0.4803 | 0.9956 | 0.9956 | 0.8503 | 0.8503 | 0.6144 | 0.6144 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan |
| 0.0483 | 19.0 | 285 | 0.4936 | 1.0093 | 1.0093 | 0.8740 | 0.8740 | 0.6037 | 0.6037 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan |
| 0.0414 | 20.0 | 300 | 0.5138 | 1.0297 | 1.0297 | 0.8943 | 0.8943 | 0.5875 | 0.5875 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan |
| 0.0513 | 21.0 | 315 | 0.5240 | 1.0399 | 1.0399 | 0.9050 | 0.9050 | 0.5793 | 0.5793 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.0499 | 22.0 | 330 | 0.5275 | 1.0434 | 1.0434 | 0.9048 | 0.9048 | 0.5765 | 0.5765 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.0423 | 23.0 | 345 | 0.5350 | 1.0508 | 1.0508 | 0.8872 | 0.8872 | 0.5705 | 0.5705 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan |
| 0.0447 | 24.0 | 360 | 0.4963 | 1.0120 | 1.0120 | 0.8754 | 0.8754 | 0.6016 | 0.6016 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.0364 | 25.0 | 375 | 0.5009 | 1.0167 | 1.0167 | 0.8809 | 0.8809 | 0.5979 | 0.5979 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan |
| 0.0412 | 26.0 | 390 | 0.5060 | 1.0219 | 1.0219 | 0.8781 | 0.8781 | 0.5938 | 0.5938 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan |
| 0.0297 | 27.0 | 405 | 0.5027 | 1.0185 | 1.0185 | 0.8838 | 0.8838 | 0.5964 | 0.5964 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.0416 | 28.0 | 420 | 0.5071 | 1.0230 | 1.0230 | 0.8867 | 0.8867 | 0.5929 | 0.5929 | 0.7391 | 0.0 | 0.5 | 0.4884 | nan |
| 0.0327 | 29.0 | 435 | 0.5124 | 1.0283 | 1.0283 | 0.8883 | 0.8883 | 0.5887 | 0.5887 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan |
| 0.0383 | 30.0 | 450 | 0.5128 | 1.0287 | 1.0287 | 0.8883 | 0.8883 | 0.5883 | 0.5883 | 0.6522 | 0.0 | 0.5 | 0.5795 | nan |

Framework versions

  • Transformers 4.16.2
  • Pytorch 1.10.2+cu113
  • Datasets 1.18.3
  • Tokenizers 0.11.0
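
To reproduce the environment reported above, a quick version check can confirm that the installed libraries match the card (a sketch; the import names are the standard PyPI distributions of these packages):

```python
# Expected: Transformers 4.16.2, PyTorch 1.10.2+cu113, Datasets 1.18.3, Tokenizers 0.11.0
import transformers, torch, datasets, tokenizers

print("transformers", transformers.__version__)
print("torch", torch.__version__)
print("datasets", datasets.__version__)
print("tokenizers", tokenizers.__version__)
```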