
predict-perception-bert-blame-none

This model is a fine-tuned version of dbmdz/bert-base-italian-xxl-cased on an unknown dataset. It achieves the following results on the evaluation set (a usage sketch follows the list):

  • Loss: 0.8646
  • RMSE: 1.1072
  • RMSE (Blame: a nessuno): 1.1072
  • MAE: 0.8721
  • MAE (Blame: a nessuno): 0.8721
  • R2: 0.3083
  • R2 (Blame: a nessuno): 0.3083
  • Cos: 0.5652
  • Pair: 0.0
  • Rank: 0.5
  • Neighbors: 0.5070
  • RSA: nan
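
The metric names suggest a single-output regression model that scores how strongly a reader perceives that blame is attributed to no one ("a nessuno") in an Italian text. Assuming that setup (the card does not state it explicitly), a minimal inference sketch; the Hub ID/path and the example sentence are placeholders, not from the card:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder: replace with the full Hub ID or a local checkpoint path.
model_name = "predict-perception-bert-blame-none"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# Italian input, matching the dbmdz/bert-base-italian-xxl-cased base model.
text = "Non è colpa di nessuno."  # hypothetical example sentence

inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    outputs = model(**inputs)

# Assuming a regression head, logits holds the single predicted score.
score = outputs.logits.squeeze().item()
print(f"Predicted 'blame: a nessuno' perception score: {score:.4f}")
```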

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the TrainingArguments sketch after the list):

  • learning_rate: 1e-05
  • train_batch_size: 20
  • eval_batch_size: 8
  • seed: 1996
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
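
A hedged sketch of how these settings map onto Hugging Face TrainingArguments under Transformers 4.16 (the version listed below); the output directory and evaluation strategy are assumptions, since the card only lists the hyperparameters themselves:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="predict-perception-bert-blame-none",  # placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=20,
    per_device_eval_batch_size=8,
    seed=1996,
    num_train_epochs=30,
    lr_scheduler_type="linear",
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's
    # default optimizer configuration, so no explicit arguments are needed.
    evaluation_strategy="epoch",  # assumption: the table below logs one eval per epoch
)
```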

Training results

| Training Loss | Epoch | Step | Validation Loss | RMSE | RMSE (Blame: a nessuno) | MAE | MAE (Blame: a nessuno) | R2 | R2 (Blame: a nessuno) | Cos | Pair | Rank | Neighbors | RSA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1.007 | 1.0 | 15 | 1.2585 | 1.3358 | 1.3358 | 1.1752 | 1.1752 | -0.0068 | -0.0068 | -0.0435 | 0.0 | 0.5 | 0.2970 | nan |
| 0.927 | 2.0 | 30 | 1.1310 | 1.2663 | 1.2663 | 1.0633 | 1.0633 | 0.0952 | 0.0952 | 0.4783 | 0.0 | 0.5 | 0.4012 | nan |
| 0.8376 | 3.0 | 45 | 1.0603 | 1.2261 | 1.2261 | 1.0574 | 1.0574 | 0.1518 | 0.1518 | 0.1304 | 0.0 | 0.5 | 0.2970 | nan |
| 0.7154 | 4.0 | 60 | 0.8347 | 1.0879 | 1.0879 | 0.8854 | 0.8854 | 0.3323 | 0.3323 | 0.6522 | 0.0 | 0.5 | 0.5209 | nan |
| 0.5766 | 5.0 | 75 | 0.7426 | 1.0261 | 1.0261 | 0.8340 | 0.8340 | 0.4059 | 0.4059 | 0.6522 | 0.0 | 0.5 | 0.5209 | nan |
| 0.4632 | 6.0 | 90 | 0.6671 | 0.9725 | 0.9725 | 0.7932 | 0.7932 | 0.4663 | 0.4663 | 0.6522 | 0.0 | 0.5 | 0.5209 | nan |
| 0.3854 | 7.0 | 105 | 0.6447 | 0.9561 | 0.9561 | 0.7424 | 0.7424 | 0.4842 | 0.4842 | 0.6522 | 0.0 | 0.5 | 0.4307 | nan |
| 0.3154 | 8.0 | 120 | 0.7198 | 1.0102 | 1.0102 | 0.8113 | 0.8113 | 0.4241 | 0.4241 | 0.6522 | 0.0 | 0.5 | 0.4307 | nan |
| 0.2637 | 9.0 | 135 | 0.7221 | 1.0118 | 1.0118 | 0.8319 | 0.8319 | 0.4223 | 0.4223 | 0.5652 | 0.0 | 0.5 | 0.4150 | nan |
| 0.1962 | 10.0 | 150 | 0.6999 | 0.9962 | 0.9962 | 0.7945 | 0.7945 | 0.4401 | 0.4401 | 0.4783 | 0.0 | 0.5 | 0.4056 | nan |
| 0.1784 | 11.0 | 165 | 0.7335 | 1.0198 | 1.0198 | 0.7969 | 0.7969 | 0.4132 | 0.4132 | 0.5652 | 0.0 | 0.5 | 0.4150 | nan |
| 0.1531 | 12.0 | 180 | 0.8277 | 1.0833 | 1.0833 | 0.8839 | 0.8839 | 0.3378 | 0.3378 | 0.4783 | 0.0 | 0.5 | 0.4440 | nan |
| 0.1425 | 13.0 | 195 | 0.8644 | 1.1070 | 1.1070 | 0.8726 | 0.8726 | 0.3085 | 0.3085 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
| 0.0921 | 14.0 | 210 | 0.8874 | 1.1217 | 1.1217 | 0.9024 | 0.9024 | 0.2900 | 0.2900 | 0.4783 | 0.0 | 0.5 | 0.4440 | nan |
| 0.0913 | 15.0 | 225 | 0.8663 | 1.1083 | 1.1083 | 0.8914 | 0.8914 | 0.3070 | 0.3070 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
| 0.08 | 16.0 | 240 | 0.8678 | 1.1093 | 1.1093 | 0.8762 | 0.8762 | 0.3057 | 0.3057 | 0.6522 | 0.0 | 0.5 | 0.5931 | nan |
| 0.0725 | 17.0 | 255 | 0.8497 | 1.0976 | 1.0976 | 0.8868 | 0.8868 | 0.3202 | 0.3202 | 0.4783 | 0.0 | 0.5 | 0.4440 | nan |
| 0.0696 | 18.0 | 270 | 0.8533 | 1.1000 | 1.1000 | 0.8796 | 0.8796 | 0.3173 | 0.3173 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
| 0.0632 | 19.0 | 285 | 0.8563 | 1.1018 | 1.1018 | 0.8768 | 0.8768 | 0.3150 | 0.3150 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
| 0.0511 | 20.0 | 300 | 0.8433 | 1.0935 | 1.0935 | 0.8684 | 0.8684 | 0.3254 | 0.3254 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
| 0.0517 | 21.0 | 315 | 0.8449 | 1.0945 | 1.0945 | 0.8758 | 0.8758 | 0.3240 | 0.3240 | 0.4783 | 0.0 | 0.5 | 0.4440 | nan |
| 0.0556 | 22.0 | 330 | 0.8305 | 1.0851 | 1.0851 | 0.8469 | 0.8469 | 0.3356 | 0.3356 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
| 0.0457 | 23.0 | 345 | 0.8369 | 1.0893 | 1.0893 | 0.8555 | 0.8555 | 0.3305 | 0.3305 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
| 0.0496 | 24.0 | 360 | 0.8441 | 1.0940 | 1.0940 | 0.8648 | 0.8648 | 0.3247 | 0.3247 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
| 0.0467 | 25.0 | 375 | 0.8470 | 1.0959 | 1.0959 | 0.8633 | 0.8633 | 0.3224 | 0.3224 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
| 0.0446 | 26.0 | 390 | 0.8562 | 1.1018 | 1.1018 | 0.8708 | 0.8708 | 0.3151 | 0.3151 | 0.4783 | 0.0 | 0.5 | 0.4440 | nan |
| 0.0476 | 27.0 | 405 | 0.8600 | 1.1042 | 1.1042 | 0.8714 | 0.8714 | 0.3120 | 0.3120 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
| 0.042 | 28.0 | 420 | 0.8657 | 1.1079 | 1.1079 | 0.8763 | 0.8763 | 0.3074 | 0.3074 | 0.4783 | 0.0 | 0.5 | 0.4440 | nan |
| 0.0431 | 29.0 | 435 | 0.8654 | 1.1077 | 1.1077 | 0.8734 | 0.8734 | 0.3077 | 0.3077 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
| 0.0423 | 30.0 | 450 | 0.8646 | 1.1072 | 1.1072 | 0.8721 | 0.8721 | 0.3083 | 0.3083 | 0.5652 | 0.0 | 0.5 | 0.5070 | nan |
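
For readers reproducing the evaluation, a hedged sketch of a compute_metrics function covering the standard regression metrics in the table (RMSE, MAE, R2); the Cos, Pair, Rank, Neighbors, and RSA columns come from the authors' custom evaluation code and are not reconstructed here:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def compute_metrics(eval_pred):
    """Hypothetical reconstruction of the standard regression metrics above,
    in the shape the Trainer expects from a compute_metrics callback."""
    predictions, labels = eval_pred
    predictions = np.asarray(predictions).squeeze()
    return {
        "rmse": float(np.sqrt(mean_squared_error(labels, predictions))),
        "mae": float(mean_absolute_error(labels, predictions)),
        "r2": float(r2_score(labels, predictions)),
    }
```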

Framework versions

  • Transformers 4.16.2
  • Pytorch 1.10.2+cu113
  • Datasets 1.18.3
  • Tokenizers 0.11.0