predict-perception-xlmr-blame-none

This model is a fine-tuned version of xlm-roberta-base on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.8941
  • RMSE: 1.1259
  • RMSE Blame::a Nessuno: 1.1259
  • MAE: 0.8559
  • MAE Blame::a Nessuno: 0.8559
  • R2: 0.2847
  • R2 Blame::a Nessuno: 0.2847
  • Cos: 0.3043
  • Pair: 0.0
  • Rank: 0.5
  • Neighbors: 0.3537
  • RSA: nan
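
The RMSE/MAE/R2 figures above point to a single-output regression head on top of xlm-roberta-base. Below is a minimal, hedged sketch of how the checkpoint could be loaded for inference; the hub id and the example sentence are illustrative assumptions, not taken from this card.

```python
# Minimal sketch, assuming a 1-label regression head. The model id and the
# example sentence are placeholders; use the full hub id (namespace/name).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "predict-perception-xlmr-blame-none"  # replace with the full hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Un esempio di frase da valutare."  # placeholder Italian input
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# With num_labels=1 the single logit is the predicted perception score
# for "blame: none" (nessuno).
print(logits.squeeze().item())
```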

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged Trainer sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 20
  • eval_batch_size: 8
  • seed: 1996
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
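
The sketch below reconstructs these settings with the Hugging Face Trainer API; only the values listed above are taken from the card, while the output directory, datasets, evaluation strategy, and metric computation are placeholders or assumptions.

```python
# Sketch of the listed hyperparameters expressed as TrainingArguments.
# Everything not reported above (output_dir, datasets, compute_metrics,
# evaluation_strategy) is an assumption or placeholder.
from transformers import (
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

training_args = TrainingArguments(
    output_dir="predict-perception-xlmr-blame-none",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=20,
    per_device_eval_batch_size=8,
    seed=1996,
    num_train_epochs=30,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # assumed: the results table reports per-epoch eval
)

# Regression head (num_labels=1) on top of xlm-roberta-base, matching the
# RMSE/MAE/R2 metrics reported in this card.
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=1, problem_type="regression"
)

trainer = Trainer(
    model=model,
    args=training_args,
    # train_dataset=..., eval_dataset=..., compute_metrics=...  (not published here)
)
```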

Training results

| Training Loss | Epoch | Step | Validation Loss | RMSE | RMSE Blame::a Nessuno | MAE | MAE Blame::a Nessuno | R2 | R2 Blame::a Nessuno | Cos | Pair | Rank | Neighbors | RSA |
|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|
| 1.042 | 1.0 | 15 | 1.2746 | 1.3443 | 1.3443 | 1.1788 | 1.1788 | -0.0197 | -0.0197 | 0.0435 | 0.0 | 0.5 | 0.2970 | nan |
| 0.9994 | 2.0 | 30 | 1.3264 | 1.3714 | 1.3714 | 1.1967 | 1.1967 | -0.0612 | -0.0612 | -0.0435 | 0.0 | 0.5 | 0.2961 | nan |
| 0.9123 | 3.0 | 45 | 1.2511 | 1.3319 | 1.3319 | 1.0932 | 1.0932 | -0.0009 | -0.0009 | 0.1304 | 0.0 | 0.5 | 0.2681 | nan |
| 0.741 | 4.0 | 60 | 1.0204 | 1.2028 | 1.2028 | 0.9818 | 0.9818 | 0.1836 | 0.1836 | 0.3043 | 0.0 | 0.5 | 0.3686 | nan |
| 0.6337 | 5.0 | 75 | 0.8607 | 1.1047 | 1.1047 | 0.8145 | 0.8145 | 0.3115 | 0.3115 | 0.3913 | 0.0 | 0.5 | 0.4044 | nan |
| 0.4974 | 6.0 | 90 | 0.8574 | 1.1026 | 1.1026 | 0.8095 | 0.8095 | 0.3140 | 0.3140 | 0.3913 | 0.0 | 0.5 | 0.4044 | nan |
| 0.4929 | 7.0 | 105 | 0.8548 | 1.1009 | 1.1009 | 0.8560 | 0.8560 | 0.3161 | 0.3161 | 0.3043 | 0.0 | 0.5 | 0.3686 | nan |
| 0.4378 | 8.0 | 120 | 0.6974 | 0.9944 | 0.9944 | 0.7503 | 0.7503 | 0.4421 | 0.4421 | 0.3043 | 0.0 | 0.5 | 0.3686 | nan |
| 0.3999 | 9.0 | 135 | 0.7955 | 1.0620 | 1.0620 | 0.7907 | 0.7907 | 0.3636 | 0.3636 | 0.3913 | 0.0 | 0.5 | 0.4044 | nan |
| 0.3715 | 10.0 | 150 | 0.8954 | 1.1267 | 1.1267 | 0.8036 | 0.8036 | 0.2837 | 0.2837 | 0.4783 | 0.0 | 0.5 | 0.4058 | nan |
| 0.3551 | 11.0 | 165 | 0.8449 | 1.0945 | 1.0945 | 0.8748 | 0.8748 | 0.3241 | 0.3241 | 0.3913 | 0.0 | 0.5 | 0.3931 | nan |
| 0.3428 | 12.0 | 180 | 0.7960 | 1.0624 | 1.0624 | 0.8000 | 0.8000 | 0.3632 | 0.3632 | 0.3913 | 0.0 | 0.5 | 0.4044 | nan |
| 0.2923 | 13.0 | 195 | 0.9027 | 1.1313 | 1.1313 | 0.8441 | 0.8441 | 0.2778 | 0.2778 | 0.3043 | 0.0 | 0.5 | 0.3537 | nan |
| 0.2236 | 14.0 | 210 | 0.8914 | 1.1242 | 1.1242 | 0.8998 | 0.8998 | 0.2869 | 0.2869 | 0.2174 | 0.0 | 0.5 | 0.3324 | nan |
| 0.2553 | 15.0 | 225 | 0.9184 | 1.1411 | 1.1411 | 0.8633 | 0.8633 | 0.2652 | 0.2652 | 0.3043 | 0.0 | 0.5 | 0.3537 | nan |
| 0.2064 | 16.0 | 240 | 0.9284 | 1.1473 | 1.1473 | 0.8919 | 0.8919 | 0.2573 | 0.2573 | 0.3043 | 0.0 | 0.5 | 0.3537 | nan |
| 0.1972 | 17.0 | 255 | 0.9495 | 1.1602 | 1.1602 | 0.8768 | 0.8768 | 0.2404 | 0.2404 | 0.3043 | 0.0 | 0.5 | 0.3537 | nan |
| 0.1622 | 18.0 | 270 | 0.9850 | 1.1818 | 1.1818 | 0.9303 | 0.9303 | 0.2120 | 0.2120 | 0.2174 | 0.0 | 0.5 | 0.3324 | nan |
| 0.1685 | 19.0 | 285 | 0.9603 | 1.1669 | 1.1669 | 0.8679 | 0.8679 | 0.2317 | 0.2317 | 0.3043 | 0.0 | 0.5 | 0.3537 | nan |
| 0.1773 | 20.0 | 300 | 0.9269 | 1.1464 | 1.1464 | 0.8391 | 0.8391 | 0.2585 | 0.2585 | 0.3043 | 0.0 | 0.5 | 0.3537 | nan |
| 0.1716 | 21.0 | 315 | 0.8936 | 1.1256 | 1.1256 | 0.8357 | 0.8357 | 0.2851 | 0.2851 | 0.3043 | 0.0 | 0.5 | 0.3537 | nan |
| 0.161 | 22.0 | 330 | 0.8894 | 1.1230 | 1.1230 | 0.8593 | 0.8593 | 0.2884 | 0.2884 | 0.3043 | 0.0 | 0.5 | 0.3537 | nan |
| 0.1297 | 23.0 | 345 | 0.8997 | 1.1294 | 1.1294 | 0.8568 | 0.8568 | 0.2802 | 0.2802 | 0.3043 | 0.0 | 0.5 | 0.3537 | nan |
| 0.15 | 24.0 | 360 | 0.8748 | 1.1137 | 1.1137 | 0.8541 | 0.8541 | 0.3002 | 0.3002 | 0.2174 | 0.0 | 0.5 | 0.3324 | nan |
| 0.1149 | 25.0 | 375 | 0.9264 | 1.1461 | 1.1461 | 0.8682 | 0.8682 | 0.2588 | 0.2588 | 0.3913 | 0.0 | 0.5 | 0.3901 | nan |
| 0.1354 | 26.0 | 390 | 0.8829 | 1.1188 | 1.1188 | 0.8608 | 0.8608 | 0.2937 | 0.2937 | 0.2174 | 0.0 | 0.5 | 0.3324 | nan |
| 0.1321 | 27.0 | 405 | 0.9137 | 1.1382 | 1.1382 | 0.8656 | 0.8656 | 0.2691 | 0.2691 | 0.3043 | 0.0 | 0.5 | 0.3537 | nan |
| 0.1154 | 28.0 | 420 | 0.8774 | 1.1154 | 1.1154 | 0.8488 | 0.8488 | 0.2980 | 0.2980 | 0.2174 | 0.0 | 0.5 | 0.3324 | nan |
| 0.1112 | 29.0 | 435 | 0.8985 | 1.1287 | 1.1287 | 0.8562 | 0.8562 | 0.2812 | 0.2812 | 0.3043 | 0.0 | 0.5 | 0.3537 | nan |
| 0.1525 | 30.0 | 450 | 0.8941 | 1.1259 | 1.1259 | 0.8559 | 0.8559 | 0.2847 | 0.2847 | 0.3043 | 0.0 | 0.5 | 0.3537 | nan |
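
For reference, the standard regression metrics in the table (RMSE, MAE, R2) can be recomputed from predictions as in the sketch below, assuming scikit-learn; the task-specific columns (Cos, Pair, Rank, Neighbors, RSA) come from the original evaluation code, which is not reproduced here.

```python
# Hedged sketch: recompute RMSE, MAE and R2 for a single-output regression
# head with scikit-learn. The arrays are illustrative placeholders, not
# values from the actual evaluation set.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

y_true = np.array([1.0, 0.0, 2.0, 3.0])  # gold perception scores (placeholder)
y_pred = np.array([0.8, 0.4, 1.7, 2.5])  # model outputs (placeholder)

rmse = mean_squared_error(y_true, y_pred, squared=False)  # root of MSE
mae = mean_absolute_error(y_true, y_pred)
r2 = r2_score(y_true, y_pred)

print(f"RMSE={rmse:.4f}  MAE={mae:.4f}  R2={r2:.4f}")
```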

Framework versions

  • Transformers 4.16.2
  • Pytorch 1.10.2+cu113
  • Datasets 1.18.3
  • Tokenizers 0.11.0