predict-perception-xlmr-blame-assassin

This model is a fine-tuned version of xlm-roberta-base on an unknown dataset. It achieves the following results on the evaluation set (a minimal loading sketch follows the list):

  • Loss: 0.4439
  • RMSE: 0.9571
  • RMSE Blame::a L'assassino: 0.9571
  • MAE: 0.7260
  • MAE Blame::a L'assassino: 0.7260
  • R2: 0.6437
  • R2 Blame::a L'assassino: 0.6437
  • Cos: 0.7391
  • Pair: 0.0
  • Rank: 0.5
  • Neighbors: 0.6287
  • RSA: nan
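Since the card does not document usage, here is a minimal inference sketch, assuming the checkpoint is published on the Hugging Face Hub under this card's name and exposes a single regression logit (both are assumptions; substitute the real repository id). The Italian example sentence is illustrative only.

```python
# Minimal sketch, assuming a single-logit regression head and a Hub repo
# named after this card (placeholder id; replace with the real one).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "predict-perception-xlmr-blame-assassin"  # assumed/placeholder

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

# Illustrative Italian input; the model predicts a perceived-blame score
# for the "Blame::a l'assassino" (blame toward the assassin) dimension.
inputs = tokenizer("L'assassino ha agito a sangue freddo.", return_tensors="pt")
with torch.no_grad():
    blame_score = model(**inputs).logits.squeeze().item()
print(blame_score)
```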

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the TrainingArguments sketch after the list):

  • learning_rate: 1e-05
  • train_batch_size: 20
  • eval_batch_size: 8
  • seed: 1996
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
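For convenience, here is a minimal sketch mapping these settings onto transformers TrainingArguments; the output_dir is a placeholder, and data loading and the Trainer itself are omitted because the card does not document them.

```python
# Minimal sketch of the listed hyperparameters as TrainingArguments.
# The Adam betas=(0.9, 0.999) and epsilon=1e-08 above are the library
# defaults, so they need no explicit arguments here.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="predict-perception-xlmr-blame-assassin",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=20,
    per_device_eval_batch_size=8,
    seed=1996,
    lr_scheduler_type="linear",
    num_train_epochs=30,
)
```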

Training results

| Training Loss | Epoch | Step | Validation Loss | RMSE | RMSE Blame::a L'assassino | MAE | MAE Blame::a L'assassino | R2 | R2 Blame::a L'assassino | Cos | Pair | Rank | Neighbors | RSA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1.0317 | 1.0 | 15 | 1.1311 | 1.5278 | 1.5278 | 1.3893 | 1.3893 | 0.0919 | 0.0919 | 0.5652 | 0.0 | 0.5 | 0.4512 | nan |
| 0.9475 | 2.0 | 30 | 1.0795 | 1.4926 | 1.4926 | 1.3387 | 1.3387 | 0.1334 | 0.1334 | 0.8261 | 0.0 | 0.5 | 0.6184 | nan |
| 0.9146 | 3.0 | 45 | 1.1092 | 1.5130 | 1.5130 | 1.4078 | 1.4078 | 0.1095 | 0.1095 | 0.4783 | 0.0 | 0.5 | 0.3116 | nan |
| 0.9539 | 4.0 | 60 | 1.1734 | 1.5561 | 1.5561 | 1.4238 | 1.4238 | 0.0580 | 0.0580 | 0.3913 | 0.0 | 0.5 | 0.3614 | nan |
| 0.8665 | 5.0 | 75 | 0.8910 | 1.3560 | 1.3560 | 1.2350 | 1.2350 | 0.2847 | 0.2847 | 0.5652 | 0.0 | 0.5 | 0.4136 | nan |
| 0.6564 | 6.0 | 90 | 0.8469 | 1.3220 | 1.3220 | 1.1570 | 1.1570 | 0.3201 | 0.3201 | 0.3913 | 0.0 | 0.5 | 0.3931 | nan |
| 0.5241 | 7.0 | 105 | 0.6429 | 1.1519 | 1.1519 | 0.9757 | 0.9757 | 0.4838 | 0.4838 | 0.5652 | 0.0 | 0.5 | 0.4222 | nan |
| 0.4589 | 8.0 | 120 | 0.5781 | 1.0923 | 1.0923 | 0.8714 | 0.8714 | 0.5359 | 0.5359 | 0.6522 | 0.0 | 0.5 | 0.4641 | nan |
| 0.4043 | 9.0 | 135 | 0.4525 | 0.9664 | 0.9664 | 0.8257 | 0.8257 | 0.6367 | 0.6367 | 0.5652 | 0.0 | 0.5 | 0.4263 | nan |
| 0.3498 | 10.0 | 150 | 0.4490 | 0.9627 | 0.9627 | 0.8272 | 0.8272 | 0.6395 | 0.6395 | 0.6522 | 0.0 | 0.5 | 0.5144 | nan |
| 0.3505 | 11.0 | 165 | 0.3721 | 0.8763 | 0.8763 | 0.7471 | 0.7471 | 0.7013 | 0.7013 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.3426 | 12.0 | 180 | 0.4117 | 0.9218 | 0.9218 | 0.7477 | 0.7477 | 0.6695 | 0.6695 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.3074 | 13.0 | 195 | 0.3761 | 0.8810 | 0.8810 | 0.7109 | 0.7109 | 0.6981 | 0.6981 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.2261 | 14.0 | 210 | 0.3818 | 0.8877 | 0.8877 | 0.7042 | 0.7042 | 0.6935 | 0.6935 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.2399 | 15.0 | 225 | 0.3893 | 0.8964 | 0.8964 | 0.7108 | 0.7108 | 0.6874 | 0.6874 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.2014 | 16.0 | 240 | 0.4606 | 0.9750 | 0.9750 | 0.8046 | 0.8046 | 0.6302 | 0.6302 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.1937 | 17.0 | 255 | 0.4549 | 0.9689 | 0.9689 | 0.7679 | 0.7679 | 0.6348 | 0.6348 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.1831 | 18.0 | 270 | 0.4113 | 0.9213 | 0.9213 | 0.6746 | 0.6746 | 0.6698 | 0.6698 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.1758 | 19.0 | 285 | 0.4154 | 0.9259 | 0.9259 | 0.7053 | 0.7053 | 0.6665 | 0.6665 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.1577 | 20.0 | 300 | 0.3970 | 0.9051 | 0.9051 | 0.7163 | 0.7163 | 0.6813 | 0.6813 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.1597 | 21.0 | 315 | 0.4199 | 0.9309 | 0.9309 | 0.7270 | 0.7270 | 0.6629 | 0.6629 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.1145 | 22.0 | 330 | 0.4250 | 0.9365 | 0.9365 | 0.6971 | 0.6971 | 0.6588 | 0.6588 | 0.8261 | 0.0 | 0.5 | 0.6594 | nan |
| 0.1349 | 23.0 | 345 | 0.4168 | 0.9275 | 0.9275 | 0.7126 | 0.7126 | 0.6654 | 0.6654 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.1481 | 24.0 | 360 | 0.4421 | 0.9552 | 0.9552 | 0.7441 | 0.7441 | 0.6451 | 0.6451 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.1188 | 25.0 | 375 | 0.4356 | 0.9481 | 0.9481 | 0.7444 | 0.7444 | 0.6503 | 0.6503 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.1119 | 26.0 | 390 | 0.4456 | 0.9590 | 0.9590 | 0.7139 | 0.7139 | 0.6422 | 0.6422 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.1282 | 27.0 | 405 | 0.4456 | 0.9589 | 0.9589 | 0.7637 | 0.7637 | 0.6423 | 0.6423 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.142 | 28.0 | 420 | 0.4501 | 0.9637 | 0.9637 | 0.7146 | 0.7146 | 0.6387 | 0.6387 | 0.8261 | 0.0 | 0.5 | 0.6594 | nan |
| 0.126 | 29.0 | 435 | 0.4442 | 0.9575 | 0.9575 | 0.7189 | 0.7189 | 0.6433 | 0.6433 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.1308 | 30.0 | 450 | 0.4439 | 0.9571 | 0.9571 | 0.7260 | 0.7260 | 0.6437 | 0.6437 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
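For reference, the headline regression metrics in the table can be recomputed from gold scores and predictions as in the sketch below; the arrays are placeholders, and the project-specific Cos, Pair, Rank, Neighbors, and RSA metrics are not reconstructed here since the card does not define them.

```python
# Sketch: recomputing RMSE, MAE, and R2 from gold scores and predictions.
# The arrays below are placeholders, not data from this model.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

y_true = np.array([0.0, 1.5, 3.2])  # placeholder gold perception scores
y_pred = np.array([0.3, 1.1, 2.9])  # placeholder model predictions

rmse = np.sqrt(mean_squared_error(y_true, y_pred))
mae = mean_absolute_error(y_true, y_pred)
r2 = r2_score(y_true, y_pred)
print(rmse, mae, r2)
```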

Framework versions

  • Transformers 4.16.2
  • Pytorch 1.10.2+cu113
  • Datasets 1.18.3
  • Tokenizers 0.11.0
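A quick way to confirm a local environment matches these pins is the sketch below; it only reads each package's version attribute.

```python
# Sketch: verify installed versions against the pins listed above.
import datasets, tokenizers, torch, transformers

assert transformers.__version__ == "4.16.2"
assert torch.__version__.startswith("1.10.2")  # +cu113 build suffix may vary
assert datasets.__version__ == "1.18.3"
assert tokenizers.__version__ == "0.11.0"
```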