predict-perception-xlmr-focus-assassin

This model is a fine-tuned version of xlm-roberta-base on an unknown dataset. It achieves the following results on the evaluation set (a hypothetical inference sketch follows the list):

  • Loss: 0.3264
  • RMSE: 0.9437
  • RMSE, Focus::a Sull'assassino (Italian: "on the assassin"): 0.9437
  • MAE: 0.7093
  • MAE, Focus::a Sull'assassino: 0.7093
  • R2: 0.6145
  • R2, Focus::a Sull'assassino: 0.6145
  • Cos: 0.7391
  • Pair: 0.0
  • Rank: 0.5
  • Neighbors: 0.6131
  • RSA: nan
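
No usage snippet ships with this card, so the following is a minimal inference sketch. It assumes the checkpoint is a single-output regression head (`num_labels=1`) on top of xlm-roberta-base and that the repository identifier below matches where the model is hosted; both are assumptions, not facts stated in the card.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumption: the checkpoint is hosted under this identifier and exposes
# a single-output regression head (num_labels=1).
model_name = "predict-perception-xlmr-focus-assassin"  # hypothetical Hub id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

text = "Il testo da valutare."  # an Italian input sentence (placeholder)
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    outputs = model(**inputs)

# With a single regression label, the raw logit is the predicted score.
score = outputs.logits.squeeze().item()
print(f"Predicted focus-on-assassin score: {score:.4f}")
```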

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 20
  • eval_batch_size: 8
  • seed: 1996
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
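
The original training script is not included; the sketch below is a hedged reconstruction that maps the listed hyperparameters onto Hugging Face `TrainingArguments`. The output directory and the per-epoch evaluation strategy are assumptions (the latter inferred from the per-epoch results table below).

```python
from transformers import TrainingArguments

# A sketch mapping the reported hyperparameters onto TrainingArguments.
# output_dir and evaluation_strategy are assumptions, not from the card.
training_args = TrainingArguments(
    output_dir="predict-perception-xlmr-focus-assassin",
    learning_rate=1e-5,
    per_device_train_batch_size=20,
    per_device_eval_batch_size=8,
    seed=1996,
    num_train_epochs=30,
    lr_scheduler_type="linear",
    adam_beta1=0.9,                 # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",    # assumption: table reports per-epoch eval
)
```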

Training results

| Training Loss | Epoch | Step | Validation Loss | RMSE | RMSE (Focus::a Sull'assassino) | MAE | MAE (Focus::a Sull'assassino) | R2 | R2 (Focus::a Sull'assassino) | Cos | Pair | Rank | Neighbors | RSA |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 1.0403 | 1.0 | 15 | 1.1576 | 1.7771 | 1.7771 | 1.6028 | 1.6028 | -0.3670 | -0.3670 | -0.2174 | 0.0 | 0.5 | 0.2379 | nan |
| 0.9818 | 2.0 | 30 | 0.8916 | 1.5596 | 1.5596 | 1.4136 | 1.4136 | -0.0529 | -0.0529 | 0.3913 | 0.0 | 0.5 | 0.3793 | nan |
| 0.9276 | 3.0 | 45 | 0.9277 | 1.5909 | 1.5909 | 1.4560 | 1.4560 | -0.0955 | -0.0955 | 0.3913 | 0.0 | 0.5 | 0.3742 | nan |
| 0.8395 | 4.0 | 60 | 0.7958 | 1.4734 | 1.4734 | 1.3032 | 1.3032 | 0.0603 | 0.0603 | 0.5652 | 0.0 | 0.5 | 0.4598 | nan |
| 0.7587 | 5.0 | 75 | 0.4647 | 1.1259 | 1.1259 | 0.9316 | 0.9316 | 0.4513 | 0.4513 | 0.6522 | 0.0 | 0.5 | 0.5087 | nan |
| 0.696 | 6.0 | 90 | 0.5368 | 1.2101 | 1.2101 | 1.0847 | 1.0847 | 0.3661 | 0.3661 | 0.7391 | 0.0 | 0.5 | 0.5302 | nan |
| 0.548 | 7.0 | 105 | 0.3110 | 0.9211 | 0.9211 | 0.7896 | 0.7896 | 0.6328 | 0.6328 | 0.6522 | 0.0 | 0.5 | 0.5261 | nan |
| 0.4371 | 8.0 | 120 | 0.3392 | 0.9619 | 0.9619 | 0.8132 | 0.8132 | 0.5995 | 0.5995 | 0.6522 | 0.0 | 0.5 | 0.5261 | nan |
| 0.355 | 9.0 | 135 | 0.3938 | 1.0366 | 1.0366 | 0.8153 | 0.8153 | 0.5349 | 0.5349 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.2919 | 10.0 | 150 | 0.3484 | 0.9749 | 0.9749 | 0.7487 | 0.7487 | 0.5886 | 0.5886 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.2595 | 11.0 | 165 | 0.2812 | 0.8759 | 0.8759 | 0.6265 | 0.6265 | 0.6679 | 0.6679 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.2368 | 12.0 | 180 | 0.2534 | 0.8314 | 0.8314 | 0.6402 | 0.6402 | 0.7008 | 0.7008 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.227 | 13.0 | 195 | 0.2878 | 0.8861 | 0.8861 | 0.6769 | 0.6769 | 0.6601 | 0.6601 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.1979 | 14.0 | 210 | 0.2405 | 0.8100 | 0.8100 | 0.6113 | 0.6113 | 0.7160 | 0.7160 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.1622 | 15.0 | 225 | 0.2575 | 0.8382 | 0.8382 | 0.6017 | 0.6017 | 0.6959 | 0.6959 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1575 | 16.0 | 240 | 0.2945 | 0.8963 | 0.8963 | 0.6741 | 0.6741 | 0.6523 | 0.6523 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1479 | 17.0 | 255 | 0.3563 | 0.9859 | 0.9859 | 0.7367 | 0.7367 | 0.5792 | 0.5792 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1269 | 18.0 | 270 | 0.2806 | 0.8750 | 0.8750 | 0.6665 | 0.6665 | 0.6686 | 0.6686 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1257 | 19.0 | 285 | 0.3267 | 0.9441 | 0.9441 | 0.6739 | 0.6739 | 0.6142 | 0.6142 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.134 | 20.0 | 300 | 0.3780 | 1.0155 | 1.0155 | 0.7331 | 0.7331 | 0.5536 | 0.5536 | 0.7391 | 0.0 | 0.5 | 0.5302 | nan |
| 0.1171 | 21.0 | 315 | 0.3890 | 1.0301 | 1.0301 | 0.7444 | 0.7444 | 0.5406 | 0.5406 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.0934 | 22.0 | 330 | 0.3131 | 0.9242 | 0.9242 | 0.6923 | 0.6923 | 0.6303 | 0.6303 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1112 | 23.0 | 345 | 0.2912 | 0.8913 | 0.8913 | 0.6610 | 0.6610 | 0.6561 | 0.6561 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1038 | 24.0 | 360 | 0.3109 | 0.9209 | 0.9209 | 0.7019 | 0.7019 | 0.6329 | 0.6329 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.085 | 25.0 | 375 | 0.3469 | 0.9728 | 0.9728 | 0.7383 | 0.7383 | 0.5904 | 0.5904 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.0843 | 26.0 | 390 | 0.3017 | 0.9073 | 0.9073 | 0.6848 | 0.6848 | 0.6437 | 0.6437 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.093 | 27.0 | 405 | 0.3269 | 0.9443 | 0.9443 | 0.7042 | 0.7042 | 0.6140 | 0.6140 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.0846 | 28.0 | 420 | 0.3161 | 0.9286 | 0.9286 | 0.6937 | 0.6937 | 0.6267 | 0.6267 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.0764 | 29.0 | 435 | 0.3244 | 0.9408 | 0.9408 | 0.7079 | 0.7079 | 0.6169 | 0.6169 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.0697 | 30.0 | 450 | 0.3264 | 0.9437 | 0.9437 | 0.7093 | 0.7093 | 0.6145 | 0.6145 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
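
The RMSE, MAE, and R2 columns above are standard regression metrics. As an illustration only (the card does not include the evaluation code, and the Cos, Pair, Rank, Neighbors, and RSA columns come from task-specific logic not shown here), a `compute_metrics` callback producing the three standard columns could look like this:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def compute_metrics(eval_pred):
    """Regression metrics matching the RMSE / MAE / R2 columns above."""
    predictions, labels = eval_pred
    predictions = predictions.squeeze()  # (N, 1) logits -> (N,) scores
    return {
        "rmse": float(np.sqrt(mean_squared_error(labels, predictions))),
        "mae": float(mean_absolute_error(labels, predictions)),
        "r2": float(r2_score(labels, predictions)),
    }
```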

Framework versions

  • Transformers 4.16.2
  • Pytorch 1.10.2+cu113
  • Datasets 1.18.3
  • Tokenizers 0.11.0