predict-perception-xlmr-cause-human

This model is a fine-tuned version of xlm-roberta-base on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7632
  • Rmse: 1.2675
  • Rmse Cause::a Causata da un essere umano ("caused by a human being"): 1.2675
  • Mae: 0.9299
  • Mae Cause::a Causata da un essere umano: 0.9299
  • R2: 0.4188
  • R2 Cause::a Causata da un essere umano: 0.4188
  • Cos: 0.3913
  • Pair: 0.0
  • Rank: 0.5
  • Neighbors: 0.4082
  • Rsa: nan
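
As a quick usage sketch (not part of the original card), the model can be queried as a single-output regression head; the regression-head assumption is inferred from the RMSE/MAE/R2 metrics above, and the repository id in the snippet is a placeholder.

```python
# Hedged inference sketch: the repository id is a placeholder and the
# single-output regression head is an assumption inferred from the metrics above.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "predict-perception-xlmr-cause-human"  # placeholder; replace with the actual repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Una frase di esempio da valutare."  # example input; Italian is an assumption based on the label language
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, 1) for a single-output regression head

score = logits.squeeze().item()  # predicted perception score for the "caused by a human" dimension
print(score)
```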

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 20
  • eval_batch_size: 8
  • seed: 1996
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
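
For reference, a minimal sketch of equivalent `TrainingArguments` is given below; the use of the Hugging Face `Trainer`, the output directory, and the per-epoch evaluation strategy are assumptions, while the hyperparameter values are taken from the list above.

```python
# Sketch of Trainer settings matching the hyperparameters listed above.
# Assumptions: the HF Trainer API was used, evaluation ran once per epoch,
# and the output directory name is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="predict-perception-xlmr-cause-human",  # placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=20,
    per_device_eval_batch_size=8,
    seed=1996,
    num_train_epochs=30,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # assumption, consistent with the per-epoch results below
)
```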

Training results

| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Cause::a Causata da un essere umano | Mae | Mae Cause::a Causata da un essere umano | R2 | R2 Cause::a Causata da un essere umano | Cos | Pair | Rank | Neighbors | Rsa |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1.0174 | 1.0 | 15 | 1.3796 | 1.7041 | 1.7041 | 1.3614 | 1.3614 | -0.0506 | -0.0506 | -0.1304 | 0.0 | 0.5 | 0.2971 | nan |
| 0.9534 | 2.0 | 30 | 1.1173 | 1.5336 | 1.5336 | 1.2624 | 1.2624 | 0.1491 | 0.1491 | 0.4783 | 0.0 | 0.5 | 0.4446 | nan |
| 0.8883 | 3.0 | 45 | 1.0580 | 1.4923 | 1.4923 | 1.2451 | 1.2451 | 0.1943 | 0.1943 | 0.5652 | 0.0 | 0.5 | 0.4957 | nan |
| 0.8215 | 4.0 | 60 | 1.0200 | 1.4653 | 1.4653 | 1.2087 | 1.2087 | 0.2232 | 0.2232 | 0.6522 | 0.0 | 0.5 | 0.5123 | nan |
| 0.744 | 5.0 | 75 | 1.1496 | 1.5556 | 1.5556 | 1.2573 | 1.2573 | 0.1245 | 0.1245 | 0.2174 | 0.0 | 0.5 | 0.3007 | nan |
| 0.7056 | 6.0 | 90 | 0.9641 | 1.4246 | 1.4246 | 1.1763 | 1.1763 | 0.2658 | 0.2658 | 0.4783 | 0.0 | 0.5 | 0.3619 | nan |
| 0.6136 | 7.0 | 105 | 0.8328 | 1.3240 | 1.3240 | 1.0948 | 1.0948 | 0.3658 | 0.3658 | 0.4783 | 0.0 | 0.5 | 0.3628 | nan |
| 0.5185 | 8.0 | 120 | 0.6890 | 1.2043 | 1.2043 | 1.0112 | 1.0112 | 0.4753 | 0.4753 | 0.3913 | 0.0 | 0.5 | 0.4082 | nan |
| 0.5029 | 9.0 | 135 | 1.0380 | 1.4782 | 1.4782 | 1.1215 | 1.1215 | 0.2095 | 0.2095 | 0.3913 | 0.0 | 0.5 | 0.3781 | nan |
| 0.4624 | 10.0 | 150 | 1.1780 | 1.5747 | 1.5747 | 1.2852 | 1.2852 | 0.1029 | 0.1029 | 0.3913 | 0.0 | 0.5 | 0.4082 | nan |
| 0.4098 | 11.0 | 165 | 0.8714 | 1.3544 | 1.3544 | 1.1388 | 1.1388 | 0.3364 | 0.3364 | 0.3913 | 0.0 | 0.5 | 0.4082 | nan |
| 0.348 | 12.0 | 180 | 0.7260 | 1.2362 | 1.2362 | 0.9563 | 0.9563 | 0.4471 | 0.4471 | 0.5652 | 0.0 | 0.5 | 0.4957 | nan |
| 0.3437 | 13.0 | 195 | 0.7241 | 1.2346 | 1.2346 | 0.8998 | 0.8998 | 0.4485 | 0.4485 | 0.6522 | 0.0 | 0.5 | 0.4727 | nan |
| 0.2727 | 14.0 | 210 | 0.9070 | 1.3818 | 1.3818 | 1.1145 | 1.1145 | 0.3093 | 0.3093 | 0.3913 | 0.0 | 0.5 | 0.4082 | nan |
| 0.2762 | 15.0 | 225 | 0.7280 | 1.2380 | 1.2380 | 0.9210 | 0.9210 | 0.4456 | 0.4456 | 0.4783 | 0.0 | 0.5 | 0.4446 | nan |
| 0.2396 | 16.0 | 240 | 0.7921 | 1.2912 | 1.2912 | 0.9738 | 0.9738 | 0.3968 | 0.3968 | 0.3913 | 0.0 | 0.5 | 0.4082 | nan |
| 0.1955 | 17.0 | 255 | 0.8368 | 1.3272 | 1.3272 | 0.9717 | 0.9717 | 0.3627 | 0.3627 | 0.3913 | 0.0 | 0.5 | 0.4082 | nan |
| 0.1928 | 18.0 | 270 | 0.7782 | 1.2799 | 1.2799 | 0.9615 | 0.9615 | 0.4073 | 0.4073 | 0.3043 | 0.0 | 0.5 | 0.3768 | nan |
| 0.1893 | 19.0 | 285 | 0.7594 | 1.2644 | 1.2644 | 0.9441 | 0.9441 | 0.4216 | 0.4216 | 0.4783 | 0.0 | 0.5 | 0.4446 | nan |
| 0.2111 | 20.0 | 300 | 0.7230 | 1.2336 | 1.2336 | 0.8953 | 0.8953 | 0.4494 | 0.4494 | 0.3913 | 0.0 | 0.5 | 0.3787 | nan |
| 0.193 | 21.0 | 315 | 0.7836 | 1.2843 | 1.2843 | 0.9577 | 0.9577 | 0.4033 | 0.4033 | 0.3043 | 0.0 | 0.5 | 0.3768 | nan |
| 0.1649 | 22.0 | 330 | 0.7248 | 1.2352 | 1.2352 | 0.9133 | 0.9133 | 0.4480 | 0.4480 | 0.4783 | 0.0 | 0.5 | 0.4446 | nan |
| 0.2182 | 23.0 | 345 | 0.7608 | 1.2655 | 1.2655 | 0.9435 | 0.9435 | 0.4206 | 0.4206 | 0.4783 | 0.0 | 0.5 | 0.4446 | nan |
| 0.1534 | 24.0 | 360 | 0.7447 | 1.2520 | 1.2520 | 0.9277 | 0.9277 | 0.4329 | 0.4329 | 0.4783 | 0.0 | 0.5 | 0.4446 | nan |
| 0.1362 | 25.0 | 375 | 0.7437 | 1.2512 | 1.2512 | 0.9236 | 0.9236 | 0.4336 | 0.4336 | 0.3913 | 0.0 | 0.5 | 0.4082 | nan |
| 0.1391 | 26.0 | 390 | 0.7301 | 1.2397 | 1.2397 | 0.9182 | 0.9182 | 0.4440 | 0.4440 | 0.4783 | 0.0 | 0.5 | 0.4446 | nan |
| 0.1679 | 27.0 | 405 | 0.7748 | 1.2770 | 1.2770 | 0.9619 | 0.9619 | 0.4100 | 0.4100 | 0.3913 | 0.0 | 0.5 | 0.4082 | nan |
| 0.1491 | 28.0 | 420 | 0.7415 | 1.2493 | 1.2493 | 0.9097 | 0.9097 | 0.4353 | 0.4353 | 0.3913 | 0.0 | 0.5 | 0.4082 | nan |
| 0.1559 | 29.0 | 435 | 0.7525 | 1.2586 | 1.2586 | 0.9189 | 0.9189 | 0.4269 | 0.4269 | 0.3913 | 0.0 | 0.5 | 0.4082 | nan |
| 0.1784 | 30.0 | 450 | 0.7632 | 1.2675 | 1.2675 | 0.9299 | 0.9299 | 0.4188 | 0.4188 | 0.3913 | 0.0 | 0.5 | 0.4082 | nan |
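
The RMSE, MAE, and R2 columns could be reproduced with a `compute_metrics` function along the lines of the sketch below (an assumption, using scikit-learn); the Cos, Pair, Rank, Neighbors, and Rsa columns come from the authors' own evaluation code and are not covered here.

```python
# Hedged sketch of the per-epoch regression metrics (RMSE, MAE, R2).
# Assumption: predictions come from a single-output regression head and are
# scored with scikit-learn; the remaining columns are card-specific metrics.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.asarray(predictions).squeeze(-1)  # (batch, 1) -> (batch,)
    return {
        "rmse": float(np.sqrt(mean_squared_error(labels, predictions))),
        "mae": float(mean_absolute_error(labels, predictions)),
        "r2": float(r2_score(labels, predictions)),
    }
```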

Framework versions

  • Transformers 4.16.2
  • Pytorch 1.10.2+cu113
  • Datasets 1.18.3
  • Tokenizers 0.11.0