xlmr-en-de-train_shuffled-1986-test2000

This model is a fine-tuned version of xlm-roberta-base on the wmt20_mlqe_task1 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5216
  • R Squared: 0.0640
  • MAE: 0.5363
  • Pearson R: 0.4009
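The model predicts a continuous quality score for a source sentence and its machine translation. A minimal inference sketch follows; the sentence-pair input format and the single-output regression head are assumptions based on the reported metrics, since the card does not document the preprocessing:

```python
# Minimal inference sketch (assumes the model was fine-tuned as a
# single-output regression head over source/translation sentence pairs;
# the exact preprocessing is not documented in this card).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "patpizio/xlmr-en-de-train_shuffled-1986-test2000"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

source = "The weather is nice today."         # English source
translation = "Das Wetter ist heute schön."   # German MT output

inputs = tokenizer(source, translation, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"Predicted quality score: {score:.4f}")
```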

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 1986
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 3
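Expressed as Hugging Face TrainingArguments, these settings would look roughly like the sketch below. The output_dir value is hypothetical, and anything not listed above is left at the library default:

```python
from transformers import TrainingArguments

# Hyperparameters from the list above; all other values are library defaults.
training_args = TrainingArguments(
    output_dir="xlmr-en-de-train_shuffled-1986-test2000",  # hypothetical path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=1986,
    lr_scheduler_type="linear",  # linear decay; no warmup is specified
    num_train_epochs=3,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the
    # Transformers default optimizer settings, so no extra arguments
    # are needed for it here.
)
```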

Training results

| Training Loss | Epoch | Step | Validation Loss | R Squared | MAE    | Pearson R |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---------:|
| No log        | 1.0   | 375  | 0.5588          | -0.0028   | 0.5813 | 0.3172    |
| 0.6965        | 2.0   | 750  | 0.5465          | 0.0193    | 0.5548 | 0.3819    |
| 0.6808        | 3.0   | 1125 | 0.5216          | 0.0640    | 0.5363 | 0.4009    |
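For reference, the three reported metrics are standard regression scores. A minimal sketch using scikit-learn and SciPy is below; the card does not state which implementation was actually used, and the label/prediction arrays are hypothetical:

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import mean_absolute_error, r2_score

# Hypothetical arrays of gold quality labels and model predictions.
y_true = np.array([0.2, 0.5, 0.8, 0.4])
y_pred = np.array([0.3, 0.45, 0.7, 0.5])

print("R Squared:", r2_score(y_true, y_pred))
print("MAE:", mean_absolute_error(y_true, y_pred))
print("Pearson R:", pearsonr(y_true, y_pred)[0])
```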

Framework versions

  • Transformers 4.34.1
  • Pytorch 2.0.1+cu117
  • Datasets 2.14.6
  • Tokenizers 0.14.1