
scenario-NON-KD-PR-COPY-CDF-EN-D2_data-en-cardiff_eng_only_alpha

This model is a fine-tuned version of xlm-roberta-base; the fine-tuning dataset is not specified in this card. It achieves the following results on the evaluation set (a minimal loading sketch follows the metrics):

  • Loss: 4.0650
  • Accuracy: 0.5150
  • F1: 0.5153
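
The accuracy and F1 numbers suggest a sequence-classification fine-tune, so the sketch below loads the checkpoint with the standard Transformers auto classes. The repository id is an assumption based on the model name above (the card does not give the full hub namespace), and the example input is purely illustrative.

```python
# Minimal inference sketch. Assumptions: the checkpoint is a sequence-classification
# fine-tune and is published under the repository name shown above (hub namespace unknown).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "scenario-NON-KD-PR-COPY-CDF-EN-D2_data-en-cardiff_eng_only_alpha"  # hypothetical hub id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

inputs = tokenizer("This is an example sentence.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(dim=-1).item()
print(predicted_class)
```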

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 1123
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
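
The hyperparameters above map directly onto a Transformers TrainingArguments configuration. The sketch below is a hedged reconstruction: the output directory, evaluation interval, and logging interval are assumptions (the eval and logging steps are inferred from the 100-step evaluation cadence and 500-step loss reporting in the results table), not values stated in this card.

```python
# Hedged reconstruction of the training configuration from the listed hyperparameters.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",            # placeholder; not specified in the card
    learning_rate=5e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=1123,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    evaluation_strategy="steps",       # assumption: the results table reports eval every 100 steps
    eval_steps=100,
    logging_steps=500,                 # assumption: training loss appears every 500 steps
)
```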

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.72  | 100  | 1.0715          | 0.4810   | 0.4544 |
| No log        | 3.45  | 200  | 1.3491          | 0.5035   | 0.4980 |
| No log        | 5.17  | 300  | 1.6657          | 0.5123   | 0.5129 |
| No log        | 6.9   | 400  | 1.9846          | 0.5084   | 0.5079 |
| 0.4799        | 8.62  | 500  | 2.5547          | 0.5115   | 0.5117 |
| 0.4799        | 10.34 | 600  | 2.5722          | 0.5190   | 0.5193 |
| 0.4799        | 12.07 | 700  | 2.9243          | 0.5123   | 0.5118 |
| 0.4799        | 13.79 | 800  | 3.4404          | 0.5154   | 0.5143 |
| 0.4799        | 15.52 | 900  | 3.5740          | 0.5225   | 0.5210 |
| 0.0509        | 17.24 | 1000 | 3.6523          | 0.5176   | 0.5160 |
| 0.0509        | 18.97 | 1100 | 3.7591          | 0.5225   | 0.5231 |
| 0.0509        | 20.69 | 1200 | 3.7790          | 0.5137   | 0.5139 |
| 0.0509        | 22.41 | 1300 | 3.8779          | 0.5256   | 0.5271 |
| 0.0509        | 24.14 | 1400 | 3.8982          | 0.5251   | 0.5259 |
| 0.0107        | 25.86 | 1500 | 4.0718          | 0.5097   | 0.5109 |
| 0.0107        | 27.59 | 1600 | 3.9794          | 0.5234   | 0.5240 |
| 0.0107        | 29.31 | 1700 | 4.0650          | 0.5150   | 0.5153 |
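
The card does not say how Accuracy and F1 were computed during evaluation. A common setup is a compute_metrics callback passed to the Trainer, sketched below; the F1 averaging mode ("macro") is an assumption, not a value taken from this card.

```python
# Hedged sketch of a metrics callback that would produce accuracy and F1 per eval step.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, predictions),
        "f1": f1_score(labels, predictions, average="macro"),  # averaging mode is an assumption
    }
```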

Framework versions

  • Transformers 4.33.3
  • Pytorch 2.1.1+cu121
  • Datasets 2.14.5
  • Tokenizers 0.13.3