
scenario-KD-PR-CDF-EN-FROM-EN-D2_data-en-cardiff_eng_only_gamma-jason

This model is a fine-tuned version of FacebookAI/xlm-roberta-base on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 26.6871
  • Accuracy: 0.3990
  • F1: 0.3955
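
For quick inspection, the sketch below loads the checkpoint for inference. It assumes the checkpoint identifier below resolves to this model (locally or on the Hub) and that it exposes a sequence-classification head; the label set and task are not documented in this card.

```python
# Minimal inference sketch. Assumed: the checkpoint path/id below and a
# sequence-classification head; neither is confirmed by the card itself.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "scenario-KD-PR-CDF-EN-FROM-EN-D2_data-en-cardiff_eng_only_gamma-jason"  # assumed path/id

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
model.eval()

inputs = tokenizer("An example sentence to classify.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label.get(predicted_id, predicted_id))
```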

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 88458
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
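
A TrainingArguments configuration mirroring the values listed above is sketched below. The actual training script (including the distillation objective implied by "KD" in the model name) is not part of this card, so the output directory and the evaluation/logging cadence inferred from the results table are assumptions.

```python
# Sketch only: mirrors the listed hyperparameters; not the author's actual script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./outputs",          # assumed; not stated in the card
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=88458,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    evaluation_strategy="steps",     # validation rows appear every 100 steps
    eval_steps=100,
    logging_steps=500,               # training loss is reported every 500 steps
)
```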

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.72  | 100  | 21.3642         | 0.3660   | 0.3641 |
| No log        | 3.45  | 200  | 21.4559         | 0.3880   | 0.3617 |
| No log        | 5.17  | 300  | 21.6372         | 0.3977   | 0.3796 |
| No log        | 6.90  | 400  | 21.6672         | 0.3893   | 0.3893 |
| 21.6981       | 8.62  | 500  | 22.0712         | 0.4043   | 0.3868 |
| 21.6981       | 10.34 | 600  | 22.6253         | 0.4149   | 0.4116 |
| 21.6981       | 12.07 | 700  | 23.5431         | 0.3805   | 0.3464 |
| 21.6981       | 13.79 | 800  | 23.6978         | 0.4131   | 0.4081 |
| 21.6981       | 15.52 | 900  | 24.0596         | 0.3964   | 0.3925 |
| 15.4299       | 17.24 | 1000 | 24.5552         | 0.3840   | 0.3772 |
| 15.4299       | 18.97 | 1100 | 24.7817         | 0.4065   | 0.4058 |
| 15.4299       | 20.69 | 1200 | 25.8225         | 0.4101   | 0.4036 |
| 15.4299       | 22.41 | 1300 | 25.2075         | 0.4026   | 0.3965 |
| 15.4299       | 24.14 | 1400 | 27.0222         | 0.3911   | 0.3778 |
| 10.9594       | 25.86 | 1500 | 26.5701         | 0.4017   | 0.4008 |
| 10.9594       | 27.59 | 1600 | 26.4218         | 0.3946   | 0.3902 |
| 10.9594       | 29.31 | 1700 | 26.6871         | 0.3990   | 0.3955 |
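
A metric function along the lines below would reproduce the reported accuracy and F1 from the model's predictions. The card does not state which F1 averaging was used, so the "macro" choice here is an assumption.

```python
# Sketch of a compute_metrics callable for the Trainer; "macro" F1 is assumed.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, predictions),
        "f1": f1_score(labels, predictions, average="macro"),
    }
```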

Framework versions

  • Transformers 4.33.3
  • Pytorch 2.1.1+cu121
  • Datasets 2.14.5
  • Tokenizers 0.13.3