
scenario-KD-PR-MSV-D2_data-cl-cardiff_cl_only_alpha-jason

This model is a fine-tuned version of FacebookAI/xlm-roberta-base on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 16.2447
  • Accuracy: 0.3866
  • F1: 0.3858
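
Because the card reports accuracy and F1, the checkpoint presumably carries a sequence-classification head on top of XLM-RoBERTa. The snippet below is a minimal inference sketch, not part of the original card: the repository id is taken from the card title (the owning namespace must be prepended) and the text-classification task is an assumption.

```python
# Minimal inference sketch. The repository id is assumed from the card title
# (prepend the owning namespace, e.g. "<user>/...") and the
# sequence-classification task is inferred from the accuracy/F1 metrics.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "scenario-KD-PR-MSV-D2_data-cl-cardiff_cl_only_alpha-jason"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("This is a test sentence.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label.get(predicted_id, predicted_id))
```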

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 2222
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
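
As a rough guide, the list above maps onto a `TrainingArguments` configuration along the following lines. This is a hedged sketch rather than the original training script: `output_dir`, the step-based evaluation schedule, and the per-device reading of the batch sizes are assumptions not stated in the card.

```python
# Hedged reconstruction of the Trainer configuration implied by the
# hyperparameter list; output_dir and the evaluation schedule are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./output",                # assumed; not stated in the card
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=2222,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    evaluation_strategy="steps",          # assumed from the eval rows every 250 steps
    eval_steps=250,
)
```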

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.09  | 250  | 12.0259         | 0.3449   | 0.3171 |
| 14.0331       | 2.17  | 500  | 11.3284         | 0.3819   | 0.3694 |
| 14.0331       | 3.26  | 750  | 11.1163         | 0.3951   | 0.3941 |
| 11.7619       | 4.35  | 1000 | 11.5284         | 0.3796   | 0.3733 |
| 11.7619       | 5.43  | 1250 | 11.3713         | 0.4174   | 0.4154 |
| 9.9697        | 6.52  | 1500 | 11.7460         | 0.3850   | 0.3770 |
| 9.9697        | 7.61  | 1750 | 12.6216         | 0.3927   | 0.3863 |
| 8.7178        | 8.7   | 2000 | 12.5277         | 0.4020   | 0.4005 |
| 8.7178        | 9.78  | 2250 | 11.8300         | 0.3912   | 0.3911 |
| 7.7259        | 10.87 | 2500 | 12.7404         | 0.4051   | 0.4035 |
| 7.7259        | 11.96 | 2750 | 13.6012         | 0.4051   | 0.4037 |
| 6.6383        | 13.04 | 3000 | 14.1112         | 0.3912   | 0.3884 |
| 6.6383        | 14.13 | 3250 | 14.0430         | 0.3920   | 0.3881 |
| 5.7088        | 15.22 | 3500 | 13.9183         | 0.3966   | 0.3951 |
| 5.7088        | 16.3  | 3750 | 14.5237         | 0.3904   | 0.3858 |
| 5.1104        | 17.39 | 4000 | 15.0371         | 0.4012   | 0.4011 |
| 5.1104        | 18.48 | 4250 | 15.4539         | 0.3866   | 0.3814 |
| 4.587         | 19.57 | 4500 | 14.4770         | 0.3989   | 0.3982 |
| 4.587         | 20.65 | 4750 | 15.9417         | 0.4136   | 0.4103 |
| 4.1118        | 21.74 | 5000 | 15.0406         | 0.3966   | 0.3966 |
| 4.1118        | 22.83 | 5250 | 16.1274         | 0.4020   | 0.4016 |
| 3.7338        | 23.91 | 5500 | 15.8530         | 0.3858   | 0.3835 |
| 3.7338        | 25.0  | 5750 | 16.3221         | 0.4090   | 0.4074 |
| 3.4628        | 26.09 | 6000 | 16.5572         | 0.4028   | 0.4017 |
| 3.4628        | 27.17 | 6250 | 16.4879         | 0.3881   | 0.3868 |
| 3.3012        | 28.26 | 6500 | 16.4834         | 0.3997   | 0.3995 |
| 3.3012        | 29.35 | 6750 | 16.2447         | 0.3866   | 0.3858 |

Framework versions

  • Transformers 4.33.3
  • Pytorch 2.1.1+cu121
  • Datasets 2.14.5
  • Tokenizers 0.13.3