
scenario-TCR-XLMV_data-en-cardiff_eng_only_beta

This model is a fine-tuned version of facebook/xlm-v-base on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 3.4054
  • Accuracy: 0.5467
  • F1: 0.5510
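
A minimal sketch of how accuracy and F1 scores like the ones above can be computed for a classifier; the F1 averaging mode (macro here) is an assumption, since the card does not state which variant was reported.

```python
# Toy example: the labels and predictions below are placeholders, not data
# from this model's evaluation set.
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 2, 1]   # gold labels
y_pred = [0, 1, 1, 1]   # model predictions

print("accuracy:", accuracy_score(y_true, y_pred))
# "macro" averaging is an assumption; the card does not say which F1 variant was used.
print("f1:", f1_score(y_true, y_pred, average="macro"))
```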

Model description

More information needed

Intended uses & limitations

More information needed
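
The card does not document intended use, but a usage sketch under stated assumptions is shown below: it assumes the checkpoint is a sequence-classification model hosted on the Hub, and the repository namespace is a placeholder.

```python
# Hypothetical inference sketch. The Hub namespace and label mapping are not
# given in this card, so repo_id is a placeholder that must be adjusted.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "<namespace>/scenario-TCR-XLMV_data-en-cardiff_eng_only_beta"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

inputs = tokenizer("I really enjoyed this!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(dim=-1).item()
# id2label comes from the fine-tuned config; falls back to the raw id if unset.
print(model.config.id2label.get(predicted_id, predicted_id))
```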

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 112233
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
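
For reference, the hyperparameters above map onto transformers.TrainingArguments roughly as sketched below; output_dir and the evaluation/save cadence are assumptions not stated in this card.

```python
# Approximate reconstruction of the reported training configuration.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="scenario-TCR-XLMV_data-en-cardiff_eng_only_beta",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=112233,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=30,
)
```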

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.03  | 60   | 1.0887          | 0.4449   | 0.3540 |
| No log        | 2.07  | 120  | 1.0211          | 0.4700   | 0.3777 |
| No log        | 3.1   | 180  | 1.0598          | 0.5141   | 0.4790 |
| No log        | 4.14  | 240  | 1.0131          | 0.5644   | 0.5652 |
| No log        | 5.17  | 300  | 1.1073          | 0.5586   | 0.5595 |
| No log        | 6.21  | 360  | 1.3697          | 0.5635   | 0.5542 |
| No log        | 7.24  | 420  | 1.4910          | 0.5379   | 0.5385 |
| No log        | 8.28  | 480  | 1.7325          | 0.5507   | 0.5542 |
| 0.6649        | 9.31  | 540  | 1.8878          | 0.5489   | 0.5505 |
| 0.6649        | 10.34 | 600  | 2.2758          | 0.5309   | 0.5320 |
| 0.6649        | 11.38 | 660  | 2.3053          | 0.5357   | 0.5357 |
| 0.6649        | 12.41 | 720  | 2.3674          | 0.5542   | 0.5574 |
| 0.6649        | 13.45 | 780  | 2.7705          | 0.5309   | 0.5332 |
| 0.6649        | 14.48 | 840  | 2.7515          | 0.5520   | 0.5522 |
| 0.6649        | 15.52 | 900  | 2.9868          | 0.5423   | 0.5447 |
| 0.6649        | 16.55 | 960  | 2.7489          | 0.5582   | 0.5597 |
| 0.1079        | 17.59 | 1020 | 2.8748          | 0.5525   | 0.5560 |
| 0.1079        | 18.62 | 1080 | 3.0165          | 0.5467   | 0.5511 |
| 0.1079        | 19.66 | 1140 | 3.2954          | 0.5340   | 0.5356 |
| 0.1079        | 20.69 | 1200 | 3.1051          | 0.5441   | 0.5488 |
| 0.1079        | 21.72 | 1260 | 3.2199          | 0.5441   | 0.5467 |
| 0.1079        | 22.76 | 1320 | 3.1660          | 0.5454   | 0.5500 |
| 0.1079        | 23.79 | 1380 | 3.2637          | 0.5445   | 0.5474 |
| 0.1079        | 24.83 | 1440 | 3.2934          | 0.5538   | 0.5576 |
| 0.0279        | 25.86 | 1500 | 3.2834          | 0.5476   | 0.5506 |
| 0.0279        | 26.9  | 1560 | 3.3734          | 0.5467   | 0.5507 |
| 0.0279        | 27.93 | 1620 | 3.4145          | 0.5437   | 0.5476 |
| 0.0279        | 28.97 | 1680 | 3.4043          | 0.5454   | 0.5496 |
| 0.0279        | 30.0  | 1740 | 3.4054          | 0.5467   | 0.5510 |

Framework versions

  • Transformers 4.33.3
  • Pytorch 2.1.1+cu121
  • Datasets 2.14.5
  • Tokenizers 0.13.3