scenario-TCR-XLMV_data-en-cardiff_eng_only_delta

This model is a fine-tuned version of facebook/xlm-v-base on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 1.0986
  • Accuracy: 0.3333
  • F1: 0.1667
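
The accuracy of 0.3333 and F1 of 0.1667 are consistent with a three-class sequence-classification head, but the card does not state the task or label set. Below is a minimal loading sketch under that assumption; the repo namespace is a placeholder, not taken from this card:

```python
# Minimal inference sketch, assuming a sequence-classification checkpoint
# hosted on the Hugging Face Hub. The namespace below is a placeholder.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "your-namespace/scenario-TCR-XLMV_data-en-cardiff_eng_only_delta"  # replace with the actual repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

inputs = tokenizer("I really enjoyed this!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class index
```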

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 11213
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
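
A minimal sketch of TrainingArguments matching the hyperparameters listed above; the output_dir and the eval_steps value (inferred from the step column in the results table below) are assumptions:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="scenario-TCR-XLMV_data-en-cardiff_eng_only_delta",  # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=11213,
    num_train_epochs=30,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",  # validation metrics are reported every 60 steps below
    eval_steps=60,                # inferred from the step column, not stated in the card
)
```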

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.03  | 60   | 1.0986          | 0.3329   | 0.1765 |
| No log        | 2.07  | 120  | 1.0987          | 0.3333   | 0.1667 |
| No log        | 3.1   | 180  | 1.0988          | 0.3333   | 0.1667 |
| No log        | 4.14  | 240  | 1.0988          | 0.3333   | 0.1667 |
| No log        | 5.17  | 300  | 1.0986          | 0.3338   | 0.1918 |
| No log        | 6.21  | 360  | 1.0987          | 0.3351   | 0.1732 |
| No log        | 7.24  | 420  | 1.0987          | 0.3333   | 0.1667 |
| No log        | 8.28  | 480  | 1.0995          | 0.3333   | 0.1667 |
| 1.1008        | 9.31  | 540  | 1.0986          | 0.3333   | 0.1667 |
| 1.1008        | 10.34 | 600  | 1.0987          | 0.3333   | 0.1667 |
| 1.1008        | 11.38 | 660  | 1.0989          | 0.3333   | 0.1667 |
| 1.1008        | 12.41 | 720  | 1.0989          | 0.3333   | 0.1667 |
| 1.1008        | 13.45 | 780  | 1.0987          | 0.3333   | 0.1667 |
| 1.1008        | 14.48 | 840  | 1.0989          | 0.3333   | 0.1667 |
| 1.1008        | 15.52 | 900  | 1.0989          | 0.3333   | 0.1667 |
| 1.1008        | 16.55 | 960  | 1.0993          | 0.3333   | 0.1667 |
| 1.1           | 17.59 | 1020 | 1.0986          | 0.3333   | 0.1667 |
| 1.1           | 18.62 | 1080 | 1.0987          | 0.3333   | 0.1667 |
| 1.1           | 19.66 | 1140 | 1.0994          | 0.3333   | 0.1667 |
| 1.1           | 20.69 | 1200 | 1.0986          | 0.3333   | 0.1667 |
| 1.1           | 21.72 | 1260 | 1.0986          | 0.3333   | 0.1667 |
| 1.1           | 22.76 | 1320 | 1.0990          | 0.3333   | 0.1667 |
| 1.1           | 23.79 | 1380 | 1.0987          | 0.3333   | 0.1667 |
| 1.1           | 24.83 | 1440 | 1.0988          | 0.3333   | 0.1667 |
| 1.0997        | 25.86 | 1500 | 1.0987          | 0.3333   | 0.1667 |
| 1.0997        | 26.9  | 1560 | 1.0987          | 0.3333   | 0.1667 |
| 1.0997        | 27.93 | 1620 | 1.0986          | 0.3333   | 0.1667 |
| 1.0997        | 28.97 | 1680 | 1.0986          | 0.3333   | 0.1667 |
| 1.0997        | 30.0  | 1740 | 1.0986          | 0.3333   | 0.1667 |
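
The Accuracy and F1 columns above are consistent with plain accuracy and macro-averaged F1 over three classes, though the card does not specify the averaging. A sketch of a compute_metrics hook that would report metrics of this form, under that assumption:

```python
# Sketch of a compute_metrics hook for the Trainer; macro-averaged F1 is an
# assumption, since the averaging strategy is not stated in this card.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="macro"),
    }
```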

Framework versions

  • Transformers 4.33.3
  • Pytorch 2.1.1+cu121
  • Datasets 2.14.5
  • Tokenizers 0.13.3