
pos_final_xlm_nl

This model is a fine-tuned version of xlm-roberta-base; the dataset used for fine-tuning is not recorded in this card (the auto-generated entry reads "None"). It achieves the following results on the evaluation set:

  • Loss: 0.1066
  • Precision: 0.9780
  • Recall: 0.9783
  • F1: 0.9782
  • Accuracy: 0.9789
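
Because the card reports token-level precision, recall, F1, and accuracy together, the evaluation was plausibly computed with the seqeval metric, as in the standard Transformers token-classification training script; the card itself does not confirm this. A minimal sketch of such a compute_metrics function, with a purely illustrative label list:

```python
import numpy as np
import evaluate

# Assumption: metrics were computed with seqeval, the usual choice in
# Transformers token-classification fine-tuning; the card does not say so.
seqeval = evaluate.load("seqeval")

# Hypothetical tag set for illustration only; the card does not list the labels.
label_list = ["ADJ", "ADP", "ADV", "AUX", "NOUN", "VERB"]

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)

    # Drop the -100 positions used to mask special tokens and sub-word pieces.
    true_predictions = [
        [label_list[p] for (p, l) in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for (p, l) in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]

    results = seqeval.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```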

Model description

More information needed. (Judging by the model name, this appears to be a part-of-speech tagging model for Dutch built on XLM-RoBERTa, but the card does not state this explicitly.)

Intended uses & limitations

More information needed
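
No usage guidance is provided. A minimal inference sketch, assuming the model is published on the Hugging Face Hub as pranaydeeps/lettuce_pos_nl_xlm (the repository referenced by the page's collection) and is a token-classification model:

```python
from transformers import pipeline

# Assumption: the hub id below comes from the collection reference on the
# model page; substitute the correct repository id if it differs.
tagger = pipeline(
    "token-classification",
    model="pranaydeeps/lettuce_pos_nl_xlm",
    aggregation_strategy="simple",  # merge sub-word pieces into word-level tags
)

# Illustrative Dutch sentence; the card does not document the target language.
print(tagger("De kat zit op de mat."))
```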

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 256
  • eval_batch_size: 256
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 1024
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 40.0
  • mixed_precision_training: Native AMP
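
These values map directly onto transformers.TrainingArguments; note the effective batch size: 256 per-device examples × 4 gradient-accumulation steps = 1024. A sketch of an equivalent configuration, reconstructed from the list above (model and data setup omitted; field names as in Transformers 4.25):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="pos_final_xlm_nl",
    learning_rate=5e-5,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    gradient_accumulation_steps=4,  # 256 x 4 = total train batch size 1024
    seed=42,
    adam_beta1=0.9,                 # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=40,
    fp16=True,                      # "Native AMP" mixed-precision training
    evaluation_strategy="epoch",    # assumption: the log below shows per-epoch eval
)
```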

Training results

"No log" in the first column means the running training loss had not yet been recorded; it appears to be logged only every 500 steps, so each value repeats across several epochs.

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 69   | 3.4837          | 0.2936    | 0.1709 | 0.2161 | 0.3200   |
| No log        | 2.0   | 138  | 0.8299          | 0.8501    | 0.8416 | 0.8459 | 0.8497   |
| No log        | 3.0   | 207  | 0.2765          | 0.9419    | 0.9408 | 0.9414 | 0.9429   |
| No log        | 4.0   | 276  | 0.1704          | 0.9601    | 0.9596 | 0.9599 | 0.9611   |
| No log        | 5.0   | 345  | 0.1259          | 0.9685    | 0.9686 | 0.9686 | 0.9693   |
| No log        | 6.0   | 414  | 0.1085          | 0.9711    | 0.9713 | 0.9712 | 0.9719   |
| No log        | 7.0   | 483  | 0.0984          | 0.9728    | 0.9731 | 0.9729 | 0.9738   |
| 1.1448        | 8.0   | 552  | 0.0906          | 0.9742    | 0.9745 | 0.9743 | 0.9752   |
| 1.1448        | 9.0   | 621  | 0.0888          | 0.9749    | 0.9752 | 0.9751 | 0.9758   |
| 1.1448        | 10.0  | 690  | 0.0864          | 0.9757    | 0.9759 | 0.9758 | 0.9765   |
| 1.1448        | 11.0  | 759  | 0.0842          | 0.9764    | 0.9767 | 0.9765 | 0.9772   |
| 1.1448        | 12.0  | 828  | 0.0840          | 0.9764    | 0.9768 | 0.9766 | 0.9773   |
| 1.1448        | 13.0  | 897  | 0.0846          | 0.9766    | 0.9769 | 0.9768 | 0.9775   |
| 1.1448        | 14.0  | 966  | 0.0854          | 0.9768    | 0.9771 | 0.9769 | 0.9776   |
| 0.0668        | 15.0  | 1035 | 0.0867          | 0.9767    | 0.9770 | 0.9768 | 0.9776   |
| 0.0668        | 16.0  | 1104 | 0.0859          | 0.9769    | 0.9772 | 0.9771 | 0.9778   |
| 0.0668        | 17.0  | 1173 | 0.0858          | 0.9772    | 0.9775 | 0.9773 | 0.9781   |
| 0.0668        | 18.0  | 1242 | 0.0878          | 0.9776    | 0.9779 | 0.9778 | 0.9785   |
| 0.0668        | 19.0  | 1311 | 0.0887          | 0.9775    | 0.9779 | 0.9777 | 0.9785   |
| 0.0668        | 20.0  | 1380 | 0.0902          | 0.9774    | 0.9777 | 0.9775 | 0.9783   |
| 0.0668        | 21.0  | 1449 | 0.0910          | 0.9772    | 0.9775 | 0.9774 | 0.9782   |
| 0.0375        | 22.0  | 1518 | 0.0926          | 0.9774    | 0.9777 | 0.9775 | 0.9783   |
| 0.0375        | 23.0  | 1587 | 0.0930          | 0.9777    | 0.9780 | 0.9779 | 0.9787   |
| 0.0375        | 24.0  | 1656 | 0.0955          | 0.9777    | 0.9781 | 0.9779 | 0.9787   |
| 0.0375        | 25.0  | 1725 | 0.0955          | 0.9778    | 0.9781 | 0.9780 | 0.9787   |
| 0.0375        | 26.0  | 1794 | 0.0978          | 0.9776    | 0.9779 | 0.9777 | 0.9785   |
| 0.0375        | 27.0  | 1863 | 0.0997          | 0.9772    | 0.9775 | 0.9774 | 0.9782   |
| 0.0375        | 28.0  | 1932 | 0.1000          | 0.9776    | 0.9779 | 0.9778 | 0.9786   |
| 0.0238        | 29.0  | 2001 | 0.1022          | 0.9775    | 0.9778 | 0.9776 | 0.9785   |
| 0.0238        | 30.0  | 2070 | 0.1030          | 0.9777    | 0.9780 | 0.9779 | 0.9787   |
| 0.0238        | 31.0  | 2139 | 0.1041          | 0.9778    | 0.9780 | 0.9779 | 0.9787   |
| 0.0238        | 32.0  | 2208 | 0.1054          | 0.9778    | 0.9781 | 0.9779 | 0.9787   |
| 0.0238        | 33.0  | 2277 | 0.1055          | 0.9777    | 0.9779 | 0.9778 | 0.9786   |
| 0.0238        | 34.0  | 2346 | 0.1063          | 0.9778    | 0.9780 | 0.9779 | 0.9787   |
| 0.0238        | 35.0  | 2415 | 0.1066          | 0.9780    | 0.9783 | 0.9782 | 0.9789   |
| 0.0238        | 36.0  | 2484 | 0.1075          | 0.9779    | 0.9781 | 0.9780 | 0.9788   |
| 0.0167        | 37.0  | 2553 | 0.1083          | 0.9780    | 0.9783 | 0.9781 | 0.9789   |
| 0.0167        | 38.0  | 2622 | 0.1083          | 0.9780    | 0.9783 | 0.9781 | 0.9789   |
| 0.0167        | 39.0  | 2691 | 0.1087          | 0.9779    | 0.9782 | 0.9781 | 0.9789   |
| 0.0167        | 40.0  | 2760 | 0.1088          | 0.9780    | 0.9782 | 0.9781 | 0.9789   |
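
Validation loss reaches its minimum around epoch 12 (0.0840) and drifts upward afterwards while F1 improves only marginally, and the headline metrics above match the epoch-35 checkpoint. For a re-run, training could be cut short once validation F1 plateaus; a sketch using Transformers' built-in early stopping (an option, not something the card says was used):

```python
from transformers import EarlyStoppingCallback, TrainingArguments

# Assumption: early stopping was not necessarily used in the run above; this
# shows one way to halt a re-run once validation F1 stops improving.
args = TrainingArguments(
    output_dir="pos_final_xlm_nl",
    evaluation_strategy="epoch",
    save_strategy="epoch",        # must match evaluation_strategy for best-model tracking
    load_best_model_at_end=True,  # restore the checkpoint with the best F1
    metric_for_best_model="f1",
    greater_is_better=True,
)

# Pass this callback to Trainer(..., args=args, callbacks=[stopper]).
stopper = EarlyStoppingCallback(early_stopping_patience=3)
```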

Framework versions

  • Transformers 4.25.1
  • PyTorch 1.12.0
  • Datasets 2.18.0
  • Tokenizers 0.13.2