# xlm-roberta-qlora-finetuned

This model is a QLoRA fine-tuned version of xlm-roberta-base on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 3.2168

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
  • mixed_precision_training: Native AMP
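
The schedule implied by these settings can be sketched in plain Python: with a linear scheduler, no listed warmup, and the 90 total optimizer steps visible in the results table below (3 steps per epoch × 30 epochs), the learning rate decays linearly from 2e-05 to 0. This is a minimal illustration of the schedule's shape, not the trainer's actual implementation:

```python
def linear_lr(step, base_lr=2e-05, total_steps=90, warmup_steps=0):
    """Linear learning-rate schedule: optional warmup, then linear decay to 0."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

print(linear_lr(0))   # 2e-05 (start of training)
print(linear_lr(45))  # 1e-05 (halfway through the run)
print(linear_lr(90))  # 0.0   (final step)
```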

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 3    | 3.1426          |
| No log        | 2.0   | 6    | 3.1473          |
| No log        | 3.0   | 9    | 3.1535          |
| No log        | 4.0   | 12   | 3.1582          |
| No log        | 5.0   | 15   | 3.1633          |
| No log        | 6.0   | 18   | 3.1676          |
| No log        | 7.0   | 21   | 3.1727          |
| No log        | 8.0   | 24   | 3.1766          |
| No log        | 9.0   | 27   | 3.1809          |
| No log        | 10.0  | 30   | 3.1840          |
| No log        | 11.0  | 33   | 3.1871          |
| No log        | 12.0  | 36   | 3.1902          |
| No log        | 13.0  | 39   | 3.1930          |
| No log        | 14.0  | 42   | 3.1957          |
| No log        | 15.0  | 45   | 3.1973          |
| No log        | 16.0  | 48   | 3.1996          |
| No log        | 17.0  | 51   | 3.2023          |
| No log        | 18.0  | 54   | 3.2047          |
| No log        | 19.0  | 57   | 3.2070          |
| No log        | 20.0  | 60   | 3.2094          |
| No log        | 21.0  | 63   | 3.2102          |
| No log        | 22.0  | 66   | 3.2121          |
| No log        | 23.0  | 69   | 3.2129          |
| No log        | 24.0  | 72   | 3.2141          |
| No log        | 25.0  | 75   | 3.2145          |
| No log        | 26.0  | 78   | 3.2160          |
| No log        | 27.0  | 81   | 3.2160          |
| No log        | 28.0  | 84   | 3.2168          |
| No log        | 29.0  | 87   | 3.2168          |
| No log        | 30.0  | 90   | 3.2168          |

### Framework versions

  • PEFT 0.13.2
  • Transformers 4.42.3
  • Pytorch 2.1.2
  • Datasets 2.20.0
  • Tokenizers 0.19.1
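
Since this repository holds a PEFT adapter rather than full model weights, it would typically be loaded on top of the base checkpoint. The sketch below assumes a masked-LM head, which is an assumption — the card does not state the model's pipeline type:

```python
# Hypothetical usage sketch: attaching the QLoRA adapter to the base model.
# AutoModelForMaskedLM is an assumption; the card does not specify the task head.
from transformers import AutoModelForMaskedLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")
model = PeftModel.from_pretrained(base, "sharjeel103/xlm-roberta-qlora-finetuned")
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model.eval()  # adapter weights are applied on top of the frozen base model
```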