
xlm-roberta-base_eng_loss_0.0001

This model is a fine-tuned version of FacebookAI/xlm-roberta-base on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0465
  • Spearman Corr: nan
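The card does not state the task head or target data, so the snippet below is only a minimal usage sketch: it assumes the checkpoint is published as Shijia/xlm-roberta-base_eng_loss_0.0001 and carries a single-output regression head (consistent with the Spearman-correlation metric reported above).

```python
# Minimal usage sketch. The task head is not documented on this card; the
# Spearman-correlation metric suggests a single-output regression head, so
# AutoModelForSequenceClassification with one label is assumed here.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Shijia/xlm-roberta-base_eng_loss_0.0001"  # repo id for this card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

inputs = tokenizer("An example sentence.", return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```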

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 32
  • eval_batch_size: 128
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
  • mixed_precision_training: Native AMP
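For reference, the sketch below shows how these settings map onto Hugging Face TrainingArguments. The output directory and the 200-step evaluation interval are assumptions inferred from this card (the step column in the results table below advances in increments of 200), not the author's original training script.

```python
# Sketch of TrainingArguments matching the hyperparameters listed above.
# output_dir, eval_steps, and the Trainer wiring (datasets, compute_metrics)
# are placeholders/assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-base_eng_loss_0.0001",
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=128,
    gradient_accumulation_steps=2,   # effective train batch size: 32 * 2 = 64
    num_train_epochs=30,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                       # "Native AMP" mixed-precision training
    evaluation_strategy="steps",
    eval_steps=200,                  # assumed 200-step evaluation interval
    logging_steps=200,
)
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults.
```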

Training results

| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log        | 1.33  | 200  | 0.0465          | nan           |
| 0.0469        | 2.66  | 400  | 0.0466          | nan           |
| 0.0471        | 3.99  | 600  | 0.0467          | nan           |
| 0.0471        | 5.32  | 800  | 0.0462          | nan           |
| 0.0471        | 6.64  | 1000 | 0.0462          | nan           |
| 0.0471        | 7.97  | 1200 | 0.0463          | nan           |
| 0.0471        | 9.3   | 1400 | 0.0476          | nan           |
| 0.047         | 10.63 | 1600 | 0.0461          | nan           |
| 0.0469        | 11.96 | 1800 | 0.0468          | nan           |
| 0.0469        | 13.29 | 2000 | 0.0464          | 0.0242        |
| 0.047         | 14.62 | 2200 | 0.0471          | -0.0375       |
| 0.0467        | 15.95 | 2400 | 0.0465          | nan           |
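The nan entries in the Spearman Corr column are what scipy.stats.spearmanr returns when one of its inputs is constant (for example, when the model predicts the same score for every evaluation example). A minimal compute_metrics sketch that would produce such a column is given below; it is an assumption, not the original training code.

```python
# Sketch of a compute_metrics function producing a "Spearman Corr" column.
# spearmanr returns nan when either input is constant, which is one common
# reason this metric shows up as nan during training.
import numpy as np
from scipy.stats import spearmanr

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.squeeze(predictions)
    corr, _ = spearmanr(predictions, labels)
    return {"spearman_corr": corr}
```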

Framework versions

  • Transformers 4.37.2
  • Pytorch 2.2.0+cu121
  • Datasets 2.17.0
  • Tokenizers 0.15.2
