
COPA_RL1

This model is a fine-tuned version of FacebookAI/xlm-roberta-large on an unknown dataset. It achieves the following results on the evaluation set (a minimal loading sketch appears after the results):

  • Loss: 0.6931
  • F1: 0.5146
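
The card does not say how the checkpoint is meant to be loaded. The sketch below is a hedged illustration only: it assumes a multiple-choice head (COPA, the Choice of Plausible Alternatives task, is usually framed that way) and uses a placeholder repository id; adjust both to match the actual checkpoint.

```python
# Hedged loading sketch. Assumes a multiple-choice head and a placeholder
# repo id; neither is confirmed by the model card.
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "COPA_RL1"  # placeholder: replace with the full "user/repo" id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

premise = "The man broke his toe. What was the cause?"
choices = ["He got a hole in his sock.", "He dropped a hammer on his foot."]

# Pair the premise with each candidate answer, then reshape to
# (batch, num_choices, seq_len) as AutoModelForMultipleChoice expects.
enc = tokenizer([premise] * len(choices), choices, return_tensors="pt", padding=True)
enc = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**enc).logits  # shape: (1, num_choices)
print("Predicted choice:", logits.argmax(dim=-1).item())
```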

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an illustrative Trainer configuration is sketched after this list):

  • learning_rate: 5e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
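
The training script itself is not part of this card. As an illustration only, the sketch below shows how the listed hyperparameters could be expressed with the Hugging Face `Trainer` API; the dataset, task head, and metric function are not documented here and are left as placeholders.

```python
# Hedged sketch of a Trainer configuration mirroring the hyperparameters above.
# The dataset and compute_metrics function are placeholders, not provided by the card.
from transformers import (
    AutoModelForMultipleChoice,  # assumed head; COPA is typically multiple choice
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "FacebookAI/xlm-roberta-large"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForMultipleChoice.from_pretrained(base)

args = TrainingArguments(
    output_dir="COPA_RL1",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    adam_beta1=0.9,      # Adam betas/epsilon as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",
)

# trainer = Trainer(
#     model=model,
#     args=args,
#     train_dataset=train_dataset,      # placeholder: training data not documented
#     eval_dataset=eval_dataset,        # placeholder: evaluation data not documented
#     tokenizer=tokenizer,
#     compute_metrics=compute_metrics,  # placeholder: F1 as reported below
# )
# trainer.train()
```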

Training results

| Training Loss | Epoch | Step | Validation Loss | F1     |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log        | 1.0   | 63   | 0.6931          | 0.4509 |
| No log        | 2.0   | 126  | 0.6931          | 0.4754 |
| No log        | 3.0   | 189  | 0.6931          | 0.5079 |
| No log        | 4.0   | 252  | 0.6931          | 0.4969 |
| No log        | 5.0   | 315  | 0.6931          | 0.5245 |
| No log        | 6.0   | 378  | 0.6931          | 0.5146 |
| No log        | 7.0   | 441  | 0.6931          | 0.5294 |
| 0.6981        | 8.0   | 504  | 0.6931          | 0.5398 |
| 0.6981        | 9.0   | 567  | 0.6931          | 0.5205 |
| 0.6981        | 10.0  | 630  | 0.6931          | 0.5146 |

Framework versions

  • Transformers 4.40.0
  • Pytorch 2.2.1+cu121
  • Datasets 2.19.0
  • Tokenizers 0.19.1