# xlm-roberta-base_lr0.0001_seed42_basic_original_esp-kin-eng_train
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0394
- Spearman Corr: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a reconstruction sketch in code follows the list):
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
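
The same configuration can be approximated with the `transformers` `Trainer` API. The sketch below is a reconstruction under assumptions: the dataset, tokenization, and single-output regression head are placeholders, since the card does not document the training data (the regression head is inferred from the Spearman metric and the small MSE-scale losses).

```python
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "FacebookAI/xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# num_labels=1 assumes a single-score regression task (placeholder assumption).
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1)

# Hypothetical placeholder data; the actual train/eval sets are not documented.
raw = Dataset.from_dict({"text": ["hola mundo", "muraho neza"], "label": [0.5, 0.7]})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

ds = raw.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="xlm-roberta-base_lr0.0001_seed42",
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=128,
    gradient_accumulation_steps=2,   # effective train batch size: 64
    num_train_epochs=30,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                       # Native AMP; requires a CUDA GPU
    evaluation_strategy="steps",     # Transformers 4.37.2 argument name
    eval_steps=200,                  # matches the 200-step eval cadence in the table below
    logging_steps=200,
)

trainer = Trainer(model=model, args=args, train_dataset=ds, eval_dataset=ds)
trainer.train()
```

The Adam settings in the list (betas=(0.9, 0.999), epsilon=1e-08) are the `Trainer` defaults, so they need no explicit arguments.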
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log        | 1.63  | 200  | 0.0478          | 0.0585        |
| 0.059         | 3.27  | 400  | 0.0393          | 0.0110        |
| 0.0534        | 4.9   | 600  | 0.0413          | nan           |
| 0.0524        | 6.53  | 800  | 0.0384          | nan           |
| 0.0517        | 8.16  | 1000 | 0.0424          | nan           |
| 0.0517        | 9.8   | 1200 | 0.0384          | nan           |
| 0.0509        | 11.43 | 1400 | 0.0467          | nan           |
| 0.0511        | 13.06 | 1600 | 0.0404          | nan           |
| 0.0505        | 14.69 | 1800 | 0.0392          | -0.0320       |
| 0.05          | 16.33 | 2000 | 0.0382          | -0.0245       |
| 0.05          | 17.96 | 2200 | 0.0395          | nan           |
| 0.0498        | 19.59 | 2400 | 0.0389          | -0.0672       |
| 0.0498        | 21.22 | 2600 | 0.0384          | -0.0760       |
| 0.05          | 22.86 | 2800 | 0.0399          | nan           |
| 0.0492        | 24.49 | 3000 | 0.0387          | nan           |
| 0.0493        | 26.12 | 3200 | 0.0385          | nan           |
| 0.0493        | 27.76 | 3400 | 0.0392          | nan           |
| 0.0491        | 29.39 | 3600 | 0.0394          | nan           |
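
A Spearman correlation of nan usually means the metric received a degenerate input, most often predictions that are all identical, for which ranks are undefined. A minimal illustration with `scipy.stats.spearmanr` (this is not the card's evaluation code, just a sketch of the likely failure mode):

```python
import numpy as np
from scipy.stats import spearmanr

labels = np.array([0.1, 0.4, 0.7, 0.9])

# Varying predictions give a well-defined rank correlation.
preds = np.array([0.2, 0.3, 0.8, 0.6])
print(spearmanr(preds, labels).correlation)  # 0.8

# Constant predictions have no rank ordering, so the result is nan;
# this is the likely cause of the nan entries in the table above.
flat = np.full_like(labels, 0.5)
print(spearmanr(flat, labels).correlation)  # nan
```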
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2