---
license: apache-2.0
base_model: ibaucells/RoBERTa-ca-CaWikiTC
tags:
- generated_from_trainer
model-index:
- name: test6_balanced
  results: []
---

# test6_balanced

This model is a fine-tuned version of [ibaucells/RoBERTa-ca-CaWikiTC](https://huggingface.co/ibaucells/RoBERTa-ca-CaWikiTC) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 1.6982
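Since the base model is a Catalan text-classification checkpoint, the fine-tuned model can presumably be loaded with the standard Auto classes. A minimal sketch, assuming the hub id `adriansanz/test6_balanced` (the card does not state the final repo path) and an example input sentence:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed hub id; the card does not state the published repo path.
MODEL_ID = "adriansanz/test6_balanced"

def top_label(logits: torch.Tensor, id2label: dict) -> str:
    """Return the label name with the highest logit in a (1, num_labels) row."""
    return id2label[int(logits.argmax(dim=-1))]

if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
    inputs = tokenizer("Text en català per classificar.", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    print(top_label(logits, model.config.id2label))
```

The `top_label` helper simply maps the arg-max logit index through the model's `id2label` config; for calibrated scores you would apply a softmax first.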

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 25

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8358        | 1.0   | 63   | 2.8655          |
| 2.8469        | 2.0   | 126  | 2.8447          |
| 2.7935        | 3.0   | 189  | 2.8533          |
| 2.8191        | 4.0   | 252  | 2.8142          |
| 2.6885        | 5.0   | 315  | 2.8259          |
| 2.5876        | 6.0   | 378  | 2.6759          |
| 2.5032        | 7.0   | 441  | 2.6097          |
| 2.3133        | 8.0   | 504  | 2.5192          |
| 2.1289        | 9.0   | 567  | 2.5403          |
| 1.9344        | 10.0  | 630  | 2.4371          |
| 1.6321        | 11.0  | 693  | 2.2946          |
| 1.4731        | 12.0  | 756  | 2.1326          |
| 1.2114        | 13.0  | 819  | 2.0522          |
| 1.1537        | 14.0  | 882  | 2.1146          |
| 1.022         | 15.0  | 945  | 1.9946          |
| 0.8745        | 16.0  | 1008 | 1.8565          |
| 0.5796        | 17.0  | 1071 | 1.7965          |
| 0.5599        | 18.0  | 1134 | 1.7063          |
| 0.4366        | 19.0  | 1197 | 1.9333          |
| 0.3558        | 20.0  | 1260 | 1.8851          |
| 0.3272        | 21.0  | 1323 | 1.8570          |
| 0.2594        | 22.0  | 1386 | 1.8596          |
| 0.2432        | 23.0  | 1449 | 1.9433          |
| 0.2298        | 24.0  | 1512 | 1.9467          |
| 0.266         | 25.0  | 1575 | 1.9150          |

### Framework versions

- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2