---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: roberta-finetuned-CPV_Spanish
  results: []
---

# roberta-finetuned-CPV_Spanish

This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.0417
- F1: 0.7757
- ROC AUC: 0.8684
- Accuracy: 0.7223
- Coverage Error: 11.7873
- Label Ranking Average Precision Score: 0.7728
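The ranking-based metrics above (coverage error, label ranking average precision) suggest a multi-label classification setup. The snippet below is only a minimal inference sketch under that assumption; the checkpoint path, the example text, the sigmoid post-processing, and the 0.5 threshold are placeholders, not details confirmed by this card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical path/repository of the fine-tuned checkpoint; replace with the real one.
checkpoint = "./roberta-finetuned-CPV_Spanish"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

text = "Servicios de mantenimiento de carreteras"  # hypothetical example input
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Assuming multi-label classification: apply a sigmoid and threshold each label
# independently (0.5 is an arbitrary choice, not taken from the card).
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```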

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows the list):

- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
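As a rough illustration, these settings correspond to the Hugging Face `TrainingArguments` below. This is a sketch rather than the exact training script: the output directory, evaluation strategy, datasets, and `compute_metrics` function are assumptions.

```python
from transformers import TrainingArguments, Trainer

# Sketch of TrainingArguments matching the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="roberta-finetuned-CPV_Spanish",  # placeholder output path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    adam_beta1=0.9,                  # Adam defaults, as reported in the card
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",     # assumption: per-epoch evaluation, as in the results table
)

# trainer = Trainer(
#     model=model,                   # fine-tuned PlanTL-GOB-ES/roberta-base-bne
#     args=training_args,
#     train_dataset=train_dataset,   # placeholder dataset objects
#     eval_dataset=eval_dataset,
#     compute_metrics=compute_metrics,
# )
# trainer.train()
```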

### Training results

| Training Loss | Epoch | Step  | Validation Loss | F1     | ROC AUC | Accuracy | Coverage Error | Label Ranking Average Precision Score |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|:--------:|:--------------:|:-------------------------------------:|
| 0.0582        | 1.0   | 2039  | 0.0554          | 0.6291 | 0.7463  | 0.5235   | 21.9642        | 0.5547                                |
| 0.0413        | 2.0   | 4078  | 0.0437          | 0.7054 | 0.7959  | 0.6239   | 17.5374        | 0.6589                                |
| 0.0295        | 3.0   | 6117  | 0.0403          | 0.7391 | 0.8285  | 0.6788   | 14.7700        | 0.7197                                |
| 0.022         | 4.0   | 8156  | 0.0390          | 0.7562 | 0.8414  | 0.6987   | 13.8217        | 0.7425                                |
| 0.0168        | 5.0   | 10195 | 0.0393          | 0.7600 | 0.8547  | 0.7007   | 12.8532        | 0.7542                                |
| 0.0127        | 6.0   | 12234 | 0.0396          | 0.7645 | 0.8606  | 0.7099   | 12.3890        | 0.7622                                |
| 0.0094        | 7.0   | 14273 | 0.0406          | 0.7642 | 0.8675  | 0.7027   | 11.8679        | 0.7628                                |
| 0.0066        | 8.0   | 16312 | 0.0404          | 0.7706 | 0.8641  | 0.7173   | 12.0876        | 0.7681                                |
| 0.0052        | 9.0   | 18351 | 0.0411          | 0.7748 | 0.8679  | 0.7182   | 11.8149        | 0.7705                                |
| 0.0042        | 10.0  | 20390 | 0.0417          | 0.7757 | 0.8684  | 0.7223   | 11.7873        | 0.7728                                |
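For reference, the metrics in the table above can all be computed with scikit-learn on binarized multi-label outputs. The function below is only a sketch of that computation, assuming sigmoid scores, a 0.5 threshold, and micro-averaging; the averaging choice and exact-match accuracy are assumptions, not confirmed by this card.

```python
import numpy as np
from sklearn.metrics import (
    f1_score,
    roc_auc_score,
    accuracy_score,
    coverage_error,
    label_ranking_average_precision_score,
)

def compute_multilabel_metrics(y_true, probs, threshold=0.5):
    """y_true: (n_samples, n_labels) binary matrix; probs: predicted scores in [0, 1]."""
    y_pred = (probs >= threshold).astype(int)
    return {
        "f1": f1_score(y_true, y_pred, average="micro"),        # averaging mode is an assumption
        "roc_auc": roc_auc_score(y_true, probs, average="micro"),
        "accuracy": accuracy_score(y_true, y_pred),             # exact-match (subset) accuracy
        "coverage_error": coverage_error(y_true, probs),
        "lrap": label_ranking_average_precision_score(y_true, probs),
    }
```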

### Framework versions

- Transformers 4.16.2
- PyTorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6