---
license: apache-2.0
tags:
  - generated_from_trainer
metrics:
  - f1
  - accuracy
model-index:
  - name: roberta-finetuned-CPV_Spanish
    results: []
---

# roberta-finetuned-CPV_Spanish

This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on an unspecified dataset. It achieves the following results on the evaluation set (see the inference sketch after the list):

- Loss: 0.0463
- F1: 0.7931
- Roc Auc: 0.8858
- Accuracy: 0.7376
- Coverage Error: 10.3626
- Label Ranking Average Precision Score: 0.7968
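
The accuracy and ranking metrics above point to a multi-label setup (one text can receive several CPV labels). A minimal inference sketch under that assumption; the model id is a placeholder and the 0.5 decision threshold is an assumption, not a choice documented in this card:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder model id: replace with the actual Hugging Face Hub path of this model.
MODEL_ID = "roberta-finetuned-CPV_Spanish"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

text = "Texto de ejemplo a clasificar."  # arbitrary example input
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label decoding: independent sigmoid per label, then a threshold.
# The 0.5 threshold is an assumption, not a documented choice.
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p >= 0.5]
print(predicted)
```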

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (mirrored in the `TrainingArguments` sketch after the list):

- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
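
These settings map directly onto Hugging Face `TrainingArguments`; the listed betas and epsilon are the library defaults for its Adam/AdamW optimizer. A hedged configuration sketch, in which the output directory and the per-epoch evaluation strategy are assumptions rather than documented choices:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; other values are library defaults.
training_args = TrainingArguments(
    output_dir="roberta-finetuned-CPV_Spanish",  # placeholder output path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # assumption, consistent with one evaluation per epoch below
)
```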

### Training results

| Training Loss | Epoch | Step  | Validation Loss | F1     | Roc Auc | Accuracy | Coverage Error | Label Ranking Average Precision Score |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|:--------:|:--------------:|:-------------------------------------:|
| 0.0355        | 1.0   | 9054  | 0.0366          | 0.7550 | 0.8373  | 0.6950   | 14.1539        | 0.7347                                |
| 0.0309        | 2.0   | 18108 | 0.0330          | 0.7773 | 0.8553  | 0.7204   | 12.6503        | 0.7647                                |
| 0.0234        | 3.0   | 27162 | 0.0330          | 0.7836 | 0.8693  | 0.7293   | 11.6192        | 0.7799                                |
| 0.0159        | 4.0   | 36216 | 0.0348          | 0.7830 | 0.8709  | 0.7291   | 11.5355        | 0.7810                                |
| 0.0109        | 5.0   | 45270 | 0.0376          | 0.7789 | 0.8786  | 0.7201   | 10.9898        | 0.7812                                |
| 0.0075        | 6.0   | 54324 | 0.0397          | 0.7838 | 0.8813  | 0.7241   | 10.7035        | 0.7861                                |
| 0.0039        | 7.0   | 63378 | 0.0415          | 0.7888 | 0.8818  | 0.7309   | 10.6559        | 0.7898                                |
| 0.0028        | 8.0   | 72432 | 0.0437          | 0.7906 | 0.8838  | 0.7326   | 10.5117        | 0.7924                                |
| 0.0016        | 9.0   | 81486 | 0.0453          | 0.7908 | 0.8890  | 0.7308   | 10.0988        | 0.7957                                |
| 0.001         | 10.0  | 90540 | 0.0463          | 0.7931 | 0.8858  | 0.7376   | 10.3626        | 0.7968                                |
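
Coverage error and label ranking average precision in the table above are scikit-learn's multi-label ranking metrics. A small illustrative sketch of how such columns can be computed from a binary label matrix and predicted scores; the arrays, the micro averaging, and the 0.5 threshold are assumptions, not values from this model's evaluation:

```python
import numpy as np
from sklearn.metrics import (
    f1_score,
    roc_auc_score,
    accuracy_score,
    coverage_error,
    label_ranking_average_precision_score,
)

# Illustrative multi-label ground truth (n_samples x n_labels) and predicted scores.
y_true = np.array([[1, 0, 1, 0],
                   [0, 1, 0, 0],
                   [1, 1, 0, 1]])
y_scores = np.array([[0.9, 0.2, 0.8, 0.1],
                     [0.3, 0.7, 0.2, 0.4],
                     [0.8, 0.6, 0.1, 0.7]])
y_pred = (y_scores >= 0.5).astype(int)  # 0.5 threshold is an assumption

print("F1 (micro):", f1_score(y_true, y_pred, average="micro"))           # averaging choice not documented
print("Roc Auc (micro):", roc_auc_score(y_true, y_scores, average="micro"))
print("Accuracy (exact match):", accuracy_score(y_true, y_pred))
print("Coverage Error:", coverage_error(y_true, y_scores))
print("LRAP:", label_ranking_average_precision_score(y_true, y_scores))
```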

### Framework versions

- Transformers 4.16.2
- Pytorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.11.6