---
license: apache-2.0
tags:
  - generated_from_trainer
metrics:
  - f1
  - accuracy
model-index:
  - name: roberta-finetuned-CPV_Spanish
    results: []
---

# roberta-finetuned-CPV_Spanish

This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on a dataset that is not named in this card. It achieves the following results on the evaluation set (a brief usage sketch follows the metrics list):

- Loss: 0.0422
- F1: 0.7739
- ROC AUC: 0.8704
- Accuracy: 0.7201
- Coverage Error: 11.5798
- Label Ranking Average Precision Score: 0.7742
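
The combination of per-label F1, coverage error, and label ranking average precision suggests a multi-label classification head. Below is a minimal loading sketch, assuming the model lives on the Hub under the repository id implied by this card's title and that a sigmoid-plus-threshold decision rule is appropriate; neither detail is stated explicitly in the card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed repository id; adjust if the model is hosted under a different namespace.
model_id = "htufgg/roberta-finetuned-CPV_Spanish"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Illustrative Spanish input; the actual input domain is not documented in this card.
text = "Servicios de mantenimiento de carreteras"
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label setup assumed: apply a sigmoid per label and keep labels above a threshold,
# rather than taking a single argmax over classes.
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```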

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding `TrainingArguments` follows the list):

- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
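
Expressed as a `transformers.TrainingArguments` configuration, these settings would look roughly like the sketch below. Only the listed values come from the card; the `output_dir`, the per-epoch evaluation strategy, and the commented-out `Trainer` wiring are assumptions.

```python
from transformers import TrainingArguments, Trainer

training_args = TrainingArguments(
    output_dir="roberta-finetuned-CPV_Spanish",  # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # assumption: the results table reports metrics once per epoch
)

# The training script itself is not part of this card; a Trainer call would look like:
# trainer = Trainer(
#     model=model,
#     args=training_args,
#     train_dataset=train_dataset,      # hypothetical dataset objects
#     eval_dataset=eval_dataset,
#     compute_metrics=compute_metrics,  # hypothetical metric function, see the sketch below
# )
# trainer.train()
```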

### Training results

| Training Loss | Epoch | Step  | Validation Loss | F1     | ROC AUC | Accuracy | Coverage Error | Label Ranking Average Precision Score |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|:--------:|:--------------:|:-------------------------------------:|
| 0.0579        | 1.0   | 2039  | 0.0548          | 0.6327 | 0.7485  | 0.5274   | 21.7879        | 0.5591                                |
| 0.0411        | 2.0   | 4078  | 0.0441          | 0.7108 | 0.8027  | 0.6386   | 16.8647        | 0.6732                                |
| 0.0294        | 3.0   | 6117  | 0.0398          | 0.7437 | 0.8295  | 0.6857   | 14.6700        | 0.7249                                |
| 0.0223        | 4.0   | 8156  | 0.0389          | 0.7568 | 0.8453  | 0.7056   | 13.3552        | 0.7494                                |
| 0.0163        | 5.0   | 10195 | 0.0397          | 0.7626 | 0.8569  | 0.7097   | 12.5895        | 0.7620                                |
| 0.0132        | 6.0   | 12234 | 0.0395          | 0.7686 | 0.8620  | 0.7126   | 12.1926        | 0.7656                                |
| 0.0095        | 7.0   | 14273 | 0.0409          | 0.7669 | 0.8694  | 0.7109   | 11.5978        | 0.7700                                |
| 0.0066        | 8.0   | 16312 | 0.0415          | 0.7705 | 0.8726  | 0.7107   | 11.4252        | 0.7714                                |
| 0.0055        | 9.0   | 18351 | 0.0417          | 0.7720 | 0.8689  | 0.7163   | 11.6987        | 0.7716                                |
| 0.0045        | 10.0  | 20390 | 0.0422          | 0.7739 | 0.8704  | 0.7201   | 11.5798        | 0.7742                                |
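
The column names in the table map directly onto scikit-learn's multi-label metrics. The sketch below shows how such values could be computed from a binary label matrix and predicted probabilities; the micro averaging and the 0.5 decision threshold are assumptions, since the card does not document them.

```python
import numpy as np
from sklearn.metrics import (
    accuracy_score,
    coverage_error,
    f1_score,
    label_ranking_average_precision_score,
    roc_auc_score,
)

def compute_multilabel_metrics(y_true: np.ndarray, y_prob: np.ndarray, threshold: float = 0.5) -> dict:
    """Compute the metrics reported above from a binary indicator matrix y_true
    and predicted probabilities y_prob, both shaped (n_samples, n_labels)."""
    y_pred = (y_prob >= threshold).astype(int)
    return {
        "f1": f1_score(y_true, y_pred, average="micro"),            # averaging method assumed
        "roc_auc": roc_auc_score(y_true, y_prob, average="micro"),  # averaging method assumed
        "accuracy": accuracy_score(y_true, y_pred),                 # exact-match (subset) accuracy
        "coverage_error": coverage_error(y_true, y_prob),
        "label_ranking_average_precision_score": label_ranking_average_precision_score(y_true, y_prob),
    }
```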

### Framework versions

- Transformers 4.18.0
- PyTorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.12.1
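
To check that a local environment matches these pins, the versions can be read at runtime; whether exact versions are required for reproducing the results above is an assumption.

```python
# Print the installed versions of the libraries listed above.
import datasets
import tokenizers
import torch
import transformers

print(transformers.__version__)  # expected 4.18.0
print(torch.__version__)         # expected 1.10.0+cu111
print(datasets.__version__)      # expected 2.0.0
print(tokenizers.__version__)    # expected 0.12.1
```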