---
license: mit
base_model: xlm-roberta-base
tags:
  - generated_from_trainer
metrics:
  - precision
  - recall
  - f1
  - accuracy
model-index:
  - name: xlm-roberta-base-finetuned-ner-thesis-dseb
    results: []
---

# xlm-roberta-base-finetuned-ner-thesis-dseb

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset. It achieves the following results on the evaluation set (a sketch of how such metrics are typically computed follows the list):

- Loss: 0.1345
- Precision: 0.1786
- Recall: 0.1351
- F1: 0.1538
- Accuracy: 0.9563
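
The precision, recall, and F1 here are entity-level scores while the accuracy is token-level, which is why accuracy stays high even though F1 is low. Below is a minimal sketch of how such metrics are typically computed with the `seqeval` metric in a `Trainer`-based token-classification run; the label list is a placeholder, since the actual tag set is not documented in this card.

```python
import numpy as np
import evaluate

seqeval = evaluate.load("seqeval")
label_list = ["O", "B-ENT", "I-ENT"]  # placeholder: the real tag set is not documented

def compute_metrics(eval_preds):
    logits, labels = eval_preds
    predictions = np.argmax(logits, axis=-1)
    # Drop special tokens, which the data collator labels with -100.
    true_labels = [
        [label_list[l] for l in row if l != -100] for row in labels
    ]
    true_preds = [
        [label_list[p] for p, l in zip(p_row, l_row) if l != -100]
        for p_row, l_row in zip(predictions, labels)
    ]
    results = seqeval.compute(predictions=true_preds, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```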

## Model description

More information needed

## Intended uses & limitations

More information needed
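
Pending proper documentation, here is a minimal inference sketch. The repo id below is an assumption inferred from the model name and is not confirmed by this card:

```python
from transformers import pipeline

# Assumed repo id; adjust to wherever the checkpoint is actually hosted.
model_id = "nhidinh2/xlm-roberta-base-finetuned-ner-thesis-dseb"

ner = pipeline("token-classification", model=model_id, aggregation_strategy="simple")
print(ner("Hugging Face Inc. is based in New York City."))
```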

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (mirrored in the `TrainingArguments` sketch after the list):

- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
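
For reference, a hedged sketch of the corresponding `TrainingArguments`; the output directory is a placeholder, and since the dataset and preprocessing are not documented, only the arguments themselves are shown:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-ner-thesis-dseb",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    eval_strategy="epoch",  # evaluate once per epoch, matching the table below
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults,
    # so they need no explicit arguments here.
)
```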

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.8175        | 1.0   | 12   | 0.3119          | 0.0       | 0.0    | 0.0    | 0.9593   |
| 0.2019        | 2.0   | 24   | 0.2414          | 0.0       | 0.0    | 0.0    | 0.9593   |
| 0.1156        | 3.0   | 36   | 0.2105          | 0.0       | 0.0    | 0.0    | 0.9593   |
| 0.0913        | 4.0   | 48   | 0.1831          | 0.0       | 0.0    | 0.0    | 0.9593   |
| 0.0987        | 5.0   | 60   | 0.1695          | 0.0       | 0.0    | 0.0    | 0.9593   |
| 0.0697        | 6.0   | 72   | 0.1727          | 0.0       | 0.0    | 0.0    | 0.9593   |
| 0.0528        | 7.0   | 84   | 0.1462          | 0.0       | 0.0    | 0.0    | 0.9593   |
| 0.0538        | 8.0   | 96   | 0.1441          | 0.0       | 0.0    | 0.0    | 0.9593   |
| 0.0504        | 9.0   | 108  | 0.1854          | 0.0       | 0.0    | 0.0    | 0.9605   |
| 0.0359        | 10.0  | 120  | 0.1516          | 0.0476    | 0.0312 | 0.0377 | 0.9641   |
| 0.031         | 11.0  | 132  | 0.1836          | 0.0       | 0.0    | 0.0    | 0.9621   |
| 0.038         | 12.0  | 144  | 0.1581          | 0.1579    | 0.0938 | 0.1176 | 0.9627   |
| 0.0349        | 13.0  | 156  | 0.1901          | 0.0       | 0.0    | 0.0    | 0.9625   |
| 0.0226        | 14.0  | 168  | 0.1740          | 0.0667    | 0.0312 | 0.0426 | 0.9648   |
| 0.0198        | 15.0  | 180  | 0.1729          | 0.125     | 0.0625 | 0.0833 | 0.9639   |

### Framework versions

- Transformers 4.42.3
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1