RoBERTa_Combined_Generated_v2_500

This model is a fine-tuned version of ICT2214Team7/RoBERTa_Test_Training on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1037
  • Precision: 0.7635
  • Recall: 0.7902
  • F1: 0.7766
  • Accuracy: 0.9599
  • Per-class report:

| Entity | Precision | Recall | F1 | Support |
|:---|:---|:---|:---|:---|
| AGE | 1.0000 | 1.0000 | 1.0000 | 11 |
| LOC | 0.8154 | 0.7465 | 0.7794 | 71 |
| NAT | 0.6667 | 0.8421 | 0.7442 | 19 |
| ORG | 0.4375 | 0.4667 | 0.4516 | 15 |
| PER | 0.8125 | 0.9630 | 0.8814 | 27 |
| micro avg | 0.7635 | 0.7902 | 0.7766 | 143 |
| macro avg | 0.7464 | 0.8036 | 0.7713 | 143 |
| weighted avg | 0.7696 | 0.7902 | 0.7766 | 143 |
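
The checkpoint can be loaded through the standard Transformers token-classification pipeline. The sketch below makes assumptions not stated in this card: that the checkpoint is published on the Hub under the name shown and uses IOB-style labels suited to simple aggregation; the example sentence is invented for illustration.

```python
from transformers import pipeline

# Minimal inference sketch (assumptions noted above): load the fine-tuned
# checkpoint as a token-classification (NER) pipeline and group word-piece
# predictions back into whole entity spans.
ner = pipeline(
    "token-classification",
    model="ICT2214Team7/RoBERTa_Combined_Generated_v2_500",
    aggregation_strategy="simple",
)

# Illustrative sentence covering several of the label types (PER, AGE, LOC).
print(ner("John Tan, 34, recently moved from Singapore to London."))
```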

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 3
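
These hyperparameters map directly onto Transformers TrainingArguments. The sketch below is not the authors' actual training script: the output directory and the per-epoch evaluation schedule (evaluation at steps 50/100/150, i.e. once per 50-step epoch, per the results table below) are assumptions, while the Adam betas/epsilon and linear scheduler in the list above are the Trainer defaults.

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the listed hyperparameters.
# Adam betas=(0.9, 0.999), epsilon=1e-8 and the linear LR schedule are the
# Trainer defaults, so they need no explicit setting here.
args = TrainingArguments(
    output_dir="RoBERTa_Combined_Generated_v2_500",  # assumed output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    evaluation_strategy="epoch",  # assumption: eval at each epoch boundary
)
```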

Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---|:---|:---|:---|:---|:---|:---|:---|
| No log | 1.0 | 50 | 0.1474 | 0.5852 | 0.7203 | 0.6458 | 0.9511 |
| No log | 2.0 | 100 | 0.1061 | 0.7378 | 0.8462 | 0.7883 | 0.9657 |
| No log | 3.0 | 150 | 0.1037 | 0.7635 | 0.7902 | 0.7766 | 0.9599 |

Per-class results at each evaluation step:

Epoch 1 (step 50):

| Entity | Precision | Recall | F1 | Support |
|:---|:---|:---|:---|:---|
| AGE | 1.0000 | 1.0000 | 1.0000 | 11 |
| LOC | 0.7067 | 0.7465 | 0.7260 | 71 |
| NAT | 0.2500 | 0.4211 | 0.3137 | 19 |
| ORG | 0.4737 | 0.6000 | 0.5294 | 15 |
| PER | 0.5641 | 0.8148 | 0.6667 | 27 |
| micro avg | 0.5852 | 0.7203 | 0.6458 | 143 |
| macro avg | 0.5989 | 0.7165 | 0.6472 | 143 |
| weighted avg | 0.6172 | 0.7203 | 0.6605 | 143 |

Epoch 2 (step 100):

| Entity | Precision | Recall | F1 | Support |
|:---|:---|:---|:---|:---|
| AGE | 1.0000 | 1.0000 | 1.0000 | 11 |
| LOC | 0.7848 | 0.8732 | 0.8267 | 71 |
| NAT | 0.7368 | 0.7368 | 0.7368 | 19 |
| ORG | 0.3478 | 0.5333 | 0.4211 | 15 |
| PER | 0.8125 | 0.9630 | 0.8814 | 27 |
| micro avg | 0.7378 | 0.8462 | 0.7883 | 143 |
| macro avg | 0.7364 | 0.8213 | 0.7732 | 143 |
| weighted avg | 0.7544 | 0.8462 | 0.7958 | 143 |

Epoch 3 (step 150): identical to the final evaluation report at the top of this card.
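
The per-class dictionaries follow the layout of seqeval's classification_report with output_dict=True (per-entity keys plus micro/macro/weighted averages), which is presumably how they were generated. A minimal sketch with made-up IOB2 tag sequences, since the actual evaluation data is not published:

```python
from seqeval.metrics import classification_report

# Hypothetical gold and predicted tag sequences; these are invented for
# illustration and are not the card's evaluation data.
y_true = [["B-PER", "I-PER", "O", "O", "B-LOC"]]
y_pred = [["B-PER", "I-PER", "O", "O", "B-ORG"]]

# output_dict=True returns the same nested structure shown in the tables above.
print(classification_report(y_true, y_pred, output_dict=True))
```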

Framework versions

  • Transformers 4.40.2
  • Pytorch 2.3.0+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1