
RoBERTa_Combined_Generated_v1.1_epoch_6

This model is a fine-tuned version of ICT2214Team7/RoBERTa_Test_Training on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0004
  • Precision: 0.9980
  • Recall: 0.9980
  • F1: 0.9980
  • Accuracy: 0.9996
  • Report (per-entity precision / recall / F1 / support):

| Entity       | Precision | Recall | F1     | Support |
|:-------------|----------:|-------:|-------:|--------:|
| AGE          | 1.0000    | 0.9444 | 0.9714 | 18      |
| LOC          | 1.0000    | 1.0000 | 1.0000 | 101     |
| NAT          | 1.0000    | 1.0000 | 1.0000 | 25      |
| ORG          | 1.0000    | 1.0000 | 1.0000 | 173     |
| PER          | 0.9944    | 1.0000 | 0.9972 | 176     |
| micro avg    | 0.9980    | 0.9980 | 0.9980 | 493     |
| macro avg    | 0.9989    | 0.9889 | 0.9937 | 493     |
| weighted avg | 0.9980    | 0.9980 | 0.9979 | 493     |
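The Report field above has the nested structure produced by seqeval's classification_report with output_dict=True, although the card does not state which library generated it. A minimal sketch, using made-up tag sequences rather than data from this model's evaluation set, of how such entity-level metrics are computed:

```python
# Minimal sketch: entity-level precision/recall/F1 from BIO-tagged sequences.
# Assumes the seqeval library; the tag sequences below are illustrative only.
from seqeval.metrics import classification_report

y_true = [["B-PER", "I-PER", "O", "B-LOC"], ["B-ORG", "O", "B-AGE"]]
y_pred = [["B-PER", "I-PER", "O", "B-LOC"], ["B-ORG", "O", "O"]]

# output_dict=True yields the same nested structure as the Report field above:
# {'PER': {'precision': ..., 'recall': ..., 'f1-score': ..., 'support': ...}, ...,
#  'micro avg': {...}, 'macro avg': {...}, 'weighted avg': {...}}
print(classification_report(y_true, y_pred, output_dict=True))
```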

Model description

More information needed
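Based on the entity labels in the evaluation report (AGE, LOC, NAT, ORG, PER), this appears to be a token-classification (NER) model. A minimal usage sketch, assuming the standard Transformers token-classification pipeline; the pipeline type is not declared on the Hub, so treat this as illustrative:

```python
# Hedged usage sketch: load the checkpoint as a token-classification (NER) model.
# The pipeline type and the example sentence are assumptions, not taken from the card.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ICT2214Team7/RoBERTa_Combined_Generated_v1.1_epoch_6",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("John Tan, 34, works at DBS Bank in Singapore."))
# Expected output: a list of dicts with entity_group (e.g. PER, AGE, ORG, LOC),
# score, and character offsets.
```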

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch using these values is shown after the list):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 6
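
A rough reproduction sketch of this configuration with the Hugging Face Trainer is shown below. The dataset loading and token/label alignment steps are placeholders, since the card does not say which dataset was used; only the numeric values come from the list above.

```python
# Sketch of a Trainer configuration matching the hyperparameters listed above.
# The dataset and preprocessing are hypothetical placeholders, not part of this card.
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "ICT2214Team7/RoBERTa_Test_Training"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForTokenClassification.from_pretrained(base)

args = TrainingArguments(
    output_dir="RoBERTa_Combined_Generated_v1.1_epoch_6",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=6,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",
    # The Adam betas (0.9, 0.999) and epsilon 1e-08 listed above are the
    # Trainer's default optimizer settings, so no explicit argument is needed.
)

# train_ds and eval_ds would be tokenized, label-aligned NER datasets (not shown).
# trainer = Trainer(model=model, args=args, train_dataset=train_ds,
#                   eval_dataset=eval_ds, tokenizer=tokenizer)
# trainer.train()
```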

Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 200  | 0.0074          | 0.9799    | 0.9899 | 0.9849 | 0.9980   |
| No log        | 2.0   | 400  | 0.0019          | 0.9959    | 0.9959 | 0.9959 | 0.9995   |
| 0.0654        | 3.0   | 600  | 0.0015          | 0.9959    | 0.9959 | 0.9959 | 0.9995   |
| 0.0654        | 4.0   | 800  | 0.0007          | 0.9919    | 0.9959 | 0.9939 | 0.9996   |
| 0.0026        | 5.0   | 1000 | 0.0005          | 0.9960    | 0.9980 | 0.9970 | 0.9998   |
| 0.0026        | 6.0   | 1200 | 0.0004          | 0.9980    | 0.9980 | 0.9980 | 0.9996   |

Per-epoch per-entity classification reports on the evaluation set, given as precision / recall / F1 with support in parentheses (the micro, macro, and weighted averages are over 493 entities):

  • Epoch 1: AGE 1.0000 / 0.9444 / 0.9714 (18), LOC 0.9802 / 0.9802 / 0.9802 (101), NAT 1.0000 / 0.9600 / 0.9796 (25), ORG 0.9774 / 1.0000 / 0.9886 (173), PER 0.9777 / 0.9943 / 0.9859 (176); micro avg 0.9799 / 0.9899 / 0.9849, macro avg 0.9871 / 0.9758 / 0.9811, weighted avg 0.9800 / 0.9899 / 0.9848
  • Epoch 2: AGE 1.0000 / 0.9444 / 0.9714 (18), LOC 1.0000 / 1.0000 / 1.0000 (101), NAT 1.0000 / 0.9600 / 0.9796 (25), ORG 1.0000 / 1.0000 / 1.0000 (173), PER 0.9888 / 1.0000 / 0.9944 (176); micro avg 0.9959 / 0.9959 / 0.9959, macro avg 0.9978 / 0.9809 / 0.9891, weighted avg 0.9960 / 0.9959 / 0.9959
  • Epoch 3: AGE 1.0000 / 0.9444 / 0.9714 (18), LOC 1.0000 / 1.0000 / 1.0000 (101), NAT 1.0000 / 0.9600 / 0.9796 (25), ORG 1.0000 / 1.0000 / 1.0000 (173), PER 0.9888 / 1.0000 / 0.9944 (176); micro avg 0.9959 / 0.9959 / 0.9959, macro avg 0.9978 / 0.9809 / 0.9891, weighted avg 0.9960 / 0.9959 / 0.9959
  • Epoch 4: AGE 0.8947 / 0.9444 / 0.9189 (18), LOC 0.9901 / 0.9901 / 0.9901 (101), NAT 1.0000 / 1.0000 / 1.0000 (25), ORG 1.0000 / 1.0000 / 1.0000 (173), PER 0.9944 / 1.0000 / 0.9972 (176); micro avg 0.9919 / 0.9959 / 0.9939, macro avg 0.9758 / 0.9869 / 0.9812, weighted avg 0.9921 / 0.9959 / 0.9940
  • Epoch 5: AGE 0.9444 / 0.9444 / 0.9444 (18), LOC 1.0000 / 1.0000 / 1.0000 (101), NAT 1.0000 / 1.0000 / 1.0000 (25), ORG 1.0000 / 1.0000 / 1.0000 (173), PER 0.9944 / 1.0000 / 0.9972 (176); micro avg 0.9960 / 0.9980 / 0.9970, macro avg 0.9878 / 0.9889 / 0.9883, weighted avg 0.9960 / 0.9980 / 0.9970
  • Epoch 6: AGE 1.0000 / 0.9444 / 0.9714 (18), LOC 1.0000 / 1.0000 / 1.0000 (101), NAT 1.0000 / 1.0000 / 1.0000 (25), ORG 1.0000 / 1.0000 / 1.0000 (173), PER 0.9944 / 1.0000 / 0.9972 (176); micro avg 0.9980 / 0.9980 / 0.9980, macro avg 0.9989 / 0.9889 / 0.9937, weighted avg 0.9980 / 0.9980 / 0.9979 (this matches the final evaluation report above)

Framework versions

  • Transformers 4.40.2
  • Pytorch 2.3.0+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1
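
If reproducing results, the installed library versions can be compared against the ones listed above; the version numbers below are the ones from the original run, not a verified requirement of this card:

```python
# Quick check that the local environment matches the versions listed above.
import datasets
import tokenizers
import torch
import transformers

print(transformers.__version__)  # 4.40.2 in the original training run
print(torch.__version__)         # 2.3.0+cu121
print(datasets.__version__)      # 2.19.1
print(tokenizers.__version__)    # 0.19.1
```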
Model details

  • Format: Safetensors
  • Model size: 81.5M params
  • Tensor type: F32
