metadata
license: mit
tags:
  - generated_from_trainer
metrics:
  - accuracy
  - f1
  - precision
  - recall
base_model: russellc/roberta-news-classifier
model-index:
  - name: roberta-news-classifier
    results: []

roberta-news-classifier

This model is a fine-tuned version of russellc/roberta-news-classifier on a custom Kaggle dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1043
  • Accuracy: 0.9786
  • F1: 0.9786
  • Precision: 0.9786
  • Recall: 0.9786
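
As a usage illustration that is not part of the original card, the fine-tuned checkpoint can be loaded with the Transformers text-classification pipeline. The repository id below is a placeholder, since the card does not state where this fine-tuned model is published.

from transformers import pipeline

# Load the fine-tuned classifier from the Hugging Face Hub. The repo id is a
# placeholder -- substitute the actual location of this fine-tuned model.
classifier = pipeline("text-classification", model="<user>/roberta-news-classifier")

# Classify a single news text; the predicted label is one of the seven
# categories shown in the evaluation report further down.
print(classifier("Example news article text goes here."))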

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 32
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 5
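
As a rough reconstruction rather than the author's actual training script, the hyperparameters above map onto a Transformers TrainingArguments object along these lines; output_dir and evaluation_strategy are assumptions not stated in the card.

from transformers import TrainingArguments

# Sketch of the listed hyperparameters as TrainingArguments. The Adam settings
# below match the stated betas and epsilon (also the Trainer defaults).
training_args = TrainingArguments(
    output_dir="roberta-news-classifier",  # assumed output directory
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=5,
    evaluation_strategy="epoch",  # assumed from the per-epoch rows in the results table
)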

Training results

Training Loss   Epoch   Step   Validation Loss   Accuracy   F1       Precision   Recall
0.1327          1.0     123    0.1043            0.9786     0.9786   0.9786      0.9786
0.1103          2.0     246    0.1157            0.9735     0.9735   0.9735      0.9735
0.1020          3.0     369    0.1104            0.9735     0.9735   0.9735      0.9735
0.0825          4.0     492    0.1271            0.9714     0.9714   0.9714      0.9714
0.0550          5.0     615    0.1296            0.9724     0.9724   0.9724      0.9724
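
The four metric columns are consistent with a compute_metrics callback passed to the Trainer, sketched below; the function name and the "weighted" averaging mode are assumptions, since the card does not state how the scores were aggregated.

import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Sketch of a metrics callback for the Trainer; "weighted" averaging is an
# assumption and may differ from what was actually used.
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted"
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }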

Evaluation results

***** Running Prediction *****
Num examples = 980
Batch size = 64

              precision    recall  f1-score   support

       dunya       0.99      0.96      0.97       147
     ekonomi       0.96      0.96      0.96       141
      kultur       0.97      0.99      0.98       142
      saglik       0.99      0.98      0.98       148
     siyaset       0.98      0.98      0.98       134
        spor       1.00      1.00      1.00       139
   teknoloji       0.96      0.98      0.97       129

    accuracy                           0.98       980
   macro avg       0.98      0.98      0.98       980
weighted avg       0.98      0.98      0.98       980
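
A per-class report like the one above can be produced with Trainer.predict and scikit-learn's classification_report. In this sketch, trainer and test_dataset are placeholders for the fitted Trainer and the tokenized 980-example evaluation split; they are not names taken from the card.

import numpy as np
from sklearn.metrics import classification_report

# `trainer` and `test_dataset` are placeholders for the fitted Trainer and the
# tokenized evaluation split used for the report above.
predictions = trainer.predict(test_dataset)
y_pred = np.argmax(predictions.predictions, axis=-1)
y_true = predictions.label_ids

label_names = ["dunya", "ekonomi", "kultur", "saglik", "siyaset", "spor", "teknoloji"]
print(classification_report(y_true, y_pred, target_names=label_names))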

   

Framework versions

  • Transformers 4.25.1
  • Pytorch 1.12.1+cu113
  • Datasets 2.7.1
  • Tokenizers 0.13.2