
bert-finetuned-sla

This model is a fine-tuned version of bert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set (a usage sketch follows the list):

  • Loss: 0.3274
  • F1: 0.6555
  • Roc Auc: 0.7660
  • Accuracy: 0.5294
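
The card does not state the task or label set, but the F1 / ROC AUC / accuracy combination is typical of multi-label sequence classification. Under that assumption, a minimal usage sketch might look like the following; the model id placeholder and the sigmoid readout are assumptions, not confirmed details of this model.

```python
# Minimal usage sketch; the task type is not documented in this card, so the
# multi-label (sigmoid) readout below is an assumption rather than a confirmed detail.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "bert-finetuned-sla"  # replace with the full hub id or a local path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "Example input sentence."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Assuming a multi-label head: sigmoid per label, threshold at 0.5.
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```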

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the TrainingArguments sketch after the list):

  • learning_rate: 2e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
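
The hyperparameters above map onto the 🤗 Trainer API roughly as in the sketch below. This is a reconstruction from the list, not the original training script; the output path and any option not listed above are placeholders.

```python
# Hedged reconstruction of the listed hyperparameters as TrainingArguments;
# anything not in the list above (e.g. output_dir) is a placeholder guess.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-finetuned-sla",   # placeholder output path
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer
    # defaults (adam_beta1 / adam_beta2 / adam_epsilon), so no override is needed.
)
```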

Training results

| Training Loss | Epoch | Step | Validation Loss | F1     | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log        | 1.0   | 30   | 0.4994          | 0.0    | 0.5     | 0.0      |
| No log        | 2.0   | 60   | 0.4408          | 0.0    | 0.5     | 0.0      |
| No log        | 3.0   | 90   | 0.3761          | 0.4444 | 0.6462  | 0.1961   |
| No log        | 4.0   | 120  | 0.3438          | 0.6496 | 0.7604  | 0.4706   |
| No log        | 5.0   | 150  | 0.3274          | 0.6555 | 0.7660  | 0.5294   |
| No log        | 6.0   | 180  | 0.3093          | 0.6557 | 0.7699  | 0.4706   |
| No log        | 7.0   | 210  | 0.3083          | 0.6560 | 0.7738  | 0.5098   |
| No log        | 8.0   | 240  | 0.3030          | 0.6457 | 0.7703  | 0.4706   |
| No log        | 9.0   | 270  | 0.3096          | 0.6667 | 0.7811  | 0.4902   |
| No log        | 10.0  | 300  | 0.2976          | 0.6718 | 0.7907  | 0.5098   |
| No log        | 11.0  | 330  | 0.2986          | 0.6769 | 0.7924  | 0.5294   |
| No log        | 12.0  | 360  | 0.3046          | 0.6562 | 0.7777  | 0.5098   |
| No log        | 13.0  | 390  | 0.2988          | 0.6870 | 0.7997  | 0.4902   |
| No log        | 14.0  | 420  | 0.3026          | 0.6769 | 0.7924  | 0.5098   |
| No log        | 15.0  | 450  | 0.3005          | 0.6870 | 0.7997  | 0.5098   |
| No log        | 16.0  | 480  | 0.3012          | 0.6822 | 0.7941  | 0.5098   |
| 0.2216        | 17.0  | 510  | 0.3013          | 0.6977 | 0.8032  | 0.5294   |
| 0.2216        | 18.0  | 540  | 0.3033          | 0.6977 | 0.8032  | 0.5294   |
| 0.2216        | 19.0  | 570  | 0.3024          | 0.6977 | 0.8032  | 0.5294   |
| 0.2216        | 20.0  | 600  | 0.3027          | 0.6923 | 0.8015  | 0.5098   |

Framework versions

  • Transformers 4.26.0
  • Pytorch 1.13.1+cu116
  • Datasets 2.9.0
  • Tokenizers 0.13.2
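
Matching these versions helps when reproducing the results. The check below is an optional convenience sketch, not part of the original card.

```python
# Optional environment check against the versions listed above; purely a
# convenience sketch, not part of the original training setup.
from importlib.metadata import PackageNotFoundError, version

expected = {
    "transformers": "4.26.0",
    "torch": "1.13.1+cu116",
    "datasets": "2.9.0",
    "tokenizers": "0.13.2",
}

for package, wanted in expected.items():
    try:
        installed = version(package)
    except PackageNotFoundError:
        installed = "not installed"
    marker = "OK" if installed == wanted else "differs"
    print(f"{package}: expected {wanted}, found {installed} ({marker})")
```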