
newly_fine_tuned_bert_v2

This model is a fine-tuned version of bert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set (a hypothetical usage sketch follows this list):

  • Loss: 0.0273
  • F1: 0.5517
  • ROC AUC: 0.6994
  • Accuracy: 0.4
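
The card does not document the downstream task. Because it reports F1, ROC AUC, and an exact-match-style accuracy together, a multi-label classification head is a plausible but unconfirmed reading; the sketch below shows how the checkpoint could be loaded and queried under that assumption (the 0.5 decision threshold is also an assumption, not taken from the card).

```python
# Hypothetical usage sketch: the task head, label semantics, and 0.5 threshold
# are assumptions; only the checkpoint id comes from this card.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "pingkeest/newly_fine_tuned_bert_v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Example input text", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# For a multi-label head, per-label probabilities come from a sigmoid rather
# than a softmax over classes.
probs = torch.sigmoid(logits)
predictions = (probs > 0.5).int()
print(predictions)
```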

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding TrainingArguments follows this list):

  • learning_rate: 1e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 300
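
A minimal sketch of how these hyperparameters map onto the Transformers TrainingArguments API. Only the values listed above come from this card; the output directory and the evaluation/logging cadence are assumptions inferred from the results table, which reports metrics every 500 steps.

```python
# Hedged reconstruction of the training configuration above; values not listed
# in the card (output_dir, eval/logging cadence) are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="newly_fine_tuned_bert_v2",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    num_train_epochs=300,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    eval_strategy="steps",   # the results table reports metrics every 500 steps
    eval_steps=500,
    logging_steps=500,
)
```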

Training results

| Training Loss | Epoch    | Step  | Validation Loss | F1     | ROC AUC | Accuracy |
|---------------|----------|-------|-----------------|--------|---------|----------|
| 0.0339        | 11.3636  | 500   | 0.0355          | 0.0    | 0.5     | 0.0      |
| 0.028         | 22.7273  | 1000  | 0.0327          | 0.0    | 0.5     | 0.0      |
| 0.0255        | 34.0909  | 1500  | 0.0327          | 0.0    | 0.5     | 0.0      |
| 0.0234        | 45.4545  | 2000  | 0.0316          | 0.0    | 0.5     | 0.0      |
| 0.0202        | 56.8182  | 2500  | 0.0309          | 0.0    | 0.5     | 0.0      |
| 0.0174        | 68.1818  | 3000  | 0.0291          | 0.0    | 0.5     | 0.0      |
| 0.0151        | 79.5455  | 3500  | 0.0281          | 0.0    | 0.5     | 0.0      |
| 0.013         | 90.9091  | 4000  | 0.0274          | 0.0    | 0.5     | 0.0      |
| 0.0109        | 102.2727 | 4500  | 0.0271          | 0.0    | 0.5     | 0.0      |
| 0.0095        | 113.6364 | 5000  | 0.0267          | 0.0    | 0.5     | 0.0      |
| 0.0081        | 125.0    | 5500  | 0.0262          | 0.0    | 0.5     | 0.0      |
| 0.007         | 136.3636 | 6000  | 0.0262          | 0.0952 | 0.525   | 0.05     |
| 0.0062        | 147.7273 | 6500  | 0.0267          | 0.4    | 0.625   | 0.25     |
| 0.0053        | 159.0909 | 7000  | 0.0262          | 0.4    | 0.625   | 0.25     |
| 0.0048        | 170.4545 | 7500  | 0.0266          | 0.4615 | 0.65    | 0.3      |
| 0.0043        | 181.8182 | 8000  | 0.0259          | 0.5    | 0.6744  | 0.35     |
| 0.0039        | 193.1818 | 8500  | 0.0264          | 0.5714 | 0.7     | 0.4      |
| 0.0036        | 204.5455 | 9000  | 0.0268          | 0.5517 | 0.6994  | 0.4      |
| 0.0032        | 215.9091 | 9500  | 0.0270          | 0.5517 | 0.6994  | 0.4      |
| 0.003         | 227.2727 | 10000 | 0.0272          | 0.5517 | 0.6994  | 0.4      |
| 0.0028        | 238.6364 | 10500 | 0.0269          | 0.5517 | 0.6994  | 0.4      |
| 0.0027        | 250.0    | 11000 | 0.0267          | 0.5333 | 0.6988  | 0.4      |
| 0.0026        | 261.3636 | 11500 | 0.0271          | 0.5333 | 0.6988  | 0.4      |
| 0.0025        | 272.7273 | 12000 | 0.0272          | 0.5333 | 0.6988  | 0.4      |
| 0.0025        | 284.0909 | 12500 | 0.0272          | 0.5517 | 0.6994  | 0.4      |
| 0.0024        | 295.4545 | 13000 | 0.0273          | 0.5517 | 0.6994  | 0.4      |
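
Below is a sketch of a compute_metrics function that would produce the F1, ROC AUC, and Accuracy columns above, assuming a multi-label head with sigmoid outputs; the 0.5 threshold and micro averaging are assumptions rather than details taken from the card. Under this reading, "Accuracy" is subset accuracy (a sample counts only if every label is predicted correctly), which is consistent with it staying at 0.0 while ROC AUC sits at 0.5 early in training.

```python
# Hypothetical metric computation for a multi-label classifier; the threshold
# and averaging choices are assumptions, not documented in this card.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    probs = 1.0 / (1.0 + np.exp(-logits))   # sigmoid over per-label logits
    preds = (probs >= 0.5).astype(int)      # assumed 0.5 decision threshold
    return {
        "f1": f1_score(labels, preds, average="micro", zero_division=0),
        "roc_auc": roc_auc_score(labels, probs, average="micro"),
        "accuracy": accuracy_score(labels, preds),  # subset (exact-match) accuracy
    }
```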

Framework versions

  • Transformers 4.45.2
  • Pytorch 2.4.0+cu124
  • Datasets 3.0.1
  • Tokenizers 0.20.1

Model tree for pingkeest/newly_fine_tuned_bert_v2

  • Base model: bert-base-uncased