
BertWhyCommitOriginal

This model is a fine-tuned version of prajjwal1/bert-small on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4881
  • Accuracy: 0.8788
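
To try the checkpoint, a minimal inference sketch is shown below (the repo id `your-username/BertWhyCommitOriginal` and the example input are hypothetical placeholders; the task and label set are not documented in this card):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Hypothetical repo id; replace with the actual location of this checkpoint.
model_id = "your-username/BertWhyCommitOriginal"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Example input; the label names come from the checkpoint's config.
text = "Fix null pointer dereference in the session handler"
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```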

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 200
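
For reference, these values map onto a `TrainingArguments` object roughly as follows (a sketch only; `output_dir` and the per-epoch evaluation setting are assumptions, not part of the reported recipe):

```python
from transformers import TrainingArguments

# Sketch matching the hyperparameters listed above; output_dir and
# evaluation_strategy are assumptions.
training_args = TrainingArguments(
    output_dir="bert-why-commit-original",
    learning_rate=1e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=200,
    evaluation_strategy="epoch",
)
```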

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 31   | 0.5058          | 0.7394   |
| No log        | 2.0   | 62   | 0.4463          | 0.7758   |
| No log        | 3.0   | 93   | 0.4260          | 0.7758   |
| No log        | 4.0   | 124  | 0.3954          | 0.8061   |
| No log        | 5.0   | 155  | 0.3745          | 0.8061   |
| No log        | 6.0   | 186  | 0.3653          | 0.8303   |
| No log        | 7.0   | 217  | 0.3533          | 0.8424   |
| No log        | 8.0   | 248  | 0.3500          | 0.8364   |
| No log        | 9.0   | 279  | 0.3416          | 0.8606   |
| No log        | 10.0  | 310  | 0.3546          | 0.8424   |
| No log        | 11.0  | 341  | 0.3469          | 0.8485   |
| No log        | 12.0  | 372  | 0.3511          | 0.8606   |
| No log        | 13.0  | 403  | 0.3883          | 0.8545   |
| No log        | 14.0  | 434  | 0.4090          | 0.8485   |
| No log        | 15.0  | 465  | 0.4301          | 0.8485   |
| No log        | 16.0  | 496  | 0.4415          | 0.8606   |
| 0.2667        | 17.0  | 527  | 0.4732          | 0.8545   |
| 0.2667        | 18.0  | 558  | 0.4849          | 0.8727   |
| 0.2667        | 19.0  | 589  | 0.4881          | 0.8788   |
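
The accuracy column is standard classification accuracy. A typical `compute_metrics` hook that produces it with the `evaluate` library is sketched below (illustrative only; not necessarily the exact function used for this run):

```python
import numpy as np
import evaluate

# Load the standard accuracy metric.
accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # The Trainer passes (logits, labels); take the argmax class per example.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```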

Framework versions

  • Transformers 4.28.0
  • Pytorch 2.0.1+cu118
  • Datasets 2.12.0
  • Tokenizers 0.13.3