bert-large-cased-finetuned-prompt-20

This model is a fine-tuned version of bert-large-cased on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7142
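
Since the card does not state the downstream task, the sketch below assumes the model keeps a masked-language-modeling head (a common setup for BERT fine-tuning reported only by loss); the repository id is a placeholder for the actual Hub path.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Placeholder repo id; substitute the actual Hub path of this model.
model_id = "bert-large-cased-finetuned-prompt-20"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Assumes an MLM head; the card does not specify the task.
model = AutoModelForMaskedLM.from_pretrained(model_id)

text = f"The capital of France is {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Pick the highest-scoring token at the [MASK] position.
mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```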

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20.0
  • mixed_precision_training: Native AMP
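
As a minimal sketch, the hyperparameters above map onto a transformers TrainingArguments configuration roughly as follows; the output_dir and evaluation_strategy values are assumptions, not stated in the card.

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the hyperparameters listed above.
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults,
# so they are not set explicitly. output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="bert-large-cased-finetuned-prompt-20",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20.0,
    fp16=True,                     # "Native AMP" mixed-precision training (requires a CUDA device)
    evaluation_strategy="epoch",   # assumption: the results table reports validation loss once per epoch
)
```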

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3173        | 1.0   | 280  | 1.1293          |
| 1.087         | 2.0   | 560  | 0.9716          |
| 1.0064        | 3.0   | 840  | 0.9606          |
| 0.9341        | 4.0   | 1120 | 0.8887          |
| 0.8881        | 5.0   | 1400 | 0.8654          |
| 0.8662        | 6.0   | 1680 | 0.8181          |
| 0.8331        | 7.0   | 1960 | 0.8286          |
| 0.8206        | 8.0   | 2240 | 0.7941          |
| 0.8017        | 9.0   | 2520 | 0.7677          |
| 0.772         | 10.0  | 2800 | 0.7711          |
| 0.76          | 11.0  | 3080 | 0.7314          |
| 0.7436        | 12.0  | 3360 | 0.7479          |
| 0.7305        | 13.0  | 3640 | 0.7354          |
| 0.7204        | 14.0  | 3920 | 0.7143          |
| 0.7102        | 15.0  | 4200 | 0.7366          |
| 0.7034        | 16.0  | 4480 | 0.7036          |
| 0.6937        | 17.0  | 4760 | 0.7049          |
| 0.695         | 18.0  | 5040 | 0.7080          |
| 0.6923        | 19.0  | 5320 | 0.7110          |
| 0.6886        | 20.0  | 5600 | 0.6969          |

Framework versions

  • Transformers 4.26.0
  • Pytorch 1.13.1+cu116
  • Datasets 2.9.0
  • Tokenizers 0.13.2