bert-large-cased-finetuned-lowR100-0-cased-DA-40

This model is a fine-tuned version of bert-large-cased on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.9481
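
The checkpoint is a masked-language model, so it can be queried with the `fill-mask` pipeline. A minimal sketch, assuming the repository id matches the model name above (adjust the path if the checkpoint lives under a user namespace):

```python
from transformers import pipeline

# Hypothetical repo id -- replace with the actual Hub path of this checkpoint.
fill_mask = pipeline(
    "fill-mask",
    model="bert-large-cased-finetuned-lowR100-0-cased-DA-40",
)

# BERT-style models use [MASK] as the mask token.
for prediction in fill_mask("The capital of France is [MASK]."):
    print(f"{prediction['token_str']!r}: {prediction['score']:.4f}")
```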

Model description

More information needed

Intended uses & limitations

The model was trained with a masked-language-modeling objective (mask token [MASK]), so it is primarily suited to fill-mask prediction or further fine-tuning. The training data and downstream behavior are undocumented, so other uses are untested.

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 30
  • eval_batch_size: 30
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 40.0
  • mixed_precision_training: Native AMP
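
For reference, a hedged sketch of how these settings map onto `transformers.TrainingArguments`; the output directory is a placeholder, and the dataset, model loading, and data collator are omitted since the card does not document them:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="bert-large-cased-finetuned-lowR100-0-cased-DA-40",  # placeholder
    learning_rate=2e-05,
    per_device_train_batch_size=30,
    per_device_eval_batch_size=30,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=40.0,
    fp16=True,  # "Native AMP" mixed-precision training
)
```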

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 1    | 1.8079          |
| 2.0032        | 2.0   | 2    | 3.1228          |
| 2.0032        | 3.0   | 3    | 1.9553          |
| 1.9122        | 4.0   | 4    | 2.1789          |
| 1.9122        | 5.0   | 5    | 1.8698          |
| 1.9936        | 6.0   | 6    | 2.1649          |
| 1.9936        | 7.0   | 7    | 3.3564          |
| 1.8624        | 8.0   | 8    | 2.6206          |
| 1.8624        | 9.0   | 9    | 2.0358          |
| 1.8051        | 10.0  | 10   | 1.9754          |
| 1.8051        | 11.0  | 11   | 2.5270          |
| 2.0363        | 12.0  | 12   | 1.8028          |
| 2.0363        | 13.0  | 13   | 2.0974          |
| 1.7005        | 14.0  | 14   | 1.2336          |
| 1.7005        | 15.0  | 15   | 2.0583          |
| 1.6696        | 16.0  | 16   | 3.1515          |
| 1.6696        | 17.0  | 17   | 2.3699          |
| 1.5171        | 18.0  | 18   | 2.5653          |
| 1.5171        | 19.0  | 19   | 1.6895          |
| 1.573         | 20.0  | 20   | 1.7983          |
| 1.573         | 21.0  | 21   | 3.0257          |
| 1.5831        | 22.0  | 22   | 2.8107          |
| 1.5831        | 23.0  | 23   | 1.6412          |
| 1.6265        | 24.0  | 24   | 1.9859          |
| 1.6265        | 25.0  | 25   | 1.7744          |
| 1.6744        | 26.0  | 26   | 2.7989          |
| 1.6744        | 27.0  | 27   | 1.7943          |
| 1.5041        | 28.0  | 28   | 1.5538          |
| 1.5041        | 29.0  | 29   | 3.9907          |
| 1.5154        | 30.0  | 30   | 0.8862          |
| 1.5154        | 31.0  | 31   | 1.7290          |
| 1.5841        | 32.0  | 32   | 1.7470          |
| 1.5841        | 33.0  | 33   | 1.9897          |
| 1.6299        | 34.0  | 34   | 1.7316          |
| 1.6299        | 35.0  | 35   | 1.7352          |
| 1.6617        | 36.0  | 36   | 1.7413          |
| 1.6617        | 37.0  | 37   | 2.7554          |
| 1.3198        | 38.0  | 38   | 2.7426          |
| 1.3198        | 39.0  | 39   | 1.7127          |
| 1.6463        | 40.0  | 40   | 1.6030          |
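
Assuming the reported losses are the usual masked-language-modeling cross-entropy in nats, they convert to perplexity via exp(loss). A quick check on the headline evaluation loss and the epoch-40 validation loss:

```python
import math

# Cross-entropy loss (nats) -> perplexity for a masked-language model.
def perplexity(loss: float) -> float:
    return math.exp(loss)

print(perplexity(1.9481))  # headline evaluation loss -> ~7.02
print(perplexity(1.6030))  # epoch-40 validation loss -> ~4.97
```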

Framework versions

  • Transformers 4.26.0
  • Pytorch 1.13.1+cu116
  • Datasets 2.9.0
  • Tokenizers 0.13.2