bert-small-finetuned-finetuned-parsed-longer50

This model is a fine-tuned version of muhtasham/bert-small-finetuned-parsed20 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 2.9278
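
If the reported loss is the usual cross-entropy from masked-language-model training (the card does not state the objective), it corresponds to a perplexity of roughly exp(2.9278) ≈ 18.7. Below is a minimal usage sketch; the Hub id is inferred from the card title and the base model's namespace, and the fill-mask head is likewise an assumption.

    # Minimal usage sketch. Both the Hub id and the masked-LM head are assumptions:
    # the id is inferred from the card title and the base model's "muhtasham/" namespace,
    # and the card does not state the training objective.
    from transformers import pipeline

    fill_mask = pipeline(
        "fill-mask",
        model="muhtasham/bert-small-finetuned-finetuned-parsed-longer50",
    )
    print(fill_mask("The capital of France is [MASK]."))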

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 128
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
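
Mapped onto the transformers Trainer API, these settings correspond roughly to the TrainingArguments below. This is a hedged reconstruction, not the original training script: the base checkpoint, output directory, and masked-LM head are assumptions, and the fine-tuning datasets are not documented in this card, so no Trainer is wired up.

    # Hedged reconstruction of the hyperparameters above as TrainingArguments.
    # The base checkpoint and the masked-LM objective are assumptions; the datasets
    # used for fine-tuning are not documented, so no Trainer is attached here.
    from transformers import AutoModelForMaskedLM, TrainingArguments

    model = AutoModelForMaskedLM.from_pretrained("muhtasham/bert-small-finetuned-parsed20")

    args = TrainingArguments(
        output_dir="bert-small-finetuned-finetuned-parsed-longer50",
        learning_rate=2e-05,
        per_device_train_batch_size=128,
        per_device_eval_batch_size=8,
        seed=42,
        adam_beta1=0.9,
        adam_beta2=0.999,
        adam_epsilon=1e-08,
        lr_scheduler_type="linear",
        num_train_epochs=30,
        evaluation_strategy="epoch",  # the results table reports validation loss once per epoch
    )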

Training results

Training Loss | Epoch | Step | Validation Loss
No log        | 1.0   | 4    | 2.9807
No log        | 2.0   | 8    | 2.7267
No log        | 3.0   | 12   | 3.3484
No log        | 4.0   | 16   | 2.7573
No log        | 5.0   | 20   | 2.7063
No log        | 6.0   | 24   | 2.7353
No log        | 7.0   | 28   | 3.1290
No log        | 8.0   | 32   | 2.9371
No log        | 9.0   | 36   | 3.4265
No log        | 10.0  | 40   | 3.0537
No log        | 11.0  | 44   | 3.1382
No log        | 12.0  | 48   | 3.1454
No log        | 13.0  | 52   | 2.8379
No log        | 14.0  | 56   | 3.2760
No log        | 15.0  | 60   | 3.0504
No log        | 16.0  | 64   | 2.9001
No log        | 17.0  | 68   | 2.8892
No log        | 18.0  | 72   | 3.1837
No log        | 19.0  | 76   | 2.6404
No log        | 20.0  | 80   | 3.0600
No log        | 21.0  | 84   | 3.1432
No log        | 22.0  | 88   | 2.9608
No log        | 23.0  | 92   | 3.0513
No log        | 24.0  | 96   | 3.1038
No log        | 25.0  | 100  | 3.0975
No log        | 26.0  | 104  | 2.8977
No log        | 27.0  | 108  | 2.9416
No log        | 28.0  | 112  | 2.9015
No log        | 29.0  | 116  | 2.7947
No log        | 30.0  | 120  | 2.9278

Framework versions

  • Transformers 4.21.1
  • Pytorch 1.12.1+cu113
  • Datasets 2.4.0
  • Tokenizers 0.12.1
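
As a convenience (not part of the original card), the snippet below prints the locally installed versions of these libraries so they can be compared against the pinned ones above.

    # Environment check: print installed versions to compare with the ones listed above.
    import datasets
    import tokenizers
    import torch
    import transformers

    for name, module in [
        ("Transformers", transformers),
        ("Pytorch", torch),
        ("Datasets", datasets),
        ("Tokenizers", tokenizers),
    ]:
        print(f"{name}: {module.__version__}")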