bert-small-finetuned-parsed20

This model is a fine-tuned version of google/bert_uncased_L-4_H-512_A-8 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 3.1193
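
The base checkpoint, google/bert_uncased_L-4_H-512_A-8, is a 4-layer, 512-hidden, 8-head "small" BERT. Below is a minimal sketch of loading the fine-tuned model for inference; it assumes a masked-language-modeling objective (consistent with the reported cross-entropy loss) and uses a placeholder Hub repo id, since the hosting path is not given here.

```python
# Hedged sketch: assumes the model was fine-tuned for masked language modeling
# and is available on the Hugging Face Hub; the repo id below is a placeholder.
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="your-username/bert-small-finetuned-parsed20",  # placeholder repo id
)

print(fill_mask("The weather today is [MASK]."))
```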

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

  • learning_rate: 2e-05
  • train_batch_size: 128
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
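
A minimal sketch of how these settings map onto transformers.TrainingArguments; the output directory and evaluation strategy are assumptions, and the actual training script, dataset, and data collator are not documented.

```python
# Hedged sketch: mirrors the hyperparameters listed above with transformers.TrainingArguments.
# Anything not listed above (output_dir, evaluation_strategy, the dataset itself) is an assumption.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-small-finetuned-parsed20",   # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    evaluation_strategy="epoch",  # consistent with the per-epoch validation losses below
)
```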

Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 4    | 3.0763          |
| No log        | 2.0   | 8    | 2.8723          |
| No log        | 3.0   | 12   | 3.5102          |
| No log        | 4.0   | 16   | 2.8641          |
| No log        | 5.0   | 20   | 2.7827          |
| No log        | 6.0   | 24   | 2.8163          |
| No log        | 7.0   | 28   | 3.2415          |
| No log        | 8.0   | 32   | 3.0477          |
| No log        | 9.0   | 36   | 3.5160          |
| No log        | 10.0  | 40   | 3.1248          |
| No log        | 11.0  | 44   | 3.2159          |
| No log        | 12.0  | 48   | 3.2177          |
| No log        | 13.0  | 52   | 2.9108          |
| No log        | 14.0  | 56   | 3.3758          |
| No log        | 15.0  | 60   | 3.1335          |
| No log        | 16.0  | 64   | 2.9753          |
| No log        | 17.0  | 68   | 2.9922          |
| No log        | 18.0  | 72   | 3.2798          |
| No log        | 19.0  | 76   | 2.7280          |
| No log        | 20.0  | 80   | 3.1193          |
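
The "No log" entries mean the training loss was not recorded: the logging interval is larger than the 80 total optimization steps. If the validation loss is the usual mean cross-entropy over masked tokens (an assumption, since the training objective is not documented), the final value corresponds to a perplexity of roughly exp(3.1193) ≈ 22.6:

```python
import math

# Assumption: the reported loss is mean cross-entropy, so perplexity = exp(loss).
print(math.exp(3.1193))  # ≈ 22.6
```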

Framework versions

  • Transformers 4.21.1
  • Pytorch 1.12.1+cu113
  • Datasets 2.4.0
  • Tokenizers 0.12.1