
finetuned-test-1

This model is a fine-tuned version of bert-base-uncased on the conll2003 dataset. It achieves the following result on the evaluation set (a minimal loading example is sketched below):

  • Loss: 1.8192
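
The model description and intended uses below are still placeholders, so the task head is not documented. The following is only a minimal loading sketch, assuming the checkpoint is published on the Hub as ariesutiono/finetuned-test-1 (the repository named in this card) and treating it purely as a base BERT encoder, with no assumption about the fine-tuning objective:

```python
# Minimal loading sketch. Assumptions: the checkpoint id below matches this card's repo,
# and no claim is made about the trained task head, so only the base encoder is loaded.
from transformers import AutoModel, AutoTokenizer

model_id = "ariesutiono/finetuned-test-1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)  # base BERT encoder; any task-specific head is skipped

# Example sentence from CoNLL-2003
inputs = tokenizer("EU rejects German call to boycott British lamb.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```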

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a matching TrainingArguments sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 20
  • mixed_precision_training: Native AMP
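
These settings can be reconstructed as the TrainingArguments sketched below. This is only a sketch, assuming the standard Hugging Face Trainer API from Transformers 4.20; the data preprocessing, model head, and output directory are not documented in this card, and the per-epoch evaluation strategy is inferred from the validation log in the next section.

```python
# Reconstruction of the listed hyperparameters as Hugging Face TrainingArguments.
# Assumptions: the standard Trainer API was used, evaluation ran once per epoch
# (consistent with the per-epoch validation losses below), and "Native AMP" maps to fp16=True.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="finetuned-test-1",   # hypothetical output directory
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    fp16=True,                        # mixed-precision training with native AMP
    evaluation_strategy="epoch",      # assumption based on the evaluation log below
)
```

These arguments would then be passed to a Trainer together with the model and the tokenized conll2003 splits.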

Training results

| Training Loss | Epoch | Step | Validation Loss |
|---------------|-------|------|-----------------|
| 2.8219        | 1.0   | 30   | 2.3343          |
| 2.4148        | 2.0   | 60   | 2.2010          |
| 2.3236        | 3.0   | 90   | 2.1442          |
| 2.2231        | 4.0   | 120  | 2.1651          |
| 2.2171        | 5.0   | 150  | 2.0614          |
| 2.1270        | 6.0   | 180  | 2.0405          |
| 2.0748        | 7.0   | 210  | 2.0092          |
| 2.0511        | 8.0   | 240  | 1.9798          |
| 2.0097        | 9.0   | 270  | 1.8662          |
| 1.9969        | 10.0  | 300  | 1.9257          |
| 2.0006        | 11.0  | 330  | 1.9386          |
| 1.9273        | 12.0  | 360  | 1.9357          |
| 1.9177        | 13.0  | 390  | 1.8983          |
| 1.9128        | 14.0  | 420  | 1.8990          |
| 1.8979        | 15.0  | 450  | 1.9037          |
| 1.8721        | 16.0  | 480  | 1.8440          |
| 1.8998        | 17.0  | 510  | 1.8404          |
| 1.8862        | 18.0  | 540  | 1.9193          |
| 1.9133        | 19.0  | 570  | 1.8494          |
| 1.8799        | 20.0  | 600  | 1.8192          |

Framework versions

  • Transformers 4.20.1
  • PyTorch 1.11.0+cu113
  • Datasets 2.3.2
  • Tokenizers 0.12.1