---
license: apache-2.0
tags:
  - generated_from_trainer
model-index:
  - name: bert-finetuned-ner
    results: []
---

# bert-finetuned-ner

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 6.1090
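
The card does not include usage instructions, so here is a minimal inference sketch using the `transformers` pipeline API. The `model_id` below is an assumption; point it at wherever this fine-tuned checkpoint is actually stored (a local directory or its Hub repository id).

```python
# Minimal usage sketch for a token-classification (NER) checkpoint.
# The model path is hypothetical and not specified by this card.
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_id = "bert-finetuned-ner"  # assumption: local path or Hub repo id of this checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",  # merge sub-word tokens into whole entities
)
print(ner("My name is Wolfgang and I live in Berlin."))
```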

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
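
As an illustration, these settings map onto `transformers.TrainingArguments` roughly as below. The output directory and evaluation strategy are assumptions (the card does not state them); the dataset, data collator, and `Trainer` wiring are likewise omitted here.

```python
# Hypothetical sketch of the reported hyperparameters expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-finetuned-ner",   # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=30,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",       # assumption: the results table logs a validation loss each epoch
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer's default optimizer,
    # so no extra optimizer arguments are needed.
)
```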

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 70   | 8.6531          |
| No log        | 2.0   | 140  | 8.3935          |
| No log        | 3.0   | 210  | 8.2279          |
| No log        | 4.0   | 280  | 8.0728          |
| No log        | 5.0   | 350  | 7.9139          |
| No log        | 6.0   | 420  | 7.7655          |
| No log        | 7.0   | 490  | 7.6321          |
| 8.2602        | 8.0   | 560  | 7.4960          |
| 8.2602        | 9.0   | 630  | 7.3789          |
| 8.2602        | 10.0  | 700  | 7.2505          |
| 8.2602        | 11.0  | 770  | 7.1390          |
| 8.2602        | 12.0  | 840  | 7.0327          |
| 8.2602        | 13.0  | 910  | 6.9259          |
| 8.2602        | 14.0  | 980  | 6.8442          |
| 7.4097        | 15.0  | 1050 | 6.7584          |
| 7.4097        | 16.0  | 1120 | 6.6727          |
| 7.4097        | 17.0  | 1190 | 6.5904          |
| 7.4097        | 18.0  | 1260 | 6.5285          |
| 7.4097        | 19.0  | 1330 | 6.4555          |
| 7.4097        | 20.0  | 1400 | 6.4051          |
| 7.4097        | 21.0  | 1470 | 6.3435          |
| 6.82          | 22.0  | 1540 | 6.2980          |
| 6.82          | 23.0  | 1610 | 6.2529          |
| 6.82          | 24.0  | 1680 | 6.2188          |
| 6.82          | 25.0  | 1750 | 6.1833          |
| 6.82          | 26.0  | 1820 | 6.1628          |
| 6.82          | 27.0  | 1890 | 6.1386          |
| 6.82          | 28.0  | 1960 | 6.1211          |
| 6.4746        | 29.0  | 2030 | 6.1138          |
| 6.4746        | 30.0  | 2100 | 6.1090          |

### Framework versions

  • Transformers 4.26.1
  • Pytorch 1.13.1+cu116
  • Datasets 2.10.1
  • Tokenizers 0.13.2