---
license: apache-2.0
base_model: bert-base-cased
tags:
  - generated_from_trainer
datasets:
  - harem
metrics:
  - precision
  - recall
  - f1
  - accuracy
model-index:
  - name: bert-base-cased-finetuned-ner
    results:
      - task:
          name: Token Classification
          type: token-classification
        dataset:
          name: harem
          type: harem
          config: default
          split: validation
          args: default
        metrics:
          - name: Precision
            type: precision
            value: 0.3251366120218579
          - name: Recall
            type: recall
            value: 0.34097421203438394
          - name: F1
            type: f1
            value: 0.3328671328671328
          - name: Accuracy
            type: accuracy
            value: 0.8684278684278685
---

# bert-base-cased-finetuned-ner

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the harem dataset. It achieves the following results on the evaluation set (an inference sketch follows the list):

- Loss: 0.5103
- Precision: 0.3251
- Recall: 0.3410
- F1: 0.3329
- Accuracy: 0.8684
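
For quick inspection, the checkpoint can be loaded with the `transformers` token-classification pipeline. The repo id `GuiTap/bert-base-cased-finetuned-ner` is assumed from the model name above; adjust it to wherever the checkpoint is actually hosted.

```python
from transformers import pipeline

# Assumed repo id -- replace with the actual location of the checkpoint.
ner = pipeline(
    "token-classification",
    model="GuiTap/bert-base-cased-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

# HAREM is a Portuguese NER corpus, so a Portuguese sentence is a natural test.
print(ner("José Saramago nasceu em Azinhaga, Portugal."))
```

Given the modest evaluation F1 (~0.33), predictions from this checkpoint should be treated as indicative rather than reliable.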

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

The model was fine-tuned and evaluated on the `harem` dataset (`default` config, `validation` split for evaluation, per the metadata above), the HAREM golden collection for named-entity recognition in Portuguese. Beyond that, more information is needed; a loading sketch follows.
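
As a sketch, the dataset can be pulled from the Hugging Face Hub. The `harem` id and `default` config come from the model-index metadata above; the `tokens`/`ner_tags` field names follow the usual token-classification layout and should be verified against the dataset card.

```python
from datasets import load_dataset

# "harem" / "default" are taken from the model-index metadata above.
raw = load_dataset("harem", "default")
print(raw)  # inspect the available splits

example = raw["train"][0]
print(example["tokens"])    # pre-tokenized words (assumed field name)
print(example["ner_tags"])  # integer NER labels (assumed field name)
```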

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch reproducing them follows the list):

- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
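
For reproduction, the hyperparameters map onto `transformers.TrainingArguments` roughly as below. This is a minimal sketch, not the author's actual script: model/dataset wiring, tokenization, and label alignment are elided.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above. AdamW with betas=(0.9, 0.999),
# eps=1e-8 and a linear LR schedule are already the Trainer defaults.
args = TrainingArguments(
    output_dir="bert-base-cased-finetuned-ner",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=40,
    evaluation_strategy="epoch",  # assumption: the results table reports one eval per epoch
)
```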

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 4    | 1.1734          | 0.0       | 0.0    | 0.0    | 0.8083   |
| No log        | 2.0   | 8    | 0.9781          | 0.0       | 0.0    | 0.0    | 0.8086   |
| No log        | 3.0   | 12   | 0.8915          | 0.0       | 0.0    | 0.0    | 0.8086   |
| No log        | 4.0   | 16   | 0.7901          | 0.0       | 0.0    | 0.0    | 0.8086   |
| No log        | 5.0   | 20   | 0.7202          | 0.0       | 0.0    | 0.0    | 0.8086   |
| No log        | 6.0   | 24   | 0.6846          | 0.4286    | 0.0344 | 0.0637 | 0.8130   |
| No log        | 7.0   | 28   | 0.6596          | 0.2014    | 0.0802 | 0.1148 | 0.8306   |
| No log        | 8.0   | 32   | 0.6355          | 0.1615    | 0.0745 | 0.1020 | 0.8324   |
| No log        | 9.0   | 36   | 0.6193          | 0.1571    | 0.0946 | 0.1181 | 0.8345   |
| No log        | 10.0  | 40   | 0.6106          | 0.1295    | 0.1032 | 0.1148 | 0.8335   |
| No log        | 11.0  | 44   | 0.5919          | 0.1680    | 0.1232 | 0.1421 | 0.8350   |
| No log        | 12.0  | 48   | 0.5789          | 0.2051    | 0.1375 | 0.1647 | 0.8384   |
| No log        | 13.0  | 52   | 0.5827          | 0.1611    | 0.1375 | 0.1484 | 0.8355   |
| No log        | 14.0  | 56   | 0.5638          | 0.2281    | 0.1862 | 0.2050 | 0.8433   |
| No log        | 15.0  | 60   | 0.5576          | 0.1879    | 0.1691 | 0.1780 | 0.8420   |
| No log        | 16.0  | 64   | 0.5485          | 0.2110    | 0.1862 | 0.1979 | 0.8456   |
| No log        | 17.0  | 68   | 0.5479          | 0.2401    | 0.2264 | 0.2330 | 0.8500   |
| No log        | 18.0  | 72   | 0.5460          | 0.2406    | 0.2378 | 0.2392 | 0.8503   |
| No log        | 19.0  | 76   | 0.5374          | 0.2531    | 0.2350 | 0.2437 | 0.8542   |
| No log        | 20.0  | 80   | 0.5365          | 0.2364    | 0.2493 | 0.2427 | 0.8539   |
| No log        | 21.0  | 84   | 0.5284          | 0.2462    | 0.2350 | 0.2405 | 0.8552   |
| No log        | 22.0  | 88   | 0.5306          | 0.2812    | 0.2837 | 0.2825 | 0.8601   |
| No log        | 23.0  | 92   | 0.5262          | 0.2722    | 0.2722 | 0.2722 | 0.8573   |
| No log        | 24.0  | 96   | 0.5306          | 0.2447    | 0.2665 | 0.2551 | 0.8555   |
| No log        | 25.0  | 100  | 0.5249          | 0.2785    | 0.3009 | 0.2893 | 0.8594   |
| No log        | 26.0  | 104  | 0.5201          | 0.2801    | 0.2865 | 0.2833 | 0.8586   |
| No log        | 27.0  | 108  | 0.5213          | 0.2806    | 0.2894 | 0.2849 | 0.8604   |
| No log        | 28.0  | 112  | 0.5207          | 0.2732    | 0.2951 | 0.2837 | 0.8612   |
| No log        | 29.0  | 116  | 0.5144          | 0.3027    | 0.3209 | 0.3115 | 0.8630   |
| No log        | 30.0  | 120  | 0.5135          | 0.3073    | 0.3381 | 0.3220 | 0.8648   |
| No log        | 31.0  | 124  | 0.5147          | 0.2953    | 0.3266 | 0.3102 | 0.8651   |
| No log        | 32.0  | 128  | 0.5121          | 0.2937    | 0.3181 | 0.3054 | 0.8645   |
| No log        | 33.0  | 132  | 0.5092          | 0.3061    | 0.3324 | 0.3187 | 0.8645   |
| No log        | 34.0  | 136  | 0.5064          | 0.3342    | 0.3696 | 0.3510 | 0.8677   |
| No log        | 35.0  | 140  | 0.5056          | 0.3191    | 0.3438 | 0.3310 | 0.8674   |
| No log        | 36.0  | 144  | 0.5091          | 0.3023    | 0.3352 | 0.3179 | 0.8661   |
| No log        | 37.0  | 148  | 0.5104          | 0.3061    | 0.3324 | 0.3187 | 0.8658   |
| No log        | 38.0  | 152  | 0.5100          | 0.3152    | 0.3324 | 0.3236 | 0.8677   |
| No log        | 39.0  | 156  | 0.5102          | 0.3243    | 0.3410 | 0.3324 | 0.8684   |
| No log        | 40.0  | 160  | 0.5103          | 0.3251    | 0.3410 | 0.3329 | 0.8684   |
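
The precision/recall/F1/accuracy columns are the entity-level metrics that token-classification training scripts typically compute with `seqeval`. A sketch of such a `compute_metrics` function, assuming an `id2label` mapping from the model config, looks like this (the author's exact script is not shown in this card):

```python
import numpy as np
import evaluate  # or datasets.load_metric("seqeval") on older stacks

seqeval = evaluate.load("seqeval")

def compute_metrics(eval_pred, id2label):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # Drop special/padded positions, which are labeled -100.
    true_labels = [
        [id2label[l] for l in label_row if l != -100]
        for label_row in labels
    ]
    true_preds = [
        [id2label[p] for p, l in zip(pred_row, label_row) if l != -100]
        for pred_row, label_row in zip(predictions, labels)
    ]
    results = seqeval.compute(predictions=true_preds, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```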

## Framework versions

- Transformers 4.32.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3