---
license: mit
tags:
  - generated_from_trainer
metrics:
  - precision
  - recall
  - f1
  - accuracy
model-index:
  - name: req_mod_ner_modelv2
    results: []
language:
  - nl
widget:
  - text: >-
      De Oplossing ondersteunt het zoeken op de metadata van zaken, documenten
      en objecten en op gegevens uit de basisregistraties die gekoppeld zijn aan
      een zaak.
  - text: >-
      De Oplossing ondersteunt parafering en het plaatsen van een
      gecertificeerde elektronische handtekening.
  - text: >-
      De Aangeboden oplossing stelt de medewerker in staat een zaak te
      registreren.
  - text: >-
      Het Financieel systeem heeft functionaliteit om een
      debiteurenadministratie te voeren.
---

# req_mod_ner_modelv2

This model is a fine-tuned version of [pdelobelle/robbert-v2-dutch-ner](https://huggingface.co/pdelobelle/robbert-v2-dutch-ner) on an unspecified dataset (recorded as `None` in the training run). It achieves the following results on the evaluation set:

- Loss: 0.6964
- Precision: 0.544
- Recall: 0.5862
- F1: 0.5643
- Accuracy: 0.9153
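
The widget examples in the metadata above can be reproduced locally with the `transformers` token-classification pipeline. A minimal sketch, assuming the model is published under the repository id `denizspynk/req_mod_ner_modelv2` (inferred from this card, not confirmed):

```python
# Minimal inference sketch; the repository id below is an assumption based on this card.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="denizspynk/req_mod_ner_modelv2",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)

text = "De Aangeboden oplossing stelt de medewerker in staat een zaak te registreren."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```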

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 32
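
The settings above roughly correspond to the following `Trainer` configuration. This is a hedged sketch for orientation only: the dataset, tokenization, and label set are not included with this card, so those parts are left as placeholders, and the Adam betas/epsilon listed above are the `Trainer` defaults.

```python
# Hedged sketch of the training setup implied by the hyperparameters above;
# the actual training script is not part of this repository.
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base_model = "pdelobelle/robbert-v2-dutch-ner"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForTokenClassification.from_pretrained(
    base_model,
    # If the target label set differs from the base model's, also pass
    # num_labels=... and ignore_mismatched_sizes=True.
)

training_args = TrainingArguments(
    output_dir="req_mod_ner_modelv2",
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=32,
    evaluation_strategy="epoch",  # matches the per-epoch validation log below
)

# trainer = Trainer(
#     model=model,
#     args=training_args,
#     train_dataset=train_dataset,  # dataset not provided with this card
#     eval_dataset=eval_dataset,    # dataset not provided with this card
#     tokenizer=tokenizer,
# )
# trainer.train()
```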

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 120  | 0.6075          | 0.8095    | 0.1466 | 0.2482 | 0.8822   |
| No log        | 2.0   | 240  | 0.4917          | 0.6667    | 0.1897 | 0.2953 | 0.8878   |
| No log        | 3.0   | 360  | 0.4429          | 0.5       | 0.3362 | 0.4021 | 0.8918   |
| No log        | 4.0   | 480  | 0.4255          | 0.5       | 0.4914 | 0.4957 | 0.9007   |
| 0.507         | 5.0   | 600  | 0.4278          | 0.5085    | 0.5172 | 0.5128 | 0.9007   |
| 0.507         | 6.0   | 720  | 0.4321          | 0.5294    | 0.5431 | 0.5362 | 0.9064   |
| 0.507         | 7.0   | 840  | 0.4574          | 0.5410    | 0.5690 | 0.5546 | 0.9064   |
| 0.507         | 8.0   | 960  | 0.4720          | 0.5804    | 0.5603 | 0.5702 | 0.9096   |
| 0.1626        | 9.0   | 1080 | 0.4947          | 0.5197    | 0.5690 | 0.5432 | 0.9056   |
| 0.1626        | 10.0  | 1200 | 0.5013          | 0.5159    | 0.5603 | 0.5372 | 0.9096   |
| 0.1626        | 11.0  | 1320 | 0.5306          | 0.5271    | 0.5862 | 0.5551 | 0.9104   |
| 0.1626        | 12.0  | 1440 | 0.5450          | 0.5070    | 0.6207 | 0.5581 | 0.9112   |
| 0.0687        | 13.0  | 1560 | 0.5753          | 0.5152    | 0.5862 | 0.5484 | 0.9112   |
| 0.0687        | 14.0  | 1680 | 0.5746          | 0.5547    | 0.6121 | 0.5820 | 0.9169   |
| 0.0687        | 15.0  | 1800 | 0.5925          | 0.5328    | 0.6293 | 0.5771 | 0.9144   |
| 0.0687        | 16.0  | 1920 | 0.6200          | 0.5656    | 0.5948 | 0.5798 | 0.9144   |
| 0.0368        | 17.0  | 2040 | 0.6442          | 0.5583    | 0.5776 | 0.5678 | 0.9169   |
| 0.0368        | 18.0  | 2160 | 0.6468          | 0.5317    | 0.5776 | 0.5537 | 0.9136   |
| 0.0368        | 19.0  | 2280 | 0.6563          | 0.5403    | 0.5776 | 0.5583 | 0.9153   |
| 0.0368        | 20.0  | 2400 | 0.6683          | 0.5323    | 0.5690 | 0.5500 | 0.9104   |
| 0.0227        | 21.0  | 2520 | 0.6766          | 0.5074    | 0.5948 | 0.5476 | 0.9096   |
| 0.0227        | 22.0  | 2640 | 0.6784          | 0.4965    | 0.6121 | 0.5483 | 0.9072   |
| 0.0227        | 23.0  | 2760 | 0.6897          | 0.5583    | 0.5776 | 0.5678 | 0.9144   |
| 0.0227        | 24.0  | 2880 | 0.6858          | 0.5182    | 0.6121 | 0.5613 | 0.9112   |
| 0.0146        | 25.0  | 3000 | 0.6828          | 0.5224    | 0.6034 | 0.5600 | 0.9128   |
| 0.0146        | 26.0  | 3120 | 0.6937          | 0.5528    | 0.5862 | 0.5690 | 0.9169   |
| 0.0146        | 27.0  | 3240 | 0.6939          | 0.5397    | 0.5862 | 0.5620 | 0.9144   |
| 0.0146        | 28.0  | 3360 | 0.6934          | 0.5476    | 0.5948 | 0.5702 | 0.9169   |
| 0.0146        | 29.0  | 3480 | 0.6848          | 0.5147    | 0.6034 | 0.5556 | 0.9120   |
| 0.0132        | 30.0  | 3600 | 0.6864          | 0.5231    | 0.5862 | 0.5528 | 0.9112   |
| 0.0132        | 31.0  | 3720 | 0.6948          | 0.544     | 0.5862 | 0.5643 | 0.9161   |
| 0.0132        | 32.0  | 3840 | 0.6964          | 0.544     | 0.5862 | 0.5643 | 0.9153   |
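
The precision, recall, F1, and accuracy columns are the usual entity-level metrics for token classification. For reference, a hedged sketch of how such values are commonly computed with `seqeval` via the `evaluate` library (neither is listed in the framework versions below, and the exact evaluation code behind this table is not included):

```python
# Illustrative metric computation with seqeval; not the exact code used for this card.
# Requires: pip install evaluate seqeval
import numpy as np
import evaluate

seqeval = evaluate.load("seqeval")
label_list = ["O", "B-ENT", "I-ENT"]  # placeholder labels; the real ones live in the model config


def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)

    # Drop padding/special tokens (label id -100) before scoring.
    true_predictions = [
        [label_list[p] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]

    results = seqeval.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```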

### Framework versions

- Transformers 4.24.0
- Pytorch 2.0.0
- Datasets 2.9.0
- Tokenizers 0.11.0