---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
  - generated_from_trainer
datasets:
  - wild
metrics:
  - precision
  - recall
  - f1
  - accuracy
model-index:
  - name: P.E.R.S_WILD
    results:
      - task:
          name: Token Classification
          type: token-classification
        dataset:
          name: wild
          type: wild
          config: WildReceipt
          split: test
          args: WildReceipt
        metrics:
          - name: Precision
            type: precision
            value: 0.8621359223300971
          - name: Recall
            type: recall
            value: 0.8556090846524432
          - name: F1
            type: f1
            value: 0.8588601036269431
          - name: Accuracy
            type: accuracy
            value: 0.9165934548649243
---

# P.E.R.S_WILD

This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the wild dataset (WildReceipt config). It achieves the following results on the evaluation set:

- Loss: 0.3319
- Precision: 0.8621
- Recall: 0.8556
- F1: 0.8589
- Accuracy: 0.9166
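As a sanity check, the reported F1 is the harmonic mean of the precision and recall above; a minimal sketch using the exact values from the model-index metadata:

```python
# Reported evaluation metrics from the model-index metadata.
precision = 0.8621359223300971
recall = 0.8556090846524432

# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # → 0.8589, matching the reported F1
```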

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 4000
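With a linear scheduler and no warmup steps listed, the learning rate decays linearly from 1e-05 to zero over the 4000 training steps (the zero-warmup assumption is ours, since no warmup value appears above); a minimal sketch of that schedule:

```python
def linear_lr(step, base_lr=1e-05, total_steps=4000):
    """Linearly decay base_lr to 0 over total_steps, assuming no warmup."""
    return base_lr * max(0.0, (total_steps - step) / total_steps)

print(linear_lr(0))     # 1e-05 at the start of training
print(linear_lr(2000))  # 5e-06 halfway through
print(linear_lr(4000))  # 0.0 at the final step
```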

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 0.0789 | 100  | 1.7570          | 0.4014    | 0.0807 | 0.1343 | 0.5202   |
| No log        | 0.1579 | 200  | 1.2158          | 0.5444    | 0.3515 | 0.4272 | 0.6895   |
| No log        | 0.2368 | 300  | 0.9862          | 0.6676    | 0.4763 | 0.5559 | 0.7534   |
| No log        | 0.3157 | 400  | 0.8539          | 0.6740    | 0.5898 | 0.6291 | 0.7883   |
| 1.3192        | 0.3946 | 500  | 0.7150          | 0.7489    | 0.6325 | 0.6858 | 0.8208   |
| 1.3192        | 0.4736 | 600  | 0.6731          | 0.7487    | 0.6716 | 0.7080 | 0.8288   |
| 1.3192        | 0.5525 | 700  | 0.6345          | 0.7588    | 0.6848 | 0.7199 | 0.8357   |
| 1.3192        | 0.6314 | 800  | 0.5903          | 0.7671    | 0.7181 | 0.7418 | 0.8472   |
| 1.3192        | 0.7103 | 900  | 0.5273          | 0.7743    | 0.7718 | 0.7731 | 0.8690   |
| 0.7013        | 0.7893 | 1000 | 0.4923          | 0.7939    | 0.7555 | 0.7742 | 0.8689   |
| 0.7013        | 0.8682 | 1100 | 0.4811          | 0.8147    | 0.7619 | 0.7874 | 0.8742   |
| 0.7013        | 0.9471 | 1200 | 0.4694          | 0.8006    | 0.7985 | 0.7995 | 0.8812   |
| 0.7013        | 1.0260 | 1300 | 0.4429          | 0.8246    | 0.8058 | 0.8151 | 0.8866   |
| 0.7013        | 1.1050 | 1400 | 0.4302          | 0.8135    | 0.8051 | 0.8093 | 0.8863   |
| 0.4844        | 1.1839 | 1500 | 0.4364          | 0.7964    | 0.8245 | 0.8102 | 0.8875   |
| 0.4844        | 1.2628 | 1600 | 0.4445          | 0.8012    | 0.8299 | 0.8153 | 0.8857   |
| 0.4844        | 1.3418 | 1700 | 0.4021          | 0.8175    | 0.8244 | 0.8209 | 0.8918   |
| 0.4844        | 1.4207 | 1800 | 0.3886          | 0.8290    | 0.8193 | 0.8241 | 0.8958   |
| 0.4844        | 1.4996 | 1900 | 0.3708          | 0.8271    | 0.8372 | 0.8321 | 0.9000   |
| 0.411         | 1.5785 | 2000 | 0.3910          | 0.8356    | 0.8310 | 0.8333 | 0.8996   |
| 0.411         | 1.6575 | 2100 | 0.3550          | 0.8419    | 0.8399 | 0.8409 | 0.9069   |
| 0.411         | 1.7364 | 2200 | 0.3499          | 0.8374    | 0.8451 | 0.8413 | 0.9066   |
| 0.411         | 1.8153 | 2300 | 0.3532          | 0.8301    | 0.8512 | 0.8405 | 0.9050   |
| 0.411         | 1.8942 | 2400 | 0.3763          | 0.8285    | 0.8471 | 0.8377 | 0.9018   |
| 0.3641        | 1.9732 | 2500 | 0.3508          | 0.8529    | 0.8410 | 0.8469 | 0.9067   |
| 0.3641        | 2.0521 | 2600 | 0.3616          | 0.8507    | 0.8384 | 0.8445 | 0.9083   |
| 0.3641        | 2.1310 | 2700 | 0.3705          | 0.8485    | 0.8511 | 0.8498 | 0.9086   |
| 0.3641        | 2.2099 | 2800 | 0.3527          | 0.8436    | 0.8562 | 0.8498 | 0.9118   |
| 0.3641        | 2.2889 | 2900 | 0.3383          | 0.8658    | 0.8475 | 0.8566 | 0.9135   |
| 0.2824        | 2.3678 | 3000 | 0.3395          | 0.8527    | 0.8523 | 0.8525 | 0.9124   |
| 0.2824        | 2.4467 | 3100 | 0.3364          | 0.8622    | 0.8478 | 0.8549 | 0.9140   |
| 0.2824        | 2.5257 | 3200 | 0.3383          | 0.8431    | 0.8619 | 0.8524 | 0.9125   |
| 0.2824        | 2.6046 | 3300 | 0.3377          | 0.8530    | 0.8586 | 0.8558 | 0.9132   |
| 0.2824        | 2.6835 | 3400 | 0.3389          | 0.8481    | 0.8629 | 0.8554 | 0.9135   |
| 0.2928        | 2.7624 | 3500 | 0.3319          | 0.8621    | 0.8556 | 0.8589 | 0.9166   |
| 0.2928        | 2.8414 | 3600 | 0.3341          | 0.8555    | 0.8575 | 0.8565 | 0.9153   |
| 0.2928        | 2.9203 | 3700 | 0.3341          | 0.8536    | 0.8603 | 0.8569 | 0.9153   |
| 0.2928        | 2.9992 | 3800 | 0.3305          | 0.8556    | 0.8636 | 0.8596 | 0.9167   |
| 0.2928        | 3.0781 | 3900 | 0.3313          | 0.8579    | 0.8613 | 0.8596 | 0.9166   |
| 0.2487        | 3.1571 | 4000 | 0.3326          | 0.8550    | 0.8604 | 0.8577 | 0.9160   |
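A side note on the Epoch and Step columns: with train_batch_size 1, each step processes one example, so the step-to-epoch ratio implies roughly 1267 training samples (this is our inference from the table, not a stated dataset size); a quick check:

```python
# Step 500 corresponds to epoch 0.3946 in the table above.
steps, epoch = 500, 0.3946

# At batch size 1, steps per epoch approximates the training-set size.
samples_per_epoch = steps / epoch
print(round(samples_per_epoch))  # → 1267
```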

### Framework versions

- Transformers 4.42.0.dev0
- Pytorch 2.3.1+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1