---
tags:
  - generated_from_trainer
model-index:
  - name: layoutlm-doclaynet-test
    results: []
---

layoutlm-doclaynet-test

This model is a fine-tuned version of microsoft/layoutlm-base-uncased. The training dataset is not specified in the card, although the model name suggests DocLayNet. It achieves the following results on the evaluation set (an inference sketch follows the list):

  • Loss: 0.3029
  • Footer: precision 0.7619, recall 0.7960, F1 0.7786 (201 instances)
  • Header: precision 0.7632, recall 0.6988, F1 0.7296 (83 instances)
  • Table: precision 0.5694, recall 0.7532, F1 0.6485 (158 instances)
  • Caption: precision 0.2857, recall 0.2687, F1 0.2769 (67 instances)
  • Text: precision 0.6099, recall 0.6810, F1 0.6435 (326 instances)
  • Picture: precision 0.1806, recall 0.2000, F1 0.1898 (65 instances)
  • Title: precision 0.0, recall 0.0, F1 0.0 (3 instances)
  • Footnote: precision 0.0, recall 0.0, F1 0.0 (4 instances)
  • Overall Precision: 0.5930
  • Overall Recall: 0.6505
  • Overall F1: 0.6204
  • Overall Accuracy: 0.9197
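
The card ships no usage example, so the following is a minimal inference sketch, not the author's code. The Hub id pmorelr/layoutlm-doclaynet-test, the OCR words, and the bounding boxes are assumptions for illustration; LayoutLM expects word-level boxes normalized to a 0-1000 coordinate space, typically produced by an external OCR step.

```python
import torch
from transformers import LayoutLMTokenizerFast, LayoutLMForTokenClassification

model_id = "pmorelr/layoutlm-doclaynet-test"  # assumed Hub id; adjust to the actual repository
tokenizer = LayoutLMTokenizerFast.from_pretrained("microsoft/layoutlm-base-uncased")
model = LayoutLMForTokenClassification.from_pretrained(model_id)

# Words and boxes would normally come from an OCR step; these are placeholder values.
words = ["Invoice", "Total", "123.45"]
boxes = [[48, 40, 190, 78], [60, 700, 140, 730], [150, 700, 230, 730]]  # normalized to 0-1000

encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt", truncation=True)

# LayoutLM needs one box per token, so repeat each word's box for all of its sub-tokens
# and use a zero box for the special tokens ([CLS]/[SEP]).
word_ids = encoding.word_ids()
encoding["bbox"] = torch.tensor(
    [[boxes[i] if i is not None else [0, 0, 0, 0] for i in word_ids]]
)

with torch.no_grad():
    logits = model(**encoding).logits

predicted_ids = logits.argmax(-1).squeeze(0).tolist()
print([(i, model.config.id2label[p]) for i, p in zip(word_ids, predicted_ids)])
```

The id2label mapping stored in the checkpoint's config determines which layout class each token receives.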

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 3e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 3
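
As a rough guide, the hyperparameters above map onto transformers.TrainingArguments as sketched below; the output directory is an assumption, and the Adam betas and epsilon listed above are the Trainer's AdamW defaults rather than extra settings.

```python
from transformers import TrainingArguments

# Minimal sketch of the reported hyperparameters; output_dir is assumed,
# and the dataset/model/Trainer wiring is omitted.
training_args = TrainingArguments(
    output_dir="layoutlm-doclaynet-test",
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```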

Training results

Per-epoch evaluation on the validation set:

Epoch 1 (step 426): training loss 0.2414, validation loss 0.1727

  • Footer: precision 0.6724, recall 0.7761, F1 0.7206 (201 instances)
  • Header: precision 0.7143, recall 0.5422, F1 0.6164 (83 instances)
  • Table: precision 0.5069, recall 0.6962, F1 0.5867 (158 instances)
  • Caption: precision 0.2292, recall 0.1642, F1 0.1913 (67 instances)
  • Text: precision 0.5323, recall 0.6564, F1 0.5879 (326 instances)
  • Picture: precision 0.2453, recall 0.2000, F1 0.2203 (65 instances)
  • Title: precision 0.0, recall 0.0, F1 0.0 (3 instances)
  • Footnote: precision 0.0, recall 0.0, F1 0.0 (4 instances)
  • Overall: precision 0.5409, recall 0.6053, F1 0.5713, accuracy 0.9584

Epoch 2 (step 852): training loss 0.1037, validation loss 0.1726

  • Footer: precision 0.7045, recall 0.7711, F1 0.7363 (201 instances)
  • Header: precision 0.8529, recall 0.6988, F1 0.7682 (83 instances)
  • Table: precision 0.5659, recall 0.7342, F1 0.6391 (158 instances)
  • Caption: precision 0.2533, recall 0.2836, F1 0.2676 (67 instances)
  • Text: precision 0.5640, recall 0.7025, F1 0.6257 (326 instances)
  • Picture: precision 0.1667, recall 0.1846, F1 0.1752 (65 instances)
  • Title: precision 0.0, recall 0.0, F1 0.0 (3 instances)
  • Footnote: precision 0.0, recall 0.0, F1 0.0 (4 instances)
  • Overall: precision 0.5631, recall 0.6494, F1 0.6032, accuracy 0.9510

Epoch 3 (step 1278): training loss 0.0647, validation loss 0.3029

  • Footer: precision 0.7619, recall 0.7960, F1 0.7786 (201 instances)
  • Header: precision 0.7632, recall 0.6988, F1 0.7296 (83 instances)
  • Table: precision 0.5694, recall 0.7532, F1 0.6485 (158 instances)
  • Caption: precision 0.2857, recall 0.2687, F1 0.2769 (67 instances)
  • Text: precision 0.6099, recall 0.6810, F1 0.6435 (326 instances)
  • Picture: precision 0.1806, recall 0.2000, F1 0.1898 (65 instances)
  • Title: precision 0.0, recall 0.0, F1 0.0 (3 instances)
  • Footnote: precision 0.0, recall 0.0, F1 0.0 (4 instances)
  • Overall: precision 0.5930, recall 0.6505, F1 0.6204, accuracy 0.9197
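
The per-class entries have the shape of entity-level seqeval metrics (precision, recall, F1, and support). How the card's numbers were produced is not stated; the sketch below only illustrates, under that assumption, how such metrics are commonly computed with the evaluate library's seqeval wrapper.

```python
import evaluate

# Illustrative only: toy tag sequences, not the model's actual predictions.
seqeval = evaluate.load("seqeval")
predictions = [["B-TEXT", "I-TEXT", "B-TABLE", "O"]]
references = [["B-TEXT", "I-TEXT", "B-TABLE", "B-CAPTION"]]

results = seqeval.compute(predictions=predictions, references=references)
print(results["TEXT"])        # {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}
print(results["overall_f1"])  # 0.8
```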

Framework versions

  • Transformers 4.26.1
  • Pytorch 1.12.1+cu102
  • Datasets 2.9.0
  • Tokenizers 0.13.2