
layoutlm-funsd

This model is a fine-tuned version of microsoft/layoutlm-base-uncased on the funsd dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6178
  • Answer: precision 0.6653, recall 0.7862, F1 0.7207 (809 entities)
  • Header: precision 0.2913, recall 0.3109, F1 0.3008 (119 entities)
  • Question: precision 0.7537, recall 0.8075, F1 0.7797 (1065 entities)
  • Overall Precision: 0.6893
  • Overall Recall: 0.7692
  • Overall F1: 0.7271
  • Overall Accuracy: 0.8014
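
The checkpoint can be used for token classification on form-like documents. Below is a minimal inference sketch; the repo id `layoutlm-funsd` is a placeholder for wherever this checkpoint is hosted, and the words and boxes are illustrative (LayoutLM expects word bounding boxes normalized to a 0–1000 page grid):

```python
import torch
from transformers import LayoutLMTokenizer, LayoutLMForTokenClassification

tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = LayoutLMForTokenClassification.from_pretrained("layoutlm-funsd")  # placeholder repo id

words = ["Date:", "12/05/1998"]
boxes = [[74, 63, 118, 77], [120, 63, 195, 77]]  # word boxes on a 0-1000 page grid

# Tokenize, then repeat each word's box for its sub-tokens; [CLS]/[SEP] get dummy boxes.
encoding = tokenizer(" ".join(words), return_tensors="pt")
token_boxes = [[0, 0, 0, 0]]  # [CLS]
for word, box in zip(words, boxes):
    token_boxes.extend([box] * len(tokenizer.tokenize(word)))
token_boxes.append([1000, 1000, 1000, 1000])  # [SEP]
encoding["bbox"] = torch.tensor([token_boxes])

with torch.no_grad():
    outputs = model(**encoding)
predictions = outputs.logits.argmax(-1)
predicted_labels = [model.config.id2label[int(i)] for i in predictions[0]]
```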

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
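
For reference, these hyperparameters correspond roughly to the following `TrainingArguments` setup (a minimal sketch; the output directory is a placeholder, and dataset preparation and the `Trainer` wiring are omitted):

```python
from transformers import TrainingArguments

# Sketch of a TrainingArguments configuration matching the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="layoutlm-funsd",   # placeholder output directory
    learning_rate=3e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```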

Training results

Per-entity columns report precision / recall / F1; the evaluation set contains 809 Answer, 119 Header, and 1065 Question entities.

| Training Loss | Epoch | Step | Validation Loss | Answer (P / R / F1) | Header (P / R / F1) | Question (P / R / F1) | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|---|---|---|---|---|---|---|---|---|---|---|
| 1.5284 | 1.0 | 38 | 1.0167 | 0.3938 / 0.4722 / 0.4295 | 0.0 / 0.0 / 0.0 | 0.5846 / 0.6751 / 0.6266 | 0.4959 | 0.5524 | 0.5227 | 0.6689 |
| 0.8661 | 2.0 | 76 | 0.7179 | 0.6303 / 0.7651 / 0.6912 | 0.2088 / 0.1597 / 0.1810 | 0.7059 / 0.7437 / 0.7243 | 0.6515 | 0.7175 | 0.6829 | 0.7596 |
| 0.6265 | 3.0 | 114 | 0.6470 | 0.6459 / 0.7800 / 0.7066 | 0.2973 / 0.2773 / 0.2870 | 0.7360 / 0.7878 / 0.7610 | 0.6746 | 0.7541 | 0.7122 | 0.7879 |
| 0.5076 | 4.0 | 152 | 0.6207 | 0.6681 / 0.7763 / 0.7181 | 0.2800 / 0.2941 / 0.2869 | 0.7368 / 0.8282 / 0.7798 | 0.6830 | 0.7752 | 0.7262 | 0.8003 |
| 0.4471 | 5.0 | 190 | 0.6178 | 0.6653 / 0.7862 / 0.7207 | 0.2913 / 0.3109 / 0.3008 | 0.7537 / 0.8075 / 0.7797 | 0.6893 | 0.7692 | 0.7271 | 0.8014 |
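
The per-entity and overall scores above follow the seqeval convention for token-classification evaluation. A minimal sketch of computing them with the `evaluate` library (the tag sequences here are illustrative; real evaluation decodes the FUNSD label set from the model's predictions):

```python
import evaluate

seqeval = evaluate.load("seqeval")

# Illustrative BIO-tagged sequences using the FUNSD entity types.
references = [["B-QUESTION", "I-QUESTION", "O", "B-ANSWER", "I-ANSWER"]]
predictions = [["B-QUESTION", "I-QUESTION", "O", "B-ANSWER", "O"]]

results = seqeval.compute(predictions=predictions, references=references)
# `results` holds a per-entity dict (precision, recall, f1, number) plus
# overall_precision, overall_recall, overall_f1, and overall_accuracy,
# matching the columns in the table above.
print(results)
```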

Framework versions

  • Transformers 4.26.1
  • Pytorch 1.13.1+cu116
  • Datasets 2.10.1
  • Tokenizers 0.13.2