
lilt-en-funsd

This model is a fine-tuned version of SCUT-DLVCLab/lilt-roberta-en-base on the funsd-layoutlmv3 dataset. It achieves the following results on the evaluation set (a minimal inference sketch follows the metrics below):

  • Loss: 1.8649
  • Answer: precision 0.8747, recall 0.9143, F1 0.8941 (support: 817)
  • Header: precision 0.5859, recall 0.6303, F1 0.6073 (support: 119)
  • Question: precision 0.9067, recall 0.9109, F1 0.9088 (support: 1077)
  • Overall Precision: 0.8735
  • Overall Recall: 0.8957
  • Overall F1: 0.8845
  • Overall Accuracy: 0.8017
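
The snippet below is a minimal inference sketch for this checkpoint, not an official usage recipe: the checkpoint path and the example words/boxes are placeholders, and it assumes OCR words with word-level bounding boxes normalized to a 0-1000 scale, which the checkpoint's fast tokenizer aligns to subword tokens.

```python
# A minimal inference sketch, assuming the fine-tuned checkpoint has been
# saved locally or pushed to the Hub; "path/to/lilt-en-funsd" is a placeholder.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

checkpoint = "path/to/lilt-en-funsd"  # placeholder: substitute the real path or repo id
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForTokenClassification.from_pretrained(checkpoint)

# OCR words and their word-level bounding boxes, normalized to a 0-1000 scale
# (these example values are made up).
words = ["NAME:", "John", "Doe"]
boxes = [[57, 52, 160, 70], [170, 52, 230, 70], [238, 52, 300, 70]]

encoding = tokenizer(words, boxes=boxes, return_tensors="pt")
with torch.no_grad():
    logits = model(**encoding).logits

# Map each token's highest-scoring class id to its label name.
pred_ids = logits.argmax(-1).squeeze().tolist()
tokens = tokenizer.convert_ids_to_tokens(encoding["input_ids"].squeeze().tolist())
for token, pred in zip(tokens, pred_ids):
    print(token, model.config.id2label[pred])
```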

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a Trainer configuration sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • training_steps: 2500
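
As a reproduction aid, here is a hedged sketch of how these settings map onto TrainingArguments in transformers 4.31. The model and dataset objects are placeholders, the Adam settings listed above are the defaults of the AdamW optimizer that Trainer uses, and the 200-step evaluation interval is inferred from the results table below.

```python
# A hedged training-configuration sketch matching the hyperparameters above.
# `model`, `train_dataset`, and `eval_dataset` are placeholders.
from transformers import Trainer, TrainingArguments

args = TrainingArguments(
    output_dir="lilt-en-funsd",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    max_steps=2500,
    lr_scheduler_type="linear",
    # The Adam betas/epsilon listed above are Trainer's AdamW defaults
    # (adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8).
    evaluation_strategy="steps",
    eval_steps=200,  # assumption: inferred from the 200-step rows in the results table
)

trainer = Trainer(
    model=model,                  # placeholder: a LiltForTokenClassification instance
    args=args,
    train_dataset=train_dataset,  # placeholder: tokenized funsd-layoutlmv3 train split
    eval_dataset=eval_dataset,    # placeholder: tokenized evaluation split
)
trainer.train()
```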

Training results

Per-entity cells list precision / recall / F1, rounded to four decimal places; support is constant across evaluations (Answer: 817, Header: 119, Question: 1077).

| Training Loss | Epoch  | Step | Validation Loss | Answer (P / R / F1)      | Header (P / R / F1)      | Question (P / R / F1)    | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|---------------|--------|------|-----------------|--------------------------|--------------------------|--------------------------|-------------------|----------------|------------|------------------|
| 0.4135        | 10.53  | 200  | 1.0232          | 0.8318 / 0.8715 / 0.8512 | 0.5126 / 0.5126 / 0.5126 | 0.8781 / 0.9099 / 0.8938 | 0.8384            | 0.8708         | 0.8543     | 0.7797           |
| 0.0419        | 21.05  | 400  | 1.2118          | 0.8428 / 0.8923 / 0.8668 | 0.5268 / 0.4958 / 0.5108 | 0.8787 / 0.9016 / 0.8900 | 0.8449            | 0.8738         | 0.8591     | 0.7884           |
| 0.0118        | 31.58  | 600  | 1.5526          | 0.8195 / 0.9168 / 0.8654 | 0.6162 / 0.5126 / 0.5596 | 0.8936 / 0.8886 / 0.8911 | 0.8479            | 0.8778         | 0.8626     | 0.7864           |
| 0.0062        | 42.11  | 800  | 1.6956          | 0.8352 / 0.9180 / 0.8746 | 0.5276 / 0.5630 / 0.5447 | 0.9170 / 0.8821 / 0.8992 | 0.8574            | 0.8778         | 0.8675     | 0.7970           |
| 0.0034        | 52.63  | 1000 | 1.6288          | 0.8627 / 0.9155 / 0.8884 | 0.5664 / 0.5378 / 0.5517 | 0.8979 / 0.9062 / 0.9020 | 0.8650            | 0.8882         | 0.8765     | 0.8003           |
| 0.0021        | 63.16  | 1200 | 1.5524          | 0.8740 / 0.9082 / 0.8908 | 0.5537 / 0.5630 / 0.5583 | 0.8787 / 0.9285 / 0.9029 | 0.8582            | 0.8987         | 0.8779     | 0.8139           |
| 0.0014        | 73.68  | 1400 | 1.6580          | 0.8802 / 0.9082 / 0.8940 | 0.5537 / 0.5630 / 0.5583 | 0.8856 / 0.9201 / 0.9026 | 0.8641            | 0.8942         | 0.8789     | 0.8049           |
| 0.0011        | 84.21  | 1600 | 1.6894          | 0.8884 / 0.9058 / 0.8970 | 0.5888 / 0.5294 / 0.5575 | 0.8970 / 0.9136 / 0.9052 | 0.8773            | 0.8877         | 0.8825     | 0.8052           |
| 0.0008        | 94.74  | 1800 | 1.8811          | 0.8722 / 0.9106 / 0.8910 | 0.5522 / 0.6218 / 0.5850 | 0.9012 / 0.9062 / 0.9037 | 0.8667            | 0.8912         | 0.8788     | 0.7898           |
| 0.0003        | 105.26 | 2000 | 1.8570          | 0.8578 / 0.9155 / 0.8857 | 0.6702 / 0.5294 / 0.5915 | 0.9064 / 0.9174 / 0.9119 | 0.8750            | 0.8937         | 0.8842     | 0.8074           |
| 0.0004        | 115.79 | 2200 | 1.8481          | 0.8578 / 0.9155 / 0.8857 | 0.6195 / 0.5882 / 0.6034 | 0.9064 / 0.9081 / 0.9072 | 0.8702            | 0.8922         | 0.8810     | 0.8029           |
| 0.0002        | 126.32 | 2400 | 1.8649          | 0.8747 / 0.9143 / 0.8941 | 0.5859 / 0.6303 / 0.6073 | 0.9067 / 0.9109 / 0.9088 | 0.8735            | 0.8957         | 0.8845     | 0.8017           |
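
Entity-level precision/recall/F1 in this format is conventionally computed with the seqeval library over IOB-tagged predictions. The card does not state the exact evaluation code, so the following is only a minimal illustration of that convention.

```python
# A minimal illustration (an assumption, not this card's actual evaluation
# script) of computing entity-level precision/recall/F1 with seqeval on
# FUNSD-style IOB tags.
from seqeval.metrics import classification_report

y_true = [["B-QUESTION", "I-QUESTION", "B-ANSWER", "O", "B-HEADER"]]
y_pred = [["B-QUESTION", "I-QUESTION", "B-ANSWER", "B-HEADER", "B-HEADER"]]

# Prints per-entity (ANSWER, HEADER, QUESTION) precision, recall, F1, support.
print(classification_report(y_true, y_pred))
```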

Framework versions

  • Transformers 4.31.0
  • Pytorch 2.0.1+cu118
  • Datasets 2.13.1
  • Tokenizers 0.13.3