
# lilt-en-funsd

This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on the funsd-layoutlmv3 dataset. It achieves the following results on the evaluation set:

- Loss: 1.7566
- Overall Precision: 0.8781
- Overall Recall: 0.8947
- Overall F1: 0.8863
- Overall Accuracy: 0.7939

Per-label results on the evaluation set:

| Label    | Precision | Recall | F1     | Support |
|:---------|----------:|-------:|-------:|--------:|
| Answer   | 0.8819    | 0.9229 | 0.9019 | 817     |
| Header   | 0.6598    | 0.5378 | 0.5926 | 119     |
| Question | 0.8944    | 0.9127 | 0.9035 | 1077    |
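
These per-label scores have the shape produced by the seqeval library, the usual entity-level metric for FUNSD-style token classification (an assumption; the card does not name its metric code). A minimal sketch of how such numbers are computed, with illustrative BIO tag sequences:

```python
from seqeval.metrics import classification_report

# Illustrative BIO-tagged sequences; the real evaluation runs over the
# FUNSD evaluation split. An entity counts as correct only if both its
# boundary and its label match exactly.
y_true = [["B-QUESTION", "I-QUESTION", "B-ANSWER", "I-ANSWER", "O", "B-HEADER"]]
y_pred = [["B-QUESTION", "I-QUESTION", "B-ANSWER", "O", "O", "B-HEADER"]]

# Prints precision/recall/F1 and support per label, in the same
# Answer/Header/Question breakdown as above.
print(classification_report(y_true, y_pred))
```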

## Model description

More information needed

## Intended uses & limitations

More information needed
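
Pending official usage notes, here is a minimal inference sketch for labeling OCR'd form tokens. This is a sketch under stated assumptions: the Hub repo id, the example words, and the boxes are placeholders, and boxes must be normalized to the 0-1000 coordinate range LiLT expects.

```python
import torch
from transformers import AutoTokenizer, LiltForTokenClassification

model_id = "lilt-en-funsd"  # placeholder: substitute the actual Hub repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = LiltForTokenClassification.from_pretrained(model_id)

# Words and 0-1000-normalized boxes, e.g. from an OCR engine (illustrative).
words = ["DATE:", "March", "4,", "1998"]
word_boxes = [[60, 50, 140, 70], [150, 50, 220, 70],
              [228, 50, 255, 70], [262, 50, 320, 70]]

# Tokenize word by word so every subword token inherits its word's box.
input_ids = [tokenizer.cls_token_id]
bbox = [[0, 0, 0, 0]]
for word, box in zip(words, word_boxes):
    ids = tokenizer(word, add_special_tokens=False)["input_ids"]
    input_ids += ids
    bbox += [box] * len(ids)
input_ids.append(tokenizer.sep_token_id)
bbox.append([0, 0, 0, 0])

with torch.no_grad():
    outputs = model(input_ids=torch.tensor([input_ids]),
                    bbox=torch.tensor([bbox]))

# Map the highest-scoring class id of each token to its label string.
predicted = [model.config.id2label[i] for i in outputs.logits.argmax(-1)[0].tolist()]
print(list(zip(tokenizer.convert_ids_to_tokens(input_ids), predicted)))
```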

## Training and evaluation data

More information needed
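
The card names the funsd-layoutlmv3 dataset, which appears to be the FUNSD preprocessing published on the Hugging Face Hub for LayoutLMv3-style models. A one-line loading sketch, assuming the Hub id nielsr/funsd-layoutlmv3:

```python
from datasets import load_dataset

# Assumed Hub id for the "funsd-layoutlmv3" dataset named above.
dataset = load_dataset("nielsr/funsd-layoutlmv3")
print(dataset)  # expected: train/test splits of words, boxes, and NER tags
```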

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
- mixed_precision_training: Native AMP
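
For reproduction, these values map onto transformers TrainingArguments roughly as follows. This is a sketch, not the author's script: output_dir is a placeholder, the 200-step evaluation cadence is inferred from the results table below, and the listed Adam settings are the Trainer defaults.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="lilt-en-funsd",      # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=2500,
    fp16=True,                       # "Native AMP" mixed precision
    evaluation_strategy="steps",     # results below log eval every 200 steps
    eval_steps=200,
)
```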

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Answer P / R / F1 (n=817) | Header P / R / F1 (n=119) | Question P / R / F1 (n=1077) | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|--------------:|-------:|-----:|----------------:|:--------------------------|:--------------------------|:-----------------------------|------------------:|---------------:|-----------:|-----------------:|
| 0.4354        | 10.53  | 200  | 1.0094          | 0.8220 / 0.8874 / 0.8534  | 0.5618 / 0.4202 / 0.4808  | 0.8494 / 0.9424 / 0.8935     | 0.8264            | 0.8892         | 0.8567     | 0.7972           |
| 0.0503        | 21.05  | 400  | 1.2949          | 0.8544 / 0.9119 / 0.8822  | 0.5659 / 0.6134 / 0.5887  | 0.9066 / 0.8654 / 0.8855     | 0.8625            | 0.8693         | 0.8659     | 0.8117           |
| 0.0143        | 31.58  | 600  | 1.3527          | 0.8726 / 0.8972 / 0.8847  | 0.6667 / 0.5714 / 0.6154  | 0.8534 / 0.9294 / 0.8898     | 0.8520            | 0.8952         | 0.8731     | 0.8116           |
| 0.0064        | 42.11  | 800  | 1.6567          | 0.8483 / 0.9106 / 0.8784  | 0.5565 / 0.5798 / 0.5679  | 0.8950 / 0.8942 / 0.8946     | 0.8551            | 0.8823         | 0.8685     | 0.7982           |
| 0.0051        | 52.63  | 1000 | 1.6856          | 0.8542 / 0.9180 / 0.8850  | 0.6600 / 0.5546 / 0.6027  | 0.9025 / 0.9025 / 0.9025     | 0.8701            | 0.8882         | 0.8791     | 0.7925           |
| 0.0029        | 63.16  | 1200 | 1.5031          | 0.8860 / 0.8849 / 0.8855  | 0.6148 / 0.6303 / 0.6224  | 0.8725 / 0.9276 / 0.8992     | 0.8627            | 0.8927         | 0.8774     | 0.8117           |
| 0.0015        | 73.68  | 1400 | 1.6708          | 0.8721 / 0.9094 / 0.8904  | 0.5287 / 0.6975 / 0.6014  | 0.8897 / 0.8914 / 0.8905     | 0.8554            | 0.8872         | 0.8710     | 0.7958           |
| 0.0012        | 84.21  | 1600 | 1.7566          | 0.8819 / 0.9229 / 0.9019  | 0.6598 / 0.5378 / 0.5926  | 0.8944 / 0.9127 / 0.9035     | 0.8781            | 0.8947         | 0.8863     | 0.7939           |
| 0.0006        | 94.74  | 1800 | 1.8482          | 0.8781 / 0.8996 / 0.8888  | 0.5862 / 0.5714 / 0.5787  | 0.8950 / 0.8942 / 0.8946     | 0.8704            | 0.8773         | 0.8738     | 0.7913           |
| 0.0006        | 105.26 | 2000 | 1.7763          | 0.8747 / 0.9143 / 0.8941  | 0.6095 / 0.5378 / 0.5714  | 0.8859 / 0.9016 / 0.8937     | 0.8672            | 0.8852         | 0.8761     | 0.7964           |
| 0.0003        | 115.79 | 2200 | 1.9186          | 0.8814 / 0.9278 / 0.9040  | 0.6190 / 0.5462 / 0.5804  | 0.9025 / 0.8942 / 0.8983     | 0.8789            | 0.8872         | 0.8831     | 0.7971           |
| 0.0002        | 126.32 | 2400 | 1.8948          | 0.8780 / 0.9253 / 0.9011  | 0.6262 / 0.5630 / 0.5929  | 0.9033 / 0.9025 / 0.9029     | 0.8782            | 0.8917         | 0.8849     | 0.7978           |

The evaluation results reported at the top of this card correspond to the step-1600 checkpoint (validation loss 1.7566), not the final training step.

### Framework versions

- Transformers 4.27.3
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2