---
license: mit
base_model: microsoft/layoutlm-base-uncased
tags:
  - generated_from_trainer
datasets:
  - funsd
model-index:
  - name: layoutlm-funsd
    results: []
---

# layoutlm-funsd

This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the FUNSD dataset. It achieves the following results on the evaluation set:

- Loss: 0.7403

| Entity   | Precision | Recall | F1     | Support |
|----------|-----------|--------|--------|---------|
| Answer   | 0.7300    | 0.8121 | 0.7689 | 809     |
| Header   | 0.3611    | 0.4370 | 0.3954 | 119     |
| Question | 0.7854    | 0.8282 | 0.8062 | 1065    |

- Overall Precision: 0.7342
- Overall Recall: 0.7983
- Overall F1: 0.7649
- Overall Accuracy: 0.8101

## Model description

This model is LayoutLM base (uncased) fine-tuned for token classification on FUNSD. LayoutLM jointly embeds each token's text and its 2-D position on the page (a bounding box normalized to a 0-1000 scale), which is what makes it suitable for labeling the question, answer, and header fields of scanned forms.
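
A minimal inference sketch with the `transformers` LayoutLM classes is shown below. The repository id `Muafira/layoutlm-funsd`, as well as the example words and boxes, are assumptions for illustration; in practice the words and boxes would come from an OCR step.

```python
# A minimal sketch, assuming the model is published as "Muafira/layoutlm-funsd"
# (hypothetical id) and that words + 0-1000-normalized boxes come from OCR.
import torch
from transformers import LayoutLMForTokenClassification, LayoutLMTokenizerFast

model_id = "Muafira/layoutlm-funsd"  # assumption: adjust to the actual repo id
tokenizer = LayoutLMTokenizerFast.from_pretrained(model_id)
model = LayoutLMForTokenClassification.from_pretrained(model_id)

words = ["Date:", "2024-01-01"]                   # example OCR output
boxes = [[57, 38, 134, 57], [140, 38, 260, 57]]   # one box per word, 0-1000 scale

enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
# Expand word-level boxes to token level; special tokens get a dummy box.
token_boxes = [
    boxes[idx] if idx is not None else [0, 0, 0, 0]
    for idx in enc.word_ids(batch_index=0)
]
enc["bbox"] = torch.tensor([token_boxes])

with torch.no_grad():
    logits = model(**enc).logits
preds = logits.argmax(-1).squeeze().tolist()
labels = [model.config.id2label[p] for p in preds]
print(list(zip(enc.word_ids(batch_index=0), labels)))
```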

## Intended uses & limitations

The model is intended for form understanding on scanned English documents similar to FUNSD, i.e. tagging tokens as question, answer, or header fields. As the evaluation above shows, header detection is markedly weaker (F1 ≈ 0.40) than question (≈ 0.81) or answer (≈ 0.77) detection, and no results are reported for document types outside FUNSD.

## Training and evaluation data

The model was fine-tuned and evaluated on FUNSD (Form Understanding in Noisy Scanned Documents), a benchmark of 199 scanned forms annotated with word-level bounding boxes and entity labels. The metrics above are computed on its evaluation split; a loading sketch follows.
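
A sketch for loading the data with `datasets`; the hub id `nielsr/funsd` is an assumption (one commonly used copy of FUNSD), and the feature names may differ between copies.

```python
# A minimal sketch; "nielsr/funsd" is an assumed hub copy of FUNSD.
from datasets import load_dataset

ds = load_dataset("nielsr/funsd", trust_remote_code=True)  # script-based dataset
print(ds["train"].features)      # inspect the actual feature names

example = ds["train"][0]
# Feature names below are assumptions based on common FUNSD copies.
print(example["words"][:5])      # OCR words
print(example["bboxes"][:5])     # 0-1000-normalized word boxes
print(example["ner_tags"][:5])   # integer BIO labels (question/answer/header/other)
```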

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
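
As a rough sketch, these settings map onto a `transformers` `TrainingArguments` configuration like the one below; the output directory is illustrative, and the explicit Adam arguments shown are the library defaults, matching the values above.

```python
# A rough reconstruction from the hyperparameters above; output_dir is an
# illustrative assumption, not a value recorded in this card.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="layoutlm-funsd",      # assumption
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=16,
    # Adam betas/epsilon are the transformers defaults, matching the card.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```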

### Training results

Per-entity cells give precision / recall / F1, rounded to four decimal places; support is constant across epochs (Answer 809, Header 119, Question 1065).

| Training Loss | Epoch | Step | Validation Loss | Answer (P / R / F1) | Header (P / R / F1) | Question (P / R / F1) | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|---|---|---|---|---|---|---|---|---|---|---|
| 1.3197 | 1.0 | 10 | 1.0997 | 0.3419 / 0.3288 / 0.3352 | 0.0000 / 0.0000 / 0.0000 | 0.5647 / 0.6188 / 0.5905 | 0.4756 | 0.4641 | 0.4698 | 0.6432 |
| 0.9556 | 2.0 | 20 | 0.8488 | 0.5481 / 0.6403 / 0.5906 | 0.0385 / 0.0084 / 0.0138 | 0.6640 / 0.6901 / 0.6768 | 0.6035 | 0.6292 | 0.6161 | 0.7343 |
| 0.7263 | 3.0 | 30 | 0.7385 | 0.6454 / 0.7627 / 0.6992 | 0.1132 / 0.0504 / 0.0698 | 0.7092 / 0.7671 / 0.7370 | 0.6664 | 0.7225 | 0.6933 | 0.7743 |
| 0.5842 | 4.0 | 40 | 0.6892 | 0.6642 / 0.7923 / 0.7227 | 0.2169 / 0.1513 / 0.1782 | 0.7226 / 0.7925 / 0.7559 | 0.6782 | 0.7541 | 0.7142 | 0.7964 |
| 0.4945 | 5.0 | 50 | 0.6673 | 0.6974 / 0.7750 / 0.7342 | 0.3034 / 0.2269 / 0.2596 | 0.7409 / 0.8376 / 0.7862 | 0.7053 | 0.7757 | 0.7388 | 0.8033 |
| 0.4343 | 6.0 | 60 | 0.6592 | 0.6963 / 0.8133 / 0.7503 | 0.2941 / 0.2521 / 0.2715 | 0.7504 / 0.8441 / 0.7945 | 0.7069 | 0.7963 | 0.7489 | 0.8077 |
| 0.3681 | 7.0 | 70 | 0.6624 | 0.7050 / 0.8035 / 0.7510 | 0.3016 / 0.3193 / 0.3102 | 0.7660 / 0.8329 / 0.7980 | 0.7140 | 0.7903 | 0.7502 | 0.8090 |
| 0.3312 | 8.0 | 80 | 0.6825 | 0.7098 / 0.8072 / 0.7553 | 0.3214 / 0.3782 / 0.3475 | 0.7703 / 0.8282 / 0.7982 | 0.7166 | 0.7928 | 0.7527 | 0.8078 |
| 0.2955 | 9.0 | 90 | 0.7009 | 0.7141 / 0.8183 / 0.7627 | 0.3493 / 0.4286 / 0.3849 | 0.7753 / 0.8197 / 0.7969 | 0.7212 | 0.7958 | 0.7567 | 0.8034 |
| 0.2888 | 10.0 | 100 | 0.6894 | 0.7126 / 0.8121 / 0.7591 | 0.3727 / 0.3445 / 0.3581 | 0.7918 / 0.8319 / 0.8114 | 0.7364 | 0.7948 | 0.7645 | 0.8140 |
| 0.2482 | 11.0 | 110 | 0.7131 | 0.7192 / 0.8294 / 0.7704 | 0.3000 / 0.4034 / 0.3441 | 0.7844 / 0.8300 / 0.8066 | 0.7221 | 0.8043 | 0.7610 | 0.8084 |
| 0.2297 | 12.0 | 120 | 0.7189 | 0.7373 / 0.8084 / 0.7712 | 0.3485 / 0.3866 / 0.3665 | 0.7730 / 0.8347 / 0.8027 | 0.7326 | 0.7973 | 0.7636 | 0.8125 |
| 0.2168 | 13.0 | 130 | 0.7283 | 0.7240 / 0.8171 / 0.7677 | 0.3379 / 0.4118 / 0.3712 | 0.7878 / 0.8263 / 0.8066 | 0.7310 | 0.7978 | 0.7630 | 0.8099 |
| 0.2011 | 14.0 | 140 | 0.7318 | 0.7339 / 0.8146 / 0.7721 | 0.3493 / 0.4286 / 0.3849 | 0.7834 / 0.8319 / 0.8069 | 0.7338 | 0.8008 | 0.7658 | 0.8112 |
| 0.1948 | 15.0 | 150 | 0.7391 | 0.7217 / 0.8109 / 0.7637 | 0.3562 / 0.4370 / 0.3925 | 0.7848 / 0.8254 / 0.8046 | 0.7297 | 0.7963 | 0.7615 | 0.8076 |
| 0.1955 | 16.0 | 160 | 0.7403 | 0.7300 / 0.8121 / 0.7689 | 0.3611 / 0.4370 / 0.3954 | 0.7854 / 0.8282 / 0.8062 | 0.7342 | 0.7983 | 0.7649 | 0.8101 |
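
The overall precision, recall, and F1 above are entity-level scores of the kind computed by seqeval over BIO-tagged sequences. A minimal sketch, with illustrative toy sequences and FUNSD-style label names:

```python
from seqeval.metrics import classification_report, f1_score

# Toy gold and predicted tag sequences (illustrative only); FUNSD-style
# BIO labels for question / answer / header entities.
y_true = [["B-QUESTION", "I-QUESTION", "B-ANSWER", "O"]]
y_pred = [["B-QUESTION", "I-QUESTION", "B-ANSWER", "B-HEADER"]]

print(f1_score(y_true, y_pred))               # entity-level micro F1
print(classification_report(y_true, y_pred))  # per-entity P / R / F1 / support
```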

### Framework versions

- Transformers 4.42.4
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1