
lilt-en-funsd

This model is a fine-tuned version of SCUT-DLVCLab/lilt-roberta-en-base on the FUNSD dataset. It achieves the following results on the evaluation set (entity-level scores are reported in the seqeval style; a short sketch of how such scores are computed follows the list):

  • Loss: 1.7516
  • Answer: precision 0.8643, recall 0.9119, F1 0.8874 (817 entities)
  • Header: precision 0.6106, recall 0.5798, F1 0.5948 (119 entities)
  • Question: precision 0.9112, recall 0.9053, F1 0.9082 (1077 entities)
  • Overall Precision: 0.8748
  • Overall Recall: 0.8887
  • Overall F1: 0.8817
  • Overall Accuracy: 0.8047
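
The per-entity dictionaries above are in the format produced by seqeval-style entity-level evaluation (as used by the evaluate library's seqeval metric), where "number" is the support. A minimal sketch of how such scores are computed; the labels and predictions here are made up purely for illustration:

```python
# Minimal sketch of seqeval-style entity-level scoring, matching the
# Answer/Header/Question breakdown above. Labels/predictions are invented.
from seqeval.metrics import classification_report

y_true = [["B-QUESTION", "I-QUESTION", "B-ANSWER", "I-ANSWER", "O"]]
y_pred = [["B-QUESTION", "I-QUESTION", "B-ANSWER", "O", "B-HEADER"]]

print(classification_report(y_true, y_pred, digits=4))
```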

Model description

LiLT (Language-Independent Layout Transformer) combines a pre-trained RoBERTa text encoder with a separate layout encoder over word bounding boxes, which makes it suited to visually rich document understanding. This checkpoint starts from SCUT-DLVCLab/lilt-roberta-en-base and adds a token-classification head that labels each word of a scanned form.

Intended uses & limitations

The model is intended for token classification on form-like documents: given the words of a scanned form and their 0-1000 normalized bounding boxes, it tags each word as part of a question, answer, or header field. It is fine-tuned on a small set of English scanned forms, so accuracy on other languages, domains, or layouts is not guaranteed.
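
A minimal inference sketch, assuming the checkpoint is loaded by its Hub id and that its tokenizer is the LayoutLMv3-style fast tokenizer shipped with the base checkpoint (which accepts word-level boxes directly); the example words and boxes are made up:

```python
import torch
from transformers import AutoTokenizer, LiltForTokenClassification

# "<this-repo-id>" stands for this model's Hub id; substitute the real one.
model_id = "<this-repo-id>"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = LiltForTokenClassification.from_pretrained(model_id)

# Made-up words and 0-1000 normalized bounding boxes from a scanned form.
words = ["HEARING", "DATE:", "03/11/91"]
boxes = [[74, 62, 152, 74], [160, 62, 208, 74], [216, 62, 290, 74]]

# Assumes a LayoutLMv3-style fast tokenizer: it takes word-level boxes and
# expands them to token level, producing input_ids, attention_mask and bbox.
encoding = tokenizer(words, boxes=boxes, return_tensors="pt")

with torch.no_grad():
    logits = model(**encoding).logits

predictions = logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[p] for p in predictions])
```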

Training and evaluation data

The model was fine-tuned and evaluated on FUNSD (Form Understanding in Noisy Scanned Documents), a dataset of 199 annotated scanned forms with word-level bounding boxes and question/answer/header/other labels.
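
A hedged sketch of loading FUNSD with word-level boxes via the datasets library; the exact dataset repo used for this fine-tune is not stated in the card, so "nielsr/funsd-layoutlmv3" is an assumption:

```python
from datasets import load_dataset

# Assumed dataset repo; the card does not state which FUNSD copy was used.
dataset = load_dataset("nielsr/funsd-layoutlmv3")

example = dataset["train"][0]
print(example["tokens"][:5])    # words
print(example["bboxes"][:5])    # 0-1000 normalized word boxes
print(example["ner_tags"][:5])  # label ids (question/answer/header/other)
```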

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a minimal TrainingArguments sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • training_steps: 2500
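
A minimal sketch of how these hyperparameters map onto transformers TrainingArguments, assuming the standard Trainer API; output_dir and the evaluation schedule are illustrative assumptions (the results table below reports evaluation every 200 steps):

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the list above; output_dir and the
# evaluation/logging schedule are assumptions, not taken from the card.
training_args = TrainingArguments(
    output_dir="lilt-en-funsd",
    max_steps=2500,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    learning_rate=5e-5,
    lr_scheduler_type="linear",
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",
    eval_steps=200,
)
```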

Training results

Entity-level columns report precision / recall / F1, rounded to four decimals; supports on the evaluation set are 817 Answer, 119 Header, and 1077 Question entities.

| Training Loss | Epoch | Step | Validation Loss | Answer (P / R / F1) | Header (P / R / F1) | Question (P / R / F1) | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|---|---|---|---|---|---|---|---|---|---|---|
| 0.3967 | 10.53 | 200 | 1.2683 | 0.8096 / 0.9266 / 0.8642 | 0.5243 / 0.4538 / 0.4865 | 0.9031 / 0.8830 / 0.8930 | 0.8427 | 0.8753 | 0.8587 | 0.7811 |
| 0.0418 | 21.05 | 400 | 1.3042 | 0.8394 / 0.8764 / 0.8575 | 0.4692 / 0.5126 / 0.4900 | 0.8517 / 0.8960 / 0.8733 | 0.8233 | 0.8654 | 0.8438 | 0.8022 |
| 0.0152 | 31.58 | 600 | 1.3935 | 0.8523 / 0.8972 / 0.8742 | 0.5865 / 0.5126 / 0.5471 | 0.8782 / 0.9239 / 0.9005 | 0.8531 | 0.8887 | 0.8706 | 0.8109 |
| 0.0071 | 42.11 | 800 | 1.5595 | 0.8580 / 0.8947 / 0.8760 | 0.5957 / 0.4706 / 0.5258 | 0.8912 / 0.9127 / 0.9018 | 0.8638 | 0.8793 | 0.8715 | 0.8015 |
| 0.0043 | 52.63 | 1000 | 1.5937 | 0.8352 / 0.9058 / 0.8691 | 0.6146 / 0.4958 / 0.5488 | 0.8836 / 0.9090 / 0.8961 | 0.8507 | 0.8833 | 0.8667 | 0.7973 |
| 0.0018 | 63.16 | 1200 | 1.5940 | 0.8645 / 0.8984 / 0.8812 | 0.5649 / 0.6218 / 0.5920 | 0.8923 / 0.9081 / 0.9001 | 0.8603 | 0.8872 | 0.8736 | 0.8073 |
| 0.0019 | 73.68 | 1400 | 1.6567 | 0.8604 / 0.8825 / 0.8713 | 0.5462 / 0.5462 / 0.5462 | 0.8730 / 0.9127 / 0.8924 | 0.8493 | 0.8788 | 0.8638 | 0.8039 |
| 0.0009 | 84.21 | 1600 | 1.7442 | 0.8506 / 0.9058 / 0.8773 | 0.6058 / 0.5294 / 0.5650 | 0.8972 / 0.9081 / 0.9026 | 0.8629 | 0.8847 | 0.8737 | 0.7977 |
| 0.001 | 94.74 | 1800 | 1.7450 | 0.8391 / 0.9192 / 0.8773 | 0.5917 / 0.5966 / 0.5941 | 0.9132 / 0.8988 / 0.9059 | 0.8627 | 0.8892 | 0.8757 | 0.7954 |
| 0.0005 | 105.26 | 2000 | 1.7725 | 0.8433 / 0.9155 / 0.8779 | 0.5965 / 0.5714 / 0.5837 | 0.9066 / 0.9016 / 0.9041 | 0.8625 | 0.8877 | 0.8749 | 0.7995 |
| 0.0002 | 115.79 | 2200 | 1.7327 | 0.8608 / 0.9155 / 0.8873 | 0.6000 / 0.5798 / 0.5897 | 0.9080 / 0.9071 / 0.9076 | 0.8709 | 0.8912 | 0.8809 | 0.8060 |
| 0.0002 | 126.32 | 2400 | 1.7516 | 0.8643 / 0.9119 / 0.8874 | 0.6106 / 0.5798 / 0.5948 | 0.9112 / 0.9053 / 0.9082 | 0.8748 | 0.8887 | 0.8817 | 0.8047 |

Framework versions

  • Transformers 4.28.0
  • PyTorch 1.7.1+cpu
  • Datasets 2.19.1
  • Tokenizers 0.13.3