---
license: mit
tags:
- generated_from_trainer
datasets:
- funsd-layoutlmv3
model-index:
- name: lilt-en-funsd
  results: []
---

# lilt-en-funsd

This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on the funsd-layoutlmv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9065
- Answer: {'precision': 0.834096109839817, 'recall': 0.8922888616891065, 'f1': 0.8622117090479007, 'number': 817}
- Header: {'precision': 0.5319148936170213, 'recall': 0.42016806722689076, 'f1': 0.4694835680751173, 'number': 119}
- Question: {'precision': 0.8570175438596491, 'recall': 0.9071494893221913, 'f1': 0.8813712223725756, 'number': 1077}
- Overall Precision: 0.8330
- Overall Recall: 0.8723
- Overall F1: 0.8522
- Overall Accuracy: 0.7918

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 200
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:--------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.7017 | 5.26 | 100 | 0.7391 | {'precision': 0.8216340621403913, 'recall': 0.8739290085679314, 'f1': 0.8469750889679716, 'number': 817} | {'precision': 0.4533333333333333, 'recall': 0.2857142857142857, 'f1': 0.3505154639175258, 'number': 119} | {'precision': 0.8234323432343235, 'recall': 0.9266480965645311, 'f1': 0.8719965050240279, 'number': 1077} | 0.8098 | 0.8674 | 0.8376 | 0.8073 |
| 0.1656 | 10.53 | 200 | 0.9065 | {'precision': 0.834096109839817, 'recall': 0.8922888616891065, 'f1': 0.8622117090479007, 'number': 817} | {'precision': 0.5319148936170213, 'recall': 0.42016806722689076, 'f1': 0.4694835680751173, 'number': 119} | {'precision': 0.8570175438596491, 'recall': 0.9071494893221913, 'f1': 0.8813712223725756, 'number': 1077} | 0.8330 | 0.8723 | 0.8522 | 0.7918 |

### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2
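
The hyperparameters listed under "Training procedure" map onto `transformers.TrainingArguments` roughly as sketched below. This is a reconstruction, not the original training script: `output_dir` and the evaluation cadence are assumptions (the results table suggests an evaluation every 100 steps), and the Adam betas/epsilon are left at the library defaults, which match the values listed above.

```python
from transformers import TrainingArguments

# Sketch of the configuration implied by the hyperparameters above;
# output_dir and the evaluation cadence are assumptions, not part of the card.
training_args = TrainingArguments(
    output_dir="lilt-en-funsd",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=200,                 # training_steps: 200
    fp16=True,                     # mixed_precision_training: Native AMP
    evaluation_strategy="steps",   # the results table shows evals at steps 100 and 200
    eval_steps=100,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the TrainingArguments defaults.
)
```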
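
Since the card does not yet include a usage section, the following is a minimal, untested sketch of how a checkpoint like this could be run for token classification. The repository path `lilt-en-funsd` is a placeholder for wherever the fine-tuned weights are actually hosted, and the example assumes the base checkpoint's tokenizer accepts word-level bounding boxes normalized to the 0-1000 range, as is typical for LiLT fine-tuning on FUNSD.

```python
import torch
from transformers import AutoTokenizer, LiltForTokenClassification

# Placeholder path: point this at the actual fine-tuned repository.
model_id = "lilt-en-funsd"
tokenizer = AutoTokenizer.from_pretrained("SCUT-DLVCLab/lilt-roberta-en-base")
model = LiltForTokenClassification.from_pretrained(model_id)

# Toy input: words with bounding boxes normalized to the 0-1000 range.
words = ["Invoice", "Number:", "12345"]
boxes = [[60, 50, 160, 70], [170, 50, 280, 70], [290, 50, 360, 70]]

encoding = tokenizer(words, boxes=boxes, return_tensors="pt", truncation=True)

with torch.no_grad():
    outputs = model(**encoding)

# Highest-scoring label id per token, mapped back to label names.
pred_ids = outputs.logits.argmax(-1).squeeze().tolist()
tokens = tokenizer.convert_ids_to_tokens(encoding["input_ids"].squeeze().tolist())
for token, pred_id in zip(tokens, pred_ids):
    print(token, model.config.id2label[pred_id])
```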