# layoutlm-Synthetic-only
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the FUNSD dataset. It achieves the following results on the evaluation set:
- Loss: 0.9766
- Header: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 57}
- Answer: {'precision': 0.0716, 'recall': 0.2199, 'f1': 0.1080, 'number': 141}
- Question: {'precision': 0.1038, 'recall': 0.3043, 'f1': 0.1548, 'number': 161}
- Overall Precision: 0.0880
- Overall Recall: 0.2228
- Overall F1: 0.1262
- Overall Accuracy: 0.6103
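
For quick experimentation, the sketch below shows one way to run this checkpoint for token classification. It is not part of the original card: the repo id `pabloma09/layoutlm-Synthetic-only` is taken from the model page, and the words and bounding boxes are purely illustrative.

```python
# Minimal inference sketch (not part of the original card). Assumes the Hub
# repo id pabloma09/layoutlm-Synthetic-only; words and boxes are illustrative.
import torch
from transformers import AutoTokenizer, LayoutLMForTokenClassification

model_id = "pabloma09/layoutlm-Synthetic-only"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = LayoutLMForTokenClassification.from_pretrained(model_id)
model.eval()

words = ["Date:", "2024-01-01"]                    # e.g. OCR output
boxes = [[48, 84, 120, 100], [130, 84, 230, 100]]  # coordinates normalized to 0-1000

encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")
# LayoutLM needs one bounding box per token; special tokens get a dummy box.
token_boxes = [
    boxes[idx] if idx is not None else [0, 0, 0, 0]
    for idx in encoding.word_ids(0)
]
encoding["bbox"] = torch.tensor([token_boxes])

with torch.no_grad():
    logits = model(**encoding).logits

predicted_ids = logits.argmax(-1).squeeze(0).tolist()
for word_id, pred in zip(encoding.word_ids(0), predicted_ids):
    if word_id is not None:
        print(words[word_id], "->", model.config.id2label[pred])
```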
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 9
- mixed_precision_training: Native AMP
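
For orientation, these settings map onto `transformers.TrainingArguments` roughly as sketched below. This is a reconstruction rather than the original training script, and `output_dir` is a placeholder.

```python
# Sketch of the reported hyperparameters as TrainingArguments. This is a
# reconstruction, not the original training script; output_dir is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="layoutlm-Synthetic-only",  # placeholder
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=9,
    fp16=True,  # "Native AMP" mixed-precision training
)
```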
### Training results
| Training Loss | Epoch | Step | Validation Loss | Header P/R/F1 (n=57) | Answer P/R/F1 (n=141) | Question P/R/F1 (n=161) | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|---|---|---|---|---|---|---|---|---|---|---|
| 1.3476 | 1.0 | 4 | 1.3017 | 0.0100 / 0.0526 / 0.0168 | 0.0127 / 0.0426 / 0.0196 | 0.0158 / 0.0621 / 0.0252 | 0.0135 | 0.0529 | 0.0215 | 0.3592 |
| 1.0607 | 2.0 | 8 | 1.2217 | 0.0 / 0.0 / 0.0 | 0.0154 / 0.0213 / 0.0179 | 0.0101 / 0.0124 / 0.0111 | 0.0127 | 0.0139 | 0.0133 | 0.3607 |
| 0.8532 | 3.0 | 12 | 1.1632 | 0.0 / 0.0 / 0.0 | 0.0344 / 0.0780 / 0.0477 | 0.0217 / 0.0435 / 0.0289 | 0.0280 | 0.0501 | 0.0359 | 0.3963 |
| 0.7208 | 4.0 | 16 | 1.1060 | 0.0 / 0.0 / 0.0 | 0.0290 / 0.1064 / 0.0455 | 0.0381 / 0.1242 / 0.0583 | 0.0336 | 0.0975 | 0.0499 | 0.4848 |
| 0.6082 | 5.0 | 20 | 1.0625 | 0.0 / 0.0 / 0.0 | 0.0402 / 0.1489 / 0.0633 | 0.0655 / 0.2174 / 0.1007 | 0.0530 | 0.1560 | 0.0792 | 0.5349 |
| 0.4981 | 6.0 | 24 | 1.0294 | 0.0 / 0.0 / 0.0 | 0.0457 / 0.1560 / 0.0707 | 0.0870 / 0.2733 / 0.1319 | 0.0667 | 0.1838 | 0.0979 | 0.5663 |
| 0.4160 | 7.0 | 28 | 1.0031 | 0.0 / 0.0 / 0.0 | 0.0591 / 0.1915 / 0.0903 | 0.0948 / 0.2919 / 0.1431 | 0.0774 | 0.2061 | 0.1125 | 0.5868 |
| 0.3618 | 8.0 | 32 | 0.9854 | 0.0 / 0.0 / 0.0 | 0.0692 / 0.2199 / 0.1053 | 0.1010 / 0.3043 / 0.1517 | 0.0855 | 0.2228 | 0.1236 | 0.6034 |
| 0.3256 | 9.0 | 36 | 0.9766 | 0.0 / 0.0 / 0.0 | 0.0716 / 0.2199 / 0.1080 | 0.1038 / 0.3043 / 0.1548 | 0.0880 | 0.2228 | 0.1262 | 0.6103 |
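
The per-entity cells report entity-level precision, recall, and F1 in the style of seqeval, the metric library used by the Hugging Face token classification examples. A self-contained sketch of that computation, using made-up IOB label sequences rather than outputs from this run:

```python
# Illustrative seqeval computation of entity-level metrics like those in the
# table above; the label sequences here are invented, not from this model.
from seqeval.metrics import classification_report

y_true = [["B-HEADER", "I-HEADER", "O", "B-QUESTION", "B-ANSWER"]]
y_pred = [["B-HEADER", "O", "O", "B-QUESTION", "B-QUESTION"]]

# Prints per-entity precision/recall/F1 plus micro and macro averages.
print(classification_report(y_true, y_pred, digits=4))
```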
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0