---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv3-base-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-base-ner
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Overall Precision: 0.7631
- Overall Recall: 0.8403
- Overall F1: 0.7998
- Overall Accuracy: 0.9572

Per-label results on the evaluation set:

| Label    | Precision | Recall | F1     | Support |
|:---------|----------:|-------:|-------:|--------:|
| Footer   | 0.9749    | 0.9793 | 0.9771 | 1351    |
| Header   | 0.9275    | 0.9579 | 0.9425 | 855     |
| Table    | 0.7589    | 0.8532 | 0.8033 | 797     |
| Caption  | 0.6353    | 0.7496 | 0.6877 | 639     |
| Text     | 0.6819    | 0.7897 | 0.7319 | 2487    |
| Picture  | 0.7722    | 0.8283 | 0.7993 | 798     |
| Title    | 0.4519    | 0.4159 | 0.4332 | 113     |
| Footnote | 0.0       | 0.0    | 0.0    | 55      |
| Formula  | 0.3858    | 0.7308 | 0.5050 | 104     |
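These per-label figures appear to be the standard entity-level (span-level) precision/recall/F1 produced by the `seqeval` metric in the Trainer's token-classification setup, where the support is the number of gold entities per class. A minimal toy sketch of how such metrics are computed (the tag sequences below are illustrative only, not taken from this model's evaluation data):

```python
from seqeval.metrics import classification_report

# Toy illustration: span-level precision / recall / F1 over BIO-tagged sequences.
# A span only counts as correct if both its boundaries and its label match.
y_true = [["B-Header", "I-Header", "O", "B-Text", "I-Text", "O"]]
y_pred = [["B-Header", "I-Header", "O", "B-Text", "O", "O"]]
print(classification_report(y_true, y_pred, digits=4))
```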
## Model description
More information needed
## Intended uses & limitations
More information needed
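As a starting point, below is a minimal inference sketch. It is an assumption-laden example rather than the authors' pipeline: the checkpoint path, the input image, and the use of the base model's processor (with its built-in Tesseract OCR) are placeholders or assumptions.

```python
import torch
from PIL import Image
from transformers import AutoModelForTokenClassification, AutoProcessor

# Assumptions: "layoutlmv3-base-ner" is a placeholder for this checkpoint's local path
# or Hub repo id, the base model's processor handles OCR + tokenization, and
# apply_ocr=True requires Tesseract / pytesseract to be installed.
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=True)
model = AutoModelForTokenClassification.from_pretrained("layoutlmv3-base-ner")
model.eval()

image = Image.open("page.png").convert("RGB")  # placeholder document image
encoding = processor(image, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**encoding).logits

pred_ids = logits.argmax(-1).squeeze().tolist()
tokens = processor.tokenizer.convert_ids_to_tokens(encoding["input_ids"][0].tolist())
print([(tok, model.config.id2label[i]) for tok, i in zip(tokens, pred_ids)])
```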
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
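For reference, a sketch of how these values would map onto `transformers.TrainingArguments`; the output directory and evaluation strategy are assumptions, since the original training script is not included in this card:

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above; not the original script.
training_args = TrainingArguments(
    output_dir="layoutlmv3-base-ner",  # assumption
    learning_rate=3e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    evaluation_strategy="epoch",  # assumption: metrics are reported once per epoch
)
```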
### Training results
Per-label cells show precision / recall / F1; the support *n* for each label is given in the header.

| Training Loss | Epoch | Step | Validation Loss | Footer (n=1351) | Header (n=855) | Table (n=797) | Caption (n=639) | Text (n=2487) | Picture (n=798) | Title (n=113) | Footnote (n=55) | Formula (n=104) | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.6151 | 1.0 | 4900 | nan | 0.9154 / 0.9615 / 0.9379 | 0.8517 / 0.9205 / 0.8848 | 0.5286 / 0.7779 / 0.6294 | 0.3216 / 0.6166 / 0.4227 | 0.4335 / 0.6321 / 0.5143 | 0.5631 / 0.7105 / 0.6283 | 0.0650 / 0.2124 / 0.0996 | 0.0 / 0.0 / 0.0 | 0.0707 / 0.5288 / 0.1247 | 0.5055 | 0.7387 | 0.6002 | 0.9093 |
| 0.2733 | 2.0 | 9800 | nan | 0.9749 / 0.9793 / 0.9771 | 0.9275 / 0.9579 / 0.9425 | 0.7589 / 0.8532 / 0.8033 | 0.6353 / 0.7496 / 0.6877 | 0.6819 / 0.7897 / 0.7319 | 0.7722 / 0.8283 / 0.7993 | 0.4519 / 0.4159 / 0.4332 | 0.0 / 0.0 / 0.0 | 0.3858 / 0.7308 / 0.5050 | 0.7631 | 0.8403 | 0.7998 | 0.9572 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.12.1
- Datasets 2.9.0
- Tokenizers 0.13.2