|
--- |
|
license: mit |
|
base_model: microsoft/layoutlm-base-uncased |
|
tags: |
|
- generated_from_trainer |
|
model-index: |
|
- name: layoutlm-cord |
|
results: [] |
|
--- |
|
|
|
|
|
|
# layoutlm-cord |
|
|
|
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the [CORD](https://huggingface.co/datasets/naver-clova-ix/cord-v2) receipt dataset (the entity labels below match the CORD label set).
|
It achieves the following results on the evaluation set: |
|
- eval_loss: 0.1624
- eval_overall_precision: 0.9522
- eval_overall_recall: 0.9544
- eval_overall_f1: 0.9533
- eval_overall_accuracy: 0.9707
- eval_runtime: 3.0438 s
- eval_samples_per_second: 32.853
- eval_steps_per_second: 4.271
- epoch: 1.0
- step: 50

Per-entity results are listed below. In the raw Trainer output the label names appear truncated (`enu.cnt`, `otal.cashprice`, `ub_total.etc`, …), apparently because the metric computation strips the first character as if it were a `B-`/`I-` prefix; the full CORD label names are restored here, and values are rounded to four decimals.

| Label | Precision | Recall | F1 | Support |
|---|---|---|---|---|
| menu.cnt | 0.9861 | 0.9682 | 0.9771 | 220 |
| menu.discountprice | 0.6667 | 0.6000 | 0.6316 | 10 |
| menu.etc | 0.0 | 0.0 | 0.0 | 3 |
| menu.itemsubtotal | 0.0 | 0.0 | 0.0 | 6 |
| menu.nm | 0.9526 | 0.9602 | 0.9563 | 251 |
| menu.num | 0.9091 | 0.9091 | 0.9091 | 11 |
| menu.price | 0.9569 | 0.9919 | 0.9741 | 246 |
| menu.sub.cnt | 0.8500 | 1.0000 | 0.9189 | 17 |
| menu.sub.nm | 0.8286 | 0.9355 | 0.8788 | 31 |
| menu.sub.price | 1.0000 | 0.9500 | 0.9744 | 20 |
| menu.unitprice | 0.9844 | 0.9403 | 0.9618 | 67 |
| total.cashprice | 0.9559 | 0.9559 | 0.9559 | 68 |
| total.changeprice | 0.9655 | 1.0000 | 0.9825 | 56 |
| total.creditcardprice | 0.7647 | 0.8125 | 0.7879 | 16 |
| total.emoneyprice | 0.3333 | 0.5000 | 0.4000 | 2 |
| total.menuqty_cnt | 0.9333 | 0.9655 | 0.9492 | 29 |
| total.menutype_cnt | 1.0000 | 0.7143 | 0.8333 | 7 |
| total.total_etc | 0.5000 | 0.3333 | 0.4000 | 3 |
| total.total_price | 0.9583 | 0.9684 | 0.9634 | 95 |
| sub_total.discount_price | 1.0000 | 1.0000 | 1.0000 | 7 |
| sub_total.etc | 0.8750 | 0.7778 | 0.8235 | 9 |
| sub_total.service_price | 1.0000 | 1.0000 | 1.0000 | 12 |
| sub_total.subtotal_price | 0.9545 | 0.9692 | 0.9618 | 65 |
| sub_total.tax_price | 1.0000 | 1.0000 | 1.0000 | 43 |
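As a quick sanity check, the reported overall F1 is the harmonic mean of the reported overall precision and recall:

```python
# F1 is the harmonic mean of precision and recall: f1 = 2*p*r / (p + r)
p, r = 0.9522, 0.9544  # eval_overall_precision, eval_overall_recall from above
f1 = 2 * p * r / (p + r)
print(round(f1, 4))  # → 0.9533, matching eval_overall_f1
```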
|
|
|
## Model description |
|
|
|
This model fine-tunes LayoutLM (base, uncased) for token classification on receipts. LayoutLM extends BERT-style text embeddings with 2-D position embeddings derived from each token's bounding box, so predictions can use both what a token says and where it sits on the page.
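LayoutLM expects each token's bounding box to be normalized into a 0–1000 coordinate space relative to the page size. A minimal sketch of that preprocessing step (the helper name is illustrative, not taken from the original training code):

```python
def normalize_box(box, width, height):
    """Scale an (x0, y0, x1, y1) pixel box to LayoutLM's 0-1000 coordinate space."""
    x0, y0, x1, y1 = box
    return [
        int(1000 * x0 / width),
        int(1000 * y0 / height),
        int(1000 * x1 / width),
        int(1000 * y1 / height),
    ]

# e.g. a word box on a 480x640 receipt scan
print(normalize_box((48, 64, 96, 128), width=480, height=640))  # → [100, 100, 200, 200]
```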
|
|
|
## Intended uses & limitations |
|
|
|
The model is intended for key information extraction from OCR'd receipts: given the words and bounding boxes produced by an OCR engine, it tags each token with a CORD entity label (menu names, counts, prices, sub-totals, totals). It inherits the limitations of the uncased English base model and of the CORD domain (shop receipts), so performance on other document types and layouts is untested; rare labels such as `menu.etc` and `menu.itemsubtotal` score 0.0 F1 in the evaluation above.
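CORD labels carry no `B-`/`I-` prefixes, so a simple way to post-process token-level predictions is to merge consecutive tokens that share a label into one entity. A hypothetical helper (not part of the original training code):

```python
from itertools import groupby

def group_entities(tokens, labels):
    """Merge consecutive tokens sharing a label into (label, text) entities.

    CORD labels have no B-/I- prefixes, so adjacency is the only grouping cue;
    tokens labeled "O" are dropped.
    """
    entities = []
    for label, run in groupby(zip(tokens, labels), key=lambda pair: pair[1]):
        if label != "O":
            entities.append((label, " ".join(tok for tok, _ in run)))
    return entities

print(group_entities(
    ["ICED", "LATTE", "2", "9.000"],
    ["menu.nm", "menu.nm", "menu.cnt", "menu.price"],
))
# → [('menu.nm', 'ICED LATTE'), ('menu.cnt', '2'), ('menu.price', '9.000')]
```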
|
|
|
## Training and evaluation data |
|
|
|
The Trainer did not record the dataset name, but the evaluation label set (`menu.*`, `sub_total.*`, `total.*`) matches CORD, a dataset of annotated shop receipts. The evaluation split used above contains roughly 100 samples (3.04 s at 32.85 samples/s).
|
|
|
## Training procedure |
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training: |
|
- learning_rate: 1e-06 |
|
- train_batch_size: 16 |
|
- eval_batch_size: 8 |
|
- seed: 42 |
|
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 |
|
- lr_scheduler_type: linear |
|
- num_epochs: 15 |
|
- mixed_precision_training: Native AMP |
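The hyperparameters above correspond roughly to a `transformers` `TrainingArguments` setup like the following. This is a sketch, not the original training script; `output_dir` is illustrative, and the Adam betas/epsilon and linear schedule listed above are the library defaults:

```python
from transformers import TrainingArguments

# Sketch of the configuration listed above (assumed, not the original script).
training_args = TrainingArguments(
    output_dir="layoutlm-cord",      # illustrative
    learning_rate=1e-6,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=15,
    lr_scheduler_type="linear",
    fp16=True,                       # "Native AMP" mixed-precision training
)
```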
|
|
|
### Framework versions |
|
|
|
- Transformers 4.36.0 |
|
- Pytorch 2.0.0 |
|
- Datasets 2.16.1 |
|
- Tokenizers 0.15.0 |
|
|