---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutlmv3-triplet
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# layoutlmv3-triplet

This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0158
- Caption: {'precision': 0.9251446070091868, 'recall': 0.9238871899422358, 'f1': 0.9245154709282557, 'number': 2943}
- Footnote: {'precision': 0.9455411844792376, 'recall': 0.9442556084296397, 'f1': 0.9448979591836736, 'number': 2942}
- Overall Precision: 0.9353
- Overall Recall: 0.9341
- Overall F1: 0.9347
- Overall Accuracy: 0.9982
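
For reference, each reported F1 is the harmonic mean of the corresponding precision and recall. A quick stdlib check against the first class's evaluation numbers above:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (the standard F1 definition)."""
    return 2 * precision * recall / (precision + recall)

# Precision/recall for the first class in the evaluation results above.
print(f1_score(0.9251446070091868, 0.9238871899422358))  # ≈ 0.9245154709282557
```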

## Model description

More information needed

## Intended uses & limitations

More information needed
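
No usage details are given. As a sketch, assuming the checkpoint has been published to the Hub or saved locally under the hypothetical path `layoutlmv3-triplet`, inference follows the standard LayoutLMv3 token-classification pattern:

```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForTokenClassification

# Hypothetical paths/ids: adjust to wherever this checkpoint actually lives.
# The base processor's built-in OCR requires pytesseract to be installed.
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base")
model = AutoModelForTokenClassification.from_pretrained("layoutlmv3-triplet")

image = Image.open("page.png").convert("RGB")     # a document page image
encoding = processor(image, return_tensors="pt")  # OCR + tokenization + layout boxes
outputs = model(**encoding)

# One predicted label id per token; map back through the model's config.
predictions = outputs.logits.argmax(-1).squeeze().tolist()
labels = [model.config.id2label[p] for p in predictions]
```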

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
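
With `lr_scheduler_type: linear` and no warmup listed, the learning rate decays linearly from 3e-05 to 0 over the full run (85,070 optimizer steps per the table below). A stdlib sketch of that schedule, assuming zero warmup steps:

```python
def linear_lr(step: int, total_steps: int = 85_070, base_lr: float = 3e-5) -> float:
    """Linearly decay the learning rate from base_lr at step 0 to 0 at total_steps."""
    remaining = max(0.0, 1.0 - step / total_steps)
    return base_lr * remaining

# The midpoint of training (end of epoch 5) runs at half the base rate.
print(linear_lr(42_535))  # → 1.5e-05
```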

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Caption                                                                                                   | Footnote                                                                                                  | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.0152        | 1.0   | 8507  | 0.0147          | {'precision': 0.863031914893617, 'recall': 0.8820931022765885, 'f1': 0.8724584103512015, 'number': 2943}  | {'precision': 0.902593085106383, 'recall': 0.9228416043507818, 'f1': 0.9126050420168068, 'number': 2942}  | 0.8828            | 0.9025         | 0.8925     | 0.9969           |
| 0.0067        | 2.0   | 17014 | 0.0128          | {'precision': 0.9206239168110919, 'recall': 0.9024804621134896, 'f1': 0.911461908030199, 'number': 2943}  | {'precision': 0.9459084604715673, 'recall': 0.9272603670972128, 'f1': 0.936491589426708, 'number': 2942}  | 0.9333            | 0.9149         | 0.9240     | 0.9979           |
| 0.0049        | 3.0   | 25521 | 0.0153          | {'precision': 0.9005291005291005, 'recall': 0.8674821610601428, 'f1': 0.8836967808930426, 'number': 2943} | {'precision': 0.9435426958362738, 'recall': 0.9089055064581917, 'f1': 0.9259002770083102, 'number': 2942} | 0.9220            | 0.8882         | 0.9048     | 0.9971           |
| 0.0037        | 4.0   | 34028 | 0.0110          | {'precision': 0.9221803222488858, 'recall': 0.9140332993544003, 'f1': 0.9180887372013652, 'number': 2943} | {'precision': 0.946159122085048, 'recall': 0.9377974167233175, 'f1': 0.9419597132127007, 'number': 2942}  | 0.9342            | 0.9259         | 0.9300     | 0.9981           |
| 0.0025        | 5.0   | 42535 | 0.0110          | {'precision': 0.9253680246490927, 'recall': 0.9184505606523955, 'f1': 0.9218963165075034, 'number': 2943} | {'precision': 0.9455665867853474, 'recall': 0.938817131203263, 'f1': 0.9421797714480641, 'number': 2942}  | 0.9355            | 0.9286         | 0.9320     | 0.9981           |
| 0.0021        | 6.0   | 51042 | 0.0137          | {'precision': 0.9104477611940298, 'recall': 0.9119945633707102, 'f1': 0.911220505856391, 'number': 2943}  | {'precision': 0.9331523583305056, 'recall': 0.9347382732834806, 'f1': 0.9339446425539141, 'number': 2942} | 0.9218            | 0.9234         | 0.9226     | 0.9978           |
| 0.0012        | 7.0   | 59549 | 0.0133          | {'precision': 0.9154399178363574, 'recall': 0.90859667006456, 'f1': 0.912005457025921, 'number': 2943}    | {'precision': 0.9397260273972603, 'recall': 0.9326988443235894, 'f1': 0.9361992494029341, 'number': 2942} | 0.9276            | 0.9206         | 0.9241     | 0.9981           |
| 0.0013        | 8.0   | 68056 | 0.0194          | {'precision': 0.9192886456908345, 'recall': 0.9133537206931702, 'f1': 0.9163115732060677, 'number': 2943} | {'precision': 0.9442353746151214, 'recall': 0.938137321549966, 'f1': 0.9411764705882352, 'number': 2942}  | 0.9318            | 0.9257         | 0.9287     | 0.9979           |
| 0.0007        | 9.0   | 76563 | 0.0143          | {'precision': 0.9239945466939332, 'recall': 0.9211688752973156, 'f1': 0.9225795473881231, 'number': 2943} | {'precision': 0.9457892942379816, 'recall': 0.9428959891230455, 'f1': 0.9443404255319149, 'number': 2942} | 0.9349            | 0.9320         | 0.9335     | 0.9982           |
| 0.0004        | 10.0  | 85070 | 0.0158          | {'precision': 0.9251446070091868, 'recall': 0.9238871899422358, 'f1': 0.9245154709282557, 'number': 2943} | {'precision': 0.9455411844792376, 'recall': 0.9442556084296397, 'f1': 0.9448979591836736, 'number': 2942} | 0.9353            | 0.9341         | 0.9347     | 0.9982           |


### Framework versions

- Transformers 4.26.0
- Pytorch 1.12.1
- Datasets 2.9.0
- Tokenizers 0.13.2