Jacques2207 committed on
Commit
d88d405
1 Parent(s): 50847ec

End of training

README.md CHANGED
@@ -14,20 +14,19 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
 It achieves the following results on the evaluation set:
- - Loss: nan
- - Footer: {'precision': 0.9749447310243183, 'recall': 0.9792746113989638, 'f1': 0.9771048744460857, 'number': 1351}
- - Header: {'precision': 0.927519818799547, 'recall': 0.9578947368421052, 'f1': 0.9424626006904488, 'number': 855}
- - Able: {'precision': 0.7589285714285714, 'recall': 0.8531994981179423, 'f1': 0.8033077377436504, 'number': 797}
- - Aption: {'precision': 0.6352785145888594, 'recall': 0.7496087636932708, 'f1': 0.687724335965542, 'number': 639}
- - Ext: {'precision': 0.6819444444444445, 'recall': 0.7897064736630478, 'f1': 0.7318800074529532, 'number': 2487}
- - Icture: {'precision': 0.772196261682243, 'recall': 0.8283208020050126, 'f1': 0.7992744860943168, 'number': 798}
- - Itle: {'precision': 0.4519230769230769, 'recall': 0.415929203539823, 'f1': 0.43317972350230416, 'number': 113}
- - Ootnote: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 55}
- - Ormula: {'precision': 0.38578680203045684, 'recall': 0.7307692307692307, 'f1': 0.5049833887043189, 'number': 104}
- - Overall Precision: 0.7631
- - Overall Recall: 0.8403
- - Overall F1: 0.7998
- - Overall Accuracy: 0.9572
+ - Loss: 1.4071
+ - Footer: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 186}
+ - Header: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 373}
+ - Able: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 100}
+ - Aption: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 148}
+ - Ext: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 566}
+ - Icture: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 270}
+ - Itle: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 45}
+ - Ootnote: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 8}
+ - Overall Precision: 0.0
+ - Overall Recall: 0.0
+ - Overall F1: 0.0
+ - Overall Accuracy: 0.6399
 
 ## Model description
 
@@ -47,19 +46,22 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 3e-05
- - train_batch_size: 1
- - eval_batch_size: 1
+ - train_batch_size: 2
+ - eval_batch_size: 4
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
- - num_epochs: 2
+ - num_epochs: 5
 
 ### Training results
 
- | Training Loss | Epoch | Step | Validation Loss | Footer | Header | Able | Aption | Ext | Icture | Itle | Ootnote | Ormula | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
- |:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
- | 0.6151 | 1.0 | 4900 | nan | {'precision': 0.9154334038054969, 'recall': 0.9615099925980755, 'f1': 0.9379061371841154, 'number': 1351} | {'precision': 0.8517316017316018, 'recall': 0.92046783625731, 'f1': 0.8847667228780213, 'number': 855} | {'precision': 0.5285592497868713, 'recall': 0.7779171894604768, 'f1': 0.6294416243654822, 'number': 797} | {'precision': 0.3216326530612245, 'recall': 0.6165884194053208, 'f1': 0.4227467811158798, 'number': 639} | {'precision': 0.4335355763927192, 'recall': 0.632086851628468, 'f1': 0.5143137575658433, 'number': 2487} | {'precision': 0.5630585898709036, 'recall': 0.7105263157894737, 'f1': 0.6282548476454293, 'number': 798} | {'precision': 0.06504065040650407, 'recall': 0.21238938053097345, 'f1': 0.09958506224066391, 'number': 113} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 55} | {'precision': 0.07069408740359898, 'recall': 0.5288461538461539, 'f1': 0.12471655328798187, 'number': 104} | 0.5055 | 0.7387 | 0.6002 | 0.9093 |
- | 0.2733 | 2.0 | 9800 | nan | {'precision': 0.9749447310243183, 'recall': 0.9792746113989638, 'f1': 0.9771048744460857, 'number': 1351} | {'precision': 0.927519818799547, 'recall': 0.9578947368421052, 'f1': 0.9424626006904488, 'number': 855} | {'precision': 0.7589285714285714, 'recall': 0.8531994981179423, 'f1': 0.8033077377436504, 'number': 797} | {'precision': 0.6352785145888594, 'recall': 0.7496087636932708, 'f1': 0.687724335965542, 'number': 639} | {'precision': 0.6819444444444445, 'recall': 0.7897064736630478, 'f1': 0.7318800074529532, 'number': 2487} | {'precision': 0.772196261682243, 'recall': 0.8283208020050126, 'f1': 0.7992744860943168, 'number': 798} | {'precision': 0.4519230769230769, 'recall': 0.415929203539823, 'f1': 0.43317972350230416, 'number': 113} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 55} | {'precision': 0.38578680203045684, 'recall': 0.7307692307692307, 'f1': 0.5049833887043189, 'number': 104} | 0.7631 | 0.8403 | 0.7998 | 0.9572 |
+ | Training Loss | Epoch | Step | Validation Loss | Footer | Header | Able | Aption | Ext | Icture | Itle | Ootnote | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
+ |:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
+ | 1.1724 | 1.0 | 1950 | 1.4537 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 186} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 373} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 100} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 148} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 566} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 270} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 45} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 8} | 0.0 | 0.0 | 0.0 | 0.6399 |
+ | 1.2004 | 2.0 | 3900 | 1.4094 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 186} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 373} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 100} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 148} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 566} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 270} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 45} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 8} | 0.0 | 0.0 | 0.0 | 0.6399 |
+ | 1.2026 | 3.0 | 5850 | 1.4038 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 186} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 373} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 100} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 148} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 566} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 270} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 45} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 8} | 0.0 | 0.0 | 0.0 | 0.6399 |
+ | 1.2107 | 4.0 | 7800 | 1.4217 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 186} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 373} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 100} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 148} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 566} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 270} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 45} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 8} | 0.0 | 0.0 | 0.0 | 0.6399 |
+ | 1.1836 | 5.0 | 9750 | 1.4071 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 186} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 373} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 100} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 148} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 566} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 270} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 45} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 8} | 0.0 | 0.0 | 0.0 | 0.6399 |
 
 
 ### Framework versions
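A note on the odd entity names in the metrics above: "Able", "Aption", "Ext", "Icture", "Itle", "Ootnote" and "Ormula" are most likely Table, Caption, Text, Picture, Title, Footnote and Formula with their first letter stripped. When the dataset's labels are plain class names rather than IOB2 tags (`B-Table`, `I-Table`, …), seqeval-style entity extraction treats the first character of each label as the IOB tag and the remainder as the entity type. A minimal sketch of that splitting rule (the helper name and the `.capitalize()` step for display are assumptions, not seqeval API):

```python
def entity_type(label: str) -> str:
    """Mimic the seqeval tag/type split for a single label.

    The first character is taken as the IOB tag; the rest of the
    string (after any '-') is taken as the entity type.
    """
    type_ = label[1:].split("-", maxsplit=1)[-1]
    return type_.capitalize()  # capitalized for display, as in the card

# Without B-/I- prefixes, the leading letter of each class name is eaten:
print(entity_type("Table"))        # -> "Able"
print(entity_type("Caption"))      # -> "Aption"
print(entity_type("Text"))         # -> "Ext"
# Hyphenated names keep only the part after the dash:
print(entity_type("Page-footer"))  # -> "Footer"
print(entity_type("Page-header"))  # -> "Header"
```

This also explains why "Footer" and "Header" look intact: for hyphenated names like `Page-footer`, only the part after the dash survives, which happens to be a complete word.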
logs/events.out.tfevents.1679302770.138-2-233-57.311419.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:3215211e58ba83f1a4e2024089b0884c1b58ad6184224e2971e97349c4b94eee
- size 7919
+ oid sha256:55c631d919c5343a8bffa890c26d89e461532b0d72ad444cbac321fb0f8ae64a
+ size 8273
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:4477470206a7b460dc357ae8d3de93c05b6a692225c952341f50cb305f48d3e4
+ oid sha256:b917c291ab5e4524bcedfe9de55f39e1df5d6eaf32649da86d74e156677bce9a
 size 503781199
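The log and weight files above are Git LFS pointer files, not the binaries themselves: each pointer holds a spec-version line, a `sha256` oid, and the object size in bytes, so the diff only swaps hashes (note `pytorch_model.bin` keeps the same size while its oid changes). A minimal sketch of reading one such pointer (the `parse_lfs_pointer` helper is hypothetical, not part of any official tooling):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into key -> value fields.

    A pointer has one "key value" pair per line: version, oid, size.
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:b917c291ab5e4524bcedfe9de55f39e1df5d6eaf32649da86d74e156677bce9a\n"
    "size 503781199\n"
)
info = parse_lfs_pointer(pointer)
print(info["size"])  # -> 503781199 (bytes), unchanged even though the oid is new
```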