pmorelr committed on
Commit 6c7404b
1 Parent(s): 8da699b

End of training

README.md CHANGED
@@ -11,21 +11,21 @@ should probably proofread and complete it, then remove this comment. -->

# layoutlm-doclaynet-test

- This model is a fine-tuned version of [pmorelr/layoutlm-doclaynet-test](https://huggingface.co/pmorelr/layoutlm-doclaynet-test) on an unknown dataset.
+ This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- - Loss: 0.1275
- - Footer: {'precision': 0.3970037453183521, 'recall': 0.4140625, 'f1': 0.40535372848948376, 'number': 256}
- - Header: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 95}
- - Able: {'precision': 0.4575757575757576, 'recall': 0.58984375, 'f1': 0.515358361774744, 'number': 256}
- - Aption: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 64}
- - Ext: {'precision': 0.4011627906976744, 'recall': 0.45098039215686275, 'f1': 0.4246153846153846, 'number': 459}
- - Icture: {'precision': 0.1509433962264151, 'recall': 0.16326530612244897, 'f1': 0.15686274509803919, 'number': 49}
- - Itle: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 10}
- - Ootnote: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1}
- - Overall Precision: 0.4034
- - Overall Recall: 0.3966
- - Overall F1: 0.4
- - Overall Accuracy: 0.9645
+ - Loss: 0.3029
+ - Footer: {'precision': 0.7619047619047619, 'recall': 0.7960199004975125, 'f1': 0.7785888077858881, 'number': 201}
+ - Header: {'precision': 0.7631578947368421, 'recall': 0.6987951807228916, 'f1': 0.7295597484276729, 'number': 83}
+ - Able: {'precision': 0.569377990430622, 'recall': 0.7531645569620253, 'f1': 0.6485013623978202, 'number': 158}
+ - Aption: {'precision': 0.2857142857142857, 'recall': 0.26865671641791045, 'f1': 0.2769230769230769, 'number': 67}
+ - Ext: {'precision': 0.6098901098901099, 'recall': 0.6809815950920245, 'f1': 0.6434782608695652, 'number': 326}
+ - Icture: {'precision': 0.18055555555555555, 'recall': 0.2, 'f1': 0.18978102189781024, 'number': 65}
+ - Itle: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3}
+ - Ootnote: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 4}
+ - Overall Precision: 0.5930
+ - Overall Recall: 0.6505
+ - Overall F1: 0.6204
+ - Overall Accuracy: 0.9197

## Model description

@@ -54,16 +54,16 @@ The following hyperparameters were used during training:

### Training results

- | Training Loss | Epoch | Step | Validation Loss | Footer | Header | Able | Aption | Ext | Icture | Itle | Ootnote | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
- |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:----:|:------:|:---:|:------:|:----:|:-------:|:-----------------:|:--------------:|:----------:|:----------------:|
- | 0.3298 | 1.0 | 43 | 0.2178 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 256} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 95} | {'precision': 0.18888888888888888, 'recall': 0.33203125, 'f1': 0.24079320113314445, 'number': 256} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 64} | {'precision': 0.2778993435448578, 'recall': 0.2766884531590414, 'f1': 0.27729257641921395, 'number': 459} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 49} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | 0.2309 | 0.1782 | 0.2011 | 0.9419 |
- | 0.2034 | 2.0 | 86 | 0.1505 | {'precision': 0.4134078212290503, 'recall': 0.2890625, 'f1': 0.34022988505747126, 'number': 256} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 95} | {'precision': 0.3140495867768595, 'recall': 0.4453125, 'f1': 0.3683360258481421, 'number': 256} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 64} | {'precision': 0.36401673640167365, 'recall': 0.3790849673202614, 'f1': 0.3713980789754536, 'number': 459} | {'precision': 0.0625, 'recall': 0.061224489795918366, 'f1': 0.061855670103092786, 'number': 49} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | 0.3408 | 0.3067 | 0.3229 | 0.9579 |
- | 0.1584 | 3.0 | 129 | 0.1275 | {'precision': 0.3970037453183521, 'recall': 0.4140625, 'f1': 0.40535372848948376, 'number': 256} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 95} | {'precision': 0.4575757575757576, 'recall': 0.58984375, 'f1': 0.515358361774744, 'number': 256} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 64} | {'precision': 0.4011627906976744, 'recall': 0.45098039215686275, 'f1': 0.4246153846153846, 'number': 459} | {'precision': 0.1509433962264151, 'recall': 0.16326530612244897, 'f1': 0.15686274509803919, 'number': 49} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | 0.4034 | 0.3966 | 0.4 | 0.9645 |
+ | Training Loss | Epoch | Step | Validation Loss | Footer | Header | Able | Aption | Ext | Icture | Itle | Ootnote | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:----:|:------:|:---:|:------:|:----:|:-------:|:-----------------:|:--------------:|:----------:|:----------------:|
+ | 0.2414 | 1.0 | 426 | 0.1727 | {'precision': 0.6724137931034483, 'recall': 0.7761194029850746, 'f1': 0.720554272517321, 'number': 201} | {'precision': 0.7142857142857143, 'recall': 0.5421686746987951, 'f1': 0.6164383561643836, 'number': 83} | {'precision': 0.5069124423963134, 'recall': 0.6962025316455697, 'f1': 0.5866666666666668, 'number': 158} | {'precision': 0.22916666666666666, 'recall': 0.16417910447761194, 'f1': 0.19130434782608696, 'number': 67} | {'precision': 0.5323383084577115, 'recall': 0.656441717791411, 'f1': 0.587912087912088, 'number': 326} | {'precision': 0.24528301886792453, 'recall': 0.2, 'f1': 0.22033898305084745, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 4} | 0.5409 | 0.6053 | 0.5713 | 0.9584 |
+ | 0.1037 | 2.0 | 852 | 0.1726 | {'precision': 0.7045454545454546, 'recall': 0.7711442786069652, 'f1': 0.7363420427553445, 'number': 201} | {'precision': 0.8529411764705882, 'recall': 0.6987951807228916, 'f1': 0.7682119205298014, 'number': 83} | {'precision': 0.5658536585365853, 'recall': 0.7341772151898734, 'f1': 0.6391184573002755, 'number': 158} | {'precision': 0.25333333333333335, 'recall': 0.2835820895522388, 'f1': 0.2676056338028169, 'number': 67} | {'precision': 0.5640394088669951, 'recall': 0.7024539877300614, 'f1': 0.6256830601092896, 'number': 326} | {'precision': 0.16666666666666666, 'recall': 0.18461538461538463, 'f1': 0.17518248175182485, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 4} | 0.5631 | 0.6494 | 0.6032 | 0.9510 |
+ | 0.0647 | 3.0 | 1278 | 0.3029 | {'precision': 0.7619047619047619, 'recall': 0.7960199004975125, 'f1': 0.7785888077858881, 'number': 201} | {'precision': 0.7631578947368421, 'recall': 0.6987951807228916, 'f1': 0.7295597484276729, 'number': 83} | {'precision': 0.569377990430622, 'recall': 0.7531645569620253, 'f1': 0.6485013623978202, 'number': 158} | {'precision': 0.2857142857142857, 'recall': 0.26865671641791045, 'f1': 0.2769230769230769, 'number': 67} | {'precision': 0.6098901098901099, 'recall': 0.6809815950920245, 'f1': 0.6434782608695652, 'number': 326} | {'precision': 0.18055555555555555, 'recall': 0.2, 'f1': 0.18978102189781024, 'number': 65} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 4} | 0.5930 | 0.6505 | 0.6204 | 0.9197 |


### Framework versions

- Transformers 4.26.1
- - Pytorch 1.12.1
+ - Pytorch 1.12.1+cu102
- Datasets 2.9.0
- Tokenizers 0.13.2
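
The updated card reports evaluation metrics but no usage snippet; the per-label keys such as `Able`, `Aption`, and `Ext` appear to be `Table`, `Caption`, and `Text` with the first character consumed by seqeval's tag-prefix handling. Below is a minimal, hypothetical inference sketch for this checkpoint with the Transformers version listed above; the example words and bounding boxes are invented, and the boxes are assumed to already be normalized to the 0-1000 range LayoutLM expects.

```python
# Hypothetical usage sketch (not part of the model card); words/boxes are made up.
import torch
from transformers import AutoTokenizer, LayoutLMForTokenClassification

model_id = "pmorelr/layoutlm-doclaynet-test"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = LayoutLMForTokenClassification.from_pretrained(model_id)

# One "page": pre-split words with per-word boxes already normalized to 0-1000.
words = ["Invoice", "Total:", "$12.00"]
boxes = [[60, 50, 200, 75], [60, 700, 160, 730], [170, 700, 260, 730]]

encoding = tokenizer(
    words,
    is_split_into_words=True,
    truncation=True,
    padding="max_length",
    max_length=512,
    return_tensors="pt",
)

# Expand word-level boxes to token level; special/pad tokens get a dummy box.
token_boxes = [
    boxes[i] if i is not None else [0, 0, 0, 0] for i in encoding.word_ids(0)
]

with torch.no_grad():
    outputs = model(
        input_ids=encoding["input_ids"],
        attention_mask=encoding["attention_mask"],
        bbox=torch.tensor([token_boxes]),
    )

pred_ids = outputs.logits.argmax(-1).squeeze().tolist()
pred_labels = [model.config.id2label[i] for i in pred_ids]
```

The predicted labels follow whatever `id2label` mapping was saved with the checkpoint, so the DocLayNet-style categories above come back token by token and still need to be aggregated per word.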
logs/events.out.tfevents.1679079265.instance-1.22775.0 CHANGED
@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
- oid sha256:82d35aa3ac053a693ed035d46e970068660332a5bf1d5dfe41c4c3144efd47af
- size 6327
+ oid sha256:b2152f959f5718a31563fbc36ccca4464d38871e5944fc05c05b7affc7399367
+ size 6681
tokenizer.json CHANGED
@@ -1,7 +1,21 @@
{
  "version": "1.0",
- "truncation": null,
- "padding": null,
+ "truncation": {
+   "direction": "Right",
+   "max_length": 512,
+   "strategy": "LongestFirst",
+   "stride": 0
+ },
+ "padding": {
+   "strategy": {
+     "Fixed": 512
+   },
+   "direction": "Right",
+   "pad_to_multiple_of": null,
+   "pad_id": 0,
+   "pad_type_id": 0,
+   "pad_token": "[PAD]"
+ },
  "added_tokens": [
    {
      "id": 0,
tokenizer_config.json CHANGED
@@ -12,7 +12,7 @@
  "do_lower_case": true,
  "mask_token": "[MASK]",
  "model_max_length": 512,
- "name_or_path": "pmorelr/layoutlm-doclaynet-test",
+ "name_or_path": "microsoft/layoutlmv2-base-uncased",
  "never_split": null,
  "only_label_first_subword": true,
  "pad_token": "[PAD]",