JOPRAKASH committed
Commit f778ba5
Parent: 4879b4f

End of training
README.md ADDED
@@ -0,0 +1,77 @@
+ ---
+ license: mit
+ base_model: SCUT-DLVCLab/lilt-roberta-en-base
+ tags:
+ - generated_from_trainer
+ datasets:
+ - funsd-layoutlmv3
+ model-index:
+ - name: lilt-en-funsd
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # lilt-en-funsd
+
+ This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on the funsd-layoutlmv3 dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.8649
+ - Answer: {'precision': 0.8747072599531616, 'recall': 0.9143206854345165, 'f1': 0.8940754039497306, 'number': 817}
+ - Header: {'precision': 0.5859375, 'recall': 0.6302521008403361, 'f1': 0.6072874493927125, 'number': 119}
+ - Question: {'precision': 0.9066543438077634, 'recall': 0.9108635097493036, 'f1': 0.9087540528022232, 'number': 1077}
+ - Overall Precision: 0.8735
+ - Overall Recall: 0.8957
+ - Overall F1: 0.8845
+ - Overall Accuracy: 0.8017
+
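The per-entity and overall scores above follow the usual entity-level convention: each reported F1 is the harmonic mean of its precision and recall. A minimal pure-Python sanity check, with the values copied from the card (not part of the original training code):

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# "Answer" entity scores reported on the evaluation set
print(f1(0.8747072599531616, 0.9143206854345165))  # ~0.8941, matching the reported f1

# Overall scores (rounded in the card, so this check is only approximate)
print(f1(0.8735, 0.8957))  # ~0.8845
```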
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - training_steps: 2500
+
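With `lr_scheduler_type: linear` and no warmup steps listed, the learning rate decays linearly from `learning_rate` to 0 over `training_steps`. A minimal sketch of the resulting curve (an assumption based on the hyperparameters above, not the actual training script):

```python
def linear_lr(step: int, base_lr: float = 5e-05, total_steps: int = 2500) -> float:
    """Linear decay from base_lr to 0 over total_steps (no warmup listed above)."""
    return base_lr * max(0.0, (total_steps - step) / total_steps)

print(linear_lr(0))     # 5e-05 at the start of training
print(linear_lr(1250))  # 2.5e-05 halfway through
print(linear_lr(2500))  # 0.0 at the final step
```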
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
+ |:-------------:|:------:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
+ | 0.4135 | 10.53 | 200 | 1.0232 | {'precision': 0.8317757009345794, 'recall': 0.8714810281517748, 'f1': 0.8511655708308428, 'number': 817} | {'precision': 0.5126050420168067, 'recall': 0.5126050420168067, 'f1': 0.5126050420168067, 'number': 119} | {'precision': 0.8781362007168458, 'recall': 0.9099350046425255, 'f1': 0.8937528499772002, 'number': 1077} | 0.8384 | 0.8708 | 0.8543 | 0.7797 |
+ | 0.0419 | 21.05 | 400 | 1.2118 | {'precision': 0.8427745664739884, 'recall': 0.8922888616891065, 'f1': 0.8668252080856123, 'number': 817} | {'precision': 0.5267857142857143, 'recall': 0.4957983193277311, 'f1': 0.5108225108225107, 'number': 119} | {'precision': 0.8787330316742081, 'recall': 0.9015784586815228, 'f1': 0.8900091659028414, 'number': 1077} | 0.8449 | 0.8738 | 0.8591 | 0.7884 |
+ | 0.0118 | 31.58 | 600 | 1.5526 | {'precision': 0.8194748358862144, 'recall': 0.9167686658506732, 'f1': 0.8653957250144425, 'number': 817} | {'precision': 0.6161616161616161, 'recall': 0.5126050420168067, 'f1': 0.5596330275229358, 'number': 119} | {'precision': 0.8935574229691877, 'recall': 0.8885793871866295, 'f1': 0.8910614525139665, 'number': 1077} | 0.8479 | 0.8778 | 0.8626 | 0.7864 |
+ | 0.0062 | 42.11 | 800 | 1.6956 | {'precision': 0.8351893095768375, 'recall': 0.9179926560587516, 'f1': 0.8746355685131196, 'number': 817} | {'precision': 0.5275590551181102, 'recall': 0.5630252100840336, 'f1': 0.5447154471544715, 'number': 119} | {'precision': 0.916988416988417, 'recall': 0.8820798514391829, 'f1': 0.8991954566966399, 'number': 1077} | 0.8574 | 0.8778 | 0.8675 | 0.7970 |
+ | 0.0034 | 52.63 | 1000 | 1.6288 | {'precision': 0.8627450980392157, 'recall': 0.9155446756425949, 'f1': 0.8883610451306414, 'number': 817} | {'precision': 0.5663716814159292, 'recall': 0.5378151260504201, 'f1': 0.5517241379310345, 'number': 119} | {'precision': 0.8978840846366145, 'recall': 0.9062209842154132, 'f1': 0.9020332717190388, 'number': 1077} | 0.8650 | 0.8882 | 0.8765 | 0.8003 |
+ | 0.0021 | 63.16 | 1200 | 1.5524 | {'precision': 0.8739693757361602, 'recall': 0.9082007343941249, 'f1': 0.8907563025210083, 'number': 817} | {'precision': 0.5537190082644629, 'recall': 0.5630252100840336, 'f1': 0.5583333333333335, 'number': 119} | {'precision': 0.8787346221441125, 'recall': 0.9285051067780873, 'f1': 0.9029345372460497, 'number': 1077} | 0.8582 | 0.8987 | 0.8779 | 0.8139 |
+ | 0.0014 | 73.68 | 1400 | 1.6580 | {'precision': 0.8801897983392646, 'recall': 0.9082007343941249, 'f1': 0.8939759036144578, 'number': 817} | {'precision': 0.5537190082644629, 'recall': 0.5630252100840336, 'f1': 0.5583333333333335, 'number': 119} | {'precision': 0.8856121537086684, 'recall': 0.9201485608170845, 'f1': 0.9025500910746811, 'number': 1077} | 0.8641 | 0.8942 | 0.8789 | 0.8049 |
+ | 0.0011 | 84.21 | 1600 | 1.6894 | {'precision': 0.8883553421368547, 'recall': 0.9057527539779682, 'f1': 0.896969696969697, 'number': 817} | {'precision': 0.5887850467289719, 'recall': 0.5294117647058824, 'f1': 0.5575221238938053, 'number': 119} | {'precision': 0.8969917958067457, 'recall': 0.9136490250696379, 'f1': 0.9052437902483901, 'number': 1077} | 0.8773 | 0.8877 | 0.8825 | 0.8052 |
+ | 0.0008 | 94.74 | 1800 | 1.8811 | {'precision': 0.8722157092614302, 'recall': 0.9106487148102815, 'f1': 0.8910179640718563, 'number': 817} | {'precision': 0.5522388059701493, 'recall': 0.6218487394957983, 'f1': 0.5849802371541502, 'number': 119} | {'precision': 0.9012003693444137, 'recall': 0.9062209842154132, 'f1': 0.9037037037037038, 'number': 1077} | 0.8667 | 0.8912 | 0.8788 | 0.7898 |
+ | 0.0003 | 105.26 | 2000 | 1.8570 | {'precision': 0.8577981651376146, 'recall': 0.9155446756425949, 'f1': 0.8857312018946123, 'number': 817} | {'precision': 0.6702127659574468, 'recall': 0.5294117647058824, 'f1': 0.5915492957746479, 'number': 119} | {'precision': 0.9064220183486239, 'recall': 0.9173630454967502, 'f1': 0.9118597138901707, 'number': 1077} | 0.875 | 0.8937 | 0.8842 | 0.8074 |
+ | 0.0004 | 115.79 | 2200 | 1.8481 | {'precision': 0.8577981651376146, 'recall': 0.9155446756425949, 'f1': 0.8857312018946123, 'number': 817} | {'precision': 0.6194690265486725, 'recall': 0.5882352941176471, 'f1': 0.603448275862069, 'number': 119} | {'precision': 0.9063948100092678, 'recall': 0.9080779944289693, 'f1': 0.9072356215213357, 'number': 1077} | 0.8702 | 0.8922 | 0.8810 | 0.8029 |
+ | 0.0002 | 126.32 | 2400 | 1.8649 | {'precision': 0.8747072599531616, 'recall': 0.9143206854345165, 'f1': 0.8940754039497306, 'number': 817} | {'precision': 0.5859375, 'recall': 0.6302521008403361, 'f1': 0.6072874493927125, 'number': 119} | {'precision': 0.9066543438077634, 'recall': 0.9108635097493036, 'f1': 0.9087540528022232, 'number': 1077} | 0.8735 | 0.8957 | 0.8845 | 0.8017 |
+
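Note that the training loss keeps falling while the validation loss drifts upward, so checkpoint choice matters; if one were selecting by overall F1, the table can be scanned directly. A sketch with the (step, overall F1) pairs copied from the table above:

```python
# (step, overall_f1) pairs copied from the training-results table above
results = [(200, 0.8543), (400, 0.8591), (600, 0.8626), (800, 0.8675),
           (1000, 0.8765), (1200, 0.8779), (1400, 0.8789), (1600, 0.8825),
           (1800, 0.8788), (2000, 0.8842), (2200, 0.8810), (2400, 0.8845)]

best_step, best_f1 = max(results, key=lambda pair: pair[1])
print(best_step, best_f1)  # the final checkpoint, step 2400, has the best overall F1
```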
+
+ ### Framework versions
+
+ - Transformers 4.31.0
+ - PyTorch 2.0.1+cu118
+ - Datasets 2.13.1
+ - Tokenizers 0.13.3
logs/events.out.tfevents.1689953809.072845105fb9.3808.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:3a5c1a1d3f67701693530cea1930ac909f8e080da723fb48106bdc7780ab62ff
- size 12373
+ oid sha256:dc1181be5265366e428fafc2959a90bd492a05914f34cc70cd85ece21602b124
+ size 12727
logs/events.out.tfevents.1689955933.072845105fb9.3808.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f37e0ded15b65e20e318cfc7104e79f75bedb380900fbaf15b58d921294283c8
+ size 592
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
preprocessor_config.json ADDED
@@ -0,0 +1,26 @@
+ {
+   "apply_ocr": true,
+   "do_normalize": true,
+   "do_rescale": true,
+   "do_resize": true,
+   "image_mean": [
+     0.5,
+     0.5,
+     0.5
+   ],
+   "image_processor_type": "LayoutLMv3FeatureExtractor",
+   "image_std": [
+     0.5,
+     0.5,
+     0.5
+   ],
+   "ocr_lang": null,
+   "processor_class": "LayoutLMv3Processor",
+   "resample": 2,
+   "rescale_factor": 0.00392156862745098,
+   "size": {
+     "height": 224,
+     "width": 224
+   },
+   "tesseract_config": ""
+ }
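The rescale and normalize settings above compose into one affine map per pixel: `rescale_factor` is 1/255 (0.00392156862745098…), and with mean and std both 0.5 each channel lands in [-1, 1]. A minimal pure-Python sketch of that mapping (not the processor's actual implementation, which works on arrays):

```python
MEAN = STD = 0.5  # image_mean / image_std above, per channel

def normalize_pixel(value: int) -> float:
    """Rescale a raw 0-255 value by 1/255 (the rescale_factor above),
    then normalize with mean 0.5 and std 0.5, landing in [-1, 1]."""
    return (value / 255 - MEAN) / STD

print(normalize_pixel(0))    # -1.0
print(normalize_pixel(255))  # 1.0
```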
special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "cls_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "<mask>",
+     "lstrip": true,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<pad>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,85 @@
+ {
+   "add_prefix_space": true,
+   "bos_token": {
+     "__type": "AddedToken",
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": {
+     "__type": "AddedToken",
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "cls_token_box": [
+     0,
+     0,
+     0,
+     0
+   ],
+   "eos_token": {
+     "__type": "AddedToken",
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "errors": "replace",
+   "mask_token": {
+     "__type": "AddedToken",
+     "content": "<mask>",
+     "lstrip": true,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "model_max_length": 512,
+   "only_label_first_subword": true,
+   "pad_token": {
+     "__type": "AddedToken",
+     "content": "<pad>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token_box": [
+     0,
+     0,
+     0,
+     0
+   ],
+   "pad_token_label": -100,
+   "processor_class": "LayoutLMv3Processor",
+   "sep_token": {
+     "__type": "AddedToken",
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token_box": [
+     0,
+     0,
+     0,
+     0
+   ],
+   "tokenizer_class": "LayoutLMv3Tokenizer",
+   "trim_offsets": true,
+   "unk_token": {
+     "__type": "AddedToken",
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
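The `*_token_box` and `pad_token_label` entries above determine what gets attached to special and padding positions: special tokens receive the dummy box [0, 0, 0, 0], and padded label positions receive -100 so the cross-entropy loss ignores them. A minimal sketch of padding word boxes and labels to a fixed length under those conventions (`pad_features` is a hypothetical helper, not part of the tokenizer's API):

```python
PAD_TOKEN_BOX = [0, 0, 0, 0]  # pad_token_box above
PAD_TOKEN_LABEL = -100        # pad_token_label above; ignored by the loss

def pad_features(boxes, labels, max_length):
    """Pad word bounding boxes and labels to max_length, mirroring the config above."""
    pad = max_length - len(boxes)
    return (boxes + [PAD_TOKEN_BOX] * pad,
            labels + [PAD_TOKEN_LABEL] * pad)

boxes, labels = pad_features([[10, 10, 50, 20]], [3], max_length=4)
print(boxes)   # [[10, 10, 50, 20], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
print(labels)  # [3, -100, -100, -100]
```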
vocab.json ADDED
The diff for this file is too large to render. See raw diff