mikeseb committed
Commit 0c90482
1 Parent(s): 08867ad

End of training

README.md ADDED
@@ -0,0 +1,76 @@
+ ---
+ license: mit
+ tags:
+ - generated_from_trainer
+ datasets:
+ - funsd-layoutlmv3
+ model-index:
+ - name: lilt-en-funsd
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # lilt-en-funsd
+
+ This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on the funsd-layoutlmv3 dataset.
+ It achieves the following results on the evaluation set (per-entity scores below; see the scoring sketch after this list):
+ - Loss: 1.7343
+ - Answer: {'precision': 0.865979381443299, 'recall': 0.9253365973072215, 'f1': 0.8946745562130176, 'number': 817}
+ - Header: {'precision': 0.6442307692307693, 'recall': 0.5630252100840336, 'f1': 0.600896860986547, 'number': 119}
+ - Question: {'precision': 0.8937329700272479, 'recall': 0.9136490250696379, 'f1': 0.9035812672176309, 'number': 1077}
+ - Overall Precision: 0.8696
+ - Overall Recall: 0.8977
+ - Overall F1: 0.8834
+ - Overall Accuracy: 0.8048
+
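The per-entity dictionaries above are entity-level scores in the format produced by the `seqeval` library (`number` is the count of gold entities of that type). The card does not state the metric implementation, so the following is only a sketch of how such scores are typically computed, with made-up IOB2 tag sequences:

```python
# Illustrative only: entity-level precision/recall/F1 in the seqeval style.
# The tag sequences below are invented; the real evaluation ran on FUNSD.
from seqeval.metrics import classification_report

y_true = [["B-QUESTION", "I-QUESTION", "O", "B-ANSWER", "I-ANSWER", "B-HEADER"]]
y_pred = [["B-QUESTION", "I-QUESTION", "O", "B-ANSWER", "O", "B-HEADER"]]

# Prints per-entity precision/recall/F1 plus support (the "number" field above).
print(classification_report(y_true, y_pred, digits=4))
```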
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
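No usage example is given, so here is a hedged inference sketch. The repository id `mikeseb/lilt-en-funsd` and the file name `form.png` are assumptions; the bundled processor runs OCR itself (`apply_ocr: true` in `preprocessor_config.json` below), so Tesseract and `pytesseract` must be installed.

```python
# Hedged inference sketch, not an official example from this repository.
from PIL import Image
import torch
from transformers import AutoModelForTokenClassification, AutoProcessor

model_id = "mikeseb/lilt-en-funsd"  # assumed repository id
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

image = Image.open("form.png").convert("RGB")  # hypothetical scanned form
encoding = processor(image, return_tensors="pt")

# The LayoutLMv3 processor also returns pixel_values, but LiLT has no image
# backbone, so drop them before calling the model.
encoding.pop("pixel_values", None)

with torch.no_grad():
    logits = model(**encoding).logits

predictions = logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[p] for p in predictions][:20])
```

Mapping the predicted tags back to words would additionally need the tokenizer's word ids or offset mapping; this sketch only prints raw token-level labels.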
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
+ - learning_rate: 5e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - training_steps: 2500
+
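These settings map onto `transformers.TrainingArguments` roughly as sketched below; anything not listed above (the output directory, evaluation and logging cadence) is a guess, although the 200-step interval in the results table suggests evaluation every 200 steps.

```python
# Approximate reconstruction of the listed hyperparameters; unlisted values are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="lilt-en-funsd",       # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    max_steps=2500,                   # training_steps: 2500
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",      # assumed from the 200-step results table
    eval_steps=200,
    logging_steps=200,
)
```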
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
+ |:-------------:|:------:|:----:|:---------------:|:--------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
+ | 0.4108 | 10.53 | 200 | 0.9777 | {'precision': 0.8268590455049944, 'recall': 0.9118727050183598, 'f1': 0.8672875436554133, 'number': 817} | {'precision': 0.6464646464646465, 'recall': 0.5378151260504201, 'f1': 0.5871559633027523, 'number': 119} | {'precision': 0.8931860036832413, 'recall': 0.9006499535747446, 'f1': 0.8969024503005086, 'number': 1077} | 0.8528 | 0.8838 | 0.8680 | 0.8028 |
+ | 0.0416 | 21.05 | 400 | 1.3626 | {'precision': 0.8482446206115515, 'recall': 0.9167686658506732, 'f1': 0.8811764705882352, 'number': 817} | {'precision': 0.5266666666666666, 'recall': 0.6638655462184874, 'f1': 0.5873605947955389, 'number': 119} | {'precision': 0.891566265060241, 'recall': 0.89322191272052, 'f1': 0.8923933209647495, 'number': 1077} | 0.8475 | 0.8892 | 0.8679 | 0.8033 |
+ | 0.017 | 31.58 | 600 | 1.4784 | {'precision': 0.8646341463414634, 'recall': 0.8678090575275398, 'f1': 0.8662186927306048, 'number': 817} | {'precision': 0.6486486486486487, 'recall': 0.6050420168067226, 'f1': 0.6260869565217391, 'number': 119} | {'precision': 0.8519148936170213, 'recall': 0.9294336118848654, 'f1': 0.8889875666074601, 'number': 1077} | 0.8462 | 0.8852 | 0.8653 | 0.7964 |
+ | 0.0098 | 42.11 | 800 | 1.4397 | {'precision': 0.8267543859649122, 'recall': 0.9228886168910648, 'f1': 0.8721804511278195, 'number': 817} | {'precision': 0.5865384615384616, 'recall': 0.5126050420168067, 'f1': 0.5470852017937219, 'number': 119} | {'precision': 0.8928247048138056, 'recall': 0.9127205199628597, 'f1': 0.9026629935720845, 'number': 1077} | 0.8493 | 0.8932 | 0.8707 | 0.8077 |
+ | 0.004 | 52.63 | 1000 | 1.5432 | {'precision': 0.8721893491124261, 'recall': 0.9020807833537332, 'f1': 0.8868832731648616, 'number': 817} | {'precision': 0.5681818181818182, 'recall': 0.6302521008403361, 'f1': 0.597609561752988, 'number': 119} | {'precision': 0.8956999085086916, 'recall': 0.9090064995357474, 'f1': 0.9023041474654377, 'number': 1077} | 0.8652 | 0.8897 | 0.8773 | 0.8162 |
+ | 0.0025 | 63.16 | 1200 | 1.6970 | {'precision': 0.8733727810650888, 'recall': 0.9033047735618115, 'f1': 0.888086642599278, 'number': 817} | {'precision': 0.7215189873417721, 'recall': 0.4789915966386555, 'f1': 0.5757575757575758, 'number': 119} | {'precision': 0.8804251550044287, 'recall': 0.9229340761374187, 'f1': 0.901178603807797, 'number': 1077} | 0.8714 | 0.8887 | 0.8800 | 0.7970 |
+ | 0.0012 | 73.68 | 1400 | 1.6351 | {'precision': 0.8643274853801169, 'recall': 0.9045287637698899, 'f1': 0.8839712918660287, 'number': 817} | {'precision': 0.6115702479338843, 'recall': 0.6218487394957983, 'f1': 0.6166666666666667, 'number': 119} | {'precision': 0.8899821109123435, 'recall': 0.9238625812441968, 'f1': 0.9066059225512528, 'number': 1077} | 0.8634 | 0.8982 | 0.8804 | 0.8059 |
+ | 0.0006 | 84.21 | 1600 | 1.5729 | {'precision': 0.8616279069767442, 'recall': 0.9069767441860465, 'f1': 0.8837209302325582, 'number': 817} | {'precision': 0.6973684210526315, 'recall': 0.44537815126050423, 'f1': 0.5435897435897435, 'number': 119} | {'precision': 0.878868258178603, 'recall': 0.9229340761374187, 'f1': 0.9003623188405797, 'number': 1077} | 0.8650 | 0.8882 | 0.8765 | 0.8149 |
+ | 0.0008 | 94.74 | 1800 | 1.8110 | {'precision': 0.8455467869222097, 'recall': 0.9179926560587516, 'f1': 0.880281690140845, 'number': 817} | {'precision': 0.5522388059701493, 'recall': 0.6218487394957983, 'f1': 0.5849802371541502, 'number': 119} | {'precision': 0.9101851851851852, 'recall': 0.9127205199628597, 'f1': 0.9114510894761243, 'number': 1077} | 0.8601 | 0.8977 | 0.8785 | 0.7979 |
+ | 0.0003 | 105.26 | 2000 | 1.7278 | {'precision': 0.8635321100917431, 'recall': 0.9216646266829865, 'f1': 0.8916518650088809, 'number': 817} | {'precision': 0.591304347826087, 'recall': 0.5714285714285714, 'f1': 0.5811965811965812, 'number': 119} | {'precision': 0.8986301369863013, 'recall': 0.9136490250696379, 'f1': 0.9060773480662985, 'number': 1077} | 0.8670 | 0.8967 | 0.8816 | 0.8005 |
+ | 0.0004 | 115.79 | 2200 | 1.7088 | {'precision': 0.8802816901408451, 'recall': 0.9179926560587516, 'f1': 0.8987417615338527, 'number': 817} | {'precision': 0.6120689655172413, 'recall': 0.5966386554621849, 'f1': 0.6042553191489363, 'number': 119} | {'precision': 0.8869801084990958, 'recall': 0.9108635097493036, 'f1': 0.8987631699496106, 'number': 1077} | 0.8689 | 0.8952 | 0.8818 | 0.8051 |
+ | 0.0003 | 126.32 | 2400 | 1.7343 | {'precision': 0.865979381443299, 'recall': 0.9253365973072215, 'f1': 0.8946745562130176, 'number': 817} | {'precision': 0.6442307692307693, 'recall': 0.5630252100840336, 'f1': 0.600896860986547, 'number': 119} | {'precision': 0.8937329700272479, 'recall': 0.9136490250696379, 'f1': 0.9035812672176309, 'number': 1077} | 0.8696 | 0.8977 | 0.8834 | 0.8048 |
+
+
+ ### Framework versions
+
+ - Transformers 4.28.1
+ - Pytorch 1.13.1+cu117
+ - Datasets 2.11.0
+ - Tokenizers 0.13.3
logs/events.out.tfevents.1682547774.datascience-1-0-ml-g4dn-xlarge-94fad2f4401e538ca1255dfa1e84.1328.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7d7468e71a50fb53e58427dcd1ae80c917aeb785f8665ad2bae23ad5770e5beb
- size 12271
+ oid sha256:42abcc2f47254c9eed2e9bf1baa22784744b89632a77be6a548098c883ec2fea
+ size 12625
logs/events.out.tfevents.1682551221.datascience-1-0-ml-g4dn-xlarge-94fad2f4401e538ca1255dfa1e84.1328.2 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:39949b4547493ee0b4c83160fa8743b792c03d1a28c0d2c5e8bbe74b064007ed
+ size 544
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
preprocessor_config.json ADDED
@@ -0,0 +1,26 @@
+ {
+   "apply_ocr": true,
+   "do_normalize": true,
+   "do_rescale": true,
+   "do_resize": true,
+   "image_mean": [
+     0.5,
+     0.5,
+     0.5
+   ],
+   "image_processor_type": "LayoutLMv3FeatureExtractor",
+   "image_std": [
+     0.5,
+     0.5,
+     0.5
+   ],
+   "ocr_lang": null,
+   "processor_class": "LayoutLMv3Processor",
+   "resample": 2,
+   "rescale_factor": 0.00392156862745098,
+   "size": {
+     "height": 224,
+     "width": 224
+   },
+   "tesseract_config": ""
+ }
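In short, the bundled processor OCRs each page with Tesseract (`apply_ocr: true`), rescales pixels by 1/255, resizes to 224×224, and normalizes with mean/std 0.5, even though LiLT only consumes the resulting words and boxes. A small loading sketch (repository id and image path assumed, as before):

```python
# Loading the processor described by this preprocessor_config.json.
# Requires Tesseract + pytesseract because apply_ocr is true.
from PIL import Image
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("mikeseb/lilt-en-funsd")  # assumed repo id
image = Image.open("form.png").convert("RGB")                        # hypothetical file

encoding = processor(image, return_tensors="pt")
# OCR'd words become input_ids and their 0-1000 normalized boxes become bbox;
# pixel_values are also returned but LiLT does not use them.
print(list(encoding.keys()))
```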
special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "cls_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "<mask>",
+     "lstrip": true,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<pad>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,85 @@
+ {
+   "add_prefix_space": true,
+   "bos_token": {
+     "__type": "AddedToken",
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": {
+     "__type": "AddedToken",
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "cls_token_box": [
+     0,
+     0,
+     0,
+     0
+   ],
+   "eos_token": {
+     "__type": "AddedToken",
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "errors": "replace",
+   "mask_token": {
+     "__type": "AddedToken",
+     "content": "<mask>",
+     "lstrip": true,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "model_max_length": 512,
+   "only_label_first_subword": true,
+   "pad_token": {
+     "__type": "AddedToken",
+     "content": "<pad>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token_box": [
+     0,
+     0,
+     0,
+     0
+   ],
+   "pad_token_label": -100,
+   "processor_class": "LayoutLMv3Processor",
+   "sep_token": {
+     "__type": "AddedToken",
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token_box": [
+     0,
+     0,
+     0,
+     0
+   ],
+   "tokenizer_class": "LayoutLMv3Tokenizer",
+   "trim_offsets": true,
+   "unk_token": {
+     "__type": "AddedToken",
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
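The layout-specific fields here are worth noting: special tokens receive the dummy box `[0, 0, 0, 0]` (`cls_token_box`, `sep_token_box`, `pad_token_box`), padded positions get label `-100` (`pad_token_label`) so the loss ignores them, and `only_label_first_subword` keeps one label per word. A hedged sketch with invented words, boxes, and label ids (repository id assumed as before):

```python
# Sketch of encoding pre-extracted words and boxes, as in the funsd-layoutlmv3
# dataset, with the settings from this tokenizer_config.json. Inputs are made up.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mikeseb/lilt-en-funsd")  # assumed repo id

words = ["Date:", "12/03/1999"]                   # hypothetical form words
boxes = [[50, 40, 120, 60], [130, 40, 260, 60]]   # 0-1000 normalized boxes
word_labels = [1, 2]                              # hypothetical label ids

encoding = tokenizer(
    words,
    boxes=boxes,
    word_labels=word_labels,
    padding="max_length",
    truncation=True,
    return_tensors="pt",
)
# Special and padding tokens get box [0, 0, 0, 0] and label -100; only the
# first subword of each word keeps its label.
print(encoding["bbox"].shape, encoding["labels"][0, :8])
```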
vocab.json ADDED
The diff for this file is too large to render. See raw diff