tkazusa committed on
Commit
9ab159b
1 Parent(s): 4772852

End of training

README.md CHANGED
@@ -16,14 +16,14 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on the funsd-layoutlmv3 dataset.
 It achieves the following results on the evaluation set:
- - Loss: 1.1677
- - Answer: {'precision': 0.8481735159817352, 'recall': 0.9094247246022031, 'f1': 0.8777318369757826, 'number': 817}
- - Header: {'precision': 0.5725806451612904, 'recall': 0.5966386554621849, 'f1': 0.5843621399176955, 'number': 119}
- - Question: {'precision': 0.8793418647166362, 'recall': 0.89322191272052, 'f1': 0.8862275449101795, 'number': 1077}
- - Overall Precision: 0.8481
- - Overall Recall: 0.8823
- - Overall F1: 0.8649
- - Overall Accuracy: 0.7998
+ - Loss: 1.6459
+ - Answer: {'precision': 0.8831942789034565, 'recall': 0.9069767441860465, 'f1': 0.894927536231884, 'number': 817}
+ - Header: {'precision': 0.6213592233009708, 'recall': 0.5378151260504201, 'f1': 0.5765765765765765, 'number': 119}
+ - Question: {'precision': 0.8998178506375227, 'recall': 0.9173630454967502, 'f1': 0.9085057471264367, 'number': 1077}
+ - Overall Precision: 0.8789
+ - Overall Recall: 0.8907
+ - Overall F1: 0.8848
+ - Overall Accuracy: 0.8068
 
 ## Model description
 
@@ -48,15 +48,23 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
- - training_steps: 500
+ - training_steps: 2000
 - mixed_precision_training: Native AMP
 
 ### Training results
 
- | Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
- |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:--------:|:-----------------:|:--------------:|:----------:|:----------------:|
- | 0.4341 | 10.53 | 200 | 0.8988 | {'precision': 0.8283166109253066, 'recall': 0.9094247246022031, 'f1': 0.8669778296382731, 'number': 817} | {'precision': 0.6630434782608695, 'recall': 0.5126050420168067, 'f1': 0.5781990521327014, 'number': 119} | {'precision': 0.8692170818505338, 'recall': 0.9071494893221913, 'f1': 0.8877782825988189, 'number': 1077} | 0.8429 | 0.8847 | 0.8633 | 0.7895 |
- | 0.0382 | 21.05 | 400 | 1.1677 | {'precision': 0.8481735159817352, 'recall': 0.9094247246022031, 'f1': 0.8777318369757826, 'number': 817} | {'precision': 0.5725806451612904, 'recall': 0.5966386554621849, 'f1': 0.5843621399176955, 'number': 119} | {'precision': 0.8793418647166362, 'recall': 0.89322191272052, 'f1': 0.8862275449101795, 'number': 1077} | 0.8481 | 0.8823 | 0.8649 | 0.7998 |
+ | Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
+ |:-------------:|:------:|:----:|:---------------:|:------:|:------:|:--------:|:-----------------:|:--------------:|:----------:|:----------------:|
+ | 0.4201 | 10.53 | 200 | 0.8003 | {'precision': 0.8321995464852607, 'recall': 0.8984088127294981, 'f1': 0.8640376692171865, 'number': 817} | {'precision': 0.5714285714285714, 'recall': 0.5714285714285714, 'f1': 0.5714285714285714, 'number': 119} | {'precision': 0.8651079136690647, 'recall': 0.89322191272052, 'f1': 0.8789401553220649, 'number': 1077} | 0.8348 | 0.8763 | 0.8551 | 0.8104 |
+ | 0.0376 | 21.05 | 400 | 1.3158 | {'precision': 0.8395904436860068, 'recall': 0.9033047735618115, 'f1': 0.8702830188679245, 'number': 817} | {'precision': 0.4785714285714286, 'recall': 0.5630252100840336, 'f1': 0.5173745173745175, 'number': 119} | {'precision': 0.8887814313346228, 'recall': 0.8532961931290622, 'f1': 0.8706774040738986, 'number': 1077} | 0.8397 | 0.8564 | 0.8480 | 0.7934 |
+ | 0.0119 | 31.58 | 600 | 1.4791 | {'precision': 0.8752941176470588, 'recall': 0.9106487148102815, 'f1': 0.8926214757048591, 'number': 817} | {'precision': 0.5401459854014599, 'recall': 0.6218487394957983, 'f1': 0.578125, 'number': 119} | {'precision': 0.8818681318681318, 'recall': 0.8941504178272981, 'f1': 0.8879668049792531, 'number': 1077} | 0.8567 | 0.8847 | 0.8705 | 0.7961 |
+ | 0.0061 | 42.11 | 800 | 1.5605 | {'precision': 0.8617886178861789, 'recall': 0.9082007343941249, 'f1': 0.8843861740166865, 'number': 817} | {'precision': 0.5963302752293578, 'recall': 0.5462184873949579, 'f1': 0.5701754385964912, 'number': 119} | {'precision': 0.8747763864042933, 'recall': 0.9080779944289693, 'f1': 0.8911161731207289, 'number': 1077} | 0.8549 | 0.8867 | 0.8705 | 0.7965 |
+ | 0.0026 | 52.63 | 1000 | 1.5172 | {'precision': 0.8596491228070176, 'recall': 0.8996328029375765, 'f1': 0.8791866028708135, 'number': 817} | {'precision': 0.7176470588235294, 'recall': 0.5126050420168067, 'f1': 0.5980392156862744, 'number': 119} | {'precision': 0.8737864077669902, 'recall': 0.9192200557103064, 'f1': 0.8959276018099548, 'number': 1077} | 0.8616 | 0.8872 | 0.8742 | 0.8014 |
+ | 0.0019 | 63.16 | 1200 | 1.6132 | {'precision': 0.8735224586288416, 'recall': 0.9045287637698899, 'f1': 0.888755261575466, 'number': 817} | {'precision': 0.6460176991150443, 'recall': 0.6134453781512605, 'f1': 0.6293103448275863, 'number': 119} | {'precision': 0.881508078994614, 'recall': 0.9117920148560817, 'f1': 0.8963943404837974, 'number': 1077} | 0.8654 | 0.8912 | 0.8781 | 0.8040 |
+ | 0.0012 | 73.68 | 1400 | 1.6459 | {'precision': 0.8831942789034565, 'recall': 0.9069767441860465, 'f1': 0.894927536231884, 'number': 817} | {'precision': 0.6213592233009708, 'recall': 0.5378151260504201, 'f1': 0.5765765765765765, 'number': 119} | {'precision': 0.8998178506375227, 'recall': 0.9173630454967502, 'f1': 0.9085057471264367, 'number': 1077} | 0.8789 | 0.8907 | 0.8848 | 0.8068 |
+ | 0.0005 | 84.21 | 1600 | 1.5619 | {'precision': 0.8602771362586605, 'recall': 0.9118727050183598, 'f1': 0.8853238265002972, 'number': 817} | {'precision': 0.6631578947368421, 'recall': 0.5294117647058824, 'f1': 0.5887850467289719, 'number': 119} | {'precision': 0.8944494995450409, 'recall': 0.9127205199628597, 'f1': 0.9034926470588234, 'number': 1077} | 0.8694 | 0.8897 | 0.8795 | 0.8155 |
+ | 0.0003 | 94.74 | 1800 | 1.6571 | {'precision': 0.8649592549476135, 'recall': 0.9094247246022031, 'f1': 0.886634844868735, 'number': 817} | {'precision': 0.6391752577319587, 'recall': 0.5210084033613446, 'f1': 0.5740740740740741, 'number': 119} | {'precision': 0.8971792538671519, 'recall': 0.9155060352831941, 'f1': 0.90625, 'number': 1077} | 0.8715 | 0.8897 | 0.8805 | 0.8098 |
+ | 0.0003 | 105.26 | 2000 | 1.6731 | {'precision': 0.8672875436554133, 'recall': 0.9118727050183598, 'f1': 0.8890214797136038, 'number': 817} | {'precision': 0.62, 'recall': 0.5210084033613446, 'f1': 0.5662100456621004, 'number': 119} | {'precision': 0.9008264462809917, 'recall': 0.9108635097493036, 'f1': 0.9058171745152355, 'number': 1077} | 0.8730 | 0.8882 | 0.8806 | 0.8071 |
 
 
 ### Framework versions
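
The per-entity metrics in the diff above each report precision, recall, and F1. As a quick sanity check (a standalone sketch, not part of the training code), the F1 values can be recomputed as the harmonic mean of precision and recall — e.g. for the final "Answer" entry:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# "Answer" metrics from the final evaluation in the updated card
answer = {"precision": 0.8831942789034565, "recall": 0.9069767441860465}
print(round(f1_score(answer["precision"], answer["recall"]), 4))  # → 0.8949
```

This matches the card's reported `f1` of 0.8949... for that entity, confirming the numbers are internally consistent.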
logs/events.out.tfevents.1671063332.ip-172-31-89-140.224198.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:3a9e51d118f2672c8f700b71d799b1a8cdf176207ca7b24e5d1c958bffa62195
- size 10730
+ oid sha256:efafdd6519bd1a51fa6d3a88553a99df2b10ce36006368fba13070c51c3452cf
+ size 11084
logs/events.out.tfevents.1671064674.ip-172-31-89-140.224198.2 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0122dee149d99bd8572789f9f233f6f4b15354cb46d1b3f09ce624c594e8b4cd
+ size 544
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:65c264505eedb0fafb367a1d9395e1a7284eafdf2ecc02d2f6c25e3adb2c18c8
+ oid sha256:66e4dc9a4b6302dcf11eb891f6ef2ed2a932cb3e62fa3369723ab1583f9606fd
 size 520821201