---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- cord-layoutlmv3
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-cord_100
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: cord-layoutlmv3
      type: cord-layoutlmv3
      config: cord
      split: test
      args: cord
    metrics:
    - name: Precision
      type: precision
      value: 0.9407407407407408
    - name: Recall
      type: recall
      value: 0.9505988023952096
    - name: F1
      type: f1
      value: 0.9456440804169769
    - name: Accuracy
      type: accuracy
      value: 0.9584040747028862
---

# layoutlmv3-finetuned-cord_100

This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the cord-layoutlmv3 dataset.
It achieves the following results on the evaluation set:

- Loss: 0.2012
- Precision: 0.9407
- Recall: 0.9506
- F1: 0.9456
- Accuracy: 0.9584

## Model description

LayoutLMv3 is a multimodal Transformer for document AI that jointly encodes text, layout (word bounding boxes), and image patches. This checkpoint adds a token-classification head to the base model and fine-tunes it to label words on receipt images with CORD's semantic field categories (menu items, prices, totals, and so on).

## Intended uses & limitations

The model is intended for key information extraction from receipt images: classifying OCR'd words into CORD's field types. It was trained only on CORD receipts, so it should not be expected to transfer to other document types without further fine-tuning, and the cc-by-nc-sa-4.0 license excludes commercial use. A minimal inference sketch follows.

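Below is a minimal inference sketch, not an official usage guide. It assumes the checkpoint is hosted at `Abhishek92kumar/layoutlmv3-finetuned-cord_100` (this repository's id), that `pytesseract` is installed so the processor can run its built-in OCR, and that a receipt scan is available locally as `receipt.png`.

```python
# Minimal inference sketch (assumptions: repo id, pytesseract installed,
# a local receipt image). Not taken verbatim from the author's code.
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForTokenClassification

repo = "Abhishek92kumar/layoutlmv3-finetuned-cord_100"  # assumed repo id
processor = AutoProcessor.from_pretrained(repo, apply_ocr=True)
model = AutoModelForTokenClassification.from_pretrained(repo)

image = Image.open("receipt.png").convert("RGB")

# With apply_ocr=True the processor extracts words and boxes via Tesseract.
encoding = processor(image, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**encoding).logits

# Map each token's argmax id to its CORD label name.
predictions = logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[p] for p in predictions])
```
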
## Training and evaluation data

The model was fine-tuned and evaluated on the cord-layoutlmv3 dataset (config `cord`, metrics reported on the test split), a LayoutLMv3-ready packaging of CORD, the Consolidated Receipt Dataset for post-OCR parsing.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged reconstruction as `TrainingArguments` follows this list):
- learning_rate: 1e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500

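For reproducibility, here is one way to express this configuration as `transformers.TrainingArguments`. The `output_dir` is a placeholder and the 250-step evaluation cadence is inferred from the results table below, not stated above; the Adam betas and epsilon are spelled out even though they are the library defaults.

```python
# Hedged reconstruction of the listed hyperparameters; output_dir is a
# placeholder and the eval cadence is inferred from the results table.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="layoutlmv3-finetuned-cord_100",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=5,
    per_device_eval_batch_size=5,
    seed=42,
    max_steps=2500,
    lr_scheduler_type="linear",
    adam_beta1=0.9,    # transformers defaults, matching the values above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",
    eval_steps=250,    # evaluation every 250 steps, per the table below
)
```
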
### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.56  | 250  | 1.0522          | 0.6964    | 0.7657 | 0.7294 | 0.7831   |
| 1.4089        | 3.12  | 500  | 0.5725          | 0.8400    | 0.8645 | 0.8521 | 0.8786   |
| 1.4089        | 4.69  | 750  | 0.3936          | 0.8720    | 0.9027 | 0.8871 | 0.9104   |
| 0.3977        | 6.25  | 1000 | 0.3240          | 0.9204    | 0.9349 | 0.9276 | 0.9397   |
| 0.3977        | 7.81  | 1250 | 0.2827          | 0.9244    | 0.9341 | 0.9293 | 0.9414   |
| 0.2176        | 9.38  | 1500 | 0.2381          | 0.9225    | 0.9349 | 0.9286 | 0.9452   |
| 0.2176        | 10.94 | 1750 | 0.2497          | 0.9161    | 0.9319 | 0.9239 | 0.9419   |
| 0.1565        | 12.50 | 2000 | 0.2149          | 0.9392    | 0.9484 | 0.9438 | 0.9520   |
| 0.1565        | 14.06 | 2250 | 0.2075          | 0.9348    | 0.9446 | 0.9397 | 0.9542   |
| 0.1192        | 15.62 | 2500 | 0.2012          | 0.9407    | 0.9506 | 0.9456 | 0.9584   |

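The precision, recall, and F1 values in this table are entity-level scores of the kind produced by seqeval. The sketch below shows how such a `compute_metrics` hook is commonly written for token classification; the `evaluate` dependency and the `label_list` argument are assumptions, not pinned by this card, and `Trainer` would need a single-argument wrapper (e.g. via `functools.partial`).

```python
# Sketch of a seqeval-based compute_metrics, as commonly used for token
# classification; `evaluate`/`seqeval` and `label_list` are assumptions.
import numpy as np
import evaluate

metric = evaluate.load("seqeval")  # requires the seqeval package

def compute_metrics(eval_pred, label_list):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=2)

    # Drop special and padding tokens, which carry the ignore index -100.
    true_predictions = [
        [label_list[p] for (p, l) in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for (p, l) in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]

    results = metric.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```
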
### Framework versions

- Transformers 4.28.0
- PyTorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
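
To recreate this environment, the pinned installs would be roughly `pip install transformers==4.28.0 datasets==2.12.0 tokenizers==0.13.3`, plus PyTorch 2.0.1 built against CUDA 11.8 from the official PyTorch channels.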