---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: layoutxlm-tokenclass-finetuned
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# layoutxlm-tokenclass-finetuned

This model is a fine-tuned version of [microsoft/layoutxlm-base](https://huggingface.co/microsoft/layoutxlm-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2039
- Answer Precision: 0.9231
- Answer Recall: 0.9180
- Answer F1: 0.9205
- Answer Number: 366
- Header Precision: 0.8194
- Header Recall: 0.9219
- Header F1: 0.8676
- Header Number: 64
- Question Precision: 0.9115
- Question Recall: 0.9428
- Question F1: 0.9269
- Question Number: 437
- Overall Precision: 0.9088
- Overall Recall: 0.9308
- Overall F1: 0.9197
- Overall Accuracy: 0.9758
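
Each F1 score above is the harmonic mean of the corresponding precision and recall. A quick self-contained check (pure Python, no dependencies; the numbers are copied from the evaluation results above):

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Values copied from the evaluation results above.
print(round(f1(0.9231, 0.9180), 4))  # Answer F1   -> 0.9205
print(round(f1(0.8194, 0.9219), 4))  # Header F1   -> 0.8676
print(round(f1(0.9115, 0.9428), 4))  # Question F1 -> 0.9269
print(round(f1(0.9088, 0.9308), 4))  # Overall F1  -> 0.9197
```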

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 5000
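
With a warmup ratio of 0.1 over 5000 training steps, the learning rate ramps linearly from 0 to 1e-05 during the first 500 steps, then decays linearly back to 0 at step 5000. A minimal sketch of that schedule (mirroring the behavior of the linear-with-warmup scheduler in `transformers`; the function name is illustrative):

```python
def linear_warmup_lr(step: int, base_lr: float = 1e-05,
                     total_steps: int = 5000, warmup_ratio: float = 0.1) -> float:
    """Linear warmup to base_lr, then linear decay to 0 (illustrative sketch)."""
    warmup_steps = int(total_steps * warmup_ratio)  # 500 steps here
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    # Linear decay from base_lr at the end of warmup down to 0 at total_steps.
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

For example, the rate peaks at 1e-05 at step 500 and is down to 5e-06 halfway through the decay phase (step 2750).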

### Training results



### Framework versions

- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3