Kwan0 committed
Commit b44d6c8
1 Parent(s): de58294

Update README.md

Files changed (1)
  1. README.md +74 -1

README.md CHANGED
@@ -1,3 +1,76 @@
  ---
- license: cc-by-nc-sa-4.0
+ tags:
+ - generated_from_trainer
+ datasets:
+ - nielsr/funsd-layoutlmv3
+ metrics:
+ - precision
+ - recall
+ - f1
+ - accuracy
+ base_model: microsoft/layoutlmv3-base
+ model-index:
+ - name: layoutlmv3-finetuned-funsd
+   results:
+   - task:
+       type: token-classification
+       name: Token Classification
+     dataset:
+       name: nielsr/funsd-layoutlmv3
+       type: nielsr/funsd-layoutlmv3
+       args: funsd
+     metrics:
+     - type: precision
+       value: 0.9026198714780029
+       name: Precision
+     - type: recall
+       value: 0.913
+       name: Recall
+     - type: f1
+       value: 0.9077802634849614
+       name: F1
+     - type: accuracy
+       value: 0.8330271015158475
+       name: Accuracy
  ---
+
+ # layoutlmv3-finetuned-funsd
+
+ This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the pierreguillou/DocLayNet-large dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.33888205885887146
+ - Precision: 0.8478835766832817
+ - Recall: 0.8934488524091807
+ - F1: 0.8700700634847538
+ - Accuracy: 0.9574140990541197
+
+ The training script can be found here: https://github.com/huggingface/transformers/tree/main/examples/research_projects/layoutlmv3
+
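As a quick sanity check (not part of the original card), the reported F1 should be the harmonic mean of the reported precision and recall:

```python
# Sanity check: F1 = 2 * P * R / (P + R) for the evaluation numbers above.
precision = 0.8478835766832817
recall = 0.8934488524091807

f1 = 2 * precision * recall / (precision + recall)
print(f1)  # agrees with the reported 0.8700700634847538 to within rounding
```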
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 1e-05
+ - train_batch_size: 2
+ - eval_batch_size: 2
+ - training_steps: 100000
+
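For reference, the hyperparameters above map onto the field names used by `transformers.TrainingArguments` roughly as sketched below. This is an assumption-laden sketch, shown as a plain dict so it runs without `transformers` installed; every training setting not listed in the card is unknown.

```python
# Hedged sketch: the card's hyperparameters keyed by the corresponding
# transformers.TrainingArguments field names (mapping is an assumption).
training_config = {
    "learning_rate": 1e-05,             # "learning_rate" in the card
    "per_device_train_batch_size": 2,   # "train_batch_size" in the card
    "per_device_eval_batch_size": 2,    # "eval_batch_size" in the card
    "max_steps": 100_000,               # "training_steps" in the card
}

for name, value in training_config.items():
    print(f"{name} = {value}")
```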
+ ### Framework versions
+
+ - Transformers 4.33.3
+ - Pytorch 1.11.0+cu115
+ - Datasets 2.14.5
+ - Tokenizers 0.13.3