katanaml committed on
Commit
9d29325
1 Parent(s): e16383a

update model card README.md

Files changed (1)
  1. README.md +5 -14
README.md CHANGED
@@ -2,18 +2,9 @@
 license: cc-by-nc-sa-4.0
 tags:
 - generated_from_trainer
-datasets:
-- katanaml/cord
 model-index:
 - name: layoutlmv2-finetuned-cord
-  results:
-  - task:
-      name: Token Classification
-      type: token-classification
-    dataset:
-      name: katanaml/cord
-      type: katanaml/cord
-      args: katanaml/cord
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -21,11 +12,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 # layoutlmv2-finetuned-cord
 
-This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on the CORD dataset.
+This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on an unknown dataset.
 
 ## Model description
 
-Model implementation code [Sparrow](https://github.com/katanaml/sparrow)
+More information needed
 
 ## Intended uses & limitations
 
@@ -47,7 +38,7 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
-- training_steps: 1500
+- training_steps: 3000
 - mixed_precision_training: Native AMP
 
 ### Training results
@@ -58,5 +49,5 @@ The following hyperparameters were used during training:
 
 - Transformers 4.17.0
 - Pytorch 1.10.0+cu111
-- Datasets 1.18.3
+- Datasets 1.18.4
 - Tokenizers 0.11.6
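The diff bumps `training_steps` from 1500 to 3000 while keeping `lr_scheduler_type: linear` and `lr_scheduler_warmup_ratio: 0.1`. A minimal sketch of what that schedule implies, mirroring the linear-with-warmup behavior of `transformers`' scheduler (the function name and the base learning rate here are illustrative assumptions, not values from the card):

```python
def linear_warmup_lr(step: int, base_lr: float, total_steps: int, warmup_ratio: float) -> float:
    """Linear warmup to base_lr, then linear decay to 0 at total_steps."""
    warmup_steps = int(total_steps * warmup_ratio)  # 3000 * 0.1 = 300 warmup steps
    if step < warmup_steps:
        # Ramp the learning rate up from 0 to base_lr over the warmup phase.
        return base_lr * step / max(1, warmup_steps)
    # After warmup, decay linearly from base_lr down to 0 at total_steps.
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

# With the card's new settings, the peak LR is reached at step 300
# and the schedule reaches 0 at step 3000 (base_lr of 5e-5 is a placeholder).
print(linear_warmup_lr(300, 5e-5, 3000, 0.1))   # peak of the schedule
print(linear_warmup_lr(3000, 5e-5, 3000, 0.1))  # end of training
```

Doubling `training_steps` with a fixed warmup ratio also doubles the warmup phase (150 → 300 steps) and stretches the decay accordingly.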