---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
base_model: renjithks/layoutlmv1-cord-ner
model-index:
- name: layoutlmv1-er-ner
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# layoutlmv1-er-ner

This model is a fine-tuned version of [renjithks/layoutlmv1-cord-ner](https://huggingface.co/renjithks/layoutlmv1-cord-ner) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2092
- Precision: 0.7202
- Recall: 0.7238
- F1: 0.7220
- Accuracy: 0.9639
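
The card does not include a usage example, so here is a minimal inference sketch for the fine-tuned checkpoint. The repo id `renjithks/layoutlmv1-er-ner` and the example words/boxes are assumptions; LayoutLM expects word-level bounding boxes normalized to a 0-1000 coordinate range, typically taken from OCR output.

```python
# Minimal inference sketch (assumed repo id and dummy OCR words/boxes).
import torch
from transformers import LayoutLMForTokenClassification, LayoutLMTokenizerFast

model_id = "renjithks/layoutlmv1-er-ner"  # assumption: fine-tuned repo id
tokenizer = LayoutLMTokenizerFast.from_pretrained(model_id)
model = LayoutLMForTokenClassification.from_pretrained(model_id)
model.eval()

# Example OCR output: one box per word, already scaled to the 0-1000 range.
words = ["Total", "42.00"]
boxes = [[100, 500, 180, 520], [600, 500, 680, 520]]

# Tokenize word-by-word and repeat each word's box for its sub-tokens.
encoding = tokenizer(words, is_split_into_words=True,
                     return_tensors="pt", truncation=True)
word_ids = encoding.word_ids(batch_index=0)
token_boxes = [[0, 0, 0, 0] if wid is None else boxes[wid] for wid in word_ids]
encoding["bbox"] = torch.tensor([token_boxes])

with torch.no_grad():
    logits = model(**encoding).logits        # (1, seq_len, num_labels)

predictions = logits.argmax(-1).squeeze(0).tolist()
# Map predictions back to words (sub-tokens repeat their word).
pairs = [(words[word_ids[i]], model.config.id2label[p])
         for i, p in enumerate(predictions) if word_ids[i] is not None]
print(pairs)
```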

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
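
These settings map directly onto `transformers.TrainingArguments`. The sketch below is an assumption about how the run was configured, not the author's exact script; `output_dir` and `evaluation_strategy` are placeholders. Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the `Trainer`'s default optimizer settings, so no extra optimizer code is needed.

```python
# Hedged reconstruction of the hyperparameters above (Transformers 4.18-era API).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="layoutlmv1-er-ner",      # assumption: output directory
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    evaluation_strategy="epoch",         # assumption: eval once per epoch
)
# These args would then be passed to a Trainer together with the model,
# the train/eval datasets, and a compute_metrics hook (see below).
```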

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 41   | 0.2444          | 0.4045    | 0.3996 | 0.4020 | 0.9226   |
| No log        | 2.0   | 82   | 0.1640          | 0.5319    | 0.6098 | 0.5682 | 0.9455   |
| No log        | 3.0   | 123  | 0.1531          | 0.6324    | 0.6614 | 0.6466 | 0.9578   |
| No log        | 4.0   | 164  | 0.1440          | 0.6927    | 0.6743 | 0.6834 | 0.9620   |
| No log        | 5.0   | 205  | 0.1520          | 0.6750    | 0.6958 | 0.6853 | 0.9613   |
| No log        | 6.0   | 246  | 0.1597          | 0.6840    | 0.6987 | 0.6913 | 0.9605   |
| No log        | 7.0   | 287  | 0.1910          | 0.7002    | 0.6887 | 0.6944 | 0.9605   |
| No log        | 8.0   | 328  | 0.1860          | 0.6834    | 0.6923 | 0.6878 | 0.9609   |
| No log        | 9.0   | 369  | 0.1665          | 0.6785    | 0.7102 | 0.6940 | 0.9624   |
| No log        | 10.0  | 410  | 0.1816          | 0.7016    | 0.7052 | 0.7034 | 0.9624   |
| No log        | 11.0  | 451  | 0.1808          | 0.6913    | 0.7166 | 0.7038 | 0.9638   |
| No log        | 12.0  | 492  | 0.2165          | 0.7120    | 0.7023 | 0.7071 | 0.9628   |
| 0.1014        | 13.0  | 533  | 0.2135          | 0.6979    | 0.7109 | 0.7043 | 0.9613   |
| 0.1014        | 14.0  | 574  | 0.2154          | 0.6906    | 0.7109 | 0.7006 | 0.9612   |
| 0.1014        | 15.0  | 615  | 0.2118          | 0.6902    | 0.7016 | 0.6958 | 0.9615   |
| 0.1014        | 16.0  | 656  | 0.2091          | 0.6985    | 0.7080 | 0.7032 | 0.9623   |
| 0.1014        | 17.0  | 697  | 0.2104          | 0.7118    | 0.7123 | 0.7121 | 0.9630   |
| 0.1014        | 18.0  | 738  | 0.2081          | 0.7129    | 0.7231 | 0.7179 | 0.9638   |
| 0.1014        | 19.0  | 779  | 0.2093          | 0.7205    | 0.7231 | 0.7218 | 0.9638   |
| 0.1014        | 20.0  | 820  | 0.2092          | 0.7202    | 0.7238 | 0.7220 | 0.9639   |
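
The per-epoch precision, recall, F1, and accuracy above are the figures a seqeval-based `compute_metrics` hook reports when plugged into the `Trainer`. The sketch below is an assumption about that setup, not the author's exact code: `label_list` stands in for the (unpublished) label names in id order, and padded/special positions are expected to carry the label `-100`.

```python
# Hedged sketch of a seqeval compute_metrics hook (Datasets 2.1.0-era API).
import numpy as np
from datasets import load_metric

metric = load_metric("seqeval")
label_list = ["O"]  # placeholder: the real label names in id order are not published

def compute_metrics(eval_pred):
    """Convert logits to label names, drop -100 positions, run seqeval."""
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)

    true_labels, true_predictions = [], []
    for pred_row, label_row in zip(predictions, labels):
        true_labels.append([label_list[l] for l in label_row if l != -100])
        true_predictions.append(
            [label_list[p] for p, l in zip(pred_row, label_row) if l != -100])

    results = metric.compute(predictions=true_predictions,
                             references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```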


### Framework versions

- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1