Noureddinesa committed
Commit b68175d
1 Parent(s): 432ed61

End of training

README.md CHANGED
@@ -3,12 +3,14 @@ license: cc-by-nc-sa-4.0
  base_model: microsoft/layoutlmv3-large
  tags:
  - generated_from_trainer
+ metrics:
+ - precision
+ - recall
+ - f1
+ - accuracy
  model-index:
  - name: Output_LayoutLMv3
    results: []
- metrics:
- - accuracy
- - f1
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -17,6 +19,12 @@ should probably proofread and complete it, then remove this comment. -->
  # Output_LayoutLMv3

  This model is a fine-tuned version of [microsoft/layoutlmv3-large](https://huggingface.co/microsoft/layoutlmv3-large) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.2507
+ - Precision: 0.8319
+ - Recall: 0.8319
+ - F1: 0.8319
+ - Accuracy: 0.9771

  ## Model description

@@ -41,10 +49,42 @@ The following hyperparameters were used during training:
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
- - training_steps: 20
+ - training_steps: 3000

  ### Training results

+ | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
+ | No log        | 2.27  | 100  | 0.1116          | 0.7705    | 0.8319 | 0.8    | 0.9676   |
+ | No log        | 4.55  | 200  | 0.1130          | 0.8319    | 0.8540 | 0.8428 | 0.9762   |
+ | No log        | 6.82  | 300  | 0.1707          | 0.7931    | 0.8142 | 0.8035 | 0.9686   |
+ | No log        | 9.09  | 400  | 0.1998          | 0.7521    | 0.7920 | 0.7716 | 0.9648   |
+ | 0.0744        | 11.36 | 500  | 0.1633          | 0.8210    | 0.8319 | 0.8264 | 0.9752   |
+ | 0.0744        | 13.64 | 600  | 0.1784          | 0.8182    | 0.8363 | 0.8271 | 0.9752   |
+ | 0.0744        | 15.91 | 700  | 0.1909          | 0.8095    | 0.8274 | 0.8184 | 0.9724   |
+ | 0.0744        | 18.18 | 800  | 0.1962          | 0.7974    | 0.8186 | 0.8079 | 0.9724   |
+ | 0.0744        | 20.45 | 900  | 0.1723          | 0.8412    | 0.8673 | 0.8540 | 0.9781   |
+ | 0.0081        | 22.73 | 1000 | 0.2109          | 0.8210    | 0.8319 | 0.8264 | 0.9733   |
+ | 0.0081        | 25.0  | 1100 | 0.2194          | 0.8087    | 0.8230 | 0.8158 | 0.9743   |
+ | 0.0081        | 27.27 | 1200 | 0.2076          | 0.8465    | 0.8540 | 0.8502 | 0.9771   |
+ | 0.0081        | 29.55 | 1300 | 0.1883          | 0.8688    | 0.8496 | 0.8591 | 0.9819   |
+ | 0.0081        | 31.82 | 1400 | 0.2042          | 0.8170    | 0.8496 | 0.8330 | 0.9771   |
+ | 0.0034        | 34.09 | 1500 | 0.2144          | 0.8261    | 0.8407 | 0.8333 | 0.9771   |
+ | 0.0034        | 36.36 | 1600 | 0.1953          | 0.8205    | 0.8496 | 0.8348 | 0.9771   |
+ | 0.0034        | 38.64 | 1700 | 0.2259          | 0.8267    | 0.8230 | 0.8248 | 0.9762   |
+ | 0.0034        | 40.91 | 1800 | 0.2553          | 0.7974    | 0.8186 | 0.8079 | 0.9714   |
+ | 0.0034        | 43.18 | 1900 | 0.2238          | 0.8377    | 0.8451 | 0.8414 | 0.9781   |
+ | 0.0006        | 45.45 | 2000 | 0.2245          | 0.8451    | 0.8451 | 0.8451 | 0.9790   |
+ | 0.0006        | 47.73 | 2100 | 0.2389          | 0.8326    | 0.8142 | 0.8233 | 0.9762   |
+ | 0.0006        | 50.0  | 2200 | 0.2500          | 0.8251    | 0.8142 | 0.8196 | 0.9752   |
+ | 0.0006        | 52.27 | 2300 | 0.2537          | 0.8304    | 0.8451 | 0.8377 | 0.9762   |
+ | 0.0006        | 54.55 | 2400 | 0.2410          | 0.8319    | 0.8319 | 0.8319 | 0.9771   |
+ | 0.0001        | 56.82 | 2500 | 0.2484          | 0.8319    | 0.8319 | 0.8319 | 0.9771   |
+ | 0.0001        | 59.09 | 2600 | 0.2517          | 0.8319    | 0.8319 | 0.8319 | 0.9771   |
+ | 0.0001        | 61.36 | 2700 | 0.2524          | 0.8319    | 0.8319 | 0.8319 | 0.9771   |
+ | 0.0001        | 63.64 | 2800 | 0.2531          | 0.8319    | 0.8319 | 0.8319 | 0.9771   |
+ | 0.0001        | 65.91 | 2900 | 0.2528          | 0.8319    | 0.8319 | 0.8319 | 0.9771   |
+ | 0.0           | 68.18 | 3000 | 0.2507          | 0.8319    | 0.8319 | 0.8319 | 0.9771   |


  ### Framework versions
@@ -52,4 +92,4 @@ The following hyperparameters were used during training:
  - Transformers 4.38.2
  - Pytorch 2.2.1+cu121
  - Datasets 2.18.0
- - Tokenizers 0.15.2
+ - Tokenizers 0.15.2
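The updated card reports precision, recall, F1, and accuracy but does not name the dataset or the task. Below is a minimal inference sketch, assuming the checkpoint was fine-tuned for token classification (the usual LayoutLMv3 setup behind these metrics) and that it is published under the repo id `Noureddinesa/Output_LayoutLMv3`; both the task and the repo id are assumptions, not something the diff states.

```python
# Minimal sketch, not part of the commit: assumes a token-classification head
# and the hypothetical repo id "Noureddinesa/Output_LayoutLMv3".
from PIL import Image
from transformers import AutoModelForTokenClassification, AutoProcessor

repo_id = "Noureddinesa/Output_LayoutLMv3"  # assumption: the actual hub path may differ

# apply_ocr=True makes the processor run Tesseract to extract words and boxes.
# If the fine-tuned repo does not ship processor files, load the processor from
# "microsoft/layoutlmv3-large" instead; pass your own words/boxes with apply_ocr=False.
processor = AutoProcessor.from_pretrained(repo_id, apply_ocr=True)
model = AutoModelForTokenClassification.from_pretrained(repo_id)

image = Image.open("document.png").convert("RGB")  # placeholder input image
encoding = processor(image, return_tensors="pt")

logits = model(**encoding).logits
pred_ids = logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[i] for i in pred_ids])  # one label per input token
```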
runs/Mar31_17-52-32_bf0b87ba8723/events.out.tfevents.1711907581.bf0b87ba8723.420.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e2bbfc3138780069425447d34a67922b3a293198e6caa70fe9f3e3e2f246077b
- size 15452
+ oid sha256:01e814a00dda56b45df81bffcb699dbeeb8e39bc4e6a1bb36df84703b22b4149
+ size 20948
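The second change only bumps the Git LFS pointer for the TensorBoard run that produced the results table above. A minimal sketch for inspecting that log locally, assuming the real binary has been fetched with `git lfs pull` (or `huggingface_hub.hf_hub_download`); the `eval/f1` tag name follows the usual Hugging Face `Trainer` convention and is an assumption, so list the tags first to see what this run actually logged.

```python
# Minimal sketch, assuming the events file has been downloaded locally.
from tensorboard.backend.event_processing import event_accumulator

path = "runs/Mar31_17-52-32_bf0b87ba8723/events.out.tfevents.1711907581.bf0b87ba8723.420.0"
ea = event_accumulator.EventAccumulator(path)
ea.Reload()  # parse the event file

print(ea.Tags()["scalars"])          # scalar tags recorded during training
for event in ea.Scalars("eval/f1"):  # assumed tag name; pick one printed above
    print(event.step, event.value)
```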