---
license: cc-by-nc-sa-4.0
base_model: microsoft/layoutlmv3-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: test
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/elyadata/Ft%20layoutlmv3%20funsd%20max%20epochs%20100%20%2Cearlystop%3D4%2Cbatch%3D2%2Clr%3D1e-5%20adamw%2CFULL%20models%20params/runs/6zlgssbd)
# test

This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8476
- Precision: 0.8955
- Recall: 0.9071
- F1: 0.9013
- Accuracy: 0.8691

## Model description

More information needed

## Intended uses & limitations

More information needed
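Since the base model is LayoutLMv3, this checkpoint is presumably meant for token classification on document images (the linked W&B run name suggests FUNSD-style form understanding, though the card does not state this). The snippet below is a minimal, hypothetical inference sketch under that assumption; the checkpoint path, image file, example words, and bounding boxes are all placeholders, and the actual label set depends on the (unspecified) training data.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, LayoutLMv3ForTokenClassification

# Placeholders: point these at this checkpoint and a real document image.
model_id = "path/to/this-checkpoint"
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
model = LayoutLMv3ForTokenClassification.from_pretrained(model_id)

image = Image.open("form.png").convert("RGB")
# Words and boxes would normally come from an OCR engine; boxes are
# normalized to the 0-1000 coordinate range LayoutLM models expect.
words = ["Invoice", "Number:", "12345"]
boxes = [[70, 50, 180, 70], [190, 50, 290, 70], [300, 50, 370, 70]]

encoding = processor(image, words, boxes=boxes, return_tensors="pt")
with torch.no_grad():
    logits = model(**encoding).logits

pred_ids = logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[i] for i in pred_ids])
```

Note that the printed labels are per tokenizer token (including special tokens and subwords); aligning them back to the input words typically uses the encoding's `word_ids()` mapping.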

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
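For reference, these settings map roughly onto Hugging Face `TrainingArguments` as sketched below. The per-epoch evaluation and saving strategy is inferred from the results table, and the early-stopping patience of 4 and the best-model metric come from the W&B run name rather than this card, so treat those as assumptions.

```python
from transformers import EarlyStoppingCallback, TrainingArguments

# Rough reconstruction of the run configuration listed above. The default
# transformers optimizer (AdamW with betas=(0.9, 0.999), eps=1e-8) matches the
# listed optimizer settings.
training_args = TrainingArguments(
    output_dir="test",
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=100,
    eval_strategy="epoch",        # the table reports metrics once per epoch
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="f1",   # assumption
)

# These arguments, together with the model, datasets, and a compute_metrics
# function (none of which are specified in this card), would be passed to a
# Trainer along with an early-stopping callback:
early_stopping = EarlyStoppingCallback(early_stopping_patience=4)  # patience inferred from the W&B run name
```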

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 75   | 0.9234          | 0.6139    | 0.7526 | 0.6762 | 0.7416   |
| No log        | 2.0   | 150  | 0.6101          | 0.7549    | 0.8341 | 0.7925 | 0.7940   |
| No log        | 3.0   | 225  | 0.5135          | 0.8332    | 0.8882 | 0.8598 | 0.8091   |
| No log        | 4.0   | 300  | 0.5467          | 0.8189    | 0.8624 | 0.8401 | 0.8109   |
| No log        | 5.0   | 375  | 0.4879          | 0.8660    | 0.9051 | 0.8851 | 0.8504   |
| No log        | 6.0   | 450  | 0.5352          | 0.8787    | 0.9180 | 0.8980 | 0.8480   |
| 0.5752        | 7.0   | 525  | 0.5900          | 0.8730    | 0.8982 | 0.8854 | 0.8343   |
| 0.5752        | 8.0   | 600  | 0.6014          | 0.8832    | 0.9016 | 0.8923 | 0.8506   |
| 0.5752        | 9.0   | 675  | 0.6173          | 0.8883    | 0.9126 | 0.9003 | 0.8538   |
| 0.5752        | 10.0  | 750  | 0.6278          | 0.8787    | 0.9141 | 0.8960 | 0.8571   |
| 0.5752        | 11.0  | 825  | 0.6573          | 0.8612    | 0.9155 | 0.8876 | 0.8326   |
| 0.5752        | 12.0  | 900  | 0.7333          | 0.8818    | 0.9006 | 0.8911 | 0.8387   |
| 0.5752        | 13.0  | 975  | 0.7489          | 0.8888    | 0.9136 | 0.9010 | 0.8502   |
| 0.1263        | 14.0  | 1050 | 0.7719          | 0.8908    | 0.8997 | 0.8952 | 0.8318   |
| 0.1263        | 15.0  | 1125 | 0.8295          | 0.8945    | 0.9101 | 0.9022 | 0.8438   |
| 0.1263        | 16.0  | 1200 | 0.8447          | 0.8798    | 0.9126 | 0.8959 | 0.8465   |
| 0.1263        | 17.0  | 1275 | 0.8359          | 0.9090    | 0.8932 | 0.9010 | 0.8486   |
| 0.1263        | 18.0  | 1350 | 0.8430          | 0.8966    | 0.9091 | 0.9028 | 0.8414   |
| 0.1263        | 19.0  | 1425 | 0.8179          | 0.8854    | 0.9021 | 0.8937 | 0.8400   |
| 0.0482        | 20.0  | 1500 | 0.8950          | 0.8968    | 0.8982 | 0.8975 | 0.8475   |
| 0.0482        | 21.0  | 1575 | 0.8790          | 0.9053    | 0.9121 | 0.9087 | 0.8565   |
| 0.0482        | 22.0  | 1650 | 0.7915          | 0.9056    | 0.9101 | 0.9078 | 0.8595   |
| 0.0482        | 23.0  | 1725 | 0.8760          | 0.8938    | 0.8952 | 0.8945 | 0.8504   |
| 0.0482        | 24.0  | 1800 | 0.8320          | 0.9113    | 0.9086 | 0.9100 | 0.8625   |
| 0.0482        | 25.0  | 1875 | 0.8880          | 0.9017    | 0.9021 | 0.9019 | 0.8538   |
| 0.0482        | 26.0  | 1950 | 0.8611          | 0.9083    | 0.9101 | 0.9092 | 0.8499   |
| 0.0163        | 27.0  | 2025 | 0.8747          | 0.9068    | 0.9086 | 0.9077 | 0.8600   |
| 0.0163        | 28.0  | 2100 | 0.8476          | 0.8955    | 0.9071 | 0.9013 | 0.8691   |


### Framework versions

- Transformers 4.42.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1