---
tags:
- generated_from_trainer
model-index:
- name: icdar23-entrydetector_labelledtext_breaks_indents_left_diff_right_ref
  results: []
---


# icdar23-entrydetector_labelledtext_breaks_indents_left_diff_right_ref

This model is a fine-tuned version of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3015
- Overall Precision: 0.9294
- Overall Recall: 0.9527
- Overall F1: 0.9409
- Overall Accuracy: 0.9452

Per-entity results:

| Entity   | Precision | Recall | F1     | Support |
|:--------:|:---------:|:------:|:------:|:-------:|
| Act      | 0.8061    | 0.8938 | 0.8477 | 1526    |
| Cardinal | 0.9513    | 0.9623 | 0.9568 | 2601    |
| Ebegin   | 0.9864    | 0.9952 | 0.9908 | 2694    |
| Eend     | 0.9926    | 0.9885 | 0.9905 | 2702    |
| Ft       | 0.2308    | 0.2857 | 0.2553 | 21      |
| Loc      | 0.9102    | 0.9340 | 0.9219 | 3604    |
| Per      | 0.9239    | 0.9366 | 0.9302 | 2903    |
| Titre    | 0.5962    | 0.8267 | 0.6927 | 150     |
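The overall F1 reported above is the harmonic mean of overall precision and recall; a quick sanity check in Python:

```python
# F1 is the harmonic mean of precision and recall. Checking the overall
# figures reported above (precision 0.9294, recall 0.9527):
def f1_score(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.9294, 0.9527), 4))  # 0.9409, matching the card
```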

## Model description

More information needed

## Intended uses & limitations

More information needed
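Although usage details are not yet documented, a minimal inference sketch could look like the following. This assumes the checkpoint is published on the Hub under the hypothetical repository id shown, that `transformers` is installed, and that the model exposes standard token-classification labels; the sample string is illustrative only.

```python
# Hedged sketch: running the fine-tuned entry detector as a
# token-classification pipeline. The repository id is an assumption based on
# the model name above; replace it with the actual Hub path.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="HueyNemud/icdar23-entrydetector_labelledtext_breaks_indents_left_diff_right_ref",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)

# Illustrative directory-style entry; real inputs depend on the training data.
for entity in ner("Dupont (Jean), serrurier, rue de la Paix, 12."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```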

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 15000
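With a linear `lr_scheduler_type` and 15,000 training steps, the learning rate decays from the 1e-4 peak to zero over the run. A minimal sketch of that schedule (assuming no warmup steps, which the card does not list):

```python
# Minimal sketch of a linear learning-rate schedule matching the listed
# hyperparameters: peak lr 1e-4 decayed linearly to 0 over 15,000 steps.
# The absence of warmup is an assumption; the card lists no warmup steps.
PEAK_LR = 1e-4
TOTAL_STEPS = 15_000

def linear_lr(step: int) -> float:
    """Learning rate at a given optimizer step (0-indexed)."""
    remaining = max(0, TOTAL_STEPS - step)
    return PEAK_LR * remaining / TOTAL_STEPS

print(linear_lr(0))       # peak at the start: 0.0001
print(linear_lr(7_500))   # halfway: 5e-05
print(linear_lr(15_000))  # fully decayed: 0.0
```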

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 0.07  | 300   | 0.2069          | 0.8798    | 0.9303 | 0.9044 | 0.9571   |
| 0.4341        | 0.14  | 600   | 0.1650          | 0.9456    | 0.9487 | 0.9471 | 0.9658   |
| 0.4341        | 0.21  | 900   | 0.1539          | 0.9370    | 0.9469 | 0.9419 | 0.9644   |
| 0.1993        | 0.29  | 1200  | 0.1280          | 0.9502    | 0.9558 | 0.9530 | 0.9692   |
| 0.1532        | 0.36  | 1500  | 0.1575          | 0.9554    | 0.9507 | 0.9530 | 0.9655   |
| 0.1532        | 0.43  | 1800  | 0.1213          | 0.9403    | 0.9569 | 0.9485 | 0.9670   |
| 0.128         | 0.5   | 2100  | 0.1075          | 0.9538    | 0.9600 | 0.9569 | 0.9745   |
| 0.128         | 0.57  | 2400  | 0.1351          | 0.9485    | 0.9655 | 0.9569 | 0.9696   |
| 0.1095        | 0.64  | 2700  | 0.1384          | 0.9446    | 0.9600 | 0.9522 | 0.9678   |
| 0.1308        | 0.72  | 3000  | 0.1082          | 0.9509    | 0.9617 | 0.9563 | 0.9731   |
| 0.1308        | 0.79  | 3300  | 0.1246          | 0.9546    | 0.9643 | 0.9594 | 0.9712   |
| 0.1007        | 0.86  | 3600  | 0.1290          | 0.9484    | 0.9612 | 0.9547 | 0.9689   |
| 0.1007        | 0.93  | 3900  | 0.1185          | 0.9569    | 0.9604 | 0.9586 | 0.9716   |
| 0.0996        | 1.0   | 4200  | 0.1144          | 0.9561    | 0.9639 | 0.9600 | 0.9753   |
| 0.078         | 1.07  | 4500  | 0.1120          | 0.9483    | 0.9669 | 0.9575 | 0.9746   |
| 0.078         | 1.14  | 4800  | 0.1285          | 0.9522    | 0.9659 | 0.9590 | 0.9719   |
| 0.0723        | 1.22  | 5100  | 0.1302          | 0.9413    | 0.9720 | 0.9565 | 0.9703   |
| 0.0723        | 1.29  | 5400  | 0.1171          | 0.9553    | 0.9687 | 0.9619 | 0.9735   |
| 0.0728        | 1.36  | 5700  | 0.1256          | 0.9475    | 0.9690 | 0.9581 | 0.9733   |
| 0.0538        | 1.43  | 6000  | 0.1169          | 0.9505    | 0.9694 | 0.9599 | 0.9745   |
| 0.0538        | 1.5   | 6300  | 0.1125          | 0.9470    | 0.9712 | 0.9590 | 0.9742   |
| 0.062         | 1.57  | 6600  | 0.1096          | 0.9592    | 0.9675 | 0.9633 | 0.9761   |
| 0.062         | 1.65  | 6900  | 0.1258          | 0.9624    | 0.9638 | 0.9631 | 0.9753   |
| 0.0515        | 1.72  | 7200  | 0.1256          | 0.9586    | 0.9683 | 0.9634 | 0.9733   |
| 0.0561        | 1.79  | 7500  | 0.1411          | 0.9559    | 0.9685 | 0.9622 | 0.9727   |
| 0.0561        | 1.86  | 7800  | 0.1152          | 0.9581    | 0.9672 | 0.9626 | 0.9749   |
| 0.0566        | 1.93  | 8100  | 0.1196          | 0.9618    | 0.9714 | 0.9666 | 0.9768   |
| 0.0566        | 2.0   | 8400  | 0.1868          | 0.8886    | 0.9154 | 0.9018 | 0.9529   |
| 0.1759        | 2.07  | 8700  | 0.1458          | 0.9463    | 0.9643 | 0.9552 | 0.9730   |
| 0.0494        | 2.15  | 9000  | 0.1440          | 0.9543    | 0.9657 | 0.9599 | 0.9750   |
| 0.0494        | 2.22  | 9300  | 0.1382          | 0.9646    | 0.9680 | 0.9663 | 0.9752   |
| 0.0532        | 2.29  | 9600  | 0.1284          | 0.9635    | 0.9712 | 0.9673 | 0.9749   |
| 0.0532        | 2.36  | 9900  | 0.1495          | 0.9624    | 0.9712 | 0.9668 | 0.9745   |
| 0.0223        | 2.43  | 10200 | 0.1203          | 0.9600    | 0.9726 | 0.9662 | 0.9757   |
| 0.0275        | 2.5   | 10500 | 0.1318          | 0.9645    | 0.9694 | 0.9670 | 0.9753   |
| 0.0275        | 2.58  | 10800 | 0.1224          | 0.9623    | 0.9709 | 0.9666 | 0.9756   |
| 0.026         | 2.65  | 11100 | 0.1241          | 0.9633    | 0.9713 | 0.9673 | 0.9756   |


### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2