---
tags:
- generated_from_trainer
model-index:
- name: icdar23-entrydetector_plaintext_breaks_indents_left_diff_right_ref
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# icdar23-entrydetector_plaintext_breaks_indents_left_diff_right_ref

This model is a fine-tuned version of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0078
- Overall Precision: 0.9939
- Overall Recall: 0.9841
- Overall F1: 0.9890
- Overall Accuracy: 0.9982

Per-label scores:

| Label  | Precision | Recall | F1     | Support |
|:------:|:---------:|:------:|:------:|:-------:|
| Ebegin | 0.9920    | 0.9831 | 0.9875 | 2659    |
| Eend   | 0.9958    | 0.9851 | 0.9904 | 2676    |
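
The overall figures are the micro-average across the two labels. As a quick sanity check, a short Python snippet (plain arithmetic, no external dependencies) can reconstruct the integer counts from the rounded per-label scores and recover the overall numbers:

```python
# Per-label evaluation scores from the table above.
labels = {
    "Ebegin": {"precision": 0.9920, "recall": 0.9831, "support": 2659},
    "Eend":   {"precision": 0.9958, "recall": 0.9851, "support": 2676},
}

# Recover integer counts: TP = recall * support; predicted = TP / precision.
tp = sum(round(v["recall"] * v["support"]) for v in labels.values())
pred = sum(round(round(v["recall"] * v["support"]) / v["precision"]) for v in labels.values())
gold = sum(v["support"] for v in labels.values())

p, r = tp / pred, tp / gold
print(f"precision={p:.4f} recall={r:.4f} f1={2 * p * r / (p + r):.4f}")
# -> precision=0.9939 recall=0.9841 f1=0.9890 (matches the overall scores)
```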

## Model description

More information needed

## Intended uses & limitations

More information needed
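
Pending documented uses, here is a minimal loading sketch for entry-boundary tagging. The Hub id is an assumption derived from the model name above (the checkpoint may be hosted elsewhere), and the input string is purely illustrative:

```python
from transformers import pipeline

# Assumed Hub id, inferred from the model name in this card; replace it
# with the actual checkpoint location if it differs.
detector = pipeline(
    "token-classification",
    model="HueyNemud/icdar23-entrydetector_plaintext_breaks_indents_left_diff_right_ref",
)

# Illustrative directory-style entry; the model tags entry begin/end tokens.
print(detector("DUPONT (Jean), rue de la Paix, 12."))
```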

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training; a `TrainingArguments` sketch reproducing them follows the list:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 7500
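
A minimal sketch of the equivalent configuration, assuming the Hugging Face `Trainer` this card was generated from (the Adam betas and epsilon above are its defaults, and the 300-step evaluation cadence is inferred from the results table below):

```python
from transformers import TrainingArguments

# Sketch of the configuration listed above. Adam betas=(0.9, 0.999) and
# epsilon=1e-08 are TrainingArguments defaults, so they stay implicit.
args = TrainingArguments(
    output_dir="icdar23-entrydetector_plaintext_breaks_indents_left_diff_right_ref",
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=7500,               # training_steps
    evaluation_strategy="steps",  # evaluate every eval_steps
    eval_steps=300,               # inferred from the results table
    logging_steps=300,
)
```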

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 0.07  | 300  | 0.0314          | 0.9572    | 0.9870 | 0.9719 | 0.9956   |
| 0.1574        | 0.14  | 600  | 0.0145          | 0.9897    | 0.9834 | 0.9866 | 0.9979   |
| 0.1574        | 0.21  | 900  | 0.0098          | 0.9896    | 0.9917 | 0.9907 | 0.9985   |
| 0.0161        | 0.29  | 1200 | 0.0079          | 0.9919    | 0.9921 | 0.9920 | 0.9987   |
| 0.0107        | 0.36  | 1500 | 0.0072          | 0.9895    | 0.9928 | 0.9911 | 0.9986   |
| 0.0107        | 0.43  | 1800 | 0.0116          | 0.9900    | 0.9877 | 0.9888 | 0.9981   |
| 0.0114        | 0.5   | 2100 | 0.0069          | 0.9965    | 0.9898 | 0.9931 | 0.9988   |
| 0.0114        | 0.57  | 2400 | 0.0055          | 0.9955    | 0.9907 | 0.9931 | 0.9989   |
| 0.0082        | 0.64  | 2700 | 0.0051          | 0.9870    | 0.9956 | 0.9913 | 0.9985   |
| 0.0062        | 0.72  | 3000 | 0.0046          | 0.9903    | 0.9957 | 0.9930 | 0.9988   |
| 0.0062        | 0.79  | 3300 | 0.0038          | 0.9957    | 0.9929 | 0.9943 | 0.9990   |
| 0.0051        | 0.86  | 3600 | 0.0038          | 0.9956    | 0.9943 | 0.9949 | 0.9992   |
| 0.0051        | 0.93  | 3900 | 0.0047          | 0.9902    | 0.9942 | 0.9921 | 0.9987   |
| 0.0041        | 1.0   | 4200 | 0.0035          | 0.9979    | 0.9917 | 0.9948 | 0.9991   |
| 0.0029        | 1.07  | 4500 | 0.0036          | 0.9973    | 0.9926 | 0.9949 | 0.9992   |
| 0.0029        | 1.14  | 4800 | 0.0038          | 0.9969    | 0.9916 | 0.9942 | 0.9990   |
| 0.0034        | 1.22  | 5100 | 0.0036          | 0.9953    | 0.9935 | 0.9944 | 0.9991   |


### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2