---
tags:
- generated_from_trainer
model-index:
- name: icdar23-entrydetector_plaintext_breaks_indents_left_ref_right_ref
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# icdar23-entrydetector_plaintext_breaks_indents_left_ref_right_ref

This model is a fine-tuned version of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0068
- Overall Precision: 0.9994
- Overall Recall: 0.9792
- Overall F1: 0.9892
- Overall Accuracy: 0.9983

Per-label results:

| Label  | Precision | Recall | F1     | Number |
|:------:|:---------:|:------:|:------:|:------:|
| Ebegin | 1.0000    | 0.9793 | 0.9895 | 2659   |
| Eend   | 0.9989    | 0.9791 | 0.9889 | 2676   |
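
The Ebegin/Eend rows are entity-level scores in the style of `seqeval` (label names reported with the IOB prefixes stripped), which Trainer-generated cards typically use. A minimal sketch of that convention, with toy tag sequences rather than the actual evaluation data:

```python
# Entity-level scoring with seqeval; the tag sequences below are toy
# examples, not the model's actual evaluation data.
from seqeval.metrics import classification_report

y_true = [["B-Ebegin", "O", "O", "B-Eend", "O"]]
y_pred = [["B-Ebegin", "O", "O", "B-Eend", "B-Eend"]]

# Prints per-label precision/recall/F1 for Ebegin and Eend.
print(classification_report(y_true, y_pred, digits=4))
```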

## Model description

More information needed

## Intended uses & limitations

More information needed
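
In the absence of a fuller description, a minimal inference sketch, assuming the model is used as a standard `transformers` token classifier; the checkpoint id is inferred from the card title, and the sample string is illustrative only:

```python
from transformers import pipeline

# Assumed checkpoint id (taken from the card title); adjust to the real repo path.
detector = pipeline(
    "token-classification",
    model="HueyNemud/icdar23-entrydetector_plaintext_breaks_indents_left_ref_right_ref",
    aggregation_strategy="simple",  # merge sub-word tokens into labelled spans
)

# Spans labelled Ebegin/Eend mark the predicted entry boundaries.
print(detector("DUPONT (Jean), notaire, rue de la Paix, 12."))
```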

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 7500
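
These map onto `transformers.TrainingArguments` roughly as follows; the output directory and the 300-step evaluation cadence (read off the results table below) are assumptions, not the original training script:

```python
from transformers import TrainingArguments

# Hedged reconstruction of the listed hyperparameters. output_dir and the
# 300-step evaluation cadence are assumptions; the Adam betas/epsilon match
# the values stated above (which are also the library defaults).
training_args = TrainingArguments(
    output_dir="icdar23-entrydetector",  # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=7500,
    evaluation_strategy="steps",  # assumed from the eval rows every 300 steps
    eval_steps=300,
)
```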

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 0.07  | 300  | 0.0309          | 0.9637    | 0.9910 | 0.9771 | 0.9964   |
| 0.181         | 0.14  | 600  | 0.0144          | 0.9777    | 0.9863 | 0.9819 | 0.9974   |
| 0.181         | 0.21  | 900  | 0.0095          | 0.9969    | 0.9845 | 0.9906 | 0.9985   |
| 0.0168        | 0.29  | 1200 | 0.0105          | 0.9869    | 0.9913 | 0.9891 | 0.9982   |
| 0.011         | 0.36  | 1500 | 0.0063          | 0.9937    | 0.9915 | 0.9926 | 0.9988   |
| 0.011         | 0.43  | 1800 | 0.0064          | 0.9883    | 0.9940 | 0.9911 | 0.9986   |
| 0.01          | 0.50  | 2100 | 0.0203          | 0.9552    | 0.9507 | 0.9529 | 0.9922   |
| 0.01          | 0.57  | 2400 | 0.0049          | 0.9946    | 0.9925 | 0.9935 | 0.9989   |
| 0.0144        | 0.64  | 2700 | 0.0056          | 0.9871    | 0.9944 | 0.9907 | 0.9984   |
| 0.0058        | 0.72  | 3000 | 0.0051          | 0.9928    | 0.9930 | 0.9929 | 0.9988   |
| 0.0058        | 0.79  | 3300 | 0.0036          | 0.9969    | 0.9920 | 0.9945 | 0.9991   |
| 0.0048        | 0.86  | 3600 | 0.0047          | 0.9930    | 0.9947 | 0.9938 | 0.9990   |
| 0.0048        | 0.93  | 3900 | 0.0053          | 0.9863    | 0.9965 | 0.9914 | 0.9985   |
| 0.0052        | 1.00  | 4200 | 0.0033          | 0.9985    | 0.9909 | 0.9947 | 0.9991   |
| 0.0029        | 1.07  | 4500 | 0.0039          | 0.9938    | 0.9954 | 0.9946 | 0.9991   |
| 0.0029        | 1.14  | 4800 | 0.0038          | 0.9981    | 0.9906 | 0.9943 | 0.9991   |
| 0.0034        | 1.22  | 5100 | 0.0044          | 0.9937    | 0.9934 | 0.9936 | 0.9989   |
| 0.0034        | 1.29  | 5400 | 0.0040          | 0.9884    | 0.9959 | 0.9921 | 0.9987   |
| 0.0027        | 1.36  | 5700 | 0.0040          | 0.9975    | 0.9910 | 0.9942 | 0.9990   |


### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
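
A quick way to confirm a matching environment (expected values taken from the list above):

```python
# Sanity check against the framework versions listed above.
import transformers, torch, datasets, tokenizers

print(transformers.__version__)  # expected: 4.26.1
print(torch.__version__)         # expected: 1.13.1+cu116
print(datasets.__version__)      # expected: 2.9.0
print(tokenizers.__version__)    # expected: 0.13.2
```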