---
tags:
- generated_from_trainer
model-index:
- name: icdar23-entrydetector_plaintext_breaks_indents_left_ref_right_ref
  results: []
---


# icdar23-entrydetector_plaintext_breaks_indents_left_ref_right_ref

This model is a fine-tuned version of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0033
- Ebegin: precision 0.9936, recall 0.9902, F1 0.9919 (2659 instances)
- Eend: precision 0.9966, recall 0.9877, F1 0.9921 (2676 instances)
- Overall Precision: 0.9951
- Overall Recall: 0.9889
- Overall F1: 0.9920
- Overall Accuracy: 0.9989
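
As a quick sanity check, the model can be queried through the standard `transformers` token-classification pipeline. The sketch below is illustrative only: the repository id is assumed to match this card's name, and the input line is a made-up directory-style entry.

```python
# Minimal usage sketch. The repo id below is an assumption based on this
# card's name; adjust it to the actual hosted checkpoint.
# Tested API: transformers 4.26.x (see "Framework versions" below).
from transformers import pipeline

MODEL_ID = "HueyNemud/icdar23-entrydetector_plaintext_breaks_indents_left_ref_right_ref"

# aggregation_strategy="simple" merges consecutive tokens sharing a label
# into single spans, so entry boundaries come back as grouped entities.
detector = pipeline(
    "token-classification",
    model=MODEL_ID,
    aggregation_strategy="simple",
)

text = "Dupont (J.), fabricant de bronzes, R. du Temple, 12."  # made-up sample line
for span in detector(text):
    print(span["entity_group"], round(span["score"], 4), span["word"])
```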

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 7500
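
For reproducibility, these values map one-to-one onto the Hugging Face `Trainer` API. A minimal sketch under stated assumptions: the label set (O/Ebegin/Eend) is inferred from the evaluation metrics above, the 300-step evaluation cadence is read off the results table below, and dataset loading and label alignment are omitted.

```python
# Hedged reproduction sketch: mirrors the hyperparameters listed above.
# Dataset, label list, and collator are placeholders, not the original setup.
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

BASE = "HueyNemud/das22-10-camembert_pretrained"
tokenizer = AutoTokenizer.from_pretrained(BASE)
# num_labels=3 is an assumption (O, Ebegin, Eend); adjust to the real label set.
model = AutoModelForTokenClassification.from_pretrained(BASE, num_labels=3)

args = TrainingArguments(
    output_dir="icdar23-entrydetector",
    learning_rate=1e-4,             # learning_rate: 0.0001
    per_device_train_batch_size=2,  # train_batch_size: 2
    per_device_eval_batch_size=2,   # eval_batch_size: 2
    seed=42,                        # seed: 42
    max_steps=7500,                 # training_steps: 7500
    lr_scheduler_type="linear",     # lr_scheduler_type: linear
    adam_beta1=0.9,                 # Adam betas=(0.9, 0.999), used by
    adam_beta2=0.999,               # Trainer's default AdamW optimizer
    adam_epsilon=1e-8,              # epsilon: 1e-08
    evaluation_strategy="steps",
    eval_steps=300,                 # matches the 300-step cadence in the results table
)

# trainer = Trainer(model=model, args=args,
#                   train_dataset=..., eval_dataset=..., tokenizer=tokenizer)
# trainer.train()
```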

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 0.07  | 300  | 0.0262          | 0.9758    | 0.9906 | 0.9832 | 0.9973   |
| 0.1599        | 0.14  | 600  | 0.0116          | 0.9866    | 0.9919 | 0.9892 | 0.9982   |
| 0.1599        | 0.21  | 900  | 0.0111          | 0.9907    | 0.9856 | 0.9882 | 0.9980   |
| 0.0162        | 0.29  | 1200 | 0.0096          | 0.9813    | 0.9966 | 0.9889 | 0.9981   |
| 0.0099        | 0.36  | 1500 | 0.0060          | 0.9820    | 0.9955 | 0.9887 | 0.9982   |
| 0.0099        | 0.43  | 1800 | 0.0046          | 0.9925    | 0.9934 | 0.9929 | 0.9988   |
| 0.0074        | 0.5   | 2100 | 0.0057          | 0.9961    | 0.9880 | 0.9920 | 0.9987   |
| 0.0074        | 0.57  | 2400 | 0.0039          | 0.9911    | 0.9953 | 0.9932 | 0.9988   |
| 0.0072        | 0.64  | 2700 | 0.0075          | 0.9842    | 0.9949 | 0.9895 | 0.9982   |
| 0.0061        | 0.72  | 3000 | 0.0040          | 0.9906    | 0.9963 | 0.9934 | 0.9989   |
| 0.0061        | 0.79  | 3300 | 0.0034          | 0.9955    | 0.9936 | 0.9946 | 0.9991   |
| 0.005         | 0.86  | 3600 | 0.0034          | 0.9933    | 0.9946 | 0.9939 | 0.9990   |
| 0.005         | 0.93  | 3900 | 0.0047          | 0.9847    | 0.9976 | 0.9911 | 0.9985   |
| 0.0041        | 1.0   | 4200 | 0.0031          | 0.9972    | 0.9936 | 0.9954 | 0.9992   |
| 0.0031        | 1.07  | 4500 | 0.0030          | 0.9967    | 0.9945 | 0.9956 | 0.9992   |
| 0.0031        | 1.14  | 4800 | 0.0032          | 0.9966    | 0.9938 | 0.9952 | 0.9992   |
| 0.003         | 1.22  | 5100 | 0.0029          | 0.9960    | 0.9939 | 0.9949 | 0.9991   |
| 0.003         | 1.29  | 5400 | 0.0030          | 0.9935    | 0.9947 | 0.9941 | 0.9990   |
| 0.0023        | 1.36  | 5700 | 0.0028          | 0.9973    | 0.9933 | 0.9953 | 0.9992   |
| 0.0027        | 1.43  | 6000 | 0.0029          | 0.9968    | 0.9936 | 0.9952 | 0.9992   |


### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2