|
--- |
|
tags: |
|
- generated_from_trainer |
|
model-index: |
|
- name: icdar23-entrydetector_plaintext |
|
results: [] |
|
--- |
|
|
|
|
|
|
# icdar23-entrydetector_plaintext |
|
|
|
This model is a fine-tuned version of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained); the fine-tuning dataset is not specified in this card.
|
It achieves the following results on the evaluation set: |
|
- Loss: 0.0337 |
|
- Ebegin: precision 0.9737, recall 0.9470, F1 0.9602 (2659 entities)

- Eend: precision 0.9644, recall 0.9727, F1 0.9686 (2676 entities)
|
- Overall Precision: 0.9690 |
|
- Overall Recall: 0.9599 |
|
- Overall F1: 0.9644 |
|
- Overall Accuracy: 0.9931 |
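
The Ebegin/Eend rows above are entity-level (span) scores of the kind produced by seqeval-style evaluation. The snippet below only illustrates that convention with invented tag sequences; the card does not include the actual metric code, so treat the use of `seqeval` here as an assumption.

```python
# Illustration of seqeval-style span scoring (an assumption; the original
# evaluation code is not included in this card).
from seqeval.metrics import classification_report

# Invented gold and predicted tag sequences, using the label names implied by the card.
y_true = [["O", "B-Ebegin", "I-Ebegin", "O", "B-Eend", "O"]]
y_pred = [["O", "B-Ebegin", "I-Ebegin", "O", "O", "O"]]

print(classification_report(y_true, y_pred, digits=4))
```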
|
|
|
## Model description |
|
|
|
Judging from the base checkpoint and the evaluation labels above, this is a CamemBERT-based token classifier that marks the beginning (Ebegin) and end (Eend) of entries in plain text. No further details are provided.
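
Assuming the checkpoint exposes a standard `transformers` token-classification head (not confirmed by this card), it can be queried as sketched below. The repository id is inferred from the model name on this card and the sample text is invented.

```python
# Minimal usage sketch (assumptions: token-classification head with Ebegin/Eend-style
# labels, published under the repository id inferred from this card's title).
from transformers import pipeline

detector = pipeline(
    "token-classification",
    model="HueyNemud/icdar23-entrydetector_plaintext",  # inferred repository id
    aggregation_strategy="simple",  # merge sub-word pieces into whole labelled spans
)

# Invented directory-style input; replace with real plain text.
text = "Dupont (A.), négociant, rue de la Paix, 12. Durand (B.), avocat, boulevard Voltaire, 3."
for span in detector(text):
    print(span["entity_group"], round(span["score"], 3), span["word"])
```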
|
|
|
## Intended uses & limitations |
|
|
|
More information needed |
|
|
|
## Training and evaluation data |
|
|
|
More information needed |
|
|
|
## Training procedure |
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch reproducing them follows the list):
|
- learning_rate: 0.0001 |
|
- train_batch_size: 2 |
|
- eval_batch_size: 2 |
|
- seed: 42 |
|
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 |
|
- lr_scheduler_type: linear |
|
- training_steps: 7500 |
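
The hyperparameters above map one-to-one onto `transformers.TrainingArguments`. The following is a hedged reconstruction, not the original training script: the output directory is a placeholder, and the 300-step evaluation/logging interval is inferred from the results table below.

```python
# Hedged sketch of the reported hyperparameters as TrainingArguments
# (placeholders and inferred values are marked; not the authors' script).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="icdar23-entrydetector_plaintext",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=7500,
    evaluation_strategy="steps",
    eval_steps=300,   # inferred from the evaluation interval in the results table
    logging_steps=300,
)
```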
|
|
|
### Training results |
|
|
|
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |
|
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| |
|
| No log | 0.07 | 300 | 0.0380 | 0.9713 | 0.9691 | 0.9702 | 0.9942 | |
|
| 0.1537 | 0.14 | 600 | 0.0318 | 0.9933 | 0.9550 | 0.9738 | 0.9947 | |
|
| 0.1537 | 0.21 | 900 | 0.0185 | 0.9842 | 0.9780 | 0.9811 | 0.9962 | |
|
| 0.0262 | 0.29 | 1200 | 0.0176 | 0.9883 | 0.9754 | 0.9818 | 0.9963 | |
|
| 0.0171 | 0.36 | 1500 | 0.0174 | 0.9915 | 0.9650 | 0.9781 | 0.9955 | |
|
| 0.0171 | 0.43 | 1800 | 0.0139 | 0.9869 | 0.9787 | 0.9828 | 0.9965 | |
|
| 0.0151 | 0.5 | 2100 | 0.0142 | 0.9845 | 0.9814 | 0.9830 | 0.9965 | |
|
| 0.0151 | 0.57 | 2400 | 0.0185 | 0.9894 | 0.9713 | 0.9803 | 0.9960 | |
|
| 0.0144 | 0.64 | 2700 | 0.0150 | 0.9864 | 0.9789 | 0.9827 | 0.9965 | |
|
| 0.0134 | 0.72 | 3000 | 0.0197 | 0.9848 | 0.9734 | 0.9791 | 0.9957 | |
|
| 0.0134 | 0.79 | 3300 | 0.0201 | 0.9809 | 0.9804 | 0.9806 | 0.9960 | |
|
| 0.012 | 0.86 | 3600 | 0.0163 | 0.9794 | 0.9832 | 0.9813 | 0.9961 | |
|
|
|
|
|
### Framework versions |
|
|
|
- Transformers 4.26.1 |
|
- Pytorch 1.13.1+cu116 |
|
- Datasets 2.9.0 |
|
- Tokenizers 0.13.2 |
|
|