---
tags:
- generated_from_trainer
model-index:
- name: icdar23-entrydetector_plaintext_breaks_indents_left_diff_right_ref
  results: []
---

# icdar23-entrydetector_plaintext_breaks_indents_left_diff_right_ref

This model is a fine-tuned version of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0046
- Ebegin: {'precision': 0.9946541931172737, 'recall': 0.991672218520986, 'f1': 0.9931609674728941, 'number': 3002}
- Eend: {'precision': 0.9858412907474481, 'recall': 0.998, 'f1': 0.9918833857876428, 'number': 3000}
- Overall Precision: 0.9902
- Overall Recall: 0.9948
- Overall F1: 0.9925
- Overall Accuracy: 0.9988

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 6000

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 0.07  | 300  | 0.0323          | 0.9553    | 0.9933 | 0.9740 | 0.9955   |
| 0.1598        | 0.14  | 600  | 0.0119          | 0.9817    | 0.9931 | 0.9874 | 0.9979   |
| 0.1598        | 0.21  | 900  | 0.0107          | 0.9817    | 0.9944 | 0.9880 | 0.9980   |
| 0.016         | 0.29  | 1200 | 0.0066          | 0.9889    | 0.9907 | 0.9898 | 0.9984   |
| 0.0098        | 0.36  | 1500 | 0.0058          | 0.9845    | 0.9945 | 0.9894 | 0.9982   |
| 0.0098        | 0.43  | 1800 | 0.0071          | 0.9927    | 0.9862 | 0.9895 | 0.9982   |
| 0.0079        | 0.5   | 2100 | 0.0054          | 0.9884    | 0.9940 | 0.9912 | 0.9985   |
| 0.0079        | 0.57  | 2400 | 0.0049          | 0.9930    | 0.9885 | 0.9908 | 0.9985   |
| 0.0061        | 0.64  | 2700 | 0.0059          | 0.9979    | 0.9781 | 0.9879 | 0.9980   |
| 0.0066        | 0.72  | 3000 | 0.0046          | 0.9882    | 0.9956 | 0.9919 | 0.9986   |
| 0.0066        | 0.79  | 3300 | 0.0043          | 0.9861    | 0.9971 | 0.9916 | 0.9986   |
| 0.0066        | 0.86  | 3600 | 0.0038          | 0.9876    | 0.9968 | 0.9922 | 0.9987   |
| 0.0066        | 0.93  | 3900 | 0.0046          | 0.9888    | 0.9961 | 0.9924 | 0.9987   |
| 0.0044        | 1.0   | 4200 | 0.0042          | 0.9880    | 0.9965 | 0.9922 | 0.9987   |
| 0.0035        | 1.07  | 4500 | 0.0038          | 0.9870    | 0.9975 | 0.9922 | 0.9987   |
| 0.0035        | 1.14  | 4800 | 0.0038          | 0.9902    | 0.9951 | 0.9927 | 0.9988   |
| 0.0035        | 1.22  | 5100 | 0.0037          | 0.9897    | 0.9949 | 0.9923 | 0.9987   |
| 0.0035        | 1.29  | 5400 | 0.0038          | 0.9946    | 0.9901 | 0.9924 | 0.9987   |
| 0.0028        | 1.36  | 5700 | 0.0038          | 0.9888    | 0.9963 | 0.9926 | 0.9988   |
| 0.0024        | 1.43  | 6000 | 0.0038          | 0.9885    | 0.9966 | 0.9926 | 0.9988   |

### Framework versions

- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
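
The hyperparameters listed under "Training hyperparameters" map directly onto the standard `transformers.Trainer` configuration. Below is a minimal sketch of how they could be expressed as a `TrainingArguments` object; only the values quoted in that list come from this card, while `output_dir` and the 300-step evaluation cadence (read off the results table) are illustrative assumptions.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the training configuration listed above.
# Values in the comments are the ones reported on this card; everything
# else (output_dir, evaluation cadence) is an assumption for illustration.
training_args = TrainingArguments(
    output_dir="icdar23-entrydetector",  # assumption: any local path works
    learning_rate=1e-4,                  # learning_rate: 0.0001
    per_device_train_batch_size=2,       # train_batch_size: 2
    per_device_eval_batch_size=2,        # eval_batch_size: 2
    seed=42,                             # seed: 42
    adam_beta1=0.9,                      # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                   # ... and epsilon=1e-08
    lr_scheduler_type="linear",          # lr_scheduler_type: linear
    max_steps=6000,                      # training_steps: 6000
    evaluation_strategy="steps",         # assumption: matches the 300-step rows in the table
    eval_steps=300,
)
```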
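
The per-label metrics (Ebegin, Eend) suggest the checkpoint is a token-classification head on top of the CamemBERT base model. The sketch below shows one way it could be loaded for inference, assuming the checkpoint works with the standard `token-classification` pipeline; the model path and the example string are placeholders, not taken from this card.

```python
from transformers import pipeline

# Assumption: the checkpoint exposes a standard token-classification head
# with Ebegin/Eend labels, as the evaluation metrics above suggest.
# Replace the placeholder path with the actual model id or a local checkpoint directory.
entry_detector = pipeline(
    "token-classification",
    model="path/to/icdar23-entrydetector_plaintext_breaks_indents_left_diff_right_ref",
    aggregation_strategy="simple",
)

# Illustrative French directory-style entry; not from the training data.
text = "Dupont (Jean), propriétaire, rue de la Paix, 12."
for entity in entry_detector(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```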