---
tags:
- generated_from_trainer
model-index:
- name: icdar23-entrydetector_plaintext_breaks_indents_left_diff_right_ref
  results: []
---

# icdar23-entrydetector_plaintext_breaks_indents_left_diff_right_ref

This model is a fine-tuned version of [HueyNemud/das22-10-camembert_pretrained](https://huggingface.co/HueyNemud/das22-10-camembert_pretrained) on an unspecified dataset.
It achieves the following results on the evaluation set:

- Loss: 0.0045
- Overall Precision: 0.9970
- Overall Recall: 0.9865
- Overall F1: 0.9917
- Overall Accuracy: 0.9986

Per-label evaluation results:

| Label  | Precision          | Recall             | F1                 | Number |
|:------:|:------------------:|:------------------:|:------------------:|:------:|
| Ebegin | 0.9977203647416414 | 0.9875893192929672 | 0.9926289926289926 | 2659   |
| Eend   | 0.9962221382697394 | 0.9854260089686099 | 0.9907946646627841 | 2676   |

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 7500

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 0.07  | 300  | 0.0294          | 0.9739    | 0.9925 | 0.9831 | 0.9972   |
| 0.1819        | 0.14  | 600  | 0.0152          | 0.9911    | 0.9804 | 0.9857 | 0.9978   |
| 0.1819        | 0.21  | 900  | 0.0067          | 0.9871    | 0.9959 | 0.9915 | 0.9986   |
| 0.0165        | 0.29  | 1200 | 0.0077          | 0.9871    | 0.9961 | 0.9916 | 0.9986   |
| 0.0103        | 0.36  | 1500 | 0.0065          | 0.9872    | 0.9962 | 0.9917 | 0.9986   |
| 0.0103        | 0.43  | 1800 | 0.0053          | 0.9903    | 0.9952 | 0.9927 | 0.9988   |
| 0.0087        | 0.5   | 2100 | 0.0068          | 0.9974    | 0.9886 | 0.9930 | 0.9988   |
| 0.0087        | 0.57  | 2400 | 0.0073          | 0.9951    | 0.9877 | 0.9914 | 0.9985   |
| 0.0078        | 0.64  | 2700 | 0.0045          | 0.9899    | 0.9946 | 0.9923 | 0.9987   |
| 0.0054        | 0.72  | 3000 | 0.0043          | 0.9978    | 0.9905 | 0.9941 | 0.9990   |
| 0.0054        | 0.79  | 3300 | 0.0042          | 0.9976    | 0.9899 | 0.9938 | 0.9989   |
| 0.0047        | 0.86  | 3600 | 0.0042          | 0.9955    | 0.9925 | 0.9940 | 0.9990   |
| 0.0047        | 0.93  | 3900 | 0.0048          | 0.9865    | 0.9974 | 0.9920 | 0.9986   |
| 0.0044        | 1.0   | 4200 | 0.0034          | 0.9979    | 0.9919 | 0.9949 | 0.9991   |
| 0.0026        | 1.07  | 4500 | 0.0041          | 0.9954    | 0.9944 | 0.9949 | 0.9991   |
| 0.0026        | 1.14  | 4800 | 0.0036          | 0.9979    | 0.9922 | 0.9950 | 0.9992   |
| 0.0029        | 1.22  | 5100 | 0.0037          | 0.9956    | 0.9931 | 0.9944 | 0.9991   |
| 0.0029        | 1.29  | 5400 | 0.0050          | 0.9899    | 0.9956 | 0.9927 | 0.9988   |
| 0.0029        | 1.36  | 5700 | 0.0034          | 0.9975    | 0.9935 | 0.9955 | 0.9993   |
| 0.0028        | 1.43  | 6000 | 0.0036          | 0.9970    | 0.9937 | 0.9954 | 0.9992   |
| 0.0028        | 1.5   | 6300 | 0.0038          | 0.9932    | 0.9951 | 0.9942 | 0.9990   |
| 0.0027        | 1.57  | 6600 | 0.0034          | 0.9969    | 0.9933 | 0.9951 | 0.9992   |
| 0.0027        | 1.65  | 6900 | 0.0034          | 0.9974    | 0.9929 | 0.9952 | 0.9992   |
| 0.0027        | 1.72  | 7200 | 0.0036          | 0.9970    | 0.9934 | 0.9952 | 0.9992   |

### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
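### Training configuration sketch

The hyperparameters listed above map directly onto a Hugging Face Transformers `TrainingArguments` object. The snippet below is a minimal, unofficial sketch of that mapping, assuming a single-device run; the output directory is a placeholder, and dataset loading and the `Trainer` wiring are omitted because they are not documented in this card.

```python
# Minimal sketch mapping the reported hyperparameters onto TrainingArguments.
# "output" is a placeholder directory, not taken from the original training run.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="output",            # placeholder
    learning_rate=1e-4,             # learning_rate: 0.0001
    per_device_train_batch_size=2,  # train_batch_size: 2
    per_device_eval_batch_size=2,   # eval_batch_size: 2
    seed=42,                        # seed: 42
    adam_beta1=0.9,                 # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,              # epsilon: 1e-08
    lr_scheduler_type="linear",     # linear LR schedule
    max_steps=7500,                 # training_steps: 7500
    evaluation_strategy="steps",    # validation was reported every 300 steps
    eval_steps=300,
)
```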
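## How to use (sketch)

This card does not include a usage example, so the following is only a minimal sketch of how the checkpoint could be loaded for token classification with Hugging Face Transformers. The repository id is an assumption (same namespace as the base model) and the input text is a placeholder; adjust both to your setup.

```python
# Minimal sketch: loading the fine-tuned checkpoint for token classification.
# The repository id below is an assumption (same namespace as the base model);
# replace it with the actual Hub id or a local path if it differs.
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_id = "HueyNemud/icdar23-entrydetector_plaintext_breaks_indents_left_diff_right_ref"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# Merge sub-token predictions into labelled spans.
detector = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
)

text = "..."  # placeholder: a plain-text page, with line breaks preserved
print(detector(text))
```

With `aggregation_strategy="simple"`, sub-token predictions are merged into spans carrying the entry-boundary labels (e.g. Ebegin / Eend) reported in the evaluation above.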