---
license: mit
base_model: davelotito/donut-base-sroie
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- bleu
- wer
model-index:
- name: donut-base-sroie-v2
  results: []
---

# donut-base-sroie-v2

This model is a fine-tuned version of [davelotito/donut-base-sroie](https://huggingface.co/davelotito/donut-base-sroie) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4355
- Bleu: 0.8879
- Precisions: [0.943646408839779, 0.9119229045271179, 0.8854285064787452, 0.860009225092251]
- Brevity Penalty: 0.9868
- Length Ratio: 0.9869
- Translation Length: 4525
- Reference Length: 4585
- Cer: 0.0857
- Wer: 0.2978

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu   | Precisions | Brevity Penalty | Length Ratio | Translation Length | Reference Length | Cer    | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|:----------:|:---------------:|:------------:|:------------------:|:----------------:|:------:|:------:|
| No log        | 0.99  | 62   | 0.4638          | 0.8823 | [0.9399823477493381, 0.9044528977399866, 0.8772128915115751, 0.8514851485148515] | 0.9884 | 0.9884 | 4532 | 4585 | 0.0912 | 0.3085 |
| 0.0043        | 2.0   | 125  | 0.4421          | 0.8853 | [0.9405155320555189, 0.9059428060768543, 0.8794470881486517, 0.8537931034482759] | 0.9899 | 0.9900 | 4539 | 4585 | 0.0889 | 0.3050 |
| 0.0043        | 2.99  | 187  | 0.4328          | 0.8904 | [0.9399122807017544, 0.9068267734044919, 0.8809201623815968, 0.8558682223747426] | 0.9945 | 0.9945 | 4560 | 4585 | 0.0842 | 0.2939 |
| 0.0106        | 3.97  | 248  | 0.4355          | 0.8879 | [0.943646408839779, 0.9119229045271179, 0.8854285064787452, 0.860009225092251]   | 0.9868 | 0.9869 | 4525 | 4585 | 0.0857 | 0.2978 |

### Framework versions

- Transformers 4.40.0.dev0
- Pytorch 2.1.0
- Datasets 2.18.0
- Tokenizers 0.15.2
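
### Training configuration sketch

The hyperparameters listed above map onto a Transformers training-arguments object roughly as shown below. This is an illustrative sketch only, not the script used to train this checkpoint: the `output_dir` value and the choice of `Seq2SeqTrainingArguments` (rather than plain `TrainingArguments`) are assumptions, and any data loading, collation, and metric code is omitted.

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical reconstruction of the hyperparameters reported in this card;
# the original training script is not included here.
training_args = Seq2SeqTrainingArguments(
    output_dir="donut-base-sroie-v2",   # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,      # total train batch size 4 * 2 = 8
    num_train_epochs=4,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                          # "Native AMP" mixed precision
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```

Passing these arguments to a `Seq2SeqTrainer` together with the fine-tuning dataset would approximate the configuration reported above.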