
donut-base-sroie-metrics-combined-new-instance-050824

This model is a fine-tuned version of naver-clova-ix/donut-base on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6594
  • BLEU: 0.0554
  • Precisions (1- to 4-gram): [0.7702970297029703, 0.671875, 0.6061381074168798, 0.5538922155688623]
  • Brevity penalty: 0.0858
  • Length ratio: 0.2894
  • Translation length: 505
  • Reference length: 1745
  • CER: 0.7675
  • WER: 0.8512
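The reported BLEU score can be cross-checked from the listed n-gram precisions and lengths: the brevity penalty is exp(1 - ref_len/hyp_len) when the hypothesis is shorter than the reference, and BLEU is that penalty times the geometric mean of the precisions. A minimal sketch using the values above:

```python
import math

# Evaluation-set values reported above
precisions = [0.7702970297029703, 0.671875, 0.6061381074168798, 0.5538922155688623]
translation_length = 505
reference_length = 1745

# Brevity penalty: penalizes hypotheses shorter than the reference
bp = math.exp(1 - reference_length / translation_length)

# BLEU = brevity penalty * geometric mean of the modified n-gram precisions
bleu = bp * math.exp(sum(math.log(p) for p in precisions) / len(precisions))

print(round(bp, 4), round(bleu, 4))  # 0.0858 0.0554
```

Both values match the reported Brevity penalty (0.0858) and BLEU (0.0554).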

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 2
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 4
  • mixed_precision_training: Native AMP
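With these settings the effective batch size is train_batch_size × gradient_accumulation_steps = 1 × 2 = 2, and the linear scheduler decays the learning rate from 2e-05 to 0 over the 1012 optimization steps shown in the results table. A minimal sketch of that decay, assuming zero warmup steps (the Trainer default when warmup is not set):

```python
# Linear LR decay implied by lr_scheduler_type=linear with zero warmup
# (assumed); total_steps=1012 is taken from the training results table.
LEARNING_RATE = 2e-05
TOTAL_STEPS = 1012

def linear_lr(step, base_lr=LEARNING_RATE, total_steps=TOTAL_STEPS):
    # Decays linearly from base_lr at step 0 to 0 at total_steps
    return base_lr * max(0.0, 1.0 - step / total_steps)

print(linear_lr(0), linear_lr(506), linear_lr(1012))  # 2e-05 1e-05 0.0
```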

Training results

| Training Loss | Epoch | Step | Validation Loss | BLEU | Precisions (1- to 4-gram) | Brevity Penalty | Length Ratio | Translation Length | Reference Length | CER | WER |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 3.4087 | 1.0 | 253 | 1.6485 | 0.0082 | [0.4555984555984556, 0.14316702819956617, 0.06188118811881188, 0.01440922190201729] | 0.0936 | 0.2968 | 518 | 1745 | 0.8353 | 0.9368 |
| 1.1999 | 2.0 | 506 | 0.8976 | 0.0274 | [0.6748878923766816, 0.5347043701799485, 0.4578313253012048, 0.3890909090909091] | 0.0543 | 0.2556 | 446 | 1745 | 0.7818 | 0.8754 |
| 0.8013 | 3.0 | 759 | 0.7295 | 0.0508 | [0.7580645161290323, 0.6583143507972665, 0.5890052356020943, 0.5384615384615384] | 0.0806 | 0.2842 | 496 | 1745 | 0.7629 | 0.8557 |
| 0.6404 | 4.0 | 1012 | 0.6594 | 0.0554 | [0.7702970297029703, 0.671875, 0.6061381074168798, 0.5538922155688623] | 0.0858 | 0.2894 | 505 | 1745 | 0.7675 | 0.8512 |
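The CER and WER columns are normalized edit distances: Levenshtein distance computed over characters (CER) or whitespace-separated tokens (WER), divided by the reference length. The training run most likely used a metrics library such as `evaluate` or `jiwer` (an assumption); a minimal self-contained sketch of the computation:

```python
def levenshtein(ref, hyp):
    # Classic dynamic-programming edit distance between two sequences
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]

def cer(reference, hypothesis):
    # Character error rate: character edits / reference characters
    return levenshtein(reference, hypothesis) / len(reference)

def wer(reference, hypothesis):
    # Word error rate: token edits / reference tokens
    ref_tokens = reference.split()
    return levenshtein(ref_tokens, hypothesis.split()) / len(ref_tokens)

print(cer("abc", "axc"))                    # one substitution in three chars
print(wer("the cat sat", "the cat sits"))   # one substitution in three words
```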

Framework versions

  • Transformers 4.41.0.dev0
  • PyTorch 2.1.0
  • Datasets 2.19.1
  • Tokenizers 0.19.1
Model size: 202M params (Safetensors; I64/F32 tensor types)