
donut_experiment_bayesian_trial_3

This model is a fine-tuned version of naver-clova-ix/donut-base on an unspecified dataset. It achieves the following results on the evaluation set (a sketch of how these metrics can be reproduced follows the list):

  • Loss: 0.5840
  • BLEU: 0.0667
  • Precisions (1- to 4-gram): [0.8136645962732919, 0.7347417840375586, 0.6829268292682927, 0.6346153846153846]
  • Brevity penalty: 0.0934
  • Length ratio: 0.2967
  • Translation length: 483
  • Reference length: 1628
  • CER: 0.7599
  • WER: 0.8328

Model description

More information needed

Intended uses & limitations

More information needed
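
Although the card leaves intended uses undocumented, a typical inference pattern for a Donut fine-tune is sketched below. The repository id, the input image path, and the task prompt are assumptions; the prompt must match the start token actually used during fine-tuning.

```python
# Hedged inference sketch for a Donut fine-tune; repo id, image path, and
# task prompt are assumptions, not values documented in this card.
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

model_id = "your-namespace/donut_experiment_bayesian_trial_3"  # hypothetical repo id
processor = DonutProcessor.from_pretrained(model_id)
model = VisionEncoderDecoderModel.from_pretrained(model_id)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

image = Image.open("document.png").convert("RGB")  # hypothetical input image
pixel_values = processor(image, return_tensors="pt").pixel_values.to(device)

task_prompt = "<s>"  # assumption: replace with the fine-tune's actual prompt token
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids.to(device)

outputs = model.generate(
    pixel_values,
    decoder_input_ids=decoder_input_ids,
    max_length=model.config.decoder.max_position_embeddings,
)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```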

Training and evaluation data

More information needed
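
While the training data itself is not described, Donut models conventionally flatten each ground-truth JSON annotation into a tagged token sequence that the decoder learns to emit. The `json2token` helper below is a simplified illustration of that convention, not an API of this repository or of transformers:

```python
# Simplified illustration of the Donut ground-truth convention: nested JSON
# becomes a <s_key>...</s_key> token sequence. `json2token` is hypothetical.
def json2token(obj):
    """Flatten a nested dict/list into Donut-style tagged tokens."""
    if isinstance(obj, dict):
        return "".join(f"<s_{k}>{json2token(v)}</s_{k}>" for k, v in obj.items())
    if isinstance(obj, list):
        return "<sep/>".join(json2token(v) for v in obj)
    return str(obj)

# A hypothetical receipt annotation and its decoder target string:
annotation = {"menu": [{"nm": "coffee", "price": "3.50"}], "total": "3.50"}
print(json2token(annotation))
# -> <s_menu><s_nm>coffee</s_nm><s_price>3.50</s_price></s_menu><s_total>3.50</s_total>
```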

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.00017060423589132634
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 2
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 4
  • mixed_precision_training: Native AMP
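
As a hedged illustration, these settings map onto `transformers.Seq2SeqTrainingArguments` as follows; `output_dir` is an assumption, and unlisted arguments keep their defaults (the default optimizer betas of (0.9, 0.999) and epsilon of 1e-08 already match the values above):

```python
# Sketch of Seq2SeqTrainingArguments mirroring the hyperparameters above.
# output_dir is an assumption; unlisted arguments keep their defaults.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="donut_experiment_bayesian_trial_3",  # assumed output path
    learning_rate=0.00017060423589132634,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=2,  # total train batch size: 1 x 2 = 2
    lr_scheduler_type="linear",
    num_train_epochs=4,
    fp16=True,  # "Native AMP" mixed-precision training
)
```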

Training results

| Training Loss | Epoch | Step | Validation Loss | BLEU | Precisions (1- to 4-gram) | Brevity Penalty | Length Ratio | Translation Length | Reference Length | CER | WER |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.2012 | 1.0 | 253 | 0.6694 | 0.0547 | [0.7680851063829788, 0.6682808716707022, 0.6067415730337079, 0.5484949832775919] | 0.0851 | 0.2887 | 470 | 1628 | 0.7597 | 0.8411 |
| 0.127 | 2.0 | 506 | 0.6071 | 0.0638 | [0.7818930041152263, 0.6876456876456877, 0.6370967741935484, 0.5841269841269842] | 0.0954 | 0.2985 | 486 | 1628 | 0.7570 | 0.8360 |
| 0.0766 | 3.0 | 759 | 0.5786 | 0.0655 | [0.8125, 0.735224586288416, 0.6885245901639344, 0.6407766990291263] | 0.0915 | 0.2948 | 480 | 1628 | 0.7564 | 0.8319 |
| 0.0259 | 4.0 | 1012 | 0.5840 | 0.0667 | [0.8136645962732919, 0.7347417840375586, 0.6829268292682927, 0.6346153846153846] | 0.0934 | 0.2967 | 483 | 1628 | 0.7599 | 0.8328 |

Framework versions

  • Transformers 4.40.0
  • PyTorch 2.1.0
  • Datasets 2.18.0
  • Tokenizers 0.19.1

Safetensors

  • Model size: 202M params
  • Tensor types: I64, F32