
donut_experiment_bayesian_trial_2

This model is a fine-tuned version of naver-clova-ix/donut-base on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4983
  • BLEU: 0.0695
  • Precisions (1–4-gram): 0.8257 / 0.7718 / 0.7255 / 0.6817
  • Brevity penalty: 0.0928
  • Length ratio: 0.2961
  • Translation length: 482
  • Reference length: 1628
  • CER (character error rate): 0.7610
  • WER (word error rate): 0.8275
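
For reference, a minimal inference sketch follows. It assumes the checkpoint is published under a hypothetical Hub repo id (the actual path is not stated on this card) and that the model uses the standard DonutProcessor / VisionEncoderDecoderModel pairing from Transformers; the task prompt is also an assumption, since it depends on how the decoder was fine-tuned.

```python
# Minimal inference sketch; repo id, input image, and task prompt are placeholders.
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

model_id = "your-username/donut_experiment_bayesian_trial_2"  # hypothetical repo id
processor = DonutProcessor.from_pretrained(model_id)
model = VisionEncoderDecoderModel.from_pretrained(model_id)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
model.eval()

image = Image.open("document.png").convert("RGB")  # placeholder input image
pixel_values = processor(image, return_tensors="pt").pixel_values.to(device)

# Donut decodes a task-specific token sequence; the start prompt below is an
# assumption and depends on how the model was fine-tuned.
task_prompt = "<s>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids.to(device)

with torch.no_grad():
    outputs = model.generate(
        pixel_values,
        decoder_input_ids=decoder_input_ids,
        max_length=model.decoder.config.max_position_embeddings,
    )

print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```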

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a training-arguments sketch follows the list):

  • learning_rate: 0.00015752383448484097
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 2
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 4
  • mixed_precision_training: Native AMP
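
These settings map onto the Transformers Trainer API as in the sketch below. This is an illustrative reconstruction under the assumption that the run used Seq2SeqTrainingArguments, not the actual training script, and the output_dir is a placeholder.

```python
# Training-arguments sketch reconstructed from the hyperparameters above;
# the actual training script is not part of this card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="donut_experiment_bayesian_trial_2",  # placeholder
    learning_rate=0.00015752383448484097,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,  # effective train batch size of 2
    num_train_epochs=4,
    seed=42,
    lr_scheduler_type="linear",
    fp16=True,  # Native AMP mixed-precision training
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Transformers defaults.
)
```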

Training results

| Training Loss | Epoch | Step | Validation Loss | BLEU | Precisions (1–4-gram) | Brevity Penalty | Length Ratio | Translation Length | Reference Length | CER | WER |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.3017 | 1.0 | 253 | 0.7248 | 0.0641 | 0.7525 / 0.6500 / 0.5875 / 0.5276 | 0.1027 | 0.3053 | 497 | 1628 | 0.7622 | 0.8495 |
| 0.1875 | 2.0 | 506 | 0.6129 | 0.0670 | 0.7914 / 0.7153 / 0.6613 / 0.6006 | 0.0974 | 0.3004 | 489 | 1628 | 0.7565 | 0.8375 |
| 0.1171 | 3.0 | 759 | 0.5027 | 0.0697 | 0.8202 / 0.7588 / 0.7162 / 0.6741 | 0.0941 | 0.2973 | 484 | 1628 | 0.7563 | 0.8293 |
| 0.0432 | 4.0 | 1012 | 0.4983 | 0.0695 | 0.8257 / 0.7718 / 0.7255 / 0.6817 | 0.0928 | 0.2961 | 482 | 1628 | 0.7610 | 0.8275 |
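
Note that the low overall BLEU is driven mostly by the brevity penalty: the generated sequences are roughly 30% of the reference length, while the 1–4-gram precisions themselves are comparatively high. A sketch of how these metrics can be recomputed with the `evaluate` library follows; the prediction and reference strings are placeholders.

```python
# Metric-computation sketch using the `evaluate` library; the strings below
# are placeholders for actual model outputs and ground-truth targets.
import evaluate

bleu = evaluate.load("bleu")
cer = evaluate.load("cer")
wer = evaluate.load("wer")

predictions = ["generated sequence from the model"]  # placeholder outputs
references = [["ground-truth target sequence"]]      # placeholder references

result = bleu.compute(predictions=predictions, references=references)
print(result["bleu"], result["precisions"], result["brevity_penalty"])

# CER and WER expect flat reference strings rather than nested lists.
flat_references = [r[0] for r in references]
print(cer.compute(predictions=predictions, references=flat_references))
print(wer.compute(predictions=predictions, references=flat_references))
```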

Framework versions

  • Transformers 4.40.0
  • Pytorch 2.1.0
  • Datasets 2.18.0
  • Tokenizers 0.19.1