---
license: apache-2.0
base_model: Buseak/md_mt5_base_boun_split_second_v1_retrain_on_first_boun
tags:
  - generated_from_trainer
metrics:
  - bleu
model-index:
  - name: md_mt5_base_boun_split_second_v1_retrain_on_second_imst
    results: []
---

# md_mt5_base_boun_split_second_v1_retrain_on_second_imst

This model is a fine-tuned version of Buseak/md_mt5_base_boun_split_second_v1_retrain_on_first_boun on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.1581
- Bleu: 2.0954
- Gen Len: 18.7592
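The card does not document the expected input or output format. A minimal loading sketch, assuming the standard Hugging Face seq2seq API; the placeholder input text and the `max_length` value are assumptions, not taken from the card:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "Buseak/md_mt5_base_boun_split_second_v1_retrain_on_second_imst"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The expected input format is undocumented; this is placeholder text.
inputs = tokenizer("example input", return_tensors="pt")
# Gen Len ~18.8 in the eval results suggests outputs near a ~19-token cap,
# so max_length=20 is a guess consistent with those numbers.
output_ids = model.generate(**inputs, max_length=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```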

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
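The schedule these hyperparameters imply can be cross-checked against the results table with a little arithmetic. This sketch assumes a single device and no gradient accumulation, neither of which the card states:

```python
# Hyperparameters from the card
train_batch_size = 4
num_epochs = 15
learning_rate = 2e-05

# From the results table: step 916 corresponds to epoch 1.0
steps_per_epoch = 916

# Implied training-set size (assuming batch_size examples per step)
approx_train_examples = steps_per_epoch * train_batch_size  # ~3664 examples

# Total optimizer steps; the table ends at step 13740, epoch 15.0
total_steps = steps_per_epoch * num_epochs

def linear_lr(step, base_lr=learning_rate, total=total_steps):
    """Linear scheduler with no warmup: decays from base_lr to 0."""
    return base_lr * max(0.0, (total - step) / total)

print(approx_train_examples, total_steps, linear_lr(total_steps // 2))
```

Halfway through training the learning rate has decayed to half its initial value, which is the expected behavior of the `linear` scheduler with zero warmup steps.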

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Bleu   | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 0.5133        | 1.0   | 916   | 0.2581          | 1.8828 | 18.7472 |
| 0.4699        | 2.0   | 1832  | 0.2371          | 1.9073 | 18.7142 |
| 0.4469        | 3.0   | 2748  | 0.2222          | 1.9197 | 18.739  |
| 0.4205        | 4.0   | 3664  | 0.2085          | 1.9502 | 18.7502 |
| 0.4001        | 5.0   | 4580  | 0.1983          | 1.9643 | 18.7409 |
| 0.3992        | 6.0   | 5496  | 0.1898          | 1.9948 | 18.7633 |
| 0.3824        | 7.0   | 6412  | 0.1841          | 2.0136 | 18.7614 |
| 0.3666        | 8.0   | 7328  | 0.1766          | 2.0422 | 18.7649 |
| 0.3562        | 9.0   | 8244  | 0.1713          | 2.0554 | 18.7636 |
| 0.345         | 10.0  | 9160  | 0.1672          | 2.0659 | 18.7611 |
| 0.3465        | 11.0  | 10076 | 0.1637          | 2.0804 | 18.7611 |
| 0.3303        | 12.0  | 10992 | 0.1617          | 2.0815 | 18.7619 |
| 0.3333        | 13.0  | 11908 | 0.1594          | 2.0835 | 18.7592 |
| 0.328         | 14.0  | 12824 | 0.1581          | 2.0958 | 18.7592 |
| 0.3267        | 15.0  | 13740 | 0.1581          | 2.0954 | 18.7592 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
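To reproduce this environment, the pinned versions above translate to roughly the following install commands (a sketch: the separate PyTorch index URL assumes a CUDA 11.8 setup, matching the `+cu118` build tag):

```shell
pip install "transformers==4.35.2" "datasets==2.15.0" "tokenizers==0.15.0"
pip install "torch==2.1.0" --index-url https://download.pytorch.org/whl/cu118
```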