
finetune-newwiki-summarization-ver1

This model is a fine-tuned version of VietAI/vit5-base on an unspecified dataset. It achieves the following results on the evaluation set (an illustrative usage sketch follows the metric list):

  • Loss: 0.4720
  • Rouge1: 48.6293
  • Rouge2: 25.6053
  • RougeL: 35.2967
  • RougeLsum: 37.4842
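
The card does not include a usage example. Below is a minimal inference sketch with the Hugging Face transformers API; the input is assumed to be plain Vietnamese text, and the truncation and generation settings (beam size, length limits) are illustrative assumptions rather than values documented here.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "minnehwg/finetune-newwiki-summarization-ver1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Assumed input format: plain Vietnamese article text, no prompt prefix.
text = "Văn bản tiếng Việt cần tóm tắt ..."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)

# Generation settings below are illustrative, not taken from the card.
summary_ids = model.generate(**inputs, num_beams=4, max_length=256)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```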

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an illustrative configuration sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.05
  • num_epochs: 10
  • mixed_precision_training: Native AMP
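
The listed Adam betas and epsilon are the Hugging Face Trainer defaults. The sketch below is an illustrative reconstruction of these settings with Seq2SeqTrainingArguments, not the exact training script; output_dir and the evaluation cadence are assumptions not documented in the card.

```python
from transformers import Seq2SeqTrainingArguments

# Illustrative reconstruction of the hyperparameters listed above.
# output_dir and evaluation_strategy are assumptions, not stated in the card.
training_args = Seq2SeqTrainingArguments(
    output_dir="finetune-newwiki-summarization-ver1",
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    num_train_epochs=10,
    fp16=True,                    # Native AMP mixed precision
    evaluation_strategy="epoch",  # assumed: metrics are reported once per epoch
    predict_with_generate=True,   # generate summaries during evaluation for ROUGE
)
```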

Training results

| Training Loss | Epoch | Step  | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 0.7106        | 1.0   | 1980  | 0.5006          | 46.5921 | 22.8276 | 33.1994 | 35.6330   |
| 0.621         | 2.0   | 3960  | 0.4774          | 47.4426 | 24.1508 | 34.1315 | 36.5692   |
| 0.5607        | 3.0   | 5940  | 0.4690          | 48.1503 | 24.7217 | 34.5071 | 36.7568   |
| 0.5241        | 4.0   | 7920  | 0.4673          | 48.2480 | 25.0604 | 34.4937 | 36.9301   |
| 0.499         | 5.0   | 9900  | 0.4678          | 48.1659 | 25.1857 | 34.9460 | 37.1931   |
| 0.4592        | 6.0   | 11880 | 0.4694          | 48.5839 | 25.5925 | 35.2301 | 37.5352   |
| 0.4535        | 7.0   | 13860 | 0.4720          | 48.6293 | 25.6053 | 35.2967 | 37.4842   |
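
The card does not state how the ROUGE scores above were computed. A common approach is sketched below using the Hugging Face evaluate package (an assumption; it is not listed under framework versions). Recent versions of the rouge metric return scores on a 0–1 scale, so they are multiplied by 100 to match the table.

```python
import evaluate

# Sketch of ROUGE scoring with the `evaluate` package (assumed; the card does
# not name the metric implementation). Replace the lists with real outputs.
rouge = evaluate.load("rouge")

predictions = ["bản tóm tắt do mô hình sinh ra"]  # model-generated summaries
references = ["bản tóm tắt tham chiếu"]           # gold reference summaries

scores = rouge.compute(predictions=predictions, references=references)
# Keys: rouge1, rouge2, rougeL, rougeLsum; scaled by 100 to compare with the table.
print({k: round(v * 100, 4) for k, v in scores.items()})
```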

Framework versions

  • Transformers 4.17.0
  • PyTorch 2.1.2
  • Datasets 2.18.0
  • Tokenizers 0.15.2