
fine-tuning-vit5-mlgsum-gelu

This model is a fine-tuned version of VietAI/vit5-base on an unspecified dataset. It achieves the following results on the evaluation set (a hedged usage sketch follows the list):

  • Loss: 2.1556
  • ROUGE-1: 50.5193
  • ROUGE-2: 21.2954
  • ROUGE-L: 33.6144
  • ROUGE-Lsum: 33.9444
  • Gen Len: 22.7724
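
Not part of the original card: a minimal inference sketch, assuming the checkpoint is loadable with the standard Transformers seq2seq classes under the repo id baovox/fine-tuning-vit5-mlgsum-gelu. The input text and generation settings are placeholders, not values from the card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "baovox/fine-tuning-vit5-mlgsum-gelu"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "..."  # Vietnamese document to summarize (placeholder)
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)

# Generated summaries averaged roughly 23 tokens on the evaluation set (Gen Len above),
# so a modest max_new_tokens budget is a reasonable starting point.
summary_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```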

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged reproduction sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.05
  • num_epochs: 3
  • mixed_precision_training: Native AMP
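
The sketch below reconstructs the hyperparameters above as a transformers.Seq2SeqTrainingArguments configuration. This is an assumption about how the run was set up, not the original training script; the output directory is a placeholder.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="fine-tuning-vit5-mlgsum-gelu",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    num_train_epochs=3,
    fp16=True,          # "Native AMP" mixed-precision training
    adam_beta1=0.9,     # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```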

Training results

| Training Loss | Epoch | Step  | Validation Loss | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum | Gen Len |
|---------------|-------|-------|-----------------|---------|---------|---------|------------|---------|
| 2.2549        | 1.0   | 5972  | 2.1928          | 50.2908 | 20.6038 | 33.1745 | 33.4632    | 22.7732 |
| 2.1180        | 2.0   | 11944 | 2.1566          | 50.1299 | 21.0429 | 33.4773 | 33.7977    | 22.7204 |
| 1.9907        | 3.0   | 17916 | 2.1556          | 50.5193 | 21.2954 | 33.6144 | 33.9444    | 22.7724 |
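
The ROUGE columns above appear to be reported on a 0-100 scale. As a hedged illustration of how such scores are typically computed with the evaluate library (the prediction and reference lists here are placeholders, not the actual evaluation data):

```python
import evaluate

rouge = evaluate.load("rouge")
predictions = ["..."]  # model-generated summaries (placeholder)
references = ["..."]   # gold summaries from the evaluation set (placeholder)

# Returns a dict with keys rouge1, rouge2, rougeL, rougeLsum on a 0-1 scale;
# multiplying by 100 gives values comparable to the table above.
scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
print(scores)
```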

Framework versions

  • Transformers 4.41.2
  • Pytorch 2.1.2
  • Datasets 2.19.2
  • Tokenizers 0.19.1