
mT5_multilingual_XLSum-sinhala-abstaractive-summarization_CNN-dailymail-V2

This model is a fine-tuned version of csebuetnlp/mT5_multilingual_XLSum on a Sinhala version of the CNN/DailyMail dataset. It achieves the following results on the evaluation set (a usage sketch follows the metrics below):

  • Loss: 2.4863
  • Rouge1: 19.9769
  • Rouge2: 8.04
  • Rougel: 19.0307
  • Rougelsum: 19.7651
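
A minimal usage sketch for generating a summary with this checkpoint, assuming it is hosted on the Hugging Face Hub under the name above; the repo id, input text, and generation parameters here are illustrative assumptions, not values taken from the card:

```python
# A minimal usage sketch; the repo id below is an assumption based on the
# model name in this card, and the generation parameters are illustrative.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "mT5_multilingual_XLSum-sinhala-abstaractive-summarization_CNN-dailymail-V2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "..."  # a Sinhala news article goes here

inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(
    **inputs,
    max_length=84,         # assumption: mirrors the base XLSum card's settings
    num_beams=4,
    no_repeat_ngram_size=2,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```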

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training; a sketch of the equivalent Trainer configuration follows the list:

  • learning_rate: 0.00056
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
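
A hedged sketch of how the listed hyperparameters map onto transformers' Seq2SeqTrainingArguments (as of the 4.28-era API); output_dir and anything not listed above are assumptions:

```python
# A hedged sketch mapping the listed hyperparameters onto
# Seq2SeqTrainingArguments. output_dir is an assumption (not in the card);
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the transformers defaults,
# which match the optimizer settings listed above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./outputs",           # assumption: not stated in the card
    learning_rate=5.6e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    evaluation_strategy="epoch",      # assumption: the table reports per-epoch eval
    predict_with_generate=True,       # assumption: needed to compute ROUGE at eval
)
```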

Training results

Training Loss   Epoch   Step   Validation Loss   Rouge1    Rouge2   Rougel    Rougelsum
1.8746          1.0      750   1.8262            18.9753   7.9271   18.1349   18.7152
1.4727          2.0     1500   1.8094            19.2219   7.9749   18.4314   18.9405
1.2331          3.0     2250   1.8432            20.436    7.8378   19.584    20.1613
1.0381          4.0     3000   1.8987            20.2251   7.9593   19.1556   19.9829
0.8737          5.0     3750   1.9471            20.3262   7.8935   19.407    20.0628
0.7363          6.0     4500   2.0611            20.1551   7.5046   19.2213   19.963
0.6214          7.0     5250   2.1838            19.9045   7.6232   18.743    19.5983
0.5277          8.0     6000   2.3190            20.8581   8.1054   19.8079   20.5414
0.4576          9.0     6750   2.4091            20.028    7.7635   19.0721   19.7053
0.4099         10.0     7500   2.4863            19.9769   8.04     19.0307   19.7651
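
Scores of this kind can be computed with the `evaluate` library; a minimal sketch follows, where the predictions and references are placeholders rather than the actual evaluation set:

```python
# A minimal sketch of computing ROUGE with the `evaluate` library; the
# predictions and references are placeholders, not the card's eval data.
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["model-generated summary"],
    references=["reference summary"],
)
# Scale to percentages to match the table above.
print({k: round(v * 100, 4) for k, v in scores.items()})
```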

Framework versions

  • Transformers 4.28.1
  • Pytorch 2.0.0+cu118
  • Datasets 2.12.0
  • Tokenizers 0.13.3