---
library_name: transformers
license: apache-2.0
base_model: google/mt5-small
tags:
  - summarization
  - generated_from_trainer
datasets:
  - samsum
metrics:
  - rouge
model-index:
  - name: mt5-small-finetuned
    results:
      - task:
          name: Sequence-to-sequence Language Modeling
          type: text2text-generation
        dataset:
          name: samsum
          type: samsum
          config: samsum
          split: validation
          args: samsum
        metrics:
          - name: Rouge1
            type: rouge
            value: 0.4303256962227823
---

# mt5-small-finetuned

This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the samsum dataset. It achieves the following results on the evaluation set:

- Loss: 1.7974
- Rouge1: 0.4303
- Rouge2: 0.2038
- Rougel: 0.3736
- Rougelsum: 0.3734
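A minimal inference sketch using the `transformers` summarization pipeline. The repo id `Ah7med/mt5-small-finetuned` is inferred from this card's title and owner, and the sample dialogue is illustrative:

```python
# Minimal inference sketch; the checkpoint id below is assumed from this card.
from transformers import pipeline

summarizer = pipeline("summarization", model="Ah7med/mt5-small-finetuned")

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)

# Generate a short abstractive summary of the dialogue.
print(summarizer(dialogue, max_length=64, min_length=8)[0]["summary_text"])
```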

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

Per the card metadata, the model was fine-tuned on the samsum dialogue-summarization dataset, with the validation split used for the metrics reported above. A loading sketch follows.
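A minimal sketch of loading samsum with the `datasets` library. Note that `datasets` 3.x no longer executes dataset loading scripts, so the original script-based `samsum` repo may fail to load; a parquet-backed mirror is one workaround:

```python
# Sketch of loading samsum; with datasets>=3.0 the script-based "samsum"
# repo may not load, in which case a parquet mirror would be needed.
from datasets import load_dataset

samsum = load_dataset("samsum")  # splits: train / validation / test

example = samsum["train"][0]
print(example["dialogue"])  # multi-turn chat transcript
print(example["summary"])   # short reference summary
```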

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 8
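A sketch mapping these hyperparameters onto `Seq2SeqTrainingArguments`. The `output_dir`, per-epoch evaluation strategy, and `predict_with_generate` flag are assumptions (per-epoch evaluation is suggested by the results table below), not values taken from the card:

```python
# Sketch of the training configuration; commented fields mirror the card,
# the rest are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="mt5-small-finetuned",  # assumption: placeholder directory
    learning_rate=5.6e-5,              # from the card
    per_device_train_batch_size=8,     # from the card
    per_device_eval_batch_size=8,      # from the card
    seed=42,                           # from the card
    optim="adamw_torch",               # from the card
    adam_beta1=0.9,                    # from the card
    adam_beta2=0.999,                  # from the card
    adam_epsilon=1e-8,                 # from the card
    lr_scheduler_type="linear",        # from the card
    num_train_epochs=8,                # from the card
    eval_strategy="epoch",             # assumption: table reports per-epoch eval
    predict_with_generate=True,        # assumption: needed to compute ROUGE
)
```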

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|
| 2.1585        | 1.0   | 1842  | 1.9205          | 0.4074 | 0.1838 | 0.3517 | 0.3518    |
| 2.1545        | 2.0   | 3684  | 1.8882          | 0.4120 | 0.1914 | 0.3592 | 0.3588    |
| 2.0888        | 3.0   | 5526  | 1.8290          | 0.4196 | 0.1939 | 0.3603 | 0.3601    |
| 2.0272        | 4.0   | 7368  | 1.8269          | 0.4215 | 0.1975 | 0.3637 | 0.3635    |
| 1.9871        | 5.0   | 9210  | 1.8224          | 0.4231 | 0.1943 | 0.3634 | 0.3633    |
| 1.9535        | 6.0   | 11052 | 1.8055          | 0.4285 | 0.2030 | 0.3715 | 0.3715    |
| 1.9322        | 7.0   | 12894 | 1.7954          | 0.4270 | 0.2018 | 0.3698 | 0.3697    |
| 1.9181        | 8.0   | 14736 | 1.7974          | 0.4303 | 0.2038 | 0.3736 | 0.3734    |
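The ROUGE values above are fractions in [0, 1], consistent with the default output of the `evaluate` library's `rouge` metric. A sketch of computing comparable scores (the prediction/reference pair is a placeholder):

```python
# Sketch of ROUGE scoring with the evaluate library; the example
# prediction/reference pair is illustrative only.
import evaluate

rouge = evaluate.load("rouge")
predictions = ["amanda baked cookies and will bring jerry some tomorrow"]
references = ["Amanda baked cookies and will bring Jerry some tomorrow."]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # keys: rouge1, rouge2, rougeL, rougeLsum (F-measures in [0, 1])
```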

### Framework versions

- Transformers 4.47.0
- PyTorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0