---
license: apache-2.0
base_model: google/flan-t5-base
tags:
  - generated_from_trainer
datasets:
  - samsum
metrics:
  - rouge
model-index:
  - name: flan-t5-base-samsum
    results:
      - task:
          name: Sequence-to-sequence Language Modeling
          type: text2text-generation
        dataset:
          name: samsum
          type: samsum
          config: samsum
          split: test
          args: samsum
        metrics:
          - name: Rouge1
            type: rouge
            value: 47.39
---

# flan-t5-base-samsum

This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the samsum dataset. It achieves the following results on the evaluation set:

- Loss: 1.3707
- Rouge1: 47.39
- Rouge2: 23.8837
- Rougel: 40.08
- Rougelsum: 43.7241
- Gen Len: 17.2137
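
For reference, a minimal inference sketch. The Hub id `Kekega/flan-t5-base-samsum` is an assumption inferred from this repository's owner and name; substitute a local checkpoint path if it differs. The sample dialogue is the well-known first example from SAMSum.

```python
from transformers import pipeline

# Assumed Hub id; replace with your local checkpoint path if needed.
summarizer = pipeline("summarization", model="Kekega/flan-t5-base-samsum")

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you tomorrow :-)"
)

print(summarizer(dialogue, max_length=60)[0]["summary_text"])
```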

## Model description

This is [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) fine-tuned for abstractive dialogue summarization: given a short chat-style conversation, the model generates a concise summary.

## Intended uses & limitations

The model is intended for summarizing short, informal English conversations similar to those in SAMSum (messenger-style chats). It has not been evaluated here on other domains, longer documents, or other languages, so quality outside that setting is unknown.

## Training and evaluation data

The model was fine-tuned and evaluated on the samsum dataset (the SAMSum corpus), which pairs messenger-like conversations with human-written summaries. Per the model-index metadata above, the headline Rouge1 score is reported on the test split.
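
As a sketch, the dataset can be inspected with the `datasets` library (using Datasets 2.15.0 as listed below; newer releases may require `trust_remote_code=True` for script-based datasets like samsum):

```python
from datasets import load_dataset

# SAMSum ships train/validation/test splits with "dialogue" and "summary" fields.
dataset = load_dataset("samsum")
print(dataset)
print(dataset["test"][0]["dialogue"])
print(dataset["test"][0]["summary"])
```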

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
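
The original training script is not included; a hypothetical reconstruction of these settings with `Seq2SeqTrainingArguments` from `transformers` might look like the following. The `output_dir`, evaluation cadence, and `predict_with_generate` are assumptions, not taken from the script; the Adam betas and epsilon above are the Trainer defaults, so they need not be set explicitly.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-base-samsum",  # assumption
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",   # assumption: matches the per-epoch rows below
    predict_with_generate=True,    # assumption: needed to compute ROUGE/Gen Len
)
```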

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.4525        | 1.0   | 1842 | 1.3837          | 46.4021 | 22.8734 | 39.1025 | 42.8284   | 17.2149 |
| 1.3436        | 2.0   | 3684 | 1.3725          | 47.0983 | 23.5269 | 39.8757 | 43.4526   | 17.1954 |
| 1.2821        | 3.0   | 5526 | 1.3708          | 47.2332 | 23.6343 | 39.7749 | 43.4436   | 17.2271 |
| 1.2307        | 4.0   | 7368 | 1.3707          | 47.39   | 23.8837 | 40.08   | 43.7241   | 17.2137 |
| 1.1986        | 5.0   | 9210 | 1.3762          | 47.4841 | 23.9306 | 40.0741 | 43.7225   | 17.2821 |
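
The headline metrics above match the epoch-4 row, which also has the lowest validation loss. For context, ROUGE scores like these can be computed with the `evaluate` library; this toy example only illustrates the call (the table values appear to be corpus-level ROUGE scores scaled by 100):

```python
import evaluate

rouge = evaluate.load("rouge")

# Toy prediction/reference pair; the real scores come from generating over
# the full evaluation set. evaluate returns fractions, so scale by 100 to
# compare with the table above.
scores = rouge.compute(
    predictions=["Amanda will bring Jerry cookies tomorrow."],
    references=["Amanda baked cookies and will bring Jerry some tomorrow."],
)
print({k: round(v * 100, 4) for k, v in scores.items()})
```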

### Framework versions

- Transformers 4.35.2
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0