---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: flan-t5-base-samsum
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: samsum
      type: samsum
      config: samsum
      split: test
      args: samsum
    metrics:
    - name: Rouge1
      type: rouge
      value: 47.4485
---

# flan-t5-base-samsum

This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the samsum dataset. It achieves the following results on the evaluation set (a usage sketch follows the list):

- Loss: 1.3772
- Rouge1: 47.4485
- Rouge2: 23.938
- Rougel: 40.0491
- Rougelsum: 43.6954
- Gen Len: 17.3162
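
As a quick start, here is a minimal inference sketch using the `transformers` `pipeline` API. The repository id passed to `model=` is a placeholder for wherever this checkpoint is published, and the example dialogue is merely illustrative of the SAMSum format:

```python
from transformers import pipeline

# Placeholder repo id: substitute the actual hub path of this checkpoint.
summarizer = pipeline("summarization", model="flan-t5-base-samsum")

# Illustrative SAMSum-style input: speaker-tagged chat turns, one per line.
dialogue = """Amanda: I baked cookies. Do you want some?
Jerry: Sure!
Amanda: I'll bring you some tomorrow :-)"""

print(summarizer(dialogue)[0]["summary_text"])
```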

## Model description

[google/flan-t5-base](https://huggingface.co/google/flan-t5-base) fine-tuned for abstractive dialogue summarization: given a chat-style conversation, the model generates a short summary of what was said.

## Intended uses & limitations

Intended for summarizing short, English, messenger-style dialogues like those in SAMSum. As an abstractive summarizer, it can drop details or produce statements not supported by the input, so summaries should be reviewed before downstream use.

## Training and evaluation data

Fine-tuned on the [samsum](https://huggingface.co/datasets/samsum) dataset of messenger-like conversations paired with human-written summaries; per the metadata above, the reported metrics are computed on its test split.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a training-setup sketch follows the list):

- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
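
The training script itself is not part of this card. The sketch below shows how the listed hyperparameters map onto the standard Hugging Face `Seq2SeqTrainer` recipe; the prompt prefix, sequence lengths, and preprocessing are assumptions, not values taken from the card:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model_id = "google/flan-t5-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

raw = load_dataset("samsum")  # columns: id, dialogue, summary

def preprocess(batch):
    # Assumed prompt format and lengths; the card does not state them.
    inputs = ["summarize: " + d for d in batch["dialogue"]]
    enc = tokenizer(inputs, max_length=512, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=128, truncation=True)
    enc["labels"] = labels["input_ids"]
    return enc

tokenized = raw.map(preprocess, batched=True, remove_columns=raw["train"].column_names)

# Mirrors the hyperparameter list above; the default AdamW optimizer already
# uses betas=(0.9, 0.999) and epsilon=1e-08.
args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-base-samsum",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",
    predict_with_generate=True,  # decode summaries during eval so ROUGE can be computed
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],  # the metadata reports metrics on the test split
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)
trainer.train()
```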

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.4264        | 1.0   | 1842 | 1.3829          | 46.5615 | 23.1026 | 39.4012 | 42.9128   | 17.0977 |
| 1.3527        | 2.0   | 3684 | 1.3732          | 47.1096 | 23.4582 | 39.5488 | 43.2577   | 17.4554 |
| 1.2554        | 3.0   | 5526 | 1.3709          | 46.9079 | 23.29   | 39.5731 | 43.1779   | 17.2027 |
| 1.2503        | 4.0   | 7368 | 1.3736          | 47.4506 | 23.7238 | 39.9803 | 43.5976   | 17.2198 |
| 1.1675        | 5.0   | 9210 | 1.3772          | 47.4485 | 23.938  | 40.0491 | 43.6954   | 17.3162 |
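
The Rouge columns above are metric fractions scaled to percentages. For reference, here is a standalone sketch of computing comparable scores with the `evaluate` library (assumed tooling; the exact `compute_metrics` code used during training is not included in the card):

```python
import evaluate  # pip install evaluate rouge_score

rouge = evaluate.load("rouge")

# Toy prediction/reference pair; in practice these come from model.generate()
# over the samsum test split.
scores = rouge.compute(
    predictions=["Amanda will bring Jerry cookies tomorrow."],
    references=["Amanda baked cookies and will bring Jerry some tomorrow."],
)

# evaluate returns fractions in [0, 1]; the card reports them multiplied by 100.
print({k: round(v * 100, 4) for k, v in scores.items()})
```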

### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.1
- Tokenizers 0.13.2