---
language:
  - en
license: apache-2.0
tags:
  - generated_from_trainer
datasets:
  - samsum
metrics:
  - rouge
model-index:
  - name: flan-t5-base-samsum
    results:
      - task:
          name: Sequence-to-sequence Language Modeling
          type: text2text-generation
        dataset:
          name: samsum
          type: samsum
          config: samsum
          split: test
          args: samsum
        metrics:
          - name: Rouge1
            type: rouge
            value: 47.4339
---

# flan-t5-base-samsum

This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the [samsum](https://huggingface.co/datasets/samsum) dataset. It achieves the following results on the evaluation set:

- Loss: 1.3772
- Rouge1: 47.4339
- Rouge2: 23.9608
- RougeL: 40.0566
- RougeLsum: 43.6981
- Gen Len: 17.3162
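
For inference, the checkpoint can be loaded with the Transformers `summarization` pipeline. The sketch below is illustrative only: the Hub repo id is inferred from this card's name and the example dialogue is invented, so substitute your own values.

```python
# Minimal inference sketch. The repo id is an assumption based on this card's
# name; swap in the actual Hub id or a local checkpoint path.
from transformers import pipeline

summarizer = pipeline("summarization", model="andreaparker/flan-t5-base-samsum")

# Illustrative SAMSum-style dialogue ("Speaker: utterance" per line).
dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes! Noon at the usual place?\n"
    "Anna: Perfect, see you there."
)

print(summarizer(dialogue, max_length=60)[0]["summary_text"])
```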

## Model description

The model is [google/flan-t5-base](https://huggingface.co/google/flan-t5-base), an instruction-tuned T5 encoder-decoder, fine-tuned here for abstractive summarization of chat-style dialogues from the SAMSum corpus.

## Intended uses & limitations

The model is intended for summarizing short, English, messenger-style conversations like those in SAMSum. Its behavior on longer documents, other domains, or other languages has not been evaluated here.

## Training and evaluation data

Training and evaluation used the [samsum](https://huggingface.co/datasets/samsum) dataset of messenger-like dialogues paired with human-written summaries; the ROUGE scores above are reported on its test split, per the model-index metadata.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
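
These settings map onto `Seq2SeqTrainingArguments` roughly as sketched below; `output_dir`, `evaluation_strategy`, and `predict_with_generate` are assumptions inferred from the per-epoch results table and the generation metrics, not values taken from the original run.

```python
# Hedged reconstruction of the training configuration listed above
# (Transformers 4.26.0). Only the bulleted hyperparameters come from the card;
# anything marked "assumed" is a guess for illustration.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-base-samsum",    # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",         # assumed: results table is per epoch
    predict_with_generate=True,          # assumed: ROUGE/Gen Len need generation
    # The optimizer defaults already match the card: AdamW with
    # betas=(0.9, 0.999) and epsilon=1e-08.
)
```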

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.4403        | 1.0   | 1842 | 1.3829          | 46.5338 | 23.1342 | 39.4468 | 42.8518   | 17.0977 |
| 1.3534        | 2.0   | 3684 | 1.3732          | 47.0913 | 23.5016 | 39.5941 | 43.238    | 17.4554 |
| 1.2795        | 3.0   | 5526 | 1.3709          | 46.8916 | 23.3226 | 39.5661 | 43.1582   | 17.2027 |
| 1.2313        | 4.0   | 7368 | 1.3736          | 47.441  | 23.7501 | 40.0446 | 43.6336   | 17.2198 |
| 1.1934        | 5.0   | 9210 | 1.3772          | 47.4339 | 23.9608 | 40.0566 | 43.6981   | 17.3162 |
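
To reproduce this style of evaluation, ROUGE can be computed with the `evaluate` library. The sketch below runs on a small slice for illustration only; the repo id, slice size, and generation length are assumptions.

```python
# Minimal ROUGE evaluation sketch. Note the samsum loading script requires the
# py7zr package, and the rouge metric requires rouge_score.
import evaluate
from datasets import load_dataset
from transformers import pipeline

summarizer = pipeline("summarization", model="andreaparker/flan-t5-base-samsum")
rouge = evaluate.load("rouge")

test = load_dataset("samsum", split="test").select(range(8))  # tiny slice, for illustration
preds = [out["summary_text"] for out in summarizer(test["dialogue"], max_length=60)]

print(rouge.compute(predictions=preds, references=test["summary"]))
```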

### Framework versions

- Transformers 4.26.0
- PyTorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2