---
license: apache-2.0
base_model: t5-small
tags:
  - generated_from_trainer
datasets:
  - govreport-summarization
metrics:
  - rouge
model-index:
  - name: govreport-summarization
    results:
      - task:
          name: Sequence-to-sequence Language Modeling
          type: text2text-generation
        dataset:
          name: govreport-summarization
          type: govreport-summarization
          config: document
          split: train[:17000]
          args: document
        metrics:
          - name: Rouge1
            type: rouge
            value: 0.1673
---

govreport-summarization

This model is a fine-tuned version of t5-small on the govreport-summarization dataset. It achieves the following results on the evaluation set:

  • Loss: 2.2117
  • Rouge1: 0.1673
  • Rouge2: 0.0792
  • RougeL: 0.1398
  • RougeLsum: 0.1398
  • Gen Len: 19.0

Model description

t5-small is a ~60M-parameter encoder-decoder Transformer. This checkpoint fine-tunes it for abstractive summarization of long U.S. government reports, the domain covered by the govreport-summarization dataset.

Intended uses & limitations

The model is intended for abstractive summarization of long-form English government and policy reports; a usage sketch follows. It has not been evaluated outside this domain, and the constant Gen Len of 19.0 in the reported results suggests evaluation summaries were tightly truncated, so the ROUGE scores should be read with that in mind.
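
A minimal inference sketch using the transformers summarization pipeline. The repo id Hemg/govreport-summarization is an assumption; substitute the actual Hub path or a local checkpoint directory.

```python
from transformers import pipeline

# Assumption: the Hub repo id; replace with the actual model path or a
# local checkpoint directory.
summarizer = pipeline("summarization", model="Hemg/govreport-summarization")

report = "The Government Accountability Office reviewed federal grant oversight ..."
# "summarize:" is T5's conventional task prefix; drop it if the
# fine-tuning did not use one.
result = summarizer("summarize: " + report, max_length=64, min_length=16)
print(result[0]["summary_text"])
```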

Training and evaluation data

The model was fine-tuned on the first 17,000 examples of the govreport-summarization train split (train[:17000] in the metadata above). The 850 optimizer steps per epoch at a batch size of 16 imply roughly 13,600 training examples, consistent with an 80/20 train/eval split of that slice; a loading sketch follows.
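
A minimal sketch of reproducing that split with the datasets library. The repo id is an assumption (the card only names "govreport-summarization"); the "document" config and the train[:17000] slice come from the card metadata, and the 80/20 split is the inference described above.

```python
from datasets import load_dataset

# Assumption: the Hub repo id; the "document" config and the
# train[:17000] slice are taken from the card metadata.
raw = load_dataset("ccdv/govreport-summarization", "document", split="train[:17000]")

# 80/20 split is an inference: 850 steps/epoch x batch size 16 ~= 13,600
# training examples out of the 17,000-example slice.
splits = raw.train_test_split(test_size=0.2, seed=42)
train_ds, eval_ds = splits["train"], splits["test"]
```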

Training procedure

Training hyperparameters

The following hyperparameters were used during training; a sketch of how they map onto the Trainer API follows the list:

  • learning_rate: 0.0005
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 4
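
As a rough mapping onto transformers' Seq2SeqTrainingArguments (argument names follow the 4.39 API; evaluation_strategy and predict_with_generate are assumptions inferred from the per-epoch ROUGE results below):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="govreport-summarization",
    learning_rate=5e-4,                # 0.0005
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=4,
    lr_scheduler_type="linear",
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults.
    evaluation_strategy="epoch",       # assumption: metrics logged once per epoch
    predict_with_generate=True,        # assumption: needed to compute ROUGE
)
```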

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | RougeL | RougeLsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 2.6565        | 1.0   | 850  | 2.3189          | 0.1640 | 0.0744 | 0.1364 | 0.1365    | 19.0    |
| 2.3913        | 2.0   | 1700 | 2.2522          | 0.1656 | 0.0766 | 0.1379 | 0.1380    | 19.0    |
| 2.2813        | 3.0   | 2550 | 2.2187          | 0.1669 | 0.0779 | 0.1393 | 0.1394    | 19.0    |
| 2.2273        | 4.0   | 3400 | 2.2117          | 0.1673 | 0.0792 | 0.1398 | 0.1398    | 19.0    |
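
The ROUGE columns can be recomputed with the evaluate library; a minimal sketch with illustrative strings is below. Note that the constant Gen Len of 19.0 suggests generation was capped near transformers' default max_length of 20, so longer generations may score differently.

```python
import evaluate

rouge = evaluate.load("rouge")  # requires the rouge_score package

# Illustrative strings only; in practice, decode the model's generations
# and pair them with the reference summaries from the eval split.
predictions = ["the report recommends increased oversight of federal grant programs."]
references = ["the report recommends that congress strengthen oversight of federal grant programs."]

print(rouge.compute(predictions=predictions, references=references))
# -> {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}
```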

Framework versions

  • Transformers 4.39.3
  • PyTorch 2.1.2
  • Datasets 2.18.0
  • Tokenizers 0.15.2