---
tags:
  - paraphrasing
  - generated_from_trainer
datasets:
  - paws
metrics:
  - rouge
model-index:
  - name: pegasus-pubmed-finetuned-paws
    results:
      - task:
          name: Sequence-to-sequence Language Modeling
          type: text2text-generation
        dataset:
          name: paws
          type: paws
          args: labeled_final
        metrics:
          - name: Rouge1
            type: rouge
            value: 56.8108
---

# pegasus-pubmed-finetuned-paws

This model is a fine-tuned version of [google/pegasus-pubmed](https://huggingface.co/google/pegasus-pubmed) on the paws dataset. It achieves the following results on the evaluation set (a brief usage sketch follows the results):

- Loss: 3.5012
- Rouge1: 56.8108
- Rouge2: 36.2576
- RougeL: 51.1666
- RougeLsum: 51.2193
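
Below is a minimal usage sketch for loading the checkpoint and generating a paraphrase. It assumes the model is published under the repo id `domenicrosati/pegasus-pubmed-finetuned-paws`; the generation settings (beam count, max length) are illustrative and not taken from this card.

```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

# Assumption: the fine-tuned checkpoint is available under this repo id.
model_name = "domenicrosati/pegasus-pubmed-finetuned-paws"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

text = "The patients were randomly assigned to the treatment group or the control group."
inputs = tokenizer(text, truncation=True, padding="longest", return_tensors="pt")

# Illustrative generation settings; not specified in the card.
outputs = model.generate(**inputs, num_beams=5, max_length=60)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```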

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
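
As a rough sketch, the hyperparameters above correspond to `Seq2SeqTrainingArguments` along the following lines. The actual training script is not part of this card, and the evaluation cadence (every 1000 steps) is inferred from the results table below.

```python
from transformers import Seq2SeqTrainingArguments

# Hedged reconstruction of the listed hyperparameters; not the author's script.
training_args = Seq2SeqTrainingArguments(
    output_dir="pegasus-pubmed-finetuned-paws",
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    fp16=True,                    # Native AMP mixed precision
    label_smoothing_factor=0.1,
    evaluation_strategy="steps",  # assumption: eval every 1000 steps, per the results table
    eval_steps=1000,
    predict_with_generate=True,   # assumption: needed to report ROUGE during evaluation
)
```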

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| No log        | 0.73  | 1000 | 3.8839          | 51.2731 | 29.8072 | 45.767  | 45.5732   |
| 4.071         | 1.47  | 2000 | 3.6459          | 52.756  | 31.9185 | 48.0092 | 48.0544   |
| 3.5467        | 2.2   | 3000 | 3.5849          | 54.8127 | 33.1959 | 49.326  | 49.4971   |
| 3.5467        | 2.93  | 4000 | 3.5267          | 55.387  | 33.9516 | 50.683  | 50.6313   |
| 3.3654        | 3.66  | 5000 | 3.5031          | 57.5279 | 35.2664 | 51.9903 | 52.258    |
| 3.2844        | 4.4   | 6000 | 3.5296          | 56.0536 | 33.395  | 50.9909 | 51.244    |

### Framework versions

- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1