---
base_model: google/pegasus-large
tags:
- generated_from_trainer
datasets:
- samsum
metrics:
- rouge
model-index:
- name: pegasus-samsum
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: samsum
      type: samsum
      config: samsum
      split: validation
      args: samsum
    metrics:
    - name: Rouge1
      type: rouge
      value: 0.4659
---

# pegasus-samsum

This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4091
- Rouge1: 0.4659
- Rouge2: 0.2345
- Rougel: 0.3946
- Rougelsum: 0.3951
- Gen Len: 17.7467

## Model description

PEGASUS is a Transformer encoder-decoder pre-trained with a gap-sentence generation objective designed for abstractive summarization. This checkpoint fine-tunes [google/pegasus-large](https://huggingface.co/google/pegasus-large) on SAMSum, specializing it in summarizing short, messenger-style dialogues. A minimal inference sketch is given at the end of this card.

## Intended uses & limitations

The model is intended for abstractive summarization of short, informal conversations like those in SAMSum. Performance on other domains (news articles, meeting transcripts, long documents) has not been evaluated here and may be substantially worse. Like all abstractive summarizers, it can produce fluent but unfaithful summaries, so outputs should be checked before downstream use.

## Training and evaluation data

The model was trained and evaluated on the [samsum](https://huggingface.co/datasets/samsum) dataset, a corpus of roughly 16k messenger-like dialogues paired with human-written summaries. The metrics above are reported on its validation split.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reconstruction of these as `Seq2SeqTrainingArguments` appears at the end of this card):
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.8025        | 0.27  | 500  | 1.4403          | 0.4466 | 0.2101 | 0.3832 | 0.3841    | 21.64   |
| 1.5936        | 0.54  | 1000 | 1.3766          | 0.4786 | 0.2374 | 0.4017 | 0.4013    | 21.24   |
| 1.5926        | 0.81  | 1500 | 1.3910          | 0.5118 | 0.2643 | 0.4282 | 0.4286    | 20.2267 |
| 1.5067        | 1.09  | 2000 | 1.4028          | 0.4982 | 0.261  | 0.4155 | 0.4157    | 20.4267 |
| 1.5712        | 1.36  | 2500 | 1.4236          | 0.4712 | 0.234  | 0.3964 | 0.3969    | 17.0    |
| 1.6177        | 1.63  | 3000 | 1.4151          | 0.4768 | 0.2382 | 0.4019 | 0.4022    | 16.28   |
| 1.6289        | 1.9   | 3500 | 1.4112          | 0.4744 | 0.2346 | 0.402  | 0.4033    | 17.0267 |
| 1.6326        | 2.17  | 4000 | 1.4096          | 0.4682 | 0.234  | 0.3985 | 0.3994    | 17.1333 |
| 1.5929        | 2.44  | 4500 | 1.4093          | 0.4637 | 0.2342 | 0.3939 | 0.3942    | 17.16   |
| 1.4351        | 2.72  | 5000 | 1.4090          | 0.4684 | 0.2346 | 0.3953 | 0.3955    | 17.8133 |
| 1.6445        | 2.99  | 5500 | 1.4091          | 0.4659 | 0.2345 | 0.3946 | 0.3951    | 17.7467 |

### Framework versions

- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
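
## Inference example

A minimal inference sketch using the `transformers` summarization pipeline. The repo id `your-username/pegasus-samsum` is a placeholder for wherever this checkpoint is actually published; the sample dialogue is the canonical SAMSum example.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint. "your-username/pegasus-samsum" is a
# placeholder; substitute the Hub path this card is published under.
summarizer = pipeline("summarization", model="your-username/pegasus-samsum")

dialogue = """Amanda: I baked cookies. Do you want some?
Jerry: Sure!
Amanda: I'll bring you tomorrow :-)"""

# Gen Len above averages ~18 tokens, so a short max_length is reasonable.
result = summarizer(dialogue, max_length=60, min_length=5)
print(result[0]["summary_text"])
```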
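
## Reconstructing the training arguments

For reference, the hyperparameters listed above map onto `Seq2SeqTrainingArguments` roughly as in the sketch below. This is a reconstruction from the card, not the original training script; `output_dir` and the evaluation schedule are assumptions (the 500-step eval interval is inferred from the results table).

```python
from transformers import Seq2SeqTrainingArguments

# Reconstructed from the hyperparameters above (Transformers 4.33.1).
training_args = Seq2SeqTrainingArguments(
    output_dir="pegasus-samsum",      # assumed; not stated on the card
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,    # effective train batch size: 4 * 2 = 8
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=3,
    # Adam betas (0.9, 0.999) and epsilon 1e-8 are the Trainer defaults.
    evaluation_strategy="steps",      # assumed: eval every 500 steps per the table
    eval_steps=500,
    predict_with_generate=True,       # required to compute ROUGE during evaluation
)
```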