---
base_model: google/pegasus-large
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: pegasus-large-finetuned-cnn_dailymail
  results: []
---

# pegasus-large-finetuned-cnn_dailymail

This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on the CNN/DailyMail dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8591
- Rouge1: 47.5545
- Rouge2: 24.7124
- Rougel: 34.1029
- Rougelsum: 44.0189
- Bleu 1: 37.0636
- Bleu 2: 25.5604
- Bleu 3: 19.3415
- Meteor: 38.0261
- Summary length: 57.9183
- Original length: 48.6393

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step   | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Bleu 1  | Bleu 2  | Bleu 3  | Meteor  | Summary length | Original length |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|:-------:|:-------:|:-------:|:--------------:|:---------------:|
| 0.9693        | 1.0   | 28660  | 0.8691          | 47.0692 | 24.3586 | 33.6526 | 43.4825   | 36.2858 | 24.9738 | 18.8926 | 37.8237 | 59.475         | 48.6393         |
| 0.8248        | 2.0   | 57320  | 0.8598          | 46.751  | 24.0748 | 33.3009 | 43.1207   | 36.0845 | 24.7477 | 18.6962 | 37.8529 | 60.7557        | 48.6393         |
| 0.7578        | 3.0   | 85980  | 0.8570          | 47.2636 | 24.4414 | 33.8604 | 43.6832   | 36.6893 | 25.2309 | 19.0981 | 38.0561 | 59.0953        | 48.6393         |
| 0.7145        | 4.0   | 114640 | 0.8591          | 47.5545 | 24.7124 | 34.1029 | 44.0189   | 37.0636 | 25.5604 | 19.3415 | 38.0261 | 57.9183        | 48.6393         |

### Framework versions

- Transformers 4.40.0
- Pytorch 2.2.2+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1
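
With the linear scheduler and hyperparameters listed above, the learning rate decays from 5.6e-05 at step 0 to zero at step 114,640 (4 epochs × 28,660 steps per epoch). A minimal sketch of that schedule, assuming no warmup steps were used (the card does not list any):

```python
def linear_lr(step: int, base_lr: float = 5.6e-5, total_steps: int = 114640) -> float:
    """Linear decay: base_lr at step 0, reaching 0 at total_steps."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

# Learning rate at the end of each epoch (28,660 optimizer steps per epoch).
for epoch in range(1, 5):
    step = epoch * 28660
    print(f"epoch {epoch}: lr = {linear_lr(step):.3e}")
```

For example, halfway through training (end of epoch 2, step 57,320) the learning rate has fallen to exactly half the initial value, 2.8e-05.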