---
base_model: google/pegasus-cnn_dailymail
tags:
- generated_from_trainer
- dialog summarizer
model-index:
- name: pegasus-finetuned-dialog-summarizer
  results: []
license: apache-2.0
datasets:
- samsum
metrics:
- rouge
library_name: transformers
widget:
- text: "Ernest: hey Mike, did you park your car on our street? Mike: no, took it into garage today Ernest: ok good Mike: why? Ernest: someone just crashed into a red honda looking just like yours Mike: lol lucky me."
- text: "Laura: Where are you? Paul: Almost there. Laura: Which is? Paul: Close to the Mac. Laura: That's so far away! Paul: 15 mins Laura: I am not waiting any more, see you some other time. Paul: Please, wait! Laura: I've waited 30 minutes, 15 minutes ago you wrote you were almost here. This is too much. Paul: I am so sorry. Laura: I am not."
- text: "Lola: hey girlfriend, what's up? Adele: Oh, hi Lols, not much. Adele: got a new dog. Lola: another one? Adele: Yup. a pup biscuit lab. 4 months. Chewy. Lola: how did the others react? Adele: the cats keep their distance, Poppy and Lulu seem to mother him. Speedy wants to play. Lola: no fighting? that's new. Adele: they say puppies are accepted by other animals more easily than older dogs Lola: especially girl dogs, probably Adele: with the other ones I had to wean them because I took them in as adult dogs. And girls like to fight. like crazy. Lola: doggies, right/. Adele: that too :P Lola: haha. true though. Adele: I know, right. Anyway, called him Bones. He's so plump it kinda fit. Lola: cute. can't wait to see him."
- text: "Ali: I think I left my wallet at your place yesterday. Could you check? Garcia: Give me a sec, I'll have a look around my room. Ali: OK. Garcia: Found it! Ali: Phew, I don't know what I'd do if it wasn't there. Can you bring it to uni tomorrow? Garcia: Sure thing."
---

# pegasus-finetuned-dialog-summarizer

This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4894

## Model description

The model adapts PEGASUS, an abstractive summarization model whose base checkpoint was fine-tuned on CNN/DailyMail news articles, to summarizing informal, messenger-style dialogues such as the widget examples above.

## Intended uses & limitations

Intended for abstractive summarization of short, multi-turn chat conversations in English. It was fine-tuned for a single epoch on samsum, so output quality on longer or more formal documents may differ from that of the base model; see the usage sketch under "How to use" below.

## Training and evaluation data

Training and evaluation use the [samsum](https://huggingface.co/datasets/samsum) dataset, which pairs messenger-like conversations with short human-written summaries.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after the framework versions below):
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6489        | 0.54  | 500  | 1.4894          |

### Framework versions

- Transformers 4.37.2
- PyTorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
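### Training configuration sketch

The original training script is not part of this repository. The snippet below is a minimal sketch of how the hyperparameters above map onto `Seq2SeqTrainingArguments` from Transformers, assuming the model was trained with the `Seq2SeqTrainer` API; the `output_dir` and any settings not listed above are illustrative assumptions.

```python
# Sketch only: the listed hyperparameters expressed as Seq2SeqTrainingArguments.
# output_dir is a placeholder; unspecified settings fall back to library defaults.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="pegasus-finetuned-dialog-summarizer",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=16,  # effective train batch size: 1 x 16 = 16
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```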
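## How to use

A minimal usage sketch with the `transformers` summarization pipeline is shown below. The repository id is a placeholder for wherever this checkpoint is hosted; replace it with the actual path, or with a local directory containing the model.

```python
# Minimal sketch: summarize a chat dialogue with the fine-tuned checkpoint.
# "your-username/pegasus-finetuned-dialog-summarizer" is a placeholder repo id.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="your-username/pegasus-finetuned-dialog-summarizer",
)

dialogue = (
    "Ali: I think I left my wallet at your place yesterday. Could you check? "
    "Garcia: Give me a sec, I'll have a look around my room. "
    "Ali: OK. "
    "Garcia: Found it! "
    "Ali: Phew, I don't know what I'd do if it wasn't there. Can you bring it to uni tomorrow? "
    "Garcia: Sure thing."
)

summary = summarizer(dialogue, max_length=64, min_length=8, do_sample=False)
print(summary[0]["summary_text"])
```

Because the base checkpoint is PEGASUS fine-tuned on CNN/DailyMail, generated summaries may contain the `<n>` token where the model predicts a sentence break; it can be replaced with a newline or a space in post-processing.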