
Usage

from transformers import pipeline

summarizer = pipeline("summarization", model="oguuzhansahin/flan-t5-large-samsum", device=0)  # device=0 selects the first GPU; omit or use device=-1 for CPU

sample_dialogue = "Barbara: got everything?
Haylee: yeah almost
Haylee: i'm in dairy section
Haylee: but can't find this youghurt u wanted
Barbara: the coconut milk one? Haylee: yeah
Barbara: hmmm yeah that's a mystery. cause it's not dairy but it's yoghurt xD
Haylee: exactly xD Haylee: ok i asked sb. they put it next to eggs lol
Barbara: lol"

res = summarizer(sample_dialogue)
print(res)

Expected Output

[{'summary_text': "Haylee is in the dairy section. She can't find the coconut milk yog"}]
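
The summary above appears cut short by the model's default generation length. Generation keyword arguments can be passed straight through the pipeline to request a longer summary; the max_length and min_length values below are illustrative assumptions, not settings taken from this card.

# Ask for a longer summary; max_length/min_length are illustrative values.
res = summarizer(sample_dialogue, max_length=128, min_length=10, do_sample=False)
print(res[0]["summary_text"])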

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 2023
  • num_epochs: 5
  • MAX_LENGTH_DIALOGUE: 768
  • MAX_LENGTH_SUMMARY: 128
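
For reference, here is a minimal sketch of how these hyperparameters might map onto a Seq2SeqTrainer fine-tuning run. The output directory, evaluation strategy, and preprocessing details are assumptions and may differ from the original training script.

from datasets import load_dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

MAX_LENGTH_DIALOGUE = 768
MAX_LENGTH_SUMMARY = 128

model_name = "google/flan-t5-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

samsum = load_dataset("samsum")

def preprocess(batch):
    # Truncate dialogues and summaries to the maximum lengths listed above.
    model_inputs = tokenizer(batch["dialogue"], max_length=MAX_LENGTH_DIALOGUE, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=MAX_LENGTH_SUMMARY, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = samsum.map(preprocess, batched=True, remove_columns=samsum["train"].column_names)

args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-large-samsum",  # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    num_train_epochs=5,
    seed=2023,
    evaluation_strategy="epoch",        # assumed; the card reports per-epoch metrics
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)
trainer.train()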

Model Performance

| Epoch | Training Loss | Validation Loss | Rouge1    | Rouge2    | RougeL    | RougeLsum |
|-------|---------------|-----------------|-----------|-----------|-----------|-----------|
| 1     | 1.182841      | 1.202841        | 48.847000 | 25.428200 | 41.734300 | 44.999900 |
| 2     | 1.029400      | 1.217544        | 49.175000 | 25.914800 | 41.729000 | 45.258300 |
| 3     | 0.902600      | 1.239609        | 49.177600 | 25.581100 | 41.680700 | 44.997300 |
| 4     | 0.808000      | 1.274836        | 49.310200 | 25.902800 | 42.103600 | 45.485000 |
| 5     | 0.748200      | 1.304448        | 49.154700 | 25.520400 | 41.904900 | 45.234200 |
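
The ROUGE figures above are on a 0-100 scale. As a rough illustration, scores like these can be computed with the evaluate library; the prediction and reference strings below are placeholders, not the SAMSum validation set.

import evaluate

rouge = evaluate.load("rouge")
predictions = ["Haylee is in the dairy section looking for the coconut milk yoghurt."]
references = ["Haylee can't find the coconut milk yoghurt Barbara wanted; it is next to the eggs."]
scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
# evaluate returns fractions; multiply by 100 to match the table's scale.
print({name: round(value * 100, 4) for name, value in scores.items()})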

Dataset used to train oguuzhansahin/flan-t5-large-samsum: samsum