
Usage

from transformers import pipeline

# Load the BART-large model fine-tuned on XSum and SAMSum for conversation summarization
summarizer_pipe = pipeline("summarization", model="yashugupta786/bart_large_xsum_samsum_conv_summarizer")

conversation_data = '''Hannah: Hey, do you have Betty's number?
Amanda: Lemme check
Amanda: Sorry, can't find it.
Amanda: Ask Larry
Amanda: He called her last time we were at the park together
Hannah: I don't know him well
Amanda: Don't be shy, he's very nice
Hannah: If you say so..
Hannah: I'd rather you texted him
Amanda: Just text him 🙂
Hannah: Urgh.. Alright
Hannah: Bye
Amanda: Bye bye
'''

# The pipeline returns a list with one dict per input, keyed by "summary_text"
result = summarizer_pipe(conversation_data)
print(result[0]["summary_text"])
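
The pipeline also accepts the usual generation keyword arguments; the length limits below are illustrative values, not settings from the original card:

# Optional: constrain the summary length and use greedy decoding
summarizer_pipe(conversation_data, max_length=60, min_length=10, do_sample=False)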

Results

Metric           Value
eval_rouge1      54.3921
eval_rouge2      29.8078
eval_rougeL      45.1543
eval_rougeLsum   49.942
test_rouge1      53.3059
test_rouge2      28.355
test_rougeL      44.0953
test_rougeLsum   48.9246

ROUGE-1, ROUGE-2, and ROUGE-L are reported as F-measures computed from precision and recall, where:
ROUGE recall = number of overlapping words / total number of words in the human-annotated reference summary
ROUGE precision = number of overlapping words / total number of words in the machine-generated candidate summary
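
As a minimal sketch (not part of the original card), these scores can be reproduced with the Hugging Face evaluate library; the prediction and reference strings below are placeholders, not outputs from this model:

import evaluate

# Load the ROUGE metric (requires the `evaluate` and `rouge_score` packages)
rouge = evaluate.load("rouge")

# Placeholder example pair; replace with model outputs and SAMSum reference summaries
predictions = ["Hannah asks Amanda for Betty's number."]
references = ["Hannah needs Betty's number but Amanda doesn't have it. She suggests asking Larry."]

# compute() returns F-measures, i.e. the harmonic mean 2 * P * R / (P + R)
# with R = overlap / reference length and P = overlap / candidate length
scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}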


Dataset used to train yashugupta786/bart_large_xsum_samsum_conv_summarizer: samsum (SAMSum Corpus)

Evaluation results

All results are self-reported on the SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization.

  • Validation ROUGE-1: 54.392
  • Validation ROUGE-2: 29.808
  • Validation ROUGE-L: 45.154
  • Test ROUGE-1: 53.306
  • Test ROUGE-2: 28.355
  • Test ROUGE-L: 44.095