
# bart-base-samsum

This model was obtained by fine-tuning [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the [SAMSum](https://huggingface.co/datasets/samsum) dataset.

## Usage

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="lidiya/bart-base-samsum")

conversation = '''Jeff: Can I train a 🤗 Transformers model on Amazon SageMaker?
Philipp: Sure you can use the new Hugging Face Deep Learning Container.
Jeff: ok.
Jeff: and how can I get started?
Jeff: where can I find documentation?
Philipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face
'''
summarizer(conversation)
```
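
If you want explicit control over generation, the checkpoint can also be loaded with the standard seq2seq classes. The sketch below is illustrative: the `max_length` and `num_beams` values are assumptions, not the settings used to train or evaluate this model.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("lidiya/bart-base-samsum")
model = AutoModelForSeq2SeqLM.from_pretrained("lidiya/bart-base-samsum")

dialogue = (
    "Jeff: Can I train a 🤗 Transformers model on Amazon SageMaker?\n"
    "Philipp: Sure, you can use the new Hugging Face Deep Learning Container."
)

# Tokenize the dialogue and generate a summary with beam search
# (generation parameters here are illustrative, not the model's defaults).
inputs = tokenizer(dialogue, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```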

## Training procedure

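The exact training hyperparameters are not documented here. The sketch below shows one plausible way to fine-tune `facebook/bart-base` on SAMSum with the 🤗 `Seq2SeqTrainer`; all hyperparameter values (learning rate, batch size, epochs, sequence lengths) are assumptions, not the settings actually used for this checkpoint.

```python
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

# SAMSum from the Hub (newer datasets versions may require trust_remote_code=True).
dataset = load_dataset("samsum")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

def preprocess(batch):
    # Dialogue is the source text, the human-written summary is the target.
    model_inputs = tokenizer(batch["dialogue"], max_length=512, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(
    preprocess, batched=True, remove_columns=dataset["train"].column_names
)

args = Seq2SeqTrainingArguments(
    output_dir="bart-base-samsum",
    learning_rate=3e-5,             # assumed value
    per_device_train_batch_size=8,  # assumed value
    num_train_epochs=3,             # assumed value
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```
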
### Results

| key            |   value |
|----------------|--------:|
| eval_rouge1    | 46.6619 |
| eval_rouge2    | 23.3285 |
| eval_rougeL    | 39.4811 |
| eval_rougeLsum | 43.0482 |
| test_rouge1    | 44.9932 |
| test_rouge2    | 21.7286 |
| test_rougeL    | 38.1921 |
| test_rougeLsum | 41.2672 |
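
These are ROUGE F1 scores (scaled by 100) on the SAMSum validation (`eval_*`) and test (`test_*`) splits. A minimal sketch of how comparable numbers can be computed with the 🤗 `evaluate` library is shown below; the generation settings are assumptions and may not match those used to produce the table.

```python
import evaluate
from datasets import load_dataset
from transformers import pipeline

rouge = evaluate.load("rouge")
summarizer = pipeline("summarization", model="lidiya/bart-base-samsum")

# Use the full test split to reproduce the table; a slice keeps the sketch fast.
test_set = load_dataset("samsum", split="test")
predictions = [
    out["summary_text"]
    for out in summarizer(test_set["dialogue"], max_length=64, truncation=True)
]
scores = rouge.compute(predictions=predictions, references=test_set["summary"])
print(scores)  # rouge1, rouge2, rougeL, rougeLsum as fractions in [0, 1]
```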

## Evaluation results

Verified ROUGE scores on the SAMSum test split (the self-reported validation and test scores appear in the Results table above):

| metric     |  value |
|------------|-------:|
| ROUGE-1    | 45.015 |
| ROUGE-2    | 21.686 |
| ROUGE-L    | 38.173 |
| ROUGE-LSUM | 41.279 |