---
license: cc-by-nc-4.0
language:
  - en
tags:
  - bart
  - text-summarization
  - cnn-dailymail
widget:
  - text: >
      The tower is 324 metres (1,063 ft) tall, about the same height as an
      81-storey building, and the tallest structure in Paris. Its base is
      square, measuring 125 metres (410 ft) on each side. During its
      construction, the Eiffel Tower surpassed the Washington Monument to become
      the tallest man-made structure in the world, a title it held for 41 years
      until the Chrysler Building in New York City was finished in 1930. It was
      the first structure to reach a height of 300 metres. Due to the addition
      of a broadcasting aerial at the top of the tower in 1957, it is now taller
      than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters,
      the Eiffel Tower is the second tallest free-standing structure in France
      after the Millau Viaduct.
    example_title: Generate Summary
metrics:
  - rouge
datasets:
  - cnn_dailymail
model-index:
  - name: BART-Large-CNN-scratch
    results:
      - task:
          type: text-summarization
        dataset:
          name: CNN/DailyMail
          type: cnn_dailymail
        metrics:
          - name: ROUGE-1
            type: rouge
            value: 44.07
          - name: ROUGE-2
            type: rouge
            value: 21.06
          - name: ROUGE-L
            type: rouge
            value: 30.65
        source:
          name: Internal Evaluation
          url: https://huggingface.co/facebook/bart-large-cnn
---

# BART-Large-CNN-scratch

The BART-Large-CNN-scratch model is a newly trained version of the facebook/bart-large model: starting from that base checkpoint rather than from the already fine-tuned facebook/bart-large-cnn, it was trained on the CNN/DailyMail dataset with the goal of reproducing the performance of the facebook/bart-large-cnn model.

- Developed by: phanerozoic
- Model type: BartForConditionalGeneration
- Source model: facebook/bart-large
- License: cc-by-nc-4.0
- Languages: English

## Model Details

BART-Large-CNN-scratch utilizes a transformer-based architecture with a sequence-to-sequence approach, tailored specifically for text summarization tasks. This model builds upon the strengths of the original BART architecture by training from scratch using the CNN/DailyMail dataset.
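
For orientation, the sketch below loads the stated source checkpoint (facebook/bart-large) with the Hugging Face transformers library and prints the main dimensions of its encoder-decoder configuration. It is illustrative only; this card does not document the exact code used to build the model.

```python
from transformers import BartForConditionalGeneration

# Load the source checkpoint named in this card (facebook/bart-large)
# and inspect its sequence-to-sequence configuration.
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")
cfg = model.config

print("encoder layers :", cfg.encoder_layers)           # 12 for bart-large
print("decoder layers :", cfg.decoder_layers)           # 12 for bart-large
print("hidden size    :", cfg.d_model)                  # 1024 for bart-large
print("attention heads:", cfg.encoder_attention_heads)  # 16 for bart-large
```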

## Configuration

- Max input length: 1024 tokens
- Max target length: 128 tokens
- Learning rate: 4e-5
- Batch size: 32
- Epochs: 1
- Hardware used: NVIDIA RTX 6000 Ada Lovelace
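
The card does not include the preprocessing code, but a minimal sketch of how these length limits could be applied to CNN/DailyMail with the datasets and transformers libraries is shown below. The dataset config name "3.0.0" and the use of the fast BART tokenizer are assumptions.

```python
from datasets import load_dataset
from transformers import BartTokenizerFast

MAX_INPUT_LENGTH = 1024   # max input length listed above
MAX_TARGET_LENGTH = 128   # max target length listed above

tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-large")
dataset = load_dataset("cnn_dailymail", "3.0.0")  # dataset config version assumed

def preprocess(batch):
    # Tokenize articles (inputs) and highlights (reference summaries),
    # truncating to the limits listed in this section.
    model_inputs = tokenizer(
        batch["article"], max_length=MAX_INPUT_LENGTH, truncation=True
    )
    labels = tokenizer(
        text_target=batch["highlights"], max_length=MAX_TARGET_LENGTH, truncation=True
    )
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(
    preprocess, batched=True, remove_columns=dataset["train"].column_names
)
```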

## Training and Evaluation Data

The model was trained for a single epoch on the CNN/DailyMail dataset, a comprehensive collection of news articles paired with human-written summaries. This dataset is widely used as a benchmark for evaluating text summarization models due to its size and the quality of its annotations.

## Training Procedure

Training started from the facebook/bart-large checkpoint, which has no prior summarization fine-tuning, and used the following settings:

- Epochs: 1
- Batch size: 32
- Learning rate: 4e-5
- Training time: 7 hours
- Loss: 0.65

During training, the model was optimized to minimize the training loss, improving its ability to generate summaries that are both concise and informative.
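
The card does not name the training stack. A minimal sketch of how the settings above could be wired together with the Hugging Face Seq2SeqTrainer is shown below; it reuses the tokenizer and tokenized dataset from the preprocessing sketch in the Configuration section, and everything not listed above (output directory, mixed precision, and so on) is an assumption.

```python
from transformers import (
    BartForConditionalGeneration,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

# Hyperparameters taken from the settings listed above; the output directory
# and fp16 flag are assumptions, as they are not stated in this card.
training_args = Seq2SeqTrainingArguments(
    output_dir="bart-large-cnn-scratch",  # hypothetical output directory
    learning_rate=4e-5,
    per_device_train_batch_size=32,
    num_train_epochs=1,
    predict_with_generate=True,
    fp16=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"],      # from the preprocessing sketch above
    eval_dataset=tokenized["validation"],  # from the preprocessing sketch above
    tokenizer=tokenizer,                   # from the preprocessing sketch above
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```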

## Performance

The training process resulted in the following performance metrics:

- ROUGE-1: 44.07
- ROUGE-2: 21.06
- ROUGE-L: 30.65
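
The exact evaluation setup (data split, generation parameters) is not documented here. As a reference point, ROUGE scores of this kind can be computed with the Hugging Face evaluate library, as in the sketch below; note that the library reports values in the 0-1 range, so they must be scaled by 100 to match the figures above.

```python
import evaluate

rouge = evaluate.load("rouge")

# Toy inputs; in practice `predictions` would be model-generated summaries for a
# CNN/DailyMail evaluation split and `references` the dataset's highlights.
predictions = ["the eiffel tower is the tallest structure in paris"]
references = ["the eiffel tower is the tallest structure in paris and was built in 1889"]

scores = rouge.compute(predictions=predictions, references=references)
# evaluate reports values in [0, 1]; scale by 100 to compare with the numbers above.
print({name: round(value * 100, 2) for name, value in scores.items()})
```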

## Comparing Performance to Base and Enhanced Models

The performance of BART-Large-CNN-scratch is compared against Facebook's BART-large-cnn model and an enhanced version of that model (fine-tuned for one additional epoch):

| Model                   | ROUGE-1 | ROUGE-2 | ROUGE-L |
|-------------------------|---------|---------|---------|
| Facebook BART-large-cnn | 42.949  | 20.815  | 30.619  |
| Enhanced BART-large-cnn | 45.370  | 22.000  | 31.170  |
| BART-Large-CNN-scratch  | 44.070  | 21.060  | 30.650  |

## Analysis of ROUGE Scores

### ROUGE-1

- Facebook BART-large-cnn: 42.949
- Enhanced BART-large-cnn: 45.370
- BART-Large-CNN-scratch: 44.070

The ROUGE-1 score measures the overlap of unigrams (single words) between the generated summary and the reference summary. The BART-Large-CNN-scratch model achieved a ROUGE-1 score of 44.07, roughly 1.1 points above the Facebook BART-large-cnn model (42.949) and close to the enhanced version (45.370). This indicates that the BART-Large-CNN-scratch model captures a substantial amount of relevant information from the source text.
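
To make the unigram-overlap idea concrete, the following toy calculation computes ROUGE-1 precision, recall, and F1 for a single sentence pair. It is a simplified illustration of the metric, not the scorer used to produce the numbers above, which also handles tokenization and aggregation over many examples.

```python
from collections import Counter

reference = "the cat sat on the mat".split()
candidate = "the cat lay on the mat".split()

# Clipped unigram overlap: each reference word counts at most as often as it
# appears in the reference.
overlap = sum((Counter(candidate) & Counter(reference)).values())  # 5 matching unigrams

precision = overlap / len(candidate)   # 5 / 6
recall = overlap / len(reference)      # 5 / 6
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))                    # 0.833
```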

### ROUGE-2

- Facebook BART-large-cnn: 20.815
- Enhanced BART-large-cnn: 22.000
- BART-Large-CNN-scratch: 21.060

The ROUGE-2 score measures the overlap of bigrams (pairs of consecutive words) between the generated summary and the reference summary. The BART-Large-CNN-scratch model achieved a ROUGE-2 score of 21.06, again slightly above the Facebook BART-large-cnn model (20.815) and close to the enhanced version (22.000). This indicates that the BART-Large-CNN-scratch model maintains good coherence and relevance in its summaries.

### ROUGE-L

- Facebook BART-large-cnn: 30.619
- Enhanced BART-large-cnn: 31.170
- BART-Large-CNN-scratch: 30.650

The ROUGE-L score measures the longest common subsequence (LCS) between the generated summary and the reference summary. The BART-Large-CNN-scratch model achieved a ROUGE-L score of 30.65, which is slightly higher than the Facebook BART-large-cnn model (30.619) and close to the enhanced version (31.170). This suggests that the BART-Large-CNN-scratch model produces summaries that are well-structured and follow the sequence of the reference summaries closely.
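
Similarly, the longest-common-subsequence idea behind ROUGE-L can be illustrated with a short, self-contained calculation; again this is a toy example rather than the evaluation code used for this card.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two token lists."""
    table = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            table[i][j] = table[i - 1][j - 1] + 1 if x == y else max(
                table[i - 1][j], table[i][j - 1]
            )
    return table[-1][-1]

reference = "the cat sat on the mat".split()
candidate = "the cat lay on the mat today".split()

lcs = lcs_length(reference, candidate)  # 5: "the cat ... on the mat"
precision = lcs / len(candidate)        # 5 / 7
recall = lcs / len(reference)           # 5 / 6
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))                     # 0.769
```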

## Implications

1. **Reproducibility:**
   - The BART-Large-CNN-scratch model successfully reproduced the performance of the Facebook BART-large-cnn model. This is evidenced by the close match in ROUGE scores and identical summaries generated for the same input text. This confirms the robustness and reliability of the BART architecture and the training methodology when applied to the CNN/DailyMail dataset.
2. **Enhanced Model Comparison:**
   - The enhanced BART-large-cnn model, which was fine-tuned for an additional epoch, shows slightly better ROUGE scores compared to both the Facebook BART-large-cnn and BART-Large-CNN-scratch models. This indicates that additional fine-tuning can further improve the model's performance in capturing relevant information and generating coherent summaries.
3. **Model Training from Scratch:**
   - Training the BART-large model from scratch using the CNN/DailyMail dataset resulted in competitive performance, closely matching the pre-trained and fine-tuned models. This highlights the effectiveness of the BART architecture in learning summarization tasks from scratch, given a large and high-quality dataset.
4. **Practical Applications:**
   - The BART-Large-CNN-scratch model is highly effective for text summarization tasks in English, particularly for news articles. It can be applied in various domains such as news aggregation, content summarization, and information retrieval where concise and accurate summaries are essential.

## Overall Appraisal

The BART-Large-CNN-scratch model demonstrates competitive performance, successfully reproducing the results of the Facebook BART-large-cnn model. It matches or slightly exceeds that model's ROUGE scores and generates high-quality summaries, making it a robust tool for text summarization applications.

## Usage

This model is highly effective for generating summaries in English texts, particularly in contexts similar to the news articles dataset upon which the model was trained. It can be used in various applications, including news aggregation, content summarization, and information retrieval.
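
A minimal inference example with the transformers summarization pipeline is sketched below. The repository id phanerozoic/BART-Large-CNN-scratch is assumed from the model name on this card; replace it with the actual repository id if it differs, and adjust max_length and min_length to the desired summary size.

```python
from transformers import pipeline

# Repository id assumed from the model name on this card; replace if it differs.
summarizer = pipeline("summarization", model="phanerozoic/BART-Large-CNN-scratch")

article = (
    "The tower is 324 metres (1,063 ft) tall, about the same height as an "
    "81-storey building, and the tallest structure in Paris. Its base is square, "
    "measuring 125 metres (410 ft) on each side."
)

result = summarizer(article, max_length=128, min_length=30, do_sample=False)
print(result[0]["summary_text"])
```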

## Limitations

While the model excels in contexts similar to its training data (news articles), its performance might vary on text from other domains or in other languages. Future enhancements could involve expanding the training data to include more diverse text sources, which would improve its generalizability and robustness.

## Acknowledgments

Special thanks to the developers of the BART architecture and the Hugging Face team. Their tools and frameworks were instrumental in the development and training of this model. The NVIDIA RTX 6000 Ada Lovelace GPU provided the computational power needed to achieve these results.