
# Longformer Encoder-Decoder (LED) fine-tuned on BillSum

This model is a fine-tuned version of led-base-16384 on the billsum dataset.

As described in [*Longformer: The Long-Document Transformer*](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters, and Arman Cohan, led-base-16384 was initialized from bart-base, since the two models share the same architecture. To process sequences of up to 16K tokens, bart-base's position embedding matrix was simply copied 16 times.
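The conversion script itself is not shown here, but the idea can be sketched in a few lines of PyTorch; the tensor below is a stand-in for bart-base's learned position embeddings, not the real weights:

```python
import torch

# Sketch of the extension described above (not the actual conversion script):
# tile a 1024-position embedding matrix 16 times to cover 16384 positions.
d_model = 768                                 # bart-base hidden size
bart_positions = torch.randn(1024, d_model)   # stand-in for the learned embeddings
led_positions = bart_positions.repeat(16, 1)  # copy the matrix 16 times
print(led_positions.shape)                    # torch.Size([16384, 768])
```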

## Use in Transformers

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("ArtifactAI/led_base_16384_billsum_summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("ArtifactAI/led_base_16384_billsum_summarization")
```
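Continuing from the snippet above, a minimal summarization call might look like this. The input text is a placeholder, and the generation settings (beam search, 256-token summaries) are assumptions; LED models expect a `global_attention_mask`, conventionally with global attention on the first token:

```python
import torch

text = "SECTION 1. SHORT TITLE. This Act may be cited as the ..."  # placeholder bill text
inputs = tokenizer(text, max_length=16384, truncation=True, return_tensors="pt")

# LED combines sparse local attention with global attention on selected tokens;
# for summarization, global attention is usually placed on the first token.
global_attention_mask = torch.zeros_like(inputs["attention_mask"])
global_attention_mask[:, 0] = 1

summary_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    global_attention_mask=global_attention_mask,
    max_length=256,
    num_beams=4,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```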

## Results

| Model     | Rouge-1 | Rouge-2 | Rouge-L | Rouge-Lsum |
|-----------|---------|---------|---------|------------|
| LED Large | 47.843  | 26.342  | 34.230  | 41.689     |
| LED Base  | 47.672  | 26.737  | 34.568  | 41.529     |

The model is trained on the [BillSum](https://huggingface.co/datasets/billsum) summarization dataset.
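As a rough sketch, scores like the ones in the table above could be reproduced with the `datasets` and `evaluate` libraries. The snippet reuses `tokenizer` and `model` from above, scores only a handful of test examples for brevity, and the generation settings are assumptions:

```python
import torch
import evaluate
from datasets import load_dataset

billsum = load_dataset("billsum", split="test").select(range(8))  # small sample
rouge = evaluate.load("rouge")

predictions = []
for text in billsum["text"]:
    inputs = tokenizer(text, max_length=16384, truncation=True, return_tensors="pt")
    global_attention_mask = torch.zeros_like(inputs["attention_mask"])
    global_attention_mask[:, 0] = 1  # global attention on the first token
    ids = model.generate(
        inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        global_attention_mask=global_attention_mask,
        max_length=256,
        num_beams=4,
    )
    predictions.append(tokenizer.decode(ids[0], skip_special_tokens=True))

# evaluate's rouge metric reports rouge1/rouge2/rougeL/rougeLsum scores.
print(rouge.compute(predictions=predictions, references=billsum["summary"]))
```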

## Test the Model

Please find a notebook to test the model below:

Open In Colab

## Citing & Authors

```bibtex
@misc{led_base_16384_billsum_summarization,
    title={led_base_16384_billsum_summarization},
    author={Matthew Kenney},
    year={2023}
}
```