LED
Overview
The LED model was proposed in Longformer: The Long-Document Transformer by Iz Beltagy, Matthew E. Peters, and Arman Cohan.
The abstract from the paper is the following:
Transformer-based models are unable to process long sequences due to their self-attention operation, which scales quadratically with the sequence length. To address this limitation, we introduce the Longformer with an attention mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or longer. Longformer’s attention mechanism is a drop-in replacement for the standard self-attention and combines a local windowed attention with a task motivated global attention. Following prior work on long-sequence transformers, we evaluate Longformer on character-level language modeling and achieve state-of-the-art results on text8 and enwik8. In contrast to most prior work, we also pretrain Longformer and finetune it on a variety of downstream tasks. Our pretrained Longformer consistently outperforms RoBERTa on long document tasks and sets new state-of-the-art results on WikiHop and TriviaQA. We finally introduce the Longformer-Encoder-Decoder (LED), a Longformer variant for supporting long document generative sequence-to-sequence tasks, and demonstrate its effectiveness on the arXiv summarization dataset.
Tips:
- `LEDForConditionalGeneration` is an extension of `BartForConditionalGeneration` that exchanges the traditional self-attention layer with Longformer's chunked self-attention layer. `LEDTokenizer` is an alias of `BartTokenizer`.
- LED works very well on long-range sequence-to-sequence tasks where the `input_ids` largely exceed a length of 1024 tokens.
- LED pads the `input_ids` to be a multiple of `config.attention_window` if required. A small speed-up is therefore gained when `LEDTokenizer` is used with the `pad_to_multiple_of` argument.
- LED makes use of global attention by means of the `global_attention_mask` (see `LongformerModel`). For summarization, it is advised to put global attention only on the first `<s>` token. For question answering, it is advised to put global attention on all tokens of the question (see the sketches after this list).
- To fine-tune LED on all 16384 tokens, it is necessary to enable gradient checkpointing by setting `config.gradient_checkpointing = True` (see the sketch after this list).
- A notebook showing how to evaluate LED can be accessed here.
- A notebook showing how to fine-tune LED can be accessed here.
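
Putting the padding and global-attention tips together, the following is a minimal summarization sketch, not a definitive recipe. It assumes the publicly available `allenai/led-base-16384` checkpoint and a placeholder `long_document` string; adapt both to your own data.

```python
import torch
from transformers import LEDTokenizer, LEDForConditionalGeneration

tokenizer = LEDTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDForConditionalGeneration.from_pretrained("allenai/led-base-16384")

long_document = "..."  # replace with a document that is thousands of tokens long

# `config.attention_window` may be a single int or a per-layer list; take one value.
attention_window = model.config.attention_window
if isinstance(attention_window, (list, tuple)):
    attention_window = attention_window[0]

# Padding to a multiple of the attention window avoids extra padding inside the model.
inputs = tokenizer(
    long_document,
    return_tensors="pt",
    padding=True,
    truncation=True,
    max_length=16384,
    pad_to_multiple_of=attention_window,
)

# For summarization, put global attention only on the first <s> token.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

summary_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    global_attention_mask=global_attention_mask,
    num_beams=4,
    max_length=256,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```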
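
For the question-answering advice, the mask construction could look like the sketch below. The `question` and `context` strings are placeholders, and locating the question span relies on the BART-style `</s>` separator that `LEDTokenizer` inherits.

```python
import torch
from transformers import LEDTokenizer

tokenizer = LEDTokenizer.from_pretrained("allenai/led-base-16384")

question = "What is global attention used for?"  # placeholder question
context = "..."                                  # placeholder long document

inputs = tokenizer(question, context, return_tensors="pt", truncation=True, max_length=16384)
input_ids = inputs["input_ids"]

# The BART-style encoding is "<s> question </s></s> context </s>", so every token up to
# (and including) the first </s> belongs to the question; give all of them global attention.
first_sep = (input_ids[0] == tokenizer.sep_token_id).nonzero()[0].item()
global_attention_mask = torch.zeros_like(input_ids)
global_attention_mask[0, : first_sep + 1] = 1

# `global_attention_mask` is then passed to the model (or to `generate`) together with
# `input_ids` and `attention_mask`.
```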
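
Finally, a minimal sketch of the gradient-checkpointing tip; the checkpoint name is again only an example.

```python
from transformers import LEDConfig, LEDForConditionalGeneration

# Enable gradient checkpointing so that fine-tuning on the full 16384-token input
# length fits into memory, as described in the tips above.
config = LEDConfig.from_pretrained("allenai/led-base-16384")
config.gradient_checkpointing = True
model = LEDForConditionalGeneration.from_pretrained("allenai/led-base-16384", config=config)

# Recent versions of transformers expose an equivalent helper:
# model.gradient_checkpointing_enable()
```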