arXiv:2210.15497

LSG Attention: Extrapolation of pretrained Transformers to long sequences

Published on Oct 13, 2022
Authors: ccdv

Abstract

Transformer models achieve state-of-the-art performance on a wide range of NLP tasks. However, they suffer from a prohibitive limitation of the self-attention mechanism, which induces O(n^2) complexity with respect to sequence length. To address this limitation, we introduce the LSG architecture, which relies on Local, Sparse and Global attention. We show that LSG attention is fast, efficient and competitive on long-document classification and summarization tasks. Interestingly, it can also be used to adapt existing pretrained models so that they extrapolate efficiently to longer sequences with no additional training. Along with the LSG attention mechanism, we propose tools to train new models and to adapt existing ones based on this mechanism.
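
The abstract does not spell out the exact attention pattern, so the sketch below is only an illustration of the general Local + Sparse + Global idea in plain PyTorch, not the authors' implementation: the window size, the strided sparse pattern and the number of global tokens (local_window, sparse_stride, num_global) are hypothetical choices rather than the paper's configuration.

    # Minimal sketch of an LSG-style attention mask (illustration only).
    import torch
    import torch.nn.functional as F

    def lsg_attention_mask(seq_len: int, local_window: int = 4,
                           sparse_stride: int = 8, num_global: int = 2) -> torch.Tensor:
        """Boolean (seq_len x seq_len) mask; True means the query may attend to the key."""
        idx = torch.arange(seq_len)
        # Local: each token attends to a sliding window around its own position.
        local = (idx[:, None] - idx[None, :]).abs() <= local_window
        # Sparse: each token also attends to every `sparse_stride`-th token.
        sparse = (idx[None, :] % sparse_stride) == 0
        # Global: the first `num_global` tokens attend to, and are attended by, all tokens.
        global_rows = idx[:, None] < num_global
        global_cols = idx[None, :] < num_global
        return local | sparse | global_rows | global_cols

    def masked_attention(q, k, v, mask):
        """Scaled dot-product attention restricted to the allowed positions."""
        scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
        scores = scores.masked_fill(~mask, float("-inf"))
        return F.softmax(scores, dim=-1) @ v

    if __name__ == "__main__":
        seq_len, dim = 64, 32
        q = k = v = torch.randn(seq_len, dim)
        mask = lsg_attention_mask(seq_len)
        out = masked_attention(q, k, v, mask)
        print(out.shape)                   # torch.Size([64, 32])
        print(mask.float().mean().item())  # fraction of key positions each query can see

Because each query attends to roughly local_window + seq_len / sparse_stride + num_global keys rather than all n, the pattern grows close to linearly with sequence length when exploited by a block-sparse kernel; the dense mask above is used only for readability.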


Models citing this paper: 32

Datasets citing this paper: 0

Spaces citing this paper: 2

Collections including this paper: 2