arxiv:2302.10866

Hyena Hierarchy: Towards Larger Convolutional Language Models

Published on Feb 21, 2023
Authors:
Michael Poli, Stefano Massaroli, Eric Nguyen, Daniel Y. Fu, Tri Dao, Stephen Baccus, Yoshua Bengio, Stefano Ermon, Christopher Ré

Abstract

Recent advances in deep learning have relied heavily on the use of large Transformers due to their ability to learn at scale. However, the core building block of Transformers, the attention operator, exhibits quadratic cost in sequence length, limiting the amount of context accessible. Existing subquadratic methods based on low-rank and sparse approximations need to be combined with dense attention layers to match Transformers, indicating a gap in capability. In this work, we propose Hyena, a subquadratic drop-in replacement for attention constructed by interleaving implicitly parametrized long convolutions and data-controlled gating. In recall and reasoning tasks on sequences of thousands to hundreds of thousands of tokens, Hyena improves accuracy by more than 50 points over operators relying on state-spaces and other implicit and explicit methods, matching attention-based models. We set a new state-of-the-art for dense-attention-free architectures on language modeling in standard datasets (WikiText103 and The Pile), reaching Transformer quality with a 20% reduction in training compute required at sequence length 2K. Hyena operators are twice as fast as highly optimized attention at sequence length 8K, and 100x faster at sequence length 64K.
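
Below is a minimal PyTorch sketch of the two ingredients the abstract names: a long convolution whose filter is produced implicitly by a small network and applied in O(L log L) via the FFT, plus data-controlled multiplicative gating. It is an illustration under those assumptions, not the paper's implementation; module names such as `ImplicitLongConv` and `GatedLongConvBlock` are invented here, and Hyena itself interleaves several such gate/convolution stages with a more elaborate filter parametrization.

```python
# Minimal sketch: implicitly parametrized long convolution (via FFT) + data-controlled gating.
# Names and sizes are illustrative; this is not the authors' implementation.
import torch
import torch.nn as nn


class ImplicitLongConv(nn.Module):
    """Length-L filter generated from positions by a small MLP (implicit parametrization)."""

    def __init__(self, d_model: int, hidden: int = 64):
        super().__init__()
        self.filter_mlp = nn.Sequential(
            nn.Linear(1, hidden), nn.GELU(), nn.Linear(hidden, d_model)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        B, L, D = x.shape
        t = torch.linspace(0, 1, L, device=x.device).unsqueeze(-1)    # (L, 1) positions
        h = self.filter_mlp(t)                                        # (L, D) implicit filter
        n = 2 * L                                                     # zero-pad so the FFT conv is causal, not circular
        Hf = torch.fft.rfft(h, n=n, dim=0)
        Xf = torch.fft.rfft(x, n=n, dim=1)
        y = torch.fft.irfft(Xf * Hf.unsqueeze(0), n=n, dim=1)[:, :L]  # O(L log L) long convolution
        return y


class GatedLongConvBlock(nn.Module):
    """One gate * long-convolution stage; Hyena interleaves several such stages."""

    def __init__(self, d_model: int):
        super().__init__()
        self.in_proj = nn.Linear(d_model, 2 * d_model)  # value branch and gate branch
        self.long_conv = ImplicitLongConv(d_model)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        v, g = self.in_proj(x).chunk(2, dim=-1)
        y = self.long_conv(v) * torch.sigmoid(g)        # data-controlled (input-dependent) gating
        return self.out_proj(y)


if __name__ == "__main__":
    block = GatedLongConvBlock(d_model=128)
    out = block(torch.randn(2, 1024, 128))
    print(out.shape)  # torch.Size([2, 1024, 128])
```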

Community

Not sure why this paper isn't getting more attention. The way I see it, formulating the n-to-n attention as a convolution, so it can be computed in O(n log n) instead of O(n^2), is about the best one can hope for.
It's probably just that, given the success of current LLMs, context length might not be that important after all. But in other settings, this should be hard to beat.
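
As a quick sanity check of the complexity argument in the comment above (an illustration, not material from the paper), the following snippet verifies that a causal convolution computed with the FFT in O(n log n) matches the direct O(n^2) summation:

```python
# Illustrative check: FFT-based convolution (O(n log n)) agrees with the direct sum (O(n^2)).
import numpy as np

rng = np.random.default_rng(0)
n = 4096
x = rng.standard_normal(n)
h = rng.standard_normal(n)

# Direct causal convolution: y[i] = sum_{j<=i} x[j] * h[i-j]
y_direct = np.convolve(x, h)[:n]

# FFT route: zero-pad to 2n so circular convolution equals linear convolution
y_fft = np.fft.irfft(np.fft.rfft(x, 2 * n) * np.fft.rfft(h, 2 * n), 2 * n)[:n]

print(np.allclose(y_direct, y_fft))  # True
```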

Well, at "normal" lengths (under 1-2K), they're basically equiv to vanilla transformers (25% less is not a huge difference honestly), especially given the other optimizations available for vanilla like FlashMemory 1/2 etc'. It's not enough to push a change for most, and for the big context lengths, most rely on existing pretrained huge LLMs like chatgpt - training from scratch is riskkky

Risky, yes, but I don't believe we'll be stuck with the Transformer/attention forever. There is something in the air around S4, Monarch, and RetNet.

Models citing this paper 24

Datasets citing this paper 0

Spaces citing this paper 9

Collections including this paper 4