Hyena

Overview

What is Hyena

While the Transformer is a well-established and very capable architecture, its quadratic computational cost is an expensive price to pay, especially at inference time.

Hyena is a new type of operator that serves as a substitute for the attention mechanism. Developed by Hazy Research, it achieves subquadratic computational complexity and is constructed by interleaving implicitly parametrized long convolutions with data-controlled gating.

Long convolutions are similar to standard convolutions, except that the kernel is the size of the input; this is equivalent to having a global receptive field instead of a local one. Having an implicitly parametrized convolution means that the values of the convolution filters are not learned directly; instead, we learn a function that can recover those values.
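To make this concrete, here is a minimal NumPy sketch (not the actual Hyena code) of a causal long convolution, where the filter has as many taps as the sequence has positions, so every output can depend on every earlier input:

```python
import numpy as np

L = 8                       # sequence length
u = np.random.randn(L)      # input sequence
h = np.random.randn(L)      # long filter: one tap per position (global receptive field)

# Causal long convolution: y[t] = sum_{s<=t} h[s] * u[t-s]
# np.convolve returns 2L-1 values; the first L are the causal outputs.
y = np.convolve(h, u)[:L]
print(y.shape)              # (8,)
```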

Gating mechanisms control the path through which information flows in the network. They help define how long a piece of information should be remembered. Usually they consist of element-wise multiplications. An interesting blog article about gating can be found here.
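As a tiny illustrative sketch (hypothetical projections, not Hyena's exact gating), gating multiplies a content branch by a gate branch element-wise:

```python
import torch

x = torch.randn(4, 16)                                      # (batch, features)
W_value, W_gate = torch.randn(16, 16), torch.randn(16, 16)  # learned in practice

value = x @ W_value               # what could be passed along
gate = torch.sigmoid(x @ W_gate)  # how much of it to let through
gated = value * gate              # element-wise multiplication controls the flow
```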

[Figure: transformer2hyena.png]

The Hyena operator consists of recursively computing convolutions and multiplicative element-wise gating operations, one projection at a time, until all projections are exhausted. This approach builds on the Hungry Hungry Hippo (H3) mechanism, developed by the same researchers. H3 is characterized by its data-controlled, parametric decomposition, which acts as a surrogate attention mechanism.

Another way of understanding Hyena is to consider it as a generalization of the H3 layer to an arbitrary number of projections: the Hyena layer recursively extends H3 with a different choice of parametrization for the long convolution.

[Figure: hyena_recurence.png]
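The recurrence itself fits in a few lines of Python. In this sketch the projections and filters are random placeholders; in the real operator they are computed from the input and from the implicit filter network described below:

```python
import numpy as np

def causal_conv(h, z):
    """Causal long convolution of a filter h with a signal z (both of length L)."""
    return np.convolve(h, z)[: len(z)]

def hyena_recurrence(v, xs, hs):
    """z_1 = v ; z_{n+1} = x_n * (h_n conv z_n) ; output y = z_{N+1}."""
    z = v
    for x, h in zip(xs, hs):
        z = x * causal_conv(h, z)    # long convolution followed by gating
    return z

L, N = 16, 2                                 # order N = 2 recovers an H3-like layer
v = np.random.randn(L)                       # "value"-like projection
xs = [np.random.randn(L) for _ in range(N)]  # gating projections
hs = [np.random.randn(L) for _ in range(N)]  # long convolution filters
y = hyena_recurrence(v, xs, hs)
```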

From Attention to Hyena operator

The attention mechanism is characterized by two fundamental properties:

  1. It possesses a global contextual awareness, enabling it to assess interactions between pairs of visual tokens within a sequence.
  2. It is data-dependent, meaning the operation of the attention equation varies based on the input data itself, specifically the input projections $q$, $k$, $v$.


The attention mechanism is defined by three projections: query $q$, key $k$, and value $v$, which are generated by multiplying the input visual tokens by three matrices $W_q$, $W_k$ and $W_v$ that are learned during training.

For a given visual token, we can compute an attention score using those projections. The attention score determines how much focus to give to other parts of the input image.
For a nice, detailed explainer of attention, you can refer to this illustrated blog article.
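For reference, here is a minimal single-head sketch of the scaled dot-product attention described above (random weights stand in for the learned $W_q$, $W_k$, $W_v$):

```python
import torch

L, d = 32, 64            # sequence length, embedding dimension
x = torch.randn(L, d)    # input tokens

W_q, W_k, W_v = torch.randn(d, d), torch.randn(d, d), torch.randn(d, d)
q, k, v = x @ W_q, x @ W_k, x @ W_v               # the three projections

scores = torch.softmax(q @ k.T / d**0.5, dim=-1)  # (L, L) attention matrix: quadratic in L
y = scores @ v                                    # each output is a data-dependent mix of values
```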

In an attempt to replicate these characteristics, the Hyena operator incorporates two key elements:

  1. It employs long convolutions to provide a sense of global context, akin to the first property of the attention mechanism.
  2. For data dependency, Hyena uses element-wise gating. This is essentially an element-wise multiplication of input projections, mirroring the data-dependent nature of traditional attention.

In the realm of computational efficiency, the Hyena operator attains an evaluation time complexity of $O(L \times \log_2 L)$, indicating a noteworthy enhancement in processing speed.

Hyena operator

Let’s delve into the second-order recursion of the Hyena operator, which simplifies its representation for illustrative purposes.

[Figure: hyena_mechanism.png]

At this order, we compute three projections analogous to the $q$, $k$ and $v$ vectors of the attention mechanism.

However, unlike the attention mechanism, which typically uses a single dense layer to project the input sequence into each representation, Hyena combines a dense layer with standard convolutions performed on each channel (referred to as $T_q$, $T_k$ and $T_v$ in the schema; in practice these are explicit convolutions). The softmax function is also discarded.
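A sketch of this projection step, assuming arbitrary dimensions and a hypothetical `HyenaProjection` module name: a dense layer followed by a short depthwise (per-channel) convolution.

```python
import torch
import torch.nn as nn

class HyenaProjection(nn.Module):
    """Dense projection followed by a short per-channel (depthwise) convolution."""
    def __init__(self, d_model, kernel_size=3):
        super().__init__()
        self.linear = nn.Linear(d_model, d_model)
        self.short_conv = nn.Conv1d(
            d_model, d_model, kernel_size,
            padding=kernel_size - 1, groups=d_model,   # one small filter per channel
        )

    def forward(self, u):                              # u: (batch, length, d_model)
        x = self.linear(u).transpose(1, 2)             # (batch, d_model, length)
        x = self.short_conv(x)[..., : u.shape[1]]      # crop the padding to keep causality
        return x.transpose(1, 2)                       # back to (batch, length, d_model)

proj = HyenaProjection(d_model=64)
u = torch.randn(2, 128, 64)
q_like = proj(u)                                       # one of the q/k/v-like projections
```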

The core idea is to repeatedly apply linear operators that are fast to evaluate to an input sequence $u \in \mathbb{R}^{L}$, with $L$ the length of the sequence. Because global convolutions have a large number of parameters, they are expensive to train. A notable design choice is therefore the use of implicit convolutions: unlike standard convolutional layers, the convolution filter $h$ is learned implicitly with a small neural network $\gamma_{\theta}$ (also called the Hyena Filter). This network takes the positional index, and potentially positional encodings, as input. From the outputs of $\gamma_{\theta}$ one can construct a Toeplitz matrix $T_h$.

This means that instead of learning the values of the convolution filter directly, we learn a mapping from a temporal positional encoding to those values, which is more parameter-efficient, especially for long sequences.
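A minimal sketch of such an implicit filter, assuming a plain MLP over a sinusoidal positional encoding (the actual Hyena filter uses a feed-forward network with sine activations and a learned decay window):

```python
import math
import torch
import torch.nn as nn

class ImplicitFilter(nn.Module):
    """Maps positional encodings t -> filter values h(t) instead of storing h directly."""
    def __init__(self, d_pos=16, d_hidden=32, channels=64):
        super().__init__()
        self.d_pos = d_pos
        self.mlp = nn.Sequential(
            nn.Linear(d_pos, d_hidden), nn.GELU(),
            nn.Linear(d_hidden, channels),
        )

    def positional_encoding(self, L):
        t = torch.linspace(0, 1, L).unsqueeze(1)                 # (L, 1) positions
        freqs = 2.0 ** torch.arange(self.d_pos // 2)             # (d_pos/2,) frequencies
        angles = math.pi * t * freqs                             # (L, d_pos/2)
        return torch.cat([angles.sin(), angles.cos()], dim=-1)   # (L, d_pos)

    def forward(self, L):
        return self.mlp(self.positional_encoding(L))             # (L, channels)

h = ImplicitFilter()(L=1024)   # filter values materialized on demand, for any length
```

Note that the number of parameters depends only on the small network, not on the sequence length $L$.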

It's important to note that the mapping function can be conceptualized within various abstract models, such as Neural Fields or State Space Models (S4), as discussed in the H3 paper.

Implicit convolutions

A linear convolution can be formulated as a matrix multiplication in which one of the inputs is reshaped into a Toeplitz matrix.

This transformation leads to greater parameter efficiency. Instead of directly learning fixed kernel weight values, a parametrized function is employed that produces the kernel weights, and their number, during the network’s forward pass, optimizing resource use.

One way to build an intuition about implicit parametrization is to think of an affine function $y = f(x) = a \times x + b$ that we want to learn: instead of learning the position of every single point, it is more efficient to learn $a$ and $b$ and compute the points when needed.
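For instance, here is a small NumPy/SciPy check that a causal convolution with a filter $h$ equals a multiplication by the lower-triangular Toeplitz matrix $T_h$ built from $h$:

```python
import numpy as np
from scipy.linalg import toeplitz

L = 5
h = np.random.randn(L)    # convolution filter
u = np.random.randn(L)    # input sequence

# Lower-triangular Toeplitz matrix with h as its first column: T_h[i, j] = h[i - j].
T_h = toeplitz(h, np.r_[h[0], np.zeros(L - 1)])

y_matmul = T_h @ u                  # convolution written as a matrix multiplication
y_conv = np.convolve(h, u)[:L]      # causal convolution, for comparison
assert np.allclose(y_matmul, y_conv)
```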

In practice, convolutions are accelerated to subquadratic time complexity with the Cooley-Tukey fast Fourier transform (FFT) algorithm. Some work has been conducted to speed this computation up further, such as FlashFFTConv, which is based on a Monarch decomposition.
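A short sketch of the FFT trick: zero-pad, multiply point-wise in the frequency domain, and transform back, which costs $O(L \log_2 L)$ instead of the $O(L^2)$ direct sum.

```python
import numpy as np

def fft_causal_conv(h, u):
    """Causal convolution of two length-L signals in O(L log L) using the FFT."""
    L = len(u)
    H = np.fft.rfft(h, n=2 * L)    # zero-pad to 2L so circular == linear convolution
    U = np.fft.rfft(u, n=2 * L)
    return np.fft.irfft(H * U, n=2 * L)[:L]

L = 1024
h, u = np.random.randn(L), np.random.randn(L)
assert np.allclose(fft_causal_conv(h, u), np.convolve(h, u)[:L])
```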

Wrapping Up Everything

[Figure: nd_hyena.png]

In essence, Hyena can be performed in two steps:

  1. Compute a set of $N+1$ linear projections of the input, similarly to attention (there can be more than 3 projections).
  2. Mix the projections: the matrix $H(u)$ is defined by a combination of matrix multiplications, as in the sketch below.
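To make the "combination of matrix multiplications" concrete, here is a sketch (with random placeholders for the projections and filters) where each gating step is a diagonal matrix and each long convolution is a Toeplitz matrix, so the whole operator reduces to $y = H(u) \, v$:

```python
import numpy as np
from scipy.linalg import toeplitz

def causal_toeplitz(h):
    """Lower-triangular Toeplitz matrix of the causal convolution with filter h."""
    return toeplitz(h, np.r_[h[0], np.zeros(len(h) - 1)])

L, N = 16, 2
v = np.random.randn(L)                       # "value" projection of the input
xs = [np.random.randn(L) for _ in range(N)]  # gating projections of the input
hs = [np.random.randn(L) for _ in range(N)]  # long filters (implicit in practice)

# H(u) is data-controlled: it is assembled from projections of the input itself.
H = np.eye(L)
for x, h in zip(xs, hs):
    H = np.diag(x) @ causal_toeplitz(h) @ H  # gating (diagonal) after convolution (Toeplitz)

y = H @ v   # same result as running the convolution/gating recurrence step by step
```

Of course, $H(u)$ is never materialized in practice; the recurrence is evaluated with FFT-based convolutions instead.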

Why Hyena Matters

The H3 mechanism came close to the perplexity of multi-head attention, but there was still a narrow gap in perplexity that had to be bridged.

A variety of attention replacements have been proposed over the last few years, and evaluating the quality of a new architecture during the exploratory phase remains challenging. Creating a versatile layer that can effectively process N-Dimensional data within deep neural networks while maintaining good expressiveness is a significant area of ongoing research.

Empirically, Hyena operators are able to significantly shrink the quality gap with attention at scale, reaching similar perplexity and downstream performance with a smaller computational budget and without hybridization with attention. Hyena has already achieved state-of-the-art status for DNA sequence modeling and shows great promise in the field of large language models with StripedHyena-7B.

Similarly to Attention, Hyena can be used in computer vision tasks. In image classification, Hyena is able to match attention in accuracy when training on ImageNet-1k from scratch.

[Figure: hyena_vision_benchmarks.png]

Hyena has been applied to N-Dimensional data with the Hyena N-D layer and can be used as a direct drop-in replacement within the ViT, Swin, and DeiT backbones.

[Figure: vit_vs_hyenavit.png]

There is a noticeable enhancement in GPU memory efficiency as the number of image patches increases.

Hyena Hierarchy facilitates the development of larger, more efficient convolutional models for long sequences. For computer vision, the main potential of Hyena-type models is this more efficient GPU memory consumption as the number of patches grows, which would allow, for example, working with larger or higher-resolution images.

These qualities would be particularly beneficial in areas such as Medical Imaging and Remote Sensing.

Towards Transformers Alternatives

Building new layers from simple design principles is an emerging research field that is progressing very quickly.

The H3 mechanism serves as the foundation for many State Space Model (SSM) based architectures, which typically feature a structure alternating between a block inspired by linear attention and a multi-layer perceptron (MLP) block. Hyena, as an enhancement of this approach, has paved the way for even more efficient architectures such as Mamba and its derivatives for vision (Vision Mamba, VMamba, etc.).

Further Reading
